By James K. Baker1,2,3, Bradley J. Baker1,4, Xuedong Huang5, Raj Reddy3, Tom Mitchell3, Ivan Garibay2, Bhiksha Raj3, Rita Singh3 and Michael Georgiopoulos2
Artificial intelligence, particularly deep learning with neural networks, has had many dramatic successes in recent years. However, these dramatic demonstrations obscure the fact that there are still many large gaps between the capabilities of current AI systems and true intelligence. In addition, the research benchmarks by which the relative performance of AI systems is measured also fail to capture these important gaps. Indeed, some common criteria for determining whether a machine learning system exhibits "intelligence" also ignore these gaps. For example, one criterion that has been used for many decades holds that if a machine learning system can do as well as a human on a task that indicates intelligence when done by a human, then the machine learning system is said to exhibit "intelligence." This criterion has been met by chess-playing programs for several decades. In the current decade, it has been met on a wide variety of tasks, including many real-world applications.
The gaps in this criterion of "machine intelligence" are shown by the following more ambitious criteria:
To make these goals achievable, we are proposing research that breaks the number one unwritten rule of AI research: "hands off during training." Indeed, it will be very difficult to meet the criteria listed above without human assistance. The concepts of "interpretability" and "sensibility" are defined in terms of human reaction. Even humans often fail to have Socratic wisdom, as Socrates himself pointed out. However, intelligent people meet all these criteria to some degree, and current machine learning systems generally do not.
More specifically, we propose systems in which humans and AI systems cooperate in training the combined human+AI system. We call this methodology "Human-Assisted Training of Artificial Intelligence," or HAT-AI for short.
From another perspective, HAT-AI represents a radical change in the long-term direction of AI. Rather than envisioning a future in which autonomous AI systems first achieve Artificial General Intelligence and then eventually achieve Artificial Super Intelligence, humans and machines should begin working cooperatively immediately.
A cooperative human+AI system changes the goal for artificial general intelligence. At a minimum, a cooperative human+AI system should match the performance of either a human working alone or an artificial system working alone. In other words, a fully functioning cooperative human+AI system should have general intelligence, since a human working alone has general intelligence. This point of view of cooperative intelligent systems gives a very different, less dystopian, vision of future systems with general and then super intelligence.
Human-assisted training for artificial intelligence is just one, although perhaps the most controversial, of many proposals within the broader concept of "cooperative AI." Machine learning as part of a cooperative team of humans and computers is one of the methods of machine learning described by Mitchell in The Future of Machine Learning. This website hopes to be a bridge between the present and that future. Other concepts in cooperative AI include ethical and moral considerations and many other issues, as well as many other tools and approaches in addition to human-assisted training.
Affiliations:
© D5AI LLC, 2020
The text in this work is licensed under a Creative Commons Attribution 4.0 International License.
Some of the ideas presented here are covered by issued or pending patents. No license to such patents is created or implied by publication of, or reference to, this work.