Human-Assisted Training for Artificial Intelligence

Introduction to HAT-AI

By James K. Baker1,2,3, Bradley J. Baker1,4, Xuedong Huang5, Raj Reddy3, Tom Mitchell3, Ivan Garibay2, Bhiksha Raj3, Rita Singh3 and Michael Georgiopoulos2

Artificial intelligence, particularly deep learning with neural networks, has had many dramatic successes in recent years. However, these dramatic demonstrations obscure the fact that there are still many large gaps between the capabilities of current AI systems and true intelligence. In addition, the research benchmarks by which the relative performance of AI systems is measured also fail to measure these important gaps. Indeed, some common criteria for determining whether a machine learning system exhibits "intelligence" also ignore these gaps. For example, one criterion that has been used for many decades is that if a machine learning system can do as well as a human on a task that indicates intelligence when done by a human, then the machine learning system is said to exhibit "intelligence." This criterion has been met by chess-playing programs for several decades. In the current decade, it has been met on a wide variety of tasks, including many real-world applications.

The gaps in this criterion of "machine intelligence" are shown by the following more ambitious criteria:

1) Sensibility: An intelligent system should make "sensible" decisions:
     1a) In pattern recognition, a mistake is not sensible if a human would say "That's stupid! No one would make that mistake."
     1b) Such mistakes are common, for example, under adversarial attacks.
     1c) Evaluating this criterion requires human judgment. We are deliberately specifying a criterion that requires a human to evaluate it.
2) Socratic wisdom: An intelligent system should "know what it doesn't know."
     2a) The system should be able to estimate the likelihood that it is wrong.
     2b) The system should also be able to evaluate the reliability of internal variables, such as activations of inner nodes in a DNN.
     2c) Socratic wisdom is related to the ability for introspection, another indicator of intelligence.
     2d) Introspection and Socratic wisdom may be implemented using judgment nodes (see the sketch after this list).
3) Interpretability: At least some of the decisions and the inner workings of an intelligent system should be interpretable by a human observer.
     3a) This criterion also requires human evaluation.
     3b) More generally, an intelligent system should be able to communicate its knowledge to other intelligent systems.
     3c) As with humans, a large part of the inner workings may be at an "unconscious" level, not available to be communicated.
     3d) Interpretability may be enhanced by using knowledge sharing.
     3e) Interpretability may be enhanced by using judgment nodes.
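
To make the idea of a judgment node concrete, here is a minimal sketch in PyTorch. It assumes one simple reading of criteria 2a and 2b: a judgment node is an auxiliary output, trained alongside the main classifier, that reads the same internal activations and estimates the probability that the classifier's decision is wrong. The names here (JudgmentNet, judge, p_wrong, training_step) are ours for illustration; the source does not specify an implementation.

    # Illustrative sketch only: one way a "judgment node" could estimate
    # the likelihood that the main network's decision is wrong (criteria
    # 2a and 2b). Not an implementation from the HAT-AI proposal itself.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class JudgmentNet(nn.Module):
        """Classifier with an auxiliary judgment node that estimates
        the probability that the classifier's own decision is wrong."""
        def __init__(self, in_dim=784, hidden=128, n_classes=10):
            super().__init__()
            self.backbone = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
            self.classifier = nn.Linear(hidden, n_classes)
            self.judge = nn.Linear(hidden, 1)  # the judgment node

        def forward(self, x):
            h = self.backbone(x)
            logits = self.classifier(h)
            # detach: the judge observes internal activations (2b)
            # without steering how they are learned
            p_wrong = torch.sigmoid(self.judge(h.detach()))
            return logits, p_wrong

    def training_step(model, x, y, opt):
        logits, p_wrong = model(x)
        task_loss = F.cross_entropy(logits, y)
        # Supervise the judgment node with the classifier's actual mistakes.
        wrong = (logits.argmax(dim=1) != y).float().unsqueeze(1)
        judge_loss = F.binary_cross_entropy(p_wrong, wrong)
        loss = task_loss + judge_loss
        opt.zero_grad()
        loss.backward()
        opt.step()
        return loss.item()

At inference time, p_wrong gives the system a rough estimate of its own likelihood of error, which is the kind of self-assessment items 2a through 2d call for; how well such an estimate generalizes is an open question, not something this sketch establishes.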

To make these goals possible, we are proposing research that breaks the number one unwritten rule of AI research: "hands off during training." Indeed, it will be very difficult to meet the criteria listed above without human assistance. The concepts of "interpretability" and "sensibility" are defined in terms of human reaction. Even humans often fail to have Socratic wisdom, as Socrates himself pointed out. However, intelligent people meet all these criteria to some degree and current machine learning systems generally do not.

More specifically, we propose systems in which humans and AI systems cooperate in training the combined human+AI system. We call this methodology "Human-Assisted Training of Artificial Intelligence," or HAT-AI for short. One minimal form of such cooperation is sketched below.
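
The source does not prescribe a specific HAT-AI protocol, so the following sketch shows just one plausible minimal form, continuing the PyTorch example above: during training, examples that the judgment node flags as unreliable are routed to a human assistant, who may correct their labels before the gradient step. The function human_assisted_epoch and the ask_human callback are hypothetical placeholders for whatever interface the human side of the team actually uses.

    # Illustrative sketch only: a human assists mid-training on examples
    # the judgment node flags as unreliable. Reuses JudgmentNet and
    # training_step from the sketch above.
    import torch

    def human_assisted_epoch(model, loader, opt, ask_human, threshold=0.5):
        # ask_human(example, predicted_label) -> corrected_label is a
        # hypothetical callback standing in for the human teammate.
        for x, y in loader:
            with torch.no_grad():
                logits, p_wrong = model(x)
            preds = logits.argmax(dim=1)
            flagged = p_wrong.squeeze(1) > threshold  # "I may be wrong here"
            for i in torch.nonzero(flagged).flatten():
                # The human reviews each flagged example and may correct
                # its label before the gradient step uses it.
                y[i] = ask_human(x[i], int(preds[i]))
            training_step(model, x, y, opt)

The design choice worth noting is that the judgment node itself decides when to ask for help, so human effort concentrates on the cases the machine knows it doesn't know, rather than on a random sample of the data.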

From another perspective, HAT-AI represents a radical change in the long-term direction of AI. Rather than envisioning a future in which autonomous AI systems first achieve Artificial General Intelligence and then eventually achieve Artificial Super Intelligence, humans and machines should begin working cooperatively immediately.

A cooperative human+AI system changes the goal for artificial general intelligence. At a minimum, a cooperative human+AI system should match the performance of either a human working alone or an artificial system working alone. In other words, a fully functioning cooperative human+AI system should have general intelligence, since a human working alone has general intelligence. This point of view of cooperative intelligent systems gives a very different, less dystopian, vision of future systems with general and then super intelligence.

Human-assisted training for artificial intelligence is just one, although perhaps the most controversial, of many proposals within the broader concept of "cooperative AI." Machine learning as part of a cooperative team of humans and computers is one of the methods Mitchell describes in The Future of Machine Learning. This website hopes to be a bridge between the present and that future. Other concepts in cooperative AI include ethical and moral considerations and many other issues, as well as many other tools and approaches in addition to human-assisted training.

Affiliations:

  1. D5AI LLC
  2. University of Central Florida
  3. Carnegie Mellon University
  4. University of Massachusetts
  5. Microsoft


© D5AI LLC, 2020


The text in this work is licensed under a Creative Commons Attribution 4.0 International License.

Some of the ideas presented here are covered by issued or pending patents. No license to such patents is created or implied by this publication or by any reference herein.