Supervised and semi-supervised learning are the dominant types of learning models in modern artificial intelligence (AI) solutions. As a result, AI agents regularly inherit patterns of human thinking and cognitive decision-making that have little to do with statistical reasoning and more to do with subjective cognition. In the past, I've written extensively about bias as one of those subjective cognitive factors that can permeate the knowledge of AI systems. Today, I would like to cover another essential element of human reasoning that can have an impact on the knowledge acquired by AI solutions: heuristics.
Conceptually, heuristics are a cognitive mechanism that helps us find quick but imperfect answers to complex questions. Renowned psychologist and Nobel Prize winner Daniel Kahneman and his long-time collaborator Amos Tversky shocked the world in the 1970s when they showed that heuristics often substitute for statistical reasoning when it comes to making decisions. Gerd Gigerenzer is another leading psychologist who has delved into the science behind heuristics. Gigerenzer's book Simple Heuristics That Make Us Smart is one of the bibles of this topic.
Cognitive psychology recognizes many types of heuristics that are part of human knowledge and decision-making processes. A classic example is the Substitution Heuristic, which causes a person to substitute a complex question with a simpler one in order to provide an initial answer. For example, an oncologist examining a cancer patient and trying to answer a question such as "How likely is this tumor to metastasize?" may come up with a probability by instead answering the heuristic question "How sick does the patient look today?" The heuristic answer is not based on a detailed evaluation of the patient's symptoms but rather on the subjective impression of the physician.
Another example is what is known in psychology as the recognition heuristic, which states that we tend to favor a hypothesis about a subject simply because we recognize it. For instance, a subject who is asked which of two (relatively big) cities is larger, and who recognizes only one of them, will typically guess that the recognized city is the larger one.
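The recognition heuristic reduces to a very simple decision rule, which we can sketch in a few lines of Python. The set of recognized cities below is a hypothetical stand-in for a subject's memory; real experiments elicit this from survey data.

```python
import random

# Hypothetical recognition data for one subject (an assumption for
# illustration; real studies collect this from participants).
recognized = {"Berlin", "Munich", "Hamburg"}

def recognition_heuristic(city_a: str, city_b: str) -> str:
    """Guess which city is larger using recognition alone.

    If exactly one city is recognized, pick it; if both or neither
    are recognized, recognition carries no signal, so guess at random.
    """
    a_known = city_a in recognized
    b_known = city_b in recognized
    if a_known and not b_known:
        return city_a
    if b_known and not a_known:
        return city_b
    return random.choice([city_a, city_b])

print(recognition_heuristic("Munich", "Bielefeld"))  # → Munich
```

What makes this rule interesting, as Gigerenzer's work shows, is that it can perform surprisingly well in environments where recognition correlates with the quantity being judged, despite ignoring every other piece of evidence.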
One of my favorite heuristics is what is known in economics as the Law of Small Numbers, which describes how many experimental conclusions are drawn without sampling a large enough dataset. In order to validate an experiment, experts often choose samples that are simply too small to support a meaningful conclusion, but that doesn't seem to stop them from asserting the results as valid.
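A quick simulation makes the danger concrete. Here we flip a fair coin (true rate 0.5) and count how often a sample would show an extreme result of 70% or more heads, an effect an experimenter might mistake for a real finding; the sample sizes and threshold are illustrative choices, not from any particular study.

```python
import random

random.seed(0)  # fixed seed so the simulation is reproducible

def misleading_rate(sample_size: int, trials: int = 10_000) -> float:
    """Fraction of experiments on a fair coin (true rate 0.5) that
    show an extreme result of >= 70% heads by chance alone."""
    misleading = 0
    for _ in range(trials):
        heads = sum(random.random() < 0.5 for _ in range(sample_size))
        if heads / sample_size >= 0.7:
            misleading += 1
    return misleading / trials

print(misleading_rate(10))                # small sample: extreme results are common
print(misleading_rate(1000, trials=1000)) # large sample: extreme results essentially vanish
```

With 10 flips, a seemingly striking 70%-heads outcome shows up in a sizable fraction of runs purely by chance; with 1,000 flips it practically never does. That gap is exactly what the Law of Small Numbers says our intuition underestimates.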
There are many other well-known heuristics in the cognitive psychology space, such as Confidence-Doubt or Cause-Chance. Over the years, heuristics have stamped their fingerprints all over human knowledge and data. Now, that same heuristic-based knowledge is becoming the source of information we use to train AI agents. As a result, many AI systems reflect cognitive patterns that are highly illogical and have no grounding in statistical reasoning.
Addressing heuristic-based thinking in AI agents is far from trivial. For starters, the experts in charge of training AI systems are, after all, humans and therefore vulnerable to heuristic thinking themselves, which makes for an ironic vicious circle. However, the good news is that most signs of heuristics can be identified by running statistical or unsupervised learning models that inspect the datasets for heuristic patterns. We should also factor in that there are many scenarios in which heuristics can have a positive impact on AI systems. That will be the subject of a future post…
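As a minimal sketch of what such a dataset inspection could look like, the snippet below flags feature columns whose correlation with the labels is suspiciously high, one possible signature of a substitution-style shortcut where annotators answered an easier proxy question (like "does the patient look sick?") instead of the real one. The feature names, the toy data, and the 0.9 threshold are all illustrative assumptions, not a standard method.

```python
import statistics

def correlation(xs, ys):
    # Pearson correlation between two equal-length numeric sequences.
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def flag_shortcut_features(features, labels, threshold=0.9):
    """Flag feature columns that track the labels almost perfectly --
    a possible sign that the labels encode a heuristic proxy judgment
    rather than the underlying quantity of interest."""
    return [name for name, column in features.items()
            if abs(correlation(column, labels)) >= threshold]

# Toy dataset (hypothetical): "looks_sick" mirrors the labels exactly,
# hinting that annotators may have substituted an easier question.
features = {
    "looks_sick":  [1, 1, 0, 1, 0, 0, 1, 0],
    "biomarker_a": [3, 9, 4, 7, 2, 6, 5, 1],
}
labels = [1, 1, 0, 1, 0, 0, 1, 0]
print(flag_shortcut_features(features, labels))  # → ['looks_sick']
```

A flagged feature is not proof of a heuristic at work, only a prompt for a human review of how those labels were produced.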