The Black Swan is one of the most fascinating problems in modern cognitive theory. The term was coined by one of my favorite authors, Nassim Nicholas Taleb, in his best-selling book of the same title. The rapid emergence of artificial intelligence (AI) has drastically increased the relevance of the Black Swan problem in the context of AI systems.
In a nutshell, Black Swans are random, unexpected events that carry a disproportionate impact. Examples are all around us: the September 11, 2001 attacks, the publication of a mega-best-seller by an unknown author, the emergence of Facebook, and the demise of Long-Term Capital Management are some notorious Black Swans. In his book, Taleb uses three basic axioms to describe a Black Swan:
1 — Black Swans are outliers relative to regular expectations.
2 — Black Swans carry an extreme impact.
3 — After a Black Swan takes place, we rationalize an explanation for it and even venture to plan for and predict its next occurrence.
From Taleb’s perspective, human history can almost be explained as a sequence of a relatively small number of Black Swans. From an environmental perspective, a Black Swan can yield positive (Facebook) or negative (9/11) results, but it always carries an extreme change in the environment around it.
Black Swans and Artificial Intelligence
As a cognitive phenomenon, Black Swans are extremely important in the design and evolution of AI systems. AI is based on human knowledge, and no events affect and evolve human knowledge the way Black Swans do.
To understand the role of Black Swans in AI, I’ve divided this essay into two main parts: this post covers some of the important characteristics of AI Black Swans, while the following post will explore some potential best practices for designing AI systems that can coexist with Black Swan environments.
If we go back to the three key characteristics of Black Swans: rarity, extreme impact, and retrospective predictability, it is easy to spot some concepts that are relevant to AI systems.
Black Swans, by definition, are based on and create uncertainty. The current generation of AI techniques is fundamentally based on the certainty of knowledge. The human brain has a remarkable ability to adapt to uncertainty, but that capability hasn’t yet materialized in AI models. How can we best design AI systems that handle uncertainty? A good starting point is to always assume that the rules we design for the knowledge of an AI agent are never complete, and that the knowledge we don’t possess at the time is just as important as the knowledge we have. In other words, what we don’t know matters as much as what we know.
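One simple way to make this idea concrete is a model that is allowed to abstain when its predictive distribution is too uncertain to commit to an answer. The sketch below is illustrative, not a reference implementation: the entropy threshold and the `classify_or_abstain` function are my own assumptions, not something from Taleb or from any particular AI framework.

```python
import math

def entropy(probs):
    """Shannon entropy of a probability distribution, in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def classify_or_abstain(probs, max_entropy=0.5):
    """Return the index of the most likely class, or None when the
    distribution is too uncertain -- an explicit 'I don't know'."""
    if entropy(probs) > max_entropy:
        return None  # abstain: what we don't know matters too
    return max(range(len(probs)), key=lambda i: probs[i])

# A confident prediction commits; a near-uniform one abstains.
print(classify_or_abstain([0.97, 0.02, 0.01]))  # -> 0
print(classify_or_abstain([0.40, 0.35, 0.25]))  # -> None
```

The design choice here is that "no answer" is a first-class output: downstream logic can route abstentions to a human or to a fallback policy instead of acting on a guess.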
Black Swans raise the importance of knowledge models in AI systems to another level. By definition, AI agents will not be able to predict Black Swans, but we can do a better job of building knowledge models that account for extreme and relatively unknown circumstances. Many cognitive experts refer to this as antiknowledge.
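A minimal way to "account for extreme and relatively unknown circumstances" is to check whether an input even resembles anything the model has seen before. The one-dimensional z-score guard below is a hedged sketch of that idea; the class name `NoveltyGuard` and the 3-sigma threshold are illustrative assumptions of mine, not an established API.

```python
import statistics

class NoveltyGuard:
    """Flags inputs far outside the training data, so the system can
    treat them as potential unknown unknowns instead of forcing a
    confident prediction."""

    def __init__(self, training_values, max_z=3.0):
        self.mean = statistics.mean(training_values)
        self.stdev = statistics.stdev(training_values)
        self.max_z = max_z

    def is_novel(self, x):
        z = abs(x - self.mean) / self.stdev
        return z > self.max_z

guard = NoveltyGuard([9.8, 10.1, 10.0, 9.9, 10.2])
print(guard.is_novel(10.05))  # typical input -> False
print(guard.is_novel(42.0))   # extreme outlier -> True
```

Real systems would use richer outlier or out-of-distribution detectors, but the principle is the same: the model carries an explicit representation of the boundary of its own knowledge.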
Unsupervised Learning and Unlearning
In the context of AI, Black Swans confirm the importance of unsupervised learning models. Even though supervised techniques are prevalent today, the future of AI relies on unsupervised models. In a Black Swan world, we should focus not only on AI models that are more resilient to the phenomenon but also on models that can learn from, and even benefit from, positive Black Swans. More about that in a future post…
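To make the contrast with supervised learning concrete, here is a tiny unsupervised example: a one-dimensional k-means that discovers groups in unlabeled data. This is a toy sketch under my own assumptions (fixed iterations, a hypothetical `k_means` helper), not a production clustering routine; its point is only that the structure is discovered rather than given by labels.

```python
import random

def k_means(points, k=2, iters=20, seed=0):
    """Tiny 1-D k-means: partitions unlabeled points into k clusters
    with no supervision, returning the sorted cluster centers."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # Move each center to the mean of its cluster (keep it if empty).
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Two obvious groups emerge with no labels provided.
data = [1.0, 1.2, 0.8, 9.9, 10.1, 10.0]
print(k_means(data))  # -> [1.0, 10.0]
```

An unsupervised learner like this can at least notice when the data has reorganized itself, which is a prerequisite for adapting to, rather than merely surviving, a Black Swan.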