The Black Swan in Artificial Intelligence Part II

Last week, I published the first part of this article, which focuses on the implications of random events for artificial intelligence (AI) models. Specifically, we focused on mega-random events known as Black Swans, which share three key characteristics:

1 — They are random, outlier phenomena from the perspective of regular knowledge.

2 — They carry a big impact.

3 — Despite their randomness, we try to find retrospective explanations for them or even to predict similar events in the future.

The first part of this essay focused on the relevance and impact of Black Swans on AI systems. This part presents some ideas and best practices that can help AI agents to perform in a Black Swan ecosystem.

Positive vs. Negative Black Swans

From the perspective of AI systems, a Black Swan is a random event that falls outside the realm of knowledge of a trained AI agent. However, that definition doesn’t necessarily entail that Black Swans will have a negative impact on AI systems. Random events that unveil the discovery of a new drug or a new profitable trading strategy can have massively positive consequences. In an ideal scenario, AI agents should maximize their resiliency to negative Black Swans while also optimizing their exposure to positive Black Swans. Sounds simple, doesn’t it? ;)

It’s Not All About Resiliency

Resiliency or robustness should be an architectural principle of any software system, not just AI agents. However, in the case of AI, we are not only referring to resiliency against infrastructure or software errors but also against new knowledge and behavior that challenges the universe of an AI agent. From that standpoint, AI agents need to be able to contain negative behavior influenced by unexpected forms of knowledge created by Black Swans.

Resiliency is not the only aspect to consider when designing AI agents for Black Swan environments. Absorbing new forms of knowledge and spotting positive side effects of unexpected events is also essential in order to benefit from positive Black Swans. However, how does that translate into specific AI techniques? Let’s explore a few ideas that might help when designing AI systems for environments prone to Black Swans.

AI Ideas to Survive Black Swans

1 — Induce Knowledge Chaos

I like competitive neural networks because they automatically introduce friction into an AI system. Using AI techniques such as competitive neural networks to challenge the knowledge of AI agents can automatically build resilient behaviors into the system, making it more likely to survive negative Black Swans.
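As a rough illustration of what that friction could look like in code, the sketch below pairs an agent with a "challenger" network that learns bounded perturbations designed to confuse the agent, while the agent trains to stay accurate on both clean and challenged inputs. Everything here (the choice of PyTorch, the network shapes, the synthetic data, and the hyperparameters) is an illustrative assumption, not a prescription.

```python
# A minimal sketch of "knowledge chaos" via a competitive pair of
# networks. Shapes, data, and hyperparameters are purely illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)

# The agent: a small classifier whose knowledge we want to stress-test.
agent = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))

# The challenger: maps clean inputs to bounded perturbations that try
# to break the agent's current knowledge (Tanh keeps outputs in (-1, 1)).
challenger = nn.Sequential(
    nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 8), nn.Tanh()
)

agent_opt = torch.optim.Adam(agent.parameters(), lr=1e-3)
chal_opt = torch.optim.Adam(challenger.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(256, 8)            # stand-in training data
y = (x.sum(dim=1) > 0).long()      # stand-in labels

for step in range(500):
    # Challenger turn: craft perturbed inputs that maximize agent loss
    # (minimizing the negative loss is the same as maximizing it).
    x_adv = x + 0.5 * challenger(x)
    chal_loss = -loss_fn(agent(x_adv), y)
    chal_opt.zero_grad()
    chal_loss.backward()
    chal_opt.step()

    # Agent turn: stay accurate on both clean and challenged inputs,
    # building resilience to inputs outside its original experience.
    x_adv = x + 0.5 * challenger(x).detach()
    agent_loss = loss_fn(agent(x), y) + loss_fn(agent(x_adv), y)
    agent_opt.zero_grad()
    agent_loss.backward()
    agent_opt.step()
```

The key design choice is the alternation: the challenger only ever sees the agent's current weaknesses, so the friction it introduces keeps evolving as the agent hardens.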

2 — Leverage Continuous Unsupervised or Semi-Supervised Analysis of New Knowledge

To be exposed to positive Black Swans, AI agents should be able to acquire new forms of knowledge and identify its positive effects on areas of the environment. In many supervised models, that exposure to new knowledge is constrained to new training data which, by definition, does not factor in Black Swans. To address that limitation, AI agents could leverage unsupervised or semi-supervised models that continuously evaluate new knowledge and identify positive effects.
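One plausible way to implement that continuous evaluation is to run an unsupervised novelty detector over every incoming batch and route out-of-regime observations to a separate review or retraining path instead of scoring them blindly. The sketch below assumes scikit-learn's IsolationForest as the detector; the data stream and regimes are invented for illustration.

```python
# A minimal sketch of continuous unsupervised screening of new data.
# IsolationForest is one of many possible novelty detectors.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Knowledge the agent was trained on: one regime of observations.
historical = rng.normal(loc=0.0, scale=1.0, size=(1000, 4))
detector = IsolationForest(random_state=0).fit(historical)

def screen_new_batch(batch):
    """Flag observations that fall outside the agent's known regime.

    Flagged points are candidates for new knowledge: they should be
    routed to retraining or review rather than silently scored by
    the supervised model.
    """
    scores = detector.decision_function(batch)  # lower = more anomalous
    flags = detector.predict(batch)             # -1 = outlier, 1 = inlier
    return batch[flags == -1], scores

# Simulate a stream where a new regime (a potential Black Swan) appears.
normal_batch = rng.normal(0.0, 1.0, size=(50, 4))
shifted_batch = rng.normal(4.0, 1.0, size=(50, 4))  # unseen regime
for name, batch in [("normal", normal_batch), ("shifted", shifted_batch)]:
    outliers, _ = screen_new_batch(batch)
    print(f"{name}: {len(outliers)} / {len(batch)} outside known regime")
```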

3 — Build Knowledge by Subtraction

Most knowledge and hypotheses built into AI systems are communicated via fixed training datasets. However, remember that, in Black Swan environments, what you don’t know is often as important as what you know. Knowledge by subtraction is a cognitive technique that allows us to form new knowledge by negating some of our current hypotheses. For instance, suppose we train an AI agent on the hypothesis that bonds are an effective hedge for stocks (bonds tend to go up when stocks go down and vice versa). Our AI agent will start trading under market conditions that support our hypothesis but will eventually encounter one of the many events in financial markets that cause both stocks and bonds to go up or down simultaneously. By encountering that single event, our AI agent should be able to build new knowledge that negates its original hypothesis. That process is knowledge by subtraction.

In cognitive theory, knowledge by subtraction is one of the most effective forms of knowledge. While our AI agent might need infinite observations to validate the original hypothesis that bonds can hedge stocks, a single observation is sufficient to disprove it. By being exposed to knowledge by subtraction, AI agents can become more resilient to negative Black Swans and more likely to benefit from positive ones.
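The falsification logic itself is straightforward to encode. The sketch below models the bonds-hedge-stocks hypothesis from the example above; the return figures and the materiality threshold are invented for illustration.

```python
# A minimal sketch of knowledge by subtraction using the
# bonds-hedge-stocks example. All numbers are illustrative.

class HedgeHypothesis:
    """Hypothesis: bond returns move opposite to stock returns.

    Confirming observations can never prove it; one material
    counterexample is enough to negate it (subtraction).
    """

    def __init__(self):
        self.holds = True
        self.counterexamples = []

    def observe(self, stock_return, bond_return, threshold=0.01):
        # Both markets moving materially in the same direction
        # contradicts the hedge hypothesis.
        same_direction = stock_return * bond_return > 0
        material = (abs(stock_return) > threshold
                    and abs(bond_return) > threshold)
        if same_direction and material:
            self.holds = False
            self.counterexamples.append((stock_return, bond_return))

hypothesis = HedgeHypothesis()

# Many confirming days cannot validate the hypothesis...
for stock, bond in [(-0.02, 0.01), (0.015, -0.005), (-0.01, 0.008)]:
    hypothesis.observe(stock, bond)
print(hypothesis.holds)  # True, but only provisionally

# ...while a single joint sell-off negates it and creates new knowledge.
hypothesis.observe(-0.04, -0.03)
print(hypothesis.holds, hypothesis.counterexamples)  # False [(-0.04, -0.03)]
```

Note the asymmetry the code makes explicit: the confirming loop can only leave the hypothesis provisionally true, while one material counterexample flips it permanently.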
