What’s Missing to Make AI Conversations More Real

Last week, I was reading about Semantic Machines, a Berkeley-based startup that is building a new conversational platform to make AI-powered voice and text interactions “more human”. Specifically, Semantic Machines is building technologies that resemble human memory in order to improve AI-driven conversations.

Memory is one of the cognitive faculties at the center of human intelligence and one that is notoriously missing from conversational AI platforms. Despite their unquestionable sophistication, natural language processing (NLP) platforms such as Facebook’s Wit.ai, Google’s API.ai, or Microsoft’s LUIS operate in a very similar way: they process natural language sentences, detect entities and intents, and execute relevant actions. You don’t need to be a neuroscientist to realize that there are many aspects of human conversations that don’t quite match that model. Looking at the work developed by Semantic Machines, I started thinking about other cognitive aspects of human dialogs that could enhance conversational platforms.
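To make the detect-and-execute model concrete, here is a minimal sketch of that pipeline. The intents, phrase lists, entity pattern, and stub actions below are invented for illustration; they are not the API of Wit.ai, API.ai, or LUIS.

```python
# Toy intent/entity pipeline: detect an intent, extract entities, run an action.
# All intents, patterns, and actions here are hypothetical examples.
import re

INTENTS = {
    "book_flight": ["book a flight", "fly to"],
    "check_weather": ["weather in", "is it raining"],
}

# Naive entity extractor: a capitalized word after "to" or "in" is a city.
CITY_PATTERN = re.compile(r"\b(?:to|in)\s+([A-Z][a-z]+)")

def parse(sentence):
    """Detect an intent by keyword match and extract a city entity."""
    intent = next(
        (name for name, phrases in INTENTS.items()
         if any(p in sentence.lower() for p in phrases)),
        None,
    )
    match = CITY_PATTERN.search(sentence)
    entities = {"city": match.group(1)} if match else {}
    return intent, entities

def execute(intent, entities):
    """Dispatch the detected intent to a (stub) action."""
    if intent == "book_flight":
        return f"Booking a flight to {entities.get('city', '?')}"
    if intent == "check_weather":
        return f"Fetching weather for {entities.get('city', '?')}"
    return "Sorry, I didn't understand that."
```

Notice that each sentence is handled in isolation: nothing in this loop remembers past turns, judges, or uses context, which is exactly the gap the sections below explore.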

1 — Memory

Let’s start with the cognitive element at the center of Semantic Machines’ work: memory. The brain’s neocortex continuously uses memory to respond to cognitive inputs. That’s how we are able to recognize words in a text without analyzing them character by character, or instinctively react to specific phrases by recalling an analogy from a past experience. In the context of conversational platforms, memory can be used to enrich and personalize user-system interactions beyond recognizing the intent of a phrase.
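A hypothetical sketch of what that could look like: a per-user memory that persists facts across turns, so a later reply can draw on an earlier one. The class, method names, and phrase matching below are invented, not any platform’s interface.

```python
# Toy conversation memory: facts persist across turns and shape replies.
# Names and structure here are assumptions for illustration only.
class ConversationMemory:
    def __init__(self):
        self.facts = {}      # long-term facts about the user
        self.history = []    # past utterances, e.g. for analogy lookups

    def remember(self, key, value):
        self.facts[key] = value

    def recall(self, key, default=None):
        return self.facts.get(key, default)

    def log(self, utterance):
        self.history.append(utterance)

def respond(memory, utterance):
    """Answer using remembered facts instead of treating each turn in isolation."""
    memory.log(utterance)
    if "my name is" in utterance.lower():
        name = utterance.rsplit(" ", 1)[-1].strip(".")
        memory.remember("name", name)
        return f"Nice to meet you, {name}!"
    if "what is my name" in utterance.lower():
        name = memory.recall("name")
        return f"You told me your name is {name}." if name else "I don't know yet."
    return "Tell me more."
```

The second turn only works because the first one left a trace, which is precisely what a stateless intent detector cannot do.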

2 — Judgment

Judgment is a fundamental aspect of human intelligence and, consequently, of human conversations. From basic stereotypes to new opinions built on pre-existing biases or prejudices, judgments are an omnipresent element in our daily dialogs. Obviously, we should not expect AI conversations to become judgmental by default, but understanding the implicit judgments in sentences can help improve the dialogs between humans and systems.
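As a toy illustration of “understanding implicit judgments”, a system could at least flag evaluative language in a sentence. The hand-picked lexicon below is invented; a real system would need far richer semantics.

```python
# Toy judgment detector: flag evaluative words via a hand-picked lexicon.
# The lexicon is an invented example, not a real resource.
JUDGMENT_WORDS = {"lazy", "brilliant", "terrible", "obviously", "typical"}

def implicit_judgments(sentence):
    """Return evaluative words found in the sentence, a rough
    proxy for detecting judgmental content."""
    tokens = {word.strip(".,!?").lower() for word in sentence.split()}
    return sorted(tokens & JUDGMENT_WORDS)
```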

3 — Metaphors

Metaphors are one of the hardest cognitive phenomena to explain, but one that plays a key role in human conversations. A metaphor is, essentially, a link between a concept and another concept from an unrelated domain. Developing cross-domain contextual knowledge will be key to creating and identifying metaphors in AI-NLP systems.
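One way to picture “cross-domain linking” is a tiny concept graph where a link counts as metaphorical when it crosses domain boundaries. The domains and links below are made-up examples (“time is money”, “argument is war”), not a real knowledge base.

```python
# Toy model of metaphor as a cross-domain link between concepts.
# The concept-to-domain map and the links are invented examples.
CONCEPT_DOMAINS = {
    "time": "abstract",
    "money": "economics",
    "argument": "discourse",
    "war": "conflict",
}

METAPHOR_LINKS = {
    ("time", "money"),      # "time is money"
    ("argument", "war"),    # "he attacked my position"
}

def is_metaphorical(a, b):
    """A known link is metaphorical when the two concepts
    live in different domains."""
    cross_domain = CONCEPT_DOMAINS[a] != CONCEPT_DOMAINS[b]
    linked = (a, b) in METAPHOR_LINKS or (b, a) in METAPHOR_LINKS
    return cross_domain and linked
```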

4 — Context

Conversational AI platforms are actively improving their understanding and usage of contextual information, but they still have a long way to go before they resemble human reasoning. Contextual elements are always present in human conversations: location, time, social setting, and previous conversations are just some of the hundreds of contextual data points surrounding any human conversation. Paradoxically, the usage of contextual information might be one of the easiest aspects to improve within the existing generation of conversational platforms.
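A sketch of why this may be low-hanging fruit: two contextual signals, time and location, are enough to disambiguate a vague request. The context fields and logic below are illustrative assumptions.

```python
# Toy context-aware interpreter: time and location disambiguate
# "find me a place to eat". The context shape is an invented example.
from datetime import datetime

def interpret(utterance, context):
    """Resolve an ambiguous food request using time of day and location."""
    if "eat" not in utterance.lower():
        return "Sorry, I can only help with food right now."
    hour = context["time"].hour
    meal = "breakfast" if hour < 11 else "lunch" if hour < 16 else "dinner"
    return f"Searching for {meal} spots near {context['location']}"

ctx = {"time": datetime(2017, 5, 12, 19, 30), "location": "Berkeley"}
```

The same utterance yields a different, more useful answer in the morning than in the evening, with no change to the language understanding itself.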

5 — Conversational Learning

Supervised training is the main mechanism we use to make conversational bots smarter. However, AI training techniques fall short of replicating the human learning process. As it turns out, we are very good at acquiring knowledge during conversational interactions. Building conversational applications that can acquire new knowledge from the exchanges in a dialog and apply it in new conversations would be a drastic improvement over the current generation of conversational applications.
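The idea can be sketched as a bot that learns facts mid-dialog and answers with them later, instead of relying only on offline supervised training. The “X is Y” teaching pattern and class below are invented for illustration.

```python
# Toy conversational learner: facts taught in one turn are
# applied in later turns. The phrasing rules are invented examples.
class LearningBot:
    def __init__(self):
        self.knowledge = {}  # facts taught during the conversation

    def handle(self, utterance):
        # Teaching form: "X is Y" stores a new fact.
        if " is " in utterance and not utterance.lower().startswith("what"):
            subject, definition = utterance.split(" is ", 1)
            self.knowledge[subject.strip().lower()] = definition.strip(".")
            return f"Got it: {subject.strip()} is {definition.strip('.')}"
        # Question form: "What is X?" applies a learned fact.
        if utterance.lower().startswith("what is "):
            subject = utterance[len("what is "):].strip("?. ").lower()
            return self.knowledge.get(subject, "I haven't learned that yet.")
        return "Tell me something new."
```

No retraining step separates teaching from use: the knowledge acquired in one exchange is immediately available in the next, which is the gap the section describes.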

CEO of IntoTheBlock, Chief Scientist at Invector Labs, I write The Sequence Newsletter, Guest lecturer at Columbia University, Angel Investor, Author, Speaker.