In the first part of this essay, we discussed the arguments in the weak vs. strong artificial intelligence (AI) debate. Although there are several definitions of both schools of thought, most are based on the ability of AI systems to think (strong AI) or to simulate thinking (weak AI). In the current market, I believe most people agree that weak AI is being materialized with the present generation of AI technologies. However, there are still many doubts about the potential of strong AI.
The skepticism about strong AI has sparked arguments ranging from classic mathematical theory, such as Gödel's Incompleteness Theorem, to the purely technical limitations of AI platforms. However, the main area of debate remains at the intersection of biology, neuroscience, and philosophy, and has to do with the consciousness of AI systems.
What is Consciousness?
There are many definitions of and debates about consciousness. Certainly enough to dissuade most sane people from pursuing the argument about its role in AI systems ;) Most definitions of consciousness involve self-awareness, or the ability of an entity to be aware of its own mental states. Yet, when it comes to AI, self-awareness and mental states are not clearly defined either, so we can quickly start going down a rabbit hole.
In order to be applicable to AI, a theory of consciousness needs to be more pragmatic and technical and less, let's say, philosophical. My favorite definition of consciousness that follows these principles comes from the renowned physicist Michio Kaku, professor of theoretical physics at the City College of New York and a co-founder of string field theory. A few years ago, Dr. Kaku presented what he called the "space-time theory of consciousness" to bring together the definitions of consciousness from fields such as biology and neuroscience. In his theory, Dr. Kaku defines consciousness as follows:
“Consciousness is the process of creating a model of the world using multiple feedback loops in various parameters (ex: temperature, space, time, and in relation to others), in order to accomplish a goal (ex: find mates, food, shelter).”
The space-time definition of consciousness is directly applicable to AI because it is based on the ability of the brain to create models of the world not only in space (like animals) but also in relation to time (backwards and forwards). From that perspective, Dr. Kaku defines human consciousness as “a form of consciousness that creates a model of the world and then simulates it in time, by evaluating the past to simulate the future.” In other words, human consciousness is directly related to our ability to plan for the future.
In addition to its core definition, the space-time theory of consciousness includes several types of consciousness:
— Level 0: Organisms such as plants, with limited mobility, that create a model of their space using a handful of parameters such as temperature.
— Level I: Organisms such as reptiles that are mobile and have a nervous system. These organisms use many additional parameters to form a model of their space.
— Level II: Organisms such as mammals that create models of the world based not only on space but also in relation to others.
— Level III: At this level we have human consciousness, which can create models of the world based not only on space and other humans but also on time and the future.
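The taxonomy above can be sketched as a toy program. The data structure and helper below are purely illustrative (they are not part of Dr. Kaku's work): they treat a "world model" as a set of feedback-loop capabilities and map it to a level in the space-time taxonomy.

```python
from dataclasses import dataclass

@dataclass
class WorldModel:
    """Toy description of the feedback loops an agent uses to model the world."""
    spatial_parameters: int   # e.g. temperature, light, position
    models_others: bool       # does the model account for other agents?
    models_time: bool         # can it simulate past and future states?

def consciousness_level(model: WorldModel) -> int:
    """Map a world model to a level (0-3) in the space-time taxonomy."""
    if model.models_time:
        return 3  # Level III: humans, simulating the future
    if model.models_others:
        return 2  # Level II: mammals, modeling space and other agents
    if model.spatial_parameters > 1:
        return 1  # Level I: reptiles, many spatial parameters
    return 0      # Level 0: plants, a handful of parameters

# A hypothetical current-generation AI agent: rich spatial model,
# some modeling of other agents, but no simulation of the future.
agent = WorldModel(spatial_parameters=50, models_others=True, models_time=False)
print(consciousness_level(agent))  # → 2
```

Note how the ordering of the checks encodes the hierarchy: time-awareness subsumes social modeling, which subsumes spatial modeling.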
Are AI Systems Conscious?
The short, and maybe surprising, answer is YES. Applying Dr. Kaku's space-time theory of consciousness to AI systems, it is obvious that AI agents can exhibit some basic forms of consciousness. Factoring in the capabilities of the current generation of AI technologies, I would place the consciousness of AI agents at Level I (reptiles) or basic Level II.
We will elaborate on this argument in a future post…