Gödel, Consciousness and the Weak vs. Strong AI Debate
Some of the biggest debates in AI find their roots in mathematical and evolutionary theories.
With all the technological hype about artificial intelligence (AI), I find it sometimes healthy to go back to its philosophical roots. Of all the philosophical debates surrounding AI, none is more important than the weak vs. strong AI problem.
In AI theory, weak AI is often associated with the ability of systems to appear intelligent, while strong AI is linked to the ability of machines to think. By thinking I mean really thinking, not just simulated thinking. This dilemma is often referred to as the “Strong AI Hypothesis”.
Weak AI is a Given
In a world exploding with digital assistants and algorithms beating GO world champions and Dota 2 teams, the question of whether machines can act intelligently seems silly. In constrained environments (ex: medical research, GO, travel) we have been able to build plenty of AI systems that act as if they were intelligent. While most experts agree that weak AI is definitely possible, there is still tremendous skepticism when it comes to strong AI.
Can Machines Think?
This question has haunted computer scientists and philosophers since the publication of Alan Turing’s famous paper “Computing Machinery and Intelligence” in 1950. The question also seems a bit unfair when most scientists can’t even agree on a formal definition of thinking.
To illustrate the confusion around the strong AI hypothesis, we can borrow some humor from the well-known computer scientist Edsger Dijkstra, who in a 1984 paper compared the question of whether machines can think with questions such as “can submarines swim?” or “can airplanes fly?”. While those questions seem similar, most English speakers will agree that airplanes can, in fact, fly but submarines can’t swim. Why is that? I’ll leave that debate to you and the dictionary ;) The meta-point of this comparison is that without a universal definition of thinking…