With all the technological hype about artificial intelligence (AI), I find it sometimes healthy to go back to its philosophical roots. Of all the philosophical debates surrounding AI, none is more important than the weak vs. strong AI problem.
In AI theory, weak AI is often associated with the ability of systems to appear intelligent, while strong AI is linked to the ability of machines to think. By thinking I mean really thinking and not just simulated thinking. This dilemma is often referred to as the “Strong AI Hypothesis”.
Weak AI is a Given
In a world exploding with digital assistants and algorithms beating Go and Poker champions, the question of whether machines can act intelligently seems silly. In constrained environments (e.g., medical research, Go, travel) we have been able to build plenty of AI systems that can act as if they were intelligent. Therefore, most experts agree that weak AI is definitely possible, but many also share tremendous skepticism when it comes to strong AI.
Can Machines Think?
This question has haunted computer scientists and philosophers since the publication of Alan Turing’s famous paper “Computing Machinery and Intelligence” in 1950. The question also seems a bit unfair when most scientists can’t even agree on a formal definition of thinking.
To illustrate the confusion around the strong AI hypothesis, we can borrow some humor from the well-known computer scientist Edsger Dijkstra, who in a 1984 paper compared the question of whether machines can think with questions such as “can submarines swim?” or “can airplanes fly?”. While those questions seem similar, most English speakers will agree that airplanes can, in fact, fly but submarines can’t swim. Why is that? I’ll leave that debate to you and the dictionary ;) The meta-point of this comparison is that without a universal definition of thinking, it seems irrelevant to obsess about whether machines can think.
Gödel’s Incompleteness Theorem
One of the best-known objections to the strong AI hypothesis comes from mathematics. In 1931, mathematician Kurt Gödel demonstrated that deduction has its limits by proving his famous incompleteness theorem. Gödel’s theorem states that in any consistent formal theory strong enough to express arithmetic (such as the logical systems underlying AI), there are true statements that have no proof within that theory.
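For readers who want the precise claim, the first incompleteness theorem can be sketched in a standard modern formulation (this is a textbook paraphrase, not Gödel’s original 1931 wording):

```latex
% First Incompleteness Theorem (standard modern formulation):
% For any consistent, effectively axiomatized formal theory T
% that can express elementary arithmetic, there exists a
% sentence G_T (the "Gödel sentence" of T) such that:
T \nvdash G_T \quad \text{and} \quad T \nvdash \neg G_T
```

In words: the theory T can neither prove nor refute its own Gödel sentence, even though G_T is true in the standard model of arithmetic. It is this gap between truth and provability that the anti-strong-AI argument below tries to exploit.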
The incompleteness theorem has long been used as an objection to strong AI. Proponents of this objection argue that strong AI agents won’t be able to really think because they are limited by the incompleteness theorem, while human thinking is clearly not. That argument has sparked a lot of controversy and has been rejected by many strong AI practitioners. The most common counterargument from the strong AI school is that it is impossible to determine whether human thinking is subject to Gödel’s theorem, because any proof would require formalizing human knowledge, which we know to be impossible.
The Consciousness Argument
My favorite argument in the strong AI debate is about consciousness. Can machines really think, or do they just simulate thinking? If machines are able to think in the future, that means they will need to be conscious (meaning aware of their state and actions), as consciousness is the cornerstone of human thinking. Can machines really be conscious? More about that in a future post….