If you follow artificial intelligence (AI) news (or any tech news, for that matter), you have probably heard about the silly debate between Elon Musk and Mark Zuckerberg about the potential catastrophic effects of unregulated AI.
If two tweets can be considered a debate, then this one started with a tweet from the Facebook CEO about Musk’s recent remarks at the National Governors Association, in which Musk referred to AI as the biggest existential threat faced by humanity. In his tweet, Zuckerberg said that comments like Musk’s seemed irresponsible, to which Musk replied that Zuckerberg seemed to have a “limited understanding” of the space.
Needless to say, the simple exchange has been blown out of proportion by the press. These days, nothing feeds sensational tech blogging/journalism like apocalyptic visions of AI taking over the world. Typically, I prefer to stay away from those types of debates, but in this case I thought I’d present a not-very-well-known AI theory that helps explain why both Zuckerberg and Musk are right about their positions.
The theory I am referring to comes from British philosopher and AI thought leader Nick Bostrom, who currently serves as Director of the AI Research Centre at Oxford University. Bostrom is famous within AI circles for his theory of super-intelligence, which attempts to quantify the probability of machine intelligence evolving into forms of intelligence vastly superior to humans’. I am not planning to review Bostrom’s entire theory in this post, but there are a couple of points that I believe are very relevant to the Musk-Zuckerberg debate.
In his theory, Bostrom presents several paths to achieving what he calls super-intelligence (intelligence that is superior to the collective knowledge of humanity). AI is certainly one path to super-intelligence, Bostrom argues, but not the only one. Brain emulation, computer-brain interfaces (like Musk’s Neuralink), and biological cognition are other equally viable routes to super-intelligence. Across all those vehicles, Bostrom divides the path to super-intelligence into three main stages: human-intelligence, mankind-intelligence, and super-intelligence.
The human-intelligence phase represents the period in which machines achieve human-equivalent intelligence. That process is likely to be very slow and full of challenges, and it is the phase we are currently experiencing. More importantly, there is no certainty that machines will ever achieve human-like intelligence. From that perspective, Zuckerberg’s camp is right.
The second part of Bostrom’s theory holds that if/when machines achieve human-intelligence, only a short time should pass before they surpass the collective intelligence of humanity. After that level is reached, machines will be in a position to achieve strong super-intelligence: a level of intelligence vastly superior to the combined intelligence of mankind. Here is the key part: Bostrom believes the transition from mankind-intelligence to super-intelligence could happen either slowly or very fast.
A slow takeoff toward super-intelligence would give governments time to properly monitor and regulate machine intelligence. That option might not exist, however, if the transition happens in a matter of minutes. A fast takeoff scenario could be disastrous for humanity, as it would prevent humans, or other forms of machine intelligence, from responding accordingly. The fast takeoff scenario aligns better with Musk’s view of the world.
Nick Bostrom’s theory of super-intelligence is absolutely brilliant but also controversial. As a result, Bostrom has both admirers and detractors within the AI community. However, the multi-phase approach to super-intelligence presented in his theory provides a simple way to settle the Musk-Zuckerberg debate. I plan to write more about the super-intelligence theory in the next few days.