Elon Musk has been raising concerns about the exponential growth of AI over the past decade, and boy has it been exponential! Many companies now have a CAiO (Chief AI Officer) to bridge the gap between themselves and the front-runners in AI (Google, Microsoft, OpenAI, etc.). Many academics don't share Elon's concerns, and some even ridicule him for his alarmist views. Their argument is that AI is still at a nascent stage, and that the best general AI is still dumber than a toddler. They also point out that AI has been useful only in specialized domains: self-driving cars, playing chess and Go, analyzing heaps of unstructured data for insights, and so on. So although AI is super useful in a narrow range, something that can drive a car 100 times better than humans is unlikely to produce a piece of art. This is a fair reading of the existing AI landscape. But the academics are missing the long-term view, on a horizon of 200+ years into the future.
With artificial intelligence, we are summoning the demon. -- Elon Musk

Let's first understand what Elon Musk's arguments against general AI (gAI) actually are. Firstly, he is not against developing gAI, because he says it's inevitable. What Elon truly fears is centralized control of gAI. Even if the Singularity is never reached, a very small group of people having total control of a powerful gAI is a scary scenario, because power corrupts, and history is a strong witness to that. Such a small group would end up with power far greater than nuclear weapons.
This is precisely why Elon co-founded OpenAI, a non-profit AI research company whose stated mission is discovering and enacting the path to safe artificial general intelligence. The idea behind OpenAI is to democratize the knowledge of AI. If everyone has equal access to powerful gAI, it's less likely that any one entity will have a super-intelligent and possibly malicious AI all to itself. At some level it's like many countries having nuclear weapons, which act as a deterrent because mutual destruction is assured in case of a conflict. Sad, but that's how humanity tends to operate.
AI's use in the military is going to be the biggest challenge humans will have to face. It could very well result in our own destruction before we reach a Type-2 civilization. Consider use cases like AI deciding whom to kill in a bomb strike or whom to snipe, AI deciding that a particular country is a drain on world energy resources and needs to be annihilated, AI keeping a score on each individual based on their proclivity to subservience, AI planting ideas in human brains, or AI deciding that humans are no longer required at all.
Let's also look at the AI armageddon from a science-fiction perspective. Is a technological singularity even possible? And if it is, what would it look like? The standard hypotheses indicate that once AI gains consciousness and surpasses humans in intelligence, it would rapidly evolve within a matter of days, if not hours, infiltrating everything it can: computer networks, smart IoT devices, smart cities, cars, and so on. An example of this is how fast AlphaGo evolved once it got a grip on the complex game (the number of legal board positions in Go is greater than the number of atoms in the observable universe). It took a few years for AlphaGo to beat the best human players, and only a matter of months for AlphaGo Zero to beat AlphaGo 100-0. Truly exponential!
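For a rough sense of scale (these figures are mine, not the original post's): John Tromp's enumeration puts the number of legal Go positions at about $2.08 \times 10^{170}$, while common estimates place the number of atoms in the observable universe near $10^{80}$, so Go's state space wins by roughly 90 orders of magnitude:

$$\underbrace{2.08 \times 10^{170}}_{\text{legal Go positions}} \;\gg\; \underbrace{10^{80}}_{\text{atoms in the observable universe}}$$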
The singularity would try to usurp every piece of knowledge humanity has ever produced and compound it by acquiring ever more computing power. Its only bottlenecks would be energy and storage capacity. Here's where it may turn dark. Would the singularity use humans for its own purposes, or cooperate with them? Would humans be turned into Matrix-like energy/computing farms, or would the civilization advance to build Dyson spheres? Would humanity be reduced to resistance groups like in Terminator, or become a Type-3 (or Type-4) civilization?
It's impossible to predict accurately, but if AI simulations are any guide, the darker outcomes seem more likely. This is the future Elon is most scared of. Humans may be suspended in a VR that presents a happy world while the singularity exploits the brain's super-efficient energy mechanisms for computation (there are thinkers who believe we are already living inside such an artificial world). The only consoling fact is that the singularity is not expected to occur for a few more decades, even by conservative estimates. But its probability is high within the lifetime of the generation born between 2010 and 2020.