Superintelligent AI Might Be Uncontrollable, But Should We Worry?

Roadzen
Feb 25, 2021

An international team of computer scientists has shown, using theoretical calculations, that it would be fundamentally impossible to control a superintelligent AI. Whether such an AI would ever harm humanity, however, remains inconclusive, and AI advanced enough to menace humanity is probably still a long way away.

AI has made impressive progress in the past decade, from driving cars to composing symphonies and playing chess better than any human. Scientists and philosophers have therefore been asking whether we would be able to control a superintelligent AI at all, and ensure it would not pose a threat to humanity.

Earlier this year, in January, an international team of researchers, including scientists from the Center for Humans and Machines at the Max Planck Institute for Human Development, showed that it would indeed not be possible to control a superintelligent AI.

The study, “Superintelligence Cannot Be Contained: Lessons from Computability Theory,” published in the Journal of Artificial Intelligence Research, says that there is no way to contain such an algorithm without building a kind of “containment algorithm” that simulates the dangerous algorithm’s behavior and blocks it from doing anything harmful. But because the containment algorithm would have to be at least as powerful as the algorithm it is meant to contain, the scientists concluded that the problem is impossible to solve.

A problem without a solution

“A super-intelligent machine that controls the world sounds like science fiction,” said study coauthor Manuel Cebrian, Leader of the Digital Mobilization Group at the Center for Humans and Machines of the Max Planck Institute for Human Development. “But there are already machines that perform certain important tasks independently without programmers fully understanding how they learned it. The question therefore arises whether this could at some point become uncontrollable and dangerous for humanity.”

“If you break the problem down to basic rules from theoretical computer science, it turns out that an algorithm that would command an AI not to destroy the world could inadvertently halt its own operations. If this happened, you would not know whether the containment algorithm is still analyzing the threat, or whether it has stopped to contain the harmful AI. In effect, this makes the containment algorithm unusable,” said Iyad Rahwan, Director of the Center for Humans and Machines.

Based on these calculations, the containment problem is incomputable: no algorithm can reliably determine whether an AI would produce harm to the world. Further, the research shows that we might not even know when superintelligent machines have arrived, because deciding whether a machine exhibits intelligence superior to humans falls into the same undecidable class as the containment problem.
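The impossibility result rests on the same diagonal argument that makes the halting problem undecidable. A minimal sketch in Python may make the intuition concrete; the function names is_harmful and adversary are illustrative stand-ins, not from the paper itself:

```python
def is_harmful(program) -> bool:
    """Hypothetical perfect containment checker. The paper's argument
    implies no correct implementation can exist; any fixed behavior
    we give it here leads to the same contradiction."""
    return False

def adversary() -> None:
    """A program constructed to do the opposite of whatever the
    checker predicts about it."""
    if is_harmful(adversary):
        print("adversary behaves safely")   # predicted harmful -> act safely
    else:
        print("adversary causes harm")      # predicted safe -> act harmfully

# Whatever is_harmful answers about adversary, adversary does the
# opposite, so no implementation of is_harmful can be correct on every
# program: containment in full generality is undecidable.
adversary()
```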

So, if an all-powerful algorithm somehow decided that it should hurt people or end humanity altogether, how could it be prevented from carrying out this doomsday scenario?

The doomsday scenario can wait

All of this remains a theoretical debate for now, since AI advanced enough to threaten humankind is probably generations away. AI has been around for over 60 years; only recently, however, has it come to be considered a technology that can dramatically affect so many aspects of human development.

The field of AI is constantly being reshaped by new developments and shifting goalposts, but in general, artificial intelligence is described as the science of creating intelligent machines capable of performing real-time tasks at the level of a human expert.

Currently, the AI systems in operation, including the ones being developed for self-driving cars, do all their learning before they are deployed and then stop learning. So, as of now, AI does not have free will and is certainly not conscious or sentient, two assumptions people tend to make about advanced, futuristic technologies.

Today’s AI systems are built around machine learning technologies, including deep learning and neural networks. An algorithm is presented with large volumes of training data and learns to make automated decisions based on the data fed to it. Those decisions are generalizations from examples and definitions supplied by humans while training the system.
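As a loose illustration of that train-then-generalize loop, here is a minimal supervised-learning sketch using scikit-learn; the toy underwriting features and labels are invented purely for illustration:

```python
from sklearn.linear_model import LogisticRegression

# Human-supplied training examples: feature vectors and labels.
# Toy underwriting features: (driver age, prior claims) -> high risk?
X_train = [[25, 0], [30, 1], [45, 3], [52, 4], [38, 0], [60, 5]]
y_train = [0, 0, 1, 1, 0, 1]   # labels/definitions supplied by humans

# All learning happens here, before deployment.
model = LogisticRegression().fit(X_train, y_train)

# At deployment time the model only generalizes from its examples;
# it does not keep learning.
print(model.predict([[41, 2]]))   # a generalization from the training data
```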

The performance of these systems depends not only on the availability of such training examples but is also tightly coupled to the domain in which the system operates, such as insurance underwriting or product recommendation. The more narrowly defined the domain, the better the AI will perform, because the training data will cover more of the situations the system encounters during deployment and will therefore be more representative of them.

An innovative future with AI

In the long term we will see a shift towards strong AI, with systems whose intellectual capability is indistinguishable from human intelligence. For AI to become applicable to broader domains with less predictable situations, it must overcome its current, basic dependency on human supervision.

Unsupervised training of AI systems is therefore a very active area of research, likely to yield advances in the coming years. It still requires large volumes of data to be presented to the AI at training time, but without human annotation. This could enable an AI system to identify anomalous or unusual behaviors that might reflect greater risk in domains like insurance underwriting. It is also a good fit for scenarios where large volumes of data can be collected from the internet or from on-board cameras on vehicles, but where annotating those volumes is not feasible.
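One common unsupervised technique for this kind of anomaly spotting is isolation-based outlier detection. A minimal sketch with scikit-learn’s IsolationForest, using telematics-style features invented for illustration:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Unlabeled driving-behavior features: (average speed, hard-brake rate).
# No annotation is supplied; the model learns what "typical" looks like.
typical = rng.normal(loc=[60.0, 0.2], scale=[5.0, 0.05], size=(500, 2))
unusual = np.array([[110.0, 1.2], [95.0, 0.9]])   # rare, risky behavior
X = np.vstack([typical, unusual])

detector = IsolationForest(contamination=0.01, random_state=0).fit(X)

# predict() returns -1 for points the forest isolates as anomalous;
# in an underwriting workflow these could be flagged for human review.
flagged = X[detector.predict(X) == -1]
print(flagged)
```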

Another active area of research is weakly supervised learning, in which the AI is trained on data that is only partially or inaccurately annotated. Nevertheless, long-term challenges for AI remain where very little training data exists, or where data annotation at scale is prohibitively expensive or impractical for privacy reasons.
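One way to exploit partial annotation is self-training, where a model fit on the few labeled examples pseudo-labels the confident remainder and retrains. A minimal sketch using scikit-learn’s semi-supervised wrapper, on synthetic data invented for illustration:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

# Synthetic dataset; pretend only ~10% of it was ever annotated.
X, y = make_classification(n_samples=500, random_state=0)
rng = np.random.default_rng(0)
y_partial = np.where(rng.random(len(y)) < 0.10, y, -1)   # -1 = unlabeled

# Train on the labeled slice, then iteratively pseudo-label
# high-confidence unlabeled points and refit.
model = SelfTrainingClassifier(LogisticRegression()).fit(X, y_partial)

# Compare against the full ground truth we held back.
print(f"accuracy vs. full labels: {model.score(X, y):.2f}")
```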

As businesses increasingly incorporate AI into their systems and processes, they will need customized insurance to protect them from a range of potential new risks. From an operational perspective, insurers are already using AI to deliver value in more efficient ways. But AI is set to impact all stages of the insurance value chain: from the first enquiries to the settlement of claims, all the way to risk prevention.

New entrants, disruptors and established players will all have to rethink their roles and interdependencies. While AI offers huge potential, insurers must be aware of the risks associated with using this new and fast-developing technology, and respond to future scenarios by developing appropriate models and products.
