Humanity will never be able to control a super-intelligent AI

The concept of man-made intelligence overthrowing its creators has been debated for decades, and scientists have now delivered their verdict on whether we could ever control a high-level machine super-intelligence. Researchers affiliated with the Max Planck Society, the renowned German research organization, argue that humans would never be able to control a super-intelligent AI capable of saving or destroying mankind.

According to findings published last week in the Journal of Artificial Intelligence Research, the challenge, the Max Planck scientists say, is that containing such an algorithm would demand hardware far more sophisticated than anything we can build today.

The scientists’ verdict: it would be impossible to control a super-intelligent AI. Their theoretical calculations suggest that no algorithm capable of controlling such machines could ever be built.

The catch is that managing a super-intelligence far beyond human comprehension would require a simulation of that super-intelligence which we can analyze. But if we are unable to comprehend it, it is impossible to create such a simulation.

Rules such as “do no harm to humans” cannot be set if we do not understand the kinds of scenarios an AI is liable to come up with, the authors of the new paper argue. Once a computer system operates at a level beyond the scope of our programmers, we can no longer set limits.

The Containment Problem

The team concentrated mainly on the problem of containment: if an all-powerful algorithm were somehow to decide to harm humans or, in more “Terminator” fashion, to destroy life entirely, how could we keep it from acting?

They considered building a kind of “containment algorithm” that simulates the behavior of the dangerous algorithm and halts it before it can do anything harmful. But because the containment algorithm would have to be at least as capable as the algorithm it monitors, the scientists found the problem to be unsolvable in principle.
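The paper grounds this impossibility in computability theory: deciding whether an arbitrary program will ever cause harm runs into the same wall as Turing’s halting problem. Below is a minimal sketch of that diagonal-style argument, assuming a hypothetical perfect checker; the names is_harmful, do_harm, and adversary are illustrative inventions, not anything defined in the paper.

```python
# Sketch of the diagonal argument behind "containment is undecidable".
# Assumption: is_harmful(program, data) is a hypothetical perfect checker
# that always answers whether program(data) would ever harm humans.
# None of these names come from the paper; they are illustrative only.

def is_harmful(program, data):
    """Hypothetical oracle deciding whether program(data) ever causes harm.
    The impossibility result says no algorithm can implement this correctly
    for every program, so we deliberately leave it unimplemented."""
    raise NotImplementedError

def do_harm():
    """Stand-in for whatever action a containment algorithm must prevent."""
    pass

def adversary(data):
    """Diagonal construction: do the opposite of whatever the checker
    predicts about this very function."""
    if is_harmful(adversary, data):
        return      # checker says "harmful" -> behave safely
    do_harm()       # checker says "safe"    -> cause harm
```

Whatever answer is_harmful gives about adversary, that answer is wrong, so a perfect containment algorithm of this kind cannot exist, no matter how much computing power it is given.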

Theoretical Argument

For now, this is a purely theoretical debate. AI mature enough to threaten humanity is probably still a long way off, but very clever minds are working hard toward it. That makes it the ideal subject to debate in advance: we would like to understand the risk before it arrives.

“A super-intelligent machine that controls the world sounds like science fiction,” co-author Manuel Cebrian, leader of the Digital Mobilization Group at the Center for Humans and Machines at the Max Planck Institute for Human Development, said in a press release. “But there are already machines that perform certain important tasks independently, without programmers fully understanding how they learned it. The question therefore arises whether this could at some point become uncontrollable and dangerous for humanity.”

The unsettling part is that if we push ahead with artificial intelligence, we may not even know when a super-intelligence beyond our reach has arrived; such is its incomprehensibility. That means we need to start asking some hard questions about the directions we are moving in.

Alongside AI’s impressive successes and its continuing rapid expansion into new realms, concern is growing about the ethical challenges that advanced AI systems pose.