Paper Claims AI Could Be a Civilization-Destroying “Great Filter”

If aliens exist, why haven’t they contacted us? According to a new paper, they — or, someday, we — may be wiped out by superintelligent artificial intelligence, victims of our own drive to create a superior species.

This potential solution to the Fermi paradox — in which physicist Enrico Fermi and subsequent generations ask, “Where is everybody?” — comes from National Intelligence University researcher Mark M. Bailey, who argues in a new, yet-to-be-peer-reviewed paper that advanced AI may be exactly the kind of catastrophic risk that could wipe out entire civilizations.

Bailey proposes superhuman AI as a hypothetical “Great Filter,” a potential answer to the Fermi paradox in which some terrible and unknown threat, man-made or natural, wipes out intelligent life before it can make contact with others.

“For anyone concerned with global catastrophic risk, one sobering question remains,” Bailey says. “Is the Great Filter a thing of the past, or is it still a challenge we must face?”

We humans, according to the researcher, are “terrible at intuitively estimating long-term risk,” and considering how many warnings have already been raised about AI — and its eventual endpoint, artificial general intelligence, or AGI — it’s possible, he suggests, that we are conjuring our own demise.

“One way to examine the AI problem is through the lens of the second species argument,” the paper goes on to say. “This concept considers the possibility that advanced artificial intelligence will effectively behave as a second intelligent species with whom we will inevitably share this planet.” Given how things fared the last time this happened — when modern humans coexisted with Neanderthals — the probable implications are bleak.

The prospect of near-god-like artificial superintelligence (ASI), in which an AGI surpasses human intelligence, is even worse, according to Bailey, because “any AI that can improve its own code would likely be motivated to do so.”

“In this scenario,” the author hypothesizes, “humans would relinquish their position as the dominant intelligent species on the planet, with potentially disastrous consequences. Like the Neanderthals, our control over our future, and even our very existence, may end with the introduction of a more intelligent competitor.”

Of course, there is as yet no direct evidence that extraterrestrial AI has wiped out natural life in any alien civilization, but in Bailey’s opinion, “the discovery of artificial extraterrestrial intelligence without concurrent evidence of a pre-existing biological intelligence would certainly move the needle.”

That, in turn, raises the possibility that malevolent AIs are roaming the universe after eliminating their creators. Bailey thinks that “actively signaling our existence in a way detectable to such an extraterrestrial AI may not be in our best interest” since “any competitive extraterrestrial AI may be inclined to seek resources elsewhere — including Earth.”

“While it may appear to be science fiction,” Bailey adds, “it is likely that an out-of-control… technology like AI would be a likely candidate for the Great Filter — whether organic to our planet or of extraterrestrial origin. We must ask ourselves, ‘How are we going to prepare for this possibility?’”