One-Third of AI Researchers Believe That in This Century, AI Could Have “Catastrophic” Effects Comparable to Nuclear War

A third of artificial intelligence (AI) experts and academics surveyed said they thought the technology could result in a catastrophe comparable to full-scale nuclear war.

Researchers who co-authored at least two publications in computational linguistics between 2019 and 2022 were eligible to take the poll. It sought to gauge how the field feels about hot-button issues surrounding artificial general intelligence (AGI), the capacity of an AI to think like a human, and the effects researchers predict AI will have on society at large. The findings have been released as a preprint that has not yet been peer reviewed.

As the publication points out, AGI is a contentious subject in the industry. There are strong disagreements about whether we are moving in that direction, whether we should be aiming for it at all, and what would happen when mankind reaches that point.


The researchers stated in their study that “the community as a whole knows that it’s a contentious subject, and now (thanks to this survey) we can know that we know that it’s contentious.” Fifty-eight percent of respondents agreed that AGI should be a major concern for natural language processing research at all, while 57 percent felt that recent research has pushed us in that direction.

What’s fascinating is how AI experts expect AGI to affect society as a whole.

According to the study results, “73 percent of respondents agree that labor automation via AI might reasonably lead to revolutionary societal upheaval in this century, on at least the magnitude of the Industrial Revolution.”

A non-trivial 36% of respondents, however, concurred that it is conceivable that AI could have disastrous effects in this century, “on the level of all-out nuclear war.”

When a sizable section of a field thinks its subject could end up destroying humanity, that is hardly comforting. However, some respondents objected to the phrase “all-out nuclear war” in the feedback section, stating that they “would agree with less extreme phrasings of the question.”

The team concluded that this “suggests that our figure of 36% is an underestimate of respondents who are truly concerned about adverse implications of AI systems.”

While (perhaps with good reason) concerned about the potential catastrophic repercussions of AGI, respondents overwhelmingly agreed that natural language processing has “a favorable overall impact on the world, both up to the current day (89 percent) and going into the future (87 percent).”

Although these opinions seem at odds, a sizable minority of respondents (23 percent) agreed with both statements (survey items Q6-2 and Q3-4): that AI could cause a nuclear-war-level catastrophe and “that NLP has an overall positive impact on the world.” According to the researchers, this suggests “that they may think NLP’s potential for positive impact is so great that it even outweighs plausible threats to civilization.”

Among other findings, 74 percent of AI researchers think the private sector has too much influence on the field, and 60 percent think NLP researchers should be very concerned about the environmental impact of training large models.