AI Survey Exaggerates Apocalyptic Risks

A speculative survey about AI’s future may have been biased toward an alarmist perspective

The headlines in early January didn’t mince words, and all were variations on one theme: researchers think there’s a 5 percent chance artificial intelligence could wipe out humanity.

That was the sobering finding of a paper posted on the preprint server arXiv.org. In it, the authors reported the results of a survey of 2,778 researchers who had presented and published work at high-profile AI research conferences and journals—the biggest such poll to date in a once-obscure field that has suddenly found itself navigating core issues of humanity’s future. “People are interested in what AI researchers think about these things,” says Katja Grace, co-lead author of the paper and lead researcher at AI Impacts, the organization that conducted the survey. “They have an important role in the conversation about what happens with AI.”

But some AI researchers say they’re concerned the survey results were biased toward an alarmist perspective. AI Impacts has been partially funded by several organizations, such as Open Philanthropy, that promote effective altruism—an emerging philosophical movement that is popular in Silicon Valley and known for its doom-laden outlook on AI’s future interactions with humanity. These funding links, along with the framing of questions within the survey, have led some AI researchers to speak up about the limitations of using speculative poll results to evaluate AI’s true threat.


Effective altruism, or EA, is presented by its backers as an “intellectual project” aimed at using resources for the greatest possible benefit to human lives. The movement has increasingly focused on AI as one of humanity’s existential threats, on par with nuclear weapons. But critics say this preoccupation with speculative future scenarios distracts society from the discussion, research and regulation of the risks AI already poses today—including those involving discrimination, privacy and labor rights, among other pressing problems.

The recent survey, AI Impacts’ third such poll of the field since 2016, asked researchers to estimate the probability of AI causing the “extinction” of humanity (or “similarly permanent and severe disempowerment” of the species). Half of respondents predicted a probability of 5 percent or more.

But framing survey queries this way inherently promotes the idea that AI poses an existential threat, argues Thomas G. Dietterich, former president of the Association for the Advancement of Artificial Intelligence (AAAI). Dietterich was one of about 20,000 researchers who were asked to take part—but after he read through the questions, he declined.

“As in previous years, many of the questions are asked from the AI-doomer, existential-risk perspective,” he says. In particular, some of the survey’s questions directly asked respondents to assume that high-level machine intelligence, which it defined as a machine able to outperform a human on every possible task, will eventually be built. And that’s not something every AI researcher sees as a given, Dietterich notes. For these questions, he says, almost any result could be used to support alarming conclusions about AI’s potential future.

“I liked some of the questions in this survey,” Dietterich says. “But I still think the focus is on ‘How much should we worry?’ rather than on doing a careful risk analysis and setting policy to mitigate the relevant risks.”

Others, such as machine-learning researcher Tim van Erven of the University of Amsterdam, took part in the survey but later regretted it. “The survey emphasizes baseless speculation about human extinction without specifying by which mechanism” this would happen, van Erven says. The scenarios presented to respondents are not clear about the hypothetical AI’s capabilities or when they would be achieved, he says. “Such vague, hyped-up notions are dangerous because they are being used as a smokescreen ... to draw attention away from mundane but much more urgent issues that are happening right now,” van Erven adds.

Grace, the AI Impacts lead researcher, counters that it’s important to know if most of the surveyed AI researchers believe existential risk is a concern. That information should “not necessarily [be obtained] to the exclusion of all else, but I do think that should definitely have at least one survey,” she says. “The different concerns all add together as an emphasis to be careful about these things.”

The fact that AI Impacts has received funding from an organization called Effective Altruism Funds, along with other backers of EA that have previously supported campaigns on AI’s existential risks, has prompted some researchers to suggest the survey’s framing of existential-risk questions may be influenced by the movement.

Nirit Weiss-Blatt, a communications researcher and journalist who has studied effective altruists’ efforts to raise awareness of AI safety concerns, says some in the AI community are uncomfortable with the focus on existential risk—which they claim comes at the expense of other issues. “Nowadays, more and more people are reconsidering letting effective altruism set the agenda for the AI industry and the upcoming AI regulation,” she says. “EA’s reputation is deteriorating, and backlash is coming.”

“I guess to the extent that criticism is that we are EAs, it’s probably hard to head off,” Grace says. “I guess I could probably denounce EA or something. But as far as bias about the topics, I think I’ve written one of the best pieces on the counterarguments against thinking AI will drive humanity extinct.” Grace points out that she herself doesn’t know all her colleagues’ beliefs about AI’s existential risks. “I think AI Impacts overall is, in terms of beliefs, more all over the place than people think,” she says.

Defending their research, Grace and her colleagues say they have worked hard to address some of the criticisms leveled at AI Impacts’ studies from previous years—especially the argument that relatively low numbers of respondents didn’t adequately represent the field. This year the AI Impacts team tried to boost the number of respondents by reaching out to more people and expanding the conferences from which it drew participants.

But some say this dragnet still isn’t wide enough. “I see they’re still not including conferences that think about ethics and AI explicitly, like FAccT [the Association for Computing Machinery (ACM) Conference on Fairness, Accountability, and Transparency] or AIES [the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society],” says Margaret Mitchell, chief ethics scientist at AI company Hugging Face. “These are the ‘top AI venues’ for AI and ethics.”

Mitchell received an invitation to join the survey but didn’t do so. “I generally just don't respond to e-mails from people I don't know asking me to do more work,” she says. She speculates that this kind of situation could help skew survey results. “You're more likely to get people who don't have tons of e-mail to respond to or people who are keen to have their voices heard—so more junior people,” she says. “This may affect hard-to-quantify things like the amount of wisdom captured in the choices that are made.”

But there is also the question of whether a survey asking researchers to make guesses about a far-flung future provides any valuable information about the ground truth of AI risk at all. “I don’t think most of the people answering these surveys are performing a careful risk analysis,” Dietterich says. Nor are they asked to back up their predictions. “If we want to find useful answers to these questions,” he says, “we need to fund research to carefully assess each risk and benefit.”