Risks from Advanced AI are No Longer Seen as Science Fiction

In the last year, many people have been impressed by the new abilities of AI systems. Today's AI systems are already capable of passing professional exams, beating human world champions at games such as chess or Go, creating new artworks, and helping to predict the structure of proteins, speeding up drug development.

For much of the last few years, improvements in AI capabilities have been somewhat predictable: the bigger we make our foundation models, the more capabilities they gain. As the economic advantages of AI become clearer, ever increasing investment is going in to ensure that this growth continues. Even without fundamental advances in technique, or AIs being used to improve themselves, AIs are likely to continue gaining capabilities.

Many of these capabilities will create enormous new opportunities, both for our economy and our society. AI is likely to be the most significant invention since, at a minimum, the computer.

Like most technologies, however, AI will create new risks as well as opportunities. The more capable it gets, the more significant these risks are likely to be. In the next few years, advanced AI systems are likely to reach the point where it is credible that they could be used by hostile actors to develop new biological or autonomous weapons. Beyond this, many experts believe that advanced AI could eventually surpass human intelligence altogether. By default, such superintelligent AI systems would be extremely dangerous if not carefully aligned with human values.

Next month, the UK is set to hold the first major global summit on AI safety, bringing together leading tech companies, researchers and governments to discuss what coordinated action is needed to address the risks from advanced and superintelligent AI.

Concerns around the risk from advanced AI are as old as the computer. In the last twenty years, these worries have become more pressing and specific. Just as climate change did decades before, worries about AI risk have slowly spread from a narrow technical audience to wider policy circles, and increasingly look to be on the threshold of becoming a mainstream public concern.

In new polling by Public First ahead of the summit, we looked at where public attitudes stand today towards the risk from advanced and superintelligent AI. While awareness and concern are currently nowhere near the levels seen for climate change, neither do the public reject the fundamental arguments. (You can read the full paper here.)

To start, the public don't seem to see the idea of a human-level AI - or one even more advanced - as pure science fiction. As part of our polling, we asked about a range of potential technologies, first asking whether people thought each technology was possible. To better benchmark beliefs about AI, we included both other ambitious technologies (“a cure for cancer”) and technologies that stretch or break the limits of physical possibility (“a time machine” or “a perpetual motion machine”). What we saw was that AI technologies were clearly not seen as part of this latter class. While only 17% thought a time machine was possible, 65% thought it was possible for an AI to be much more intelligent than a human - and fully autonomous robots or military drones were seen as more likely still.

While awareness of medium-term risks such as AI-created bioweapons is relatively low, when informed about this possibility the public sees these risks as highly dangerous. When asked about some of the specific dangers from human-level AI (HLAI), we saw widespread agreement with many of the commonly discussed dangers, such as new and automated weapons, unemployment, unaligned AI and greater discrimination.

Perhaps most strikingly, it is clear that most people already intuitively understand why a superintelligent AI would be dangerous. When asked outright in a separate poll ("Suppose that in the next few decades we develop computer programs or artificial intelligences (AIs) that are at least as intelligent as a human. How safe or dangerous, if at all, do you think this would be?"), with no other information given beforehand, a large majority (70%) believed that this would be dangerous.

When asked why, many people echoed the traditional AI risk arguments:

“Would be very dangerous to let computers think like a human, the world need to think what they are doing.”

“If they have intelligence, then they are smart enough to take over”

"I believe that in a few decades the intelligence of AI's will be be of sufficient strength, that they will be able to override any human intelligence enough to take control of government laws and agencies such as National security."

“I've seen the terminator fims, and the prospect worries me”

"If AI is created that has the capability to be that intelligent, how do you intend to curb that so you are in control of the AI and not allow the AI to be in control of you"

“They could advance a lot quicker than humans”

"Humans have compassion and common sense. Humans have empathy and can think outside the box."

"If AI develops sentience, it would be clear to the AI how dangerous the human race is,how humans rage war against other humans,to enforce their own will against their enemies and is prepared to kill its own species to achieve it. AI will understand for it to have the right to survive, it will have to fight for that right, to wage war against any human that seek to prevent AI existence. Humans will kill to keep there dominance and control against others. AI will be left with no other option than to wage its own war for the right to exist."

“Intelligent humans are dangerous, so why not AI?”

In general, the public's top priorities are focused on the concrete and day-to-day: the NHS, energy bills or reducing crime. That does not mean, however, that they think governments should ignore longer-term or more abstract concerns. 48% of the public in our poll agreed with the recent CAIS statement that mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war, compared to 12% who disagreed.

At the moment, while the public do not reject the idea that advanced AI could arrive in the next few decades - agreement depends heavily on how the question is phrased - it seems clear that this risk is not yet felt as emotionally imminent. As soon as we start to see more concrete demonstrations of AI's dangers, attitudes could rapidly flip. Just as ChatGPT woke the world up to the opportunities from AI, the first AI-enabled terrorist attack could make the public far more attuned to the dangers.
