AI Risk and Public Opinion - The State of Play
In both the UK and the US, the public has mixed feelings about AI: excited by its potential and curious about what it might do, but worried about some of its implications.
While the Government believes that the UK is well-placed to lead and shape the conversation on this, the US public is likely to need persuading: just 6% of Americans said they saw the UK as a leader in AI research, with countries like Japan (31%), South Korea (10%) and Germany (9%) all seen as more advanced.
In both countries, people think AI needs to be regulated - but there are significant differences between the two in what form this should take. In the US, our respondents were 42% less likely to support regulation being decided by the UN or an international group of governments.
Last week, the UK Government announced that it is planning to hold the first major global summit, bringing together countries such as the UK and US, leading AI companies and researchers, “to agree safety measures to evaluate and monitor the most significant risks from AI.”
This follows shortly after leading AI researchers and the leaders of companies including OpenAI, Google DeepMind and Microsoft put their names to a statement that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Among AI insiders, worries about AI risk are no longer the concern of a minority online audience, but the default position of those at the forefront of recent advances.
In the last year, the public prominence of the potential risks from advanced AI systems has increased significantly, as new generative AI tools such as ChatGPT or Midjourney have provided concrete examples of the power of AI. Nevertheless, awareness of the arguments about why AI could be such a significant risk remains relatively low.
In March of this year, Public First ran new representative polls of 2,000 adults in the UK and US, asking their opinions on a range of AI-related issues. (You can find out more on the UK poll here, or the US poll here.) As part of that, we got a snapshot of where public views stood at the time - and of some of the challenges the UK might face in steering its new summit.
To start, it was clear that people in both countries took the advances in AI seriously, had mixed feelings about it - curious, excited and worried - and did not rule out the near-term emergence of more advanced AI systems. On average, respondents in both countries said they expected human-level AI to arrive between 2030 and 2039.
In both countries, there was agreement that what was allowed with AI couldn’t be left entirely to its users or developers, and that some form of government intervention was likely needed. The best format for that was less clear, and we saw different levels of enthusiasm depending on how we framed it. In the US, for example, only 19% supported an “independent regulator” deciding what people should be allowed to do with AI, compared to 41% in the UK.
By contrast, when we described it as a “new government regulatory agency similar to the FDA to regulate the use of new AI models”, support went up to 50% - and a similar proposal in the UK saw majority support across every demographic we tested, no matter their political ideology or level of interest in tech.
One proposal from some of those worried about AI risks has been a temporary pause in the development of more advanced AI models. As of March, we saw limited support for this, with only a minority of the public in both countries (US: 33%, UK: 26%) supporting slowing the development of AI. (That said, an even smaller proportion were in favour of accelerating its progress.)
When we asked how worried people were about the long-term risks from AI in the next fifty years, about half of those in each country (UK: 49%, US: 56%) told us they were worried about AI, and about a third went as far as to say that there was a greater than 1% chance that AI could cause human extinction in the next century. Nevertheless, worries around AI remained some way behind other concerns such as a nuclear war (UK: 79%, US: 76%), a global pandemic (UK: 74%, US: 71%) or climate change (UK: 77%, US: 66%).
The UK, in particular, might face some challenges in trying to persuade the US to follow its lead on this. While most indexes of relative AI strength put the UK third behind the US and China, it was striking that just 6% of Americans said they saw the UK as a leader in AI research, with countries like Japan (31%), South Korea (10%) and Germany (9%) all seen as more advanced. It was also noticeable that respondents in the US were 42% less likely than those in the UK to support regulation being decided by the UN or an international group of governments.
None of this necessarily affects the summit the UK is planning in September, or means that it is a bad idea. Already, the public do not seem opposed to taking action on AI risk - and while it may have been seen as an odd priority just a couple of years ago, that is already no longer the case. Public opinion on AI risk is likely to continue to move as people gain more personal experience with it, and as the wider Overton window in policy circles continues to shift.
Nevertheless, the more you think substantial interventions are likely to be needed, the more important it is going to be in the long term to take the public with you. Here, lots of unknowns remain:
How is public opinion likely to evolve as AI becomes more embedded in everyday life? Could it follow a similar trajectory to concerns around climate change, or is it always likely to be seen as an overly abstract issue?
What would happen to public views about AI as people are exposed to more specific parts of the argument about why advanced AI could be so dangerous - where do they nod along, and where do they bounce off?
At the moment, concerns around AI risk are fairly non-partisan - but how likely are they to become polarised across party lines? (And, if so - in which direction?)
In our preliminary polling, it seems that the public in both the UK and US are cautiously in favour of some sort of AI regulator - but how deep is this support, and what form would it take?
We plan to keep digging into this more, and if you want to discuss further - just get in touch.