Worries about superintelligent AI are worries about control
Three years ago, in the run-up to the first AI Safety Summit at Bletchley, we ran polling that showed the public took the risk from advanced or superintelligent AI very seriously.
Over the last few years, the focus of the summits (AI Seoul 2024, AI Action 2025) has shifted away from core safety issues. Have the views of the public shifted in line? Has lived experience with day-to-day AI reduced worries, or are the public still as concerned as ever? We ran a new survey of 500 adults across the UK and US to find out.
The results were pretty unambiguous: around two-thirds of respondents still saw a superintelligent AI as very or somewhat dangerous (64%), compared to just 17% who saw it as very or somewhat safe. Around 50% thought building one was a bad idea, compared to 28% who saw it as a good one.
Why do they see it as dangerous?
This is one question where it is easy to be misled by polling data. If you ask a standard rating or multiple-choice question, you often find respondents picking relatively prosaic reasons for their worry: hacking, drones or misinformation.
When you instead ask people to explain in their own words why they think it might be dangerous, much more fundamental worries come out: fears of overall loss of control or existential risk.
| Theme | Quote |
|---|---|
| Loss of Control | I feel like it could be too self aware and cause issues |
| | My concern is with loss of control. Either to the AI or to individuals or a group. Having that level of ability over the general population would not go well imho |
| Existential Risk | A superintelligent AI could surpass human intelligence and the goals of the AI may not align with us leading a potential threat of humanity. |
| | If it is much more intelligent than humans then it could end up superseding humans, who knows what it could lead to? |
| Unpredictability | We humans are not smart enough to protect a superintelligent AI from ourselves, we probably cannot code this right |
| | I just think we do not know what impact a superintelligent AI could have on the future of humanity. It is all uncharted territory and I feel like it could be bad news for the human race but once the AI is made, it is too late to go back on it if th… |
By contrast, those who tended to think it was safe either judged that the benefits outweighed the risks, or believed that humans would be able to stay in control.
| Theme | Quote |
|---|---|
| Human Control | You can always pull the plug out |
| | I would love to believe that this super intelligent AI was created by only the best and would be able to be shut off in case of an emergency. As long as this AI remains just that and not sentient I think it could be a good thing. |
| Built-in Safeguards | I think creating a superintelligent AI could be safe if we put strong controls in place and make sure it follows human values. I believe careful planning can help prevent it from doing any harm. |
| | I think it would have controls built in to make it safe. Also think it would be limited to what it can access |
| Benefits Outweigh Risks | I think it would be more beneficial than a negative. AI has great positives and it's becoming the way of the future |
| | I feel that AI can bring many good things to the world like solving world hunger or a cure for diseases. I do not feel like it would be a threat if it is used in the correct way. |
| AI Not Autonomous | I don't think AI is something that can be dangerous personally. It's made and programmed by humans and is just a computer |
| | I believe if they are built and programmed by us, then they should be safe I feel |