Chatbots can influence political views, new study finds

Tuesday 9 December 2025

Conversations with AI models can influence people’s political opinions, with information-packed arguments proving the most convincing, according to a new study whose authors include researchers from the London School of Economics and Political Science (LSE). However, there is a trade-off: the most persuasive arguments made by AI tend to be the least accurate.

In the new paper, The levers of political persuasion with conversational AI, published in the journal Science, the researchers set out to understand how persuasive AI chatbots can be when discussing a range of political issues, including climate change, the cost-of-living crisis, public sector pay and mortgage rates.

They ran three experiments with almost 77,000 people in the UK, using 19 different AI models to discuss over 700 political issues. They then measured how much these conversations changed people’s political opinions and checked the accuracy of nearly half a million claims made by the AI.

The researchers found that ‘information-dense’ arguments made by the chatbots, packed with facts and evidence, were the most persuasive in changing people’s views. However, the more information-heavy the arguments became, the less accurate they tended to be.

Another key factor in making the chatbot arguments more persuasive was whether the models had received post-training, whereby AI models are refined after development to fulfil certain goals or preferences. Some of the chatbots in the experiment had been post-trained specifically for persuasion, and these were found to be up to 51 per cent more convincing than those that had not been trained in this way. Similarly, chatbots given certain prompts were found to be 27 per cent more persuasive.

Interestingly, the researchers found that post-training and prompting made AI arguments far more persuasive than feeding the models tailored personal information about individual users did. They also found that increasing the scale of AI models had little impact on their persuasiveness.

In the paper, the researchers highlight the potentially dangerous consequences of post-training techniques having such an impact on persuasion: “Powerful actors with privileged access to such post-training techniques could thus enjoy a substantial advantage from using persuasive AI to shape public opinion—further concentrating these actors’ power.”

They added: “…Even actors with limited computational resources could use these techniques to potentially train and deploy highly persuasive AI systems, bypassing developer safeguards that may constrain the largest proprietary models (now or in the future). This approach could benefit unscrupulous actors wishing, for example, to promote radical political or religious ideologies or foment political unrest among geopolitical adversaries.”

The researchers do, however, also note that there are probable psychological limits to human persuadability, with people only likely to go so far in what they believe. They also state that the very conditions that make conversational AI most persuasive—sustained engagement with information-dense arguments—may also be those most difficult to achieve in the real world.

Commenting on the findings, the paper’s co-lead author Dr Ben Tappin from the Department of Psychological and Behavioural Science at LSE said: “Accurately understanding AI-driven persuasion is important so that policymakers and the public can be clear-eyed about both its potential and its limitations for influencing public opinion and behaviour. We see this work as contributing to that goal.”

The paper The levers of political persuasion with conversational AI was authored by academics from the UK AI Security Institute, the University of Oxford, LSE, Stanford University, the Massachusetts Institute of Technology and Cornell University.