The X factor: four ways one social media platform undermined democracy during the 2024 summer riots

In July and August 2024, following the horrifying stabbing attack in Southport that killed three girls, riots broke out, bringing violence and fear to several cities across the UK. In the aftermath, many scholars and journalists dug into social media's role in the chaos. Telegram was used mostly as an organising tool, helping rioters connect and coordinate around shared targets. X, however, fuelled the riots in a different way, amplifying fake news about who carried out the attack and pushing conspiracy theories.
Funded by the LSE Urgency Fund, I led a research project examining how X's recommendation systems likely fuelled the summer riots by amplifying visual representations of racist conspiracy theories.
Together with Nick Lewis, I manually collected all posts made by two verified X accounts between 4 July and 4 August 2024, along with their engagement metrics (number of views, “likes”, shares and comments). The first account was run by a UK-based far-right political party whose members have openly expressed White nationalist views. The other was an account of unknown origin categorised as “Media & News”. It was one of the first accounts to share false information about the perpetrator of the Southport attack and has actively used Generative AI tools to increase the viral potential of its Islamophobic and xenophobic posts.
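To make the coding scheme concrete, here is a minimal sketch, in Python, of how each collected post could be recorded. The field names and types are illustrative assumptions for this post, not the project's actual codebook.

```python
# A minimal sketch of one possible record structure for the manually
# collected posts. Field names are illustrative assumptions, not the
# project's actual codebook.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class PostRecord:
    account: str             # anonymised label, e.g. "far_right_party" or "media_news"
    posted_at: datetime      # within the 4 July - 4 August 2024 window
    views: int               # engagement metrics recorded manually
    likes: int
    shares: int
    comments: int
    conspiracy_visual: bool  # coded: visually represents a racist conspiracy theory?
    genai: bool              # coded: image/video created with Generative AI tools?
```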
Both accounts hold the blue checkmark, which was once granted only after a rigorous verification process but is now part of X’s subscription model. Their names will not be revealed, to avoid amplifying the creators of extremist content. By systematically analysing the data, we identified four clear ways in which X has undermined democracy.

1. Amplifying racist conspiracy theories
After coding all posts featuring visual representations of racist conspiracy theories and running a regression analysis of the total number of views they obtained against the corresponding totals of “likes”, shares and comments, we found that these posts were amplified roughly 30 per cent more than other content. Conspiracy theories with strong ties to violent extremism, such as “White genocide” and “the Great Replacement”, were not only expressed visually through images and videos; they were also significantly amplified by algorithmic recommendation systems.
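For readers curious about the method, here is a hedged sketch of one way such an amplification estimate can be operationalised. The specification (log-transformed counts, an OLS with a dummy for conspiracy-coded posts) is an assumption for illustration, not necessarily the exact model used in the project, and it builds on the hypothetical PostRecord fields sketched above.

```python
# A sketch of one plausible specification for the amplification estimate;
# the project's exact model may differ.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def amplification_uplift(df: pd.DataFrame) -> float:
    """Estimate how much more reach conspiracy-coded posts obtain per unit
    of engagement, via OLS on log-transformed counts."""
    d = df.assign(
        log_views=np.log1p(df["views"]),
        log_engagement=np.log1p(df["likes"] + df["shares"] + df["comments"]),
        conspiracy=df["conspiracy_visual"].astype(int),
    )
    # Views explained by engagement plus a dummy for conspiracy content:
    # a positive dummy coefficient means extra reach beyond what a post's
    # engagement would predict, i.e. algorithmic amplification.
    model = smf.ols("log_views ~ log_engagement + conspiracy", data=d).fit()
    # Convert the log-point coefficient to a percentage uplift
    # (a return value of 0.30 would correspond to "30 per cent more").
    return float(np.expm1(model.params["conspiracy"]))
```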
The White genocide conspiracy theory, which gained prominence in American White nationalist circles in the 1990s, claims that White populations are being deliberately exterminated through “immigration, integration, abortion, and violence against White people”. The Great Replacement theory, popularised in the 2000s through the work of the French writer Renaud Camus, claims that “White European populations are being deliberately replaced at an ethnic and cultural level through immigration and the growth of minority communities”. Both evoke an apocalyptic fate and therefore appeal to a crusading mentality that has occasionally culminated in violence.
2. Reproducing racist Generative AI content with high viral potential
On one of the examined accounts, 39 of the posts visually representing racist conspiracy theories were created with Generative AI tools, and these posts attracted disproportionately high attention. Average views and engagement per post were nearly three times higher than for other content, with one post alone reaching 11 million views as of October 2024.
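A minimal sketch of the comparison behind the “nearly three times higher” figure, again assuming the hypothetical fields sketched earlier:

```python
import pandas as pd

def genai_uplift(df: pd.DataFrame) -> pd.Series:
    """Ratio of mean views and engagement for Generative AI posts
    relative to the account's other posts."""
    d = df.assign(engagement=df["likes"] + df["shares"] + df["comments"])
    means = d.groupby("genai")[["views", "engagement"]].mean()
    # Values near 3.0 would match the finding reported above.
    return means.loc[True] / means.loc[False]
```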
In many of these posts, racialised Muslim men were depicted as sexual predators supposedly waiting for a chance to prey on young White European or British girls. After the Southport attack, there was a noticeable shift in focus from the supposed racialised Muslim enemy to the White British hero, conveyed through images adhering to meme aesthetics that operated as instruments for fantasising violence against racialised migrants, particularly Muslims.

From this we can argue that X hasn't just enabled its users to freely share Generative AI-created images featuring stereotypes that symbolically establish a hierarchy between White and Black individuals, Christians and Muslims, natives and so-called invaders. Its algorithms have actively amplified this content, further reinforcing racist and Islamophobic fantasies that have historically fuelled violence.
3. Spreading fake news
In addition to facilitating the spread of false information about the identity of the Southport attacker, we identified cases in which videos were used to support manipulated claims made in the captions. These encouraged viewers to interpret the images as proof of an alleged invasion of the UK, of difficulties with integration into British culture, and of the Islamisation of British institutions such as the police and parliament. This trend was more noticeable on the account owned by the British political party, and none of these posts was flagged by the platform as false information.
By spreading fake news that feeds into racist conspiracy theories, X's algorithms don't just fuel polarisation by reinforcing the “us versus them” antagonism that lies at the heart of the contemporary far right. They also appear to feed directly into processes of mass radicalisation, prompting some people to act against their perceived enemies.
4. Legitimising racist conspiracy theories through verified accounts
Racist conspiracy theories were consistently presented as truth by X accounts holding the blue checkmark. As this symbol used to be granted only after a rigorous verification process, it can still be perceived as a marker of credibility. By selling access to the blue checkmark through paid subscriptions, X has not only facilitated the spread of mis- and disinformation, insofar as the checkmark comes with prioritised placement in replies. It also seems to have facilitated the legitimation of racist conspiracy theories through the lingering association between the blue checkmark and credibility.
Of the 388 examined posts shared by the X account labelled “Media & News”, 35 reproduced content from other verified accounts. Fourteen of these featured images or videos promoting racist conspiracy theories; one was made by a Hungarian media outlet and two by an account tagged as “Education”. In terms of location, the observed verified accounts were based in the UK and in other European countries, such as Hungary, Spain and France.
How to address the X factor
X’s algorithmic recommendation systems, its relaxed approach to content moderation, its symbiosis with Generative AI tools that enabled the creation of images based on racist and Islamophobic stereotypes, and its current subscription model have together transformed the platform into a polarisation engine that undermines democracy.
If regulations have historically lagged behind technological advances, it's time to find alternative solutions. The combination of algorithms, racism, Islamophobia, fake news and conspiracy theories poses a direct threat, not only to marginalised groups but to democracy itself. We can't control everything shared on X, but we can take responsibility as citizens: by raising awareness, pressuring governments to implement effective regulation, and engaging responsibly with social media platforms. Through collective effort, we can address the X factor.