AI and the stock market: are algorithmic trades creating new risks?

On 6 May 2010, nearly $1 trillion in market value was wiped off the US stock market within a matter of minutes. It took over half an hour before the market corrected itself. What has come to be known as the 2010 Flash Crash, or Crash of 2:45, serves as a stark reminder of how quickly and unpredictably financial markets can spiral in the digital age – especially when small flaws in automated trading systems go unchecked.
The Flash Crash was not caused by a major system failure, but by small data errors entering the system. It is the potential damage such “minor” algorithmic trading flaws can inflict that interests Maximilian Goehmann, a PhD candidate in the Department of Management at LSE. Maximilian’s research centres on artificial intelligence and, in particular, the application of machine learning in the financial sector.
In a written submission to a UK government Treasury Committee inquiry into AI in financial services, Maximilian argues that market instability could arise just as easily from a small and seemingly inconsequential error as from the kind of major technical failure that is more likely to be an immediate concern for policymakers.
The issue was that there were a lot of algorithms with similar settings that were each triggering each other … and that created a cascading failure that led to this sudden and huge drop in the market.
How prevalent is automated trading?
Estimates suggest that between 60 and 70 per cent of trades are now conducted algorithmically – meaning that most trades are decided by Automated Trading Systems (ATS). These models have enormous potential, enabling firms to capitalise on speed, efficiency and data-processing capacity far beyond human capabilities.
They also typically ensure that trades are executed in line with a defined set of parameters and rules. If these rules fail to adapt to real-time conditions – for example, should incorrect data enter the system – problems can arise. The 2010 Flash Crash is one prime example, when a large automated sell order was executed through an algorithm that didn’t account for market dynamics, triggering rapid reactions from high-frequency trading systems.
“The issue was that there were a lot of algorithms with similar settings that were each triggering each other,” explains Maximilian. “One feedback loop was triggering the next, which triggered the next, and that created a cascading failure that led to this sudden and huge drop in the market.”
Many of these systems, he adds, were operating without coordination and responded to rapidly changing market conditions in ways that unintentionally amplified volatility. This cascade of reactive behaviour led to a sudden and extreme market decline.
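The dynamic Maximilian describes can be made concrete with a small simulation. The sketch below is not drawn from his research and uses entirely made-up numbers; it simply shows how many independent algorithms with similar sell thresholds can turn a modest price shock into a cascading decline.

```python
# Toy illustration only (hypothetical parameters, not Maximilian's model):
# many independent "sell once the price falls below my threshold" algorithms
# with similar settings, where each wave of forced selling pushes the price
# low enough to trip the next wave.
import random

random.seed(1)

price = 100.0
initial_shock = -1.0      # a small, seemingly inconsequential move (made up)
impact_per_seller = 0.05  # price impact of each forced sale (made up)

# 200 algorithms with stop thresholds clustered just below the current price.
thresholds = [99.5 - random.random() * 3.0 for _ in range(200)]
triggered = [False] * len(thresholds)

price += initial_shock
wave = 0
while True:
    newly = [i for i, t in enumerate(thresholds) if not triggered[i] and price <= t]
    if not newly:
        break
    for i in newly:
        triggered[i] = True
    # Forced selling moves the price down, which can trigger the next group.
    price -= impact_per_seller * len(newly)
    wave += 1
    print(f"wave {wave}: {len(newly)} algorithms sold, price now {price:.2f}")

print(f"a shock of {initial_shock} ended at {price:.2f}, with {sum(triggered)} forced sellers")
```

Run as written, a one-point shock trips successive waves of automated sellers and the price ends far below where the initial move alone would have left it – the feedback loop, not the original error, does most of the damage.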
Maximilian credits this event with sparking his current research. “Ever since, I’ve been interested in the idea of machines being able to predict markets. It's all about forecasting – getting an understanding of the sensors and reacting forward,” he says.
“Financial markets are probably the best reflection of society in a way, because traders and banks don’t operate purely on financial data; they also respond to psychology. So, in a way, you can see society’s performance reflected through the economy. I find it fascinating how machines are able to read that.”
When markets trade on accurate data, they can function as intended. But when algorithmic trading is driven by flawed inputs – as the Flash Crash demonstrates – the consequences can be severe. Using real-world trading data, Maximilian’s research focuses on how these “small but frequent” data errors emerge and contribute to systemic fragility.
“A truly free market is there to regulate itself, but we can install warning signs through transparency of competition.”
How can policymakers manage the risks of algorithmic trading?
While some might consider regulation to be the first step for policymakers seeking to strengthen the system and reduce its exposure to incorrect data, Maximilian cautions against viewing regulation as a silver bullet. Instead, he suggests reworking models already in existence, for example by expanding voluntary data certification frameworks.
Maximilian makes several recommendations that he believes would help the UK government guard against crises like the Flash Crash occurring again. “Instead of mandating rigid standards, data providers should be encouraged – or incentivised – to publicly certify and benchmark their data accuracy. This would foster market-driven quality improvements through competitive differentiation, not bureaucratic obligation.”
Advising real-time data oversight, he continues: “The Financial Conduct Authority (FCA) could oversee the licensing regime for data providers – incorporating independent quality monitoring into its existing certification, for example – and require real-time risk management controls not just for algorithmic trading systems but for all participants, so that the level of data quality is visible to the market.”
Data stress testing, he says, could also alleviate some of the risk of incorrect data entering the system. This is something the Bank of England, for example, could implement. Mandatory independent stress tests would simulate data anomalies and allow errors, such as duplicated quotes, missing values and inconsistent timestamps, to be detected early.
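The checks such a stress test might automate can be sketched briefly. The feed format, field names and rules below are illustrative assumptions rather than any regulator’s actual specification; they simply show how duplicated quotes, missing values and inconsistent timestamps could be flagged before an algorithm trades on the data.

```python
# Illustrative sketch only: simple data-quality checks of the kind a stress
# test might run against a historical or simulated quote feed. The Quote
# structure and the example feed are assumptions made for this sketch.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Quote:
    timestamp: float        # seconds since some epoch
    symbol: str
    price: Optional[float]  # None models a missing value


def find_anomalies(quotes: List[Quote]) -> List[str]:
    """Flag duplicated quotes, missing values and out-of-order timestamps."""
    issues = []
    seen = set()
    last_ts = {}
    for i, q in enumerate(quotes):
        key = (q.timestamp, q.symbol, q.price)
        if key in seen:
            issues.append(f"row {i}: duplicated quote for {q.symbol}")
        seen.add(key)
        if q.price is None:
            issues.append(f"row {i}: missing price for {q.symbol}")
        if q.symbol in last_ts and q.timestamp < last_ts[q.symbol]:
            issues.append(f"row {i}: timestamp goes backwards for {q.symbol}")
        last_ts[q.symbol] = q.timestamp
    return issues


if __name__ == "__main__":
    feed = [
        Quote(1.0, "ABC", 100.0),
        Quote(1.0, "ABC", 100.0),  # duplicated quote
        Quote(2.0, "ABC", None),   # missing value
        Quote(1.5, "ABC", 100.2),  # inconsistent timestamp
    ]
    for issue in find_anomalies(feed):
        print(issue)
```

In the kind of stress test described above, anomalies like these would be injected deliberately into a test feed to confirm that trading systems detect them early rather than acting on corrupted data.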
“Over-regulation is not the key here,” he stresses. “Rather than impose new rules, a better approach is to encourage real-time transparency frameworks that incentivise better behaviour through market mechanisms. For example, voluntary data certification that allows participants to signal quality and win trust, rather than imposing rules.
“I'm not advocating for policymakers to necessarily implement more rules,” he continues. “What I am trying to stress is the point that there might be a potential issue which could cascade into something bigger if not fully thought through. Implementing new policies or bureaucracy is not necessarily always the most appropriate solution.
“A truly free market is there to regulate itself, but we can install warning signs through transparency of competition.”
Maximilian also warns against looking solely at the AI behind these systems. “There is a lot of focus on AI and algorithms, but not necessarily enough attention is being paid to the data that they are trained on. This is an area we also need to look at, because there is a chance that the machine interprets data differently from the way it was originally intended. So we need to be aware that bureaucracy can sometimes cause more harm than good. There needs to be a balance.”
Looking to the future
As the UK government inquiry into AI and financial services continues, Maximilian hopes that serious consideration will be given to strengthening oversight and detection systems to catch minor errors before they lead to major consequences. With policymakers now examining his proposed approach, he remains focused on the future of algorithmic trading.
“I'm exploring the role of data and the impact of automated trading systems – how effective they are, their adoption rate, and the risks involved if these systems fail or rely on underdeveloped infrastructure,” he explains. These are pressing questions for anyone with a stake in ensuring that financial markets remain resilient, efficient, and free from preventable errors.
Maximilian Goehmann was speaking to Jess Winterstein, Deputy Head of Media Relations at LSE.