
Will AI replace coders… How to read the next AI jobs hype cycle

Friday 13 March 2026
[Image: a detailed black-and-white sketch of interconnected figures and technology — people surrounded by screens of data visualisations, a figure holding anatomical illustrations representing AI analysis of medical imagery, and a family in the background — portraying the relationship between humans and technology.]

The threat of AI replacing jobs in any number of sectors is often just over the horizon. Drawing on the history of frontier technologies and a series of interviews with coders under threat of AI replacement, Kate Vredenburgh and Lauren Wong set out what we should be asking about how AI is changing work.

Every few months, another viral post warns us that artificial intelligence will soon replace entire categories of workers. On February 9th, entrepreneur Matt Shumer’s post “Something Big is Happening” argued that software engineers were “the first domino” to fall—that once AI mastered coding, all knowledge work would follow. Citrini Research’s post arguing that widespread AI disruption of white-collar jobs could cause an economic crisis sent stock prices temporarily tumbling. The narrative has a simple logic: if AI can write code, surely it can do anything requiring intelligence.

But should we trust these predictions? These claims are often backed by little to no evidence, and are not transparently grounded in micro- or macroeconomic models that one can disagree with. That said, AI clearly has had, and will continue to have, a massively disruptive impact on coders’ jobs, as well as on the wider IT sector.

These claims are often backed by little to no evidence, and are not transparently grounded in micro- or macroeconomic models that one can disagree with.

The bigger mistake is to jump from claims about model capabilities to claims about their impact on work, or from claims about coding jobs to claims about jobs generally. Let’s start with the former: jumping from model capabilities to claims about their impact on work.

Technology doesn’t inevitably drive change

The success story of AI coding products built on top of Large Language Models (LLMs) makes the following seductive, but false, claim very tempting: if a company builds a powerful, innovative technology, then there will be a clear path to commercialisation and a positive impact on work (or some other domain of life). Other struggles to bring frontier technologies to the workplace, however, show that this is not the case.

Virtual reality (VR), for example, was once widely positioned by technology leaders as the next computing platform, one that would revolutionise the way we work and collaborate. Yet despite sustained investment and experimentation, the technology has struggled to reach broad enterprise-scale deployment, with adoption instead concentrated in specific industries (education, manufacturing) and limited use cases (learning and training, design). While some of VR’s headwinds can be attributed to the technology itself – including ergonomics, high cost of ownership and content development – businesses have been slower to adopt VR because of more general dynamics around new tech commercialisation: reliance on “innovative” positioning rather than clear solutions to relevant, high-value problems; unproven or ambiguous returns on investment; the complexity of integration with existing enterprise technology and workflows; and insufficient change management to drive employees’ acceptance of an unfamiliar technology. With VR as with AI, technological capability alone is not enough; it must be matched with organisational alignment, clear economics, and institutional readiness to move beyond proof-of-concept pilots toward sustained, wide-scale adoption.

Is there evidence for AI displacing conventional work?

Even if technologies are successfully commercialised and integrated into one sector, it does not mean that they will be hugely disruptive to the labour market across the board. To make claims about the latter, we need evidence about the impact of frontier technologies on work, and macroeconomic models that translate those micro-level impacts on productivity or automation into impacts at the level of the economy as a whole. Some of those metrics involve testing particular model capabilities that are relevant for completing economically valuable tasks.

Even if technologies are successfully commercialised and integrated into one sector, it does not mean that they will be hugely disruptive to the labour market across the board.

METR, for example, found that the length of tasks that AI systems can complete has been doubling roughly every seven months. Other benchmarks test whether AI models can complete occupational tasks as well as human experts. OpenAI’s GDPval benchmark, for instance, asks industry experts to compare AI and human deliverables on 1,320 economically valuable tasks across 44 occupations. GPT-5 and Claude Opus 4.1 outputs were ranked higher than human outputs about 40% and 50% of the time, respectively. Of course, these benchmarks only test human preferences over a single task, attempted once; but they provide some weak evidence that current AI systems can complete tasks to human satisfaction across a range of industries.

AI is changing the quality of jobs

Furthermore, what these debates often miss is how AI affects the quality of jobs, rather than wages or levels of employment. In joint work with Dr. Marco Meyer, Kate Vredenburgh’s UKRI-funded Future Leaders Fellowship studies how workers—particularly coders—actually experience AI tools, and whether they retain a sense of autonomy or agency in their jobs.

In interviews, what struck us was that workers’ concerns went beyond whether they had control over their day-to-day tasks. Even workers with considerable formal control over their workflows worried about their agency being degraded when working with AI agents. They worried about no longer being the authors of their work, and no longer being able to stand behind the normative judgments and trade-offs involved. Coders worried about becoming “babysitters” for AI, there only to validate AI outputs and “mark work.”

Even workers with considerable formal control over their workflows worried about their agency being degraded when working with AI agents.

In addition, many want to occupy a role that involves resolving trade-offs and exercising substantive judgement, not merely executing routine tasks. A data analyst drew a sharp distinction between “cleanup work” and the “insights and recommendations” that gave her role meaning. Finally, coders worried about no longer being seen as experts by colleagues, or as the person answerable for the final product. One database manager insisted that “the ultimate output is my responsibility”—what mattered was being able to put her name to something and stand behind it.

These two concerns are interconnected: it is hard to take responsibility for work you did not produce and do not fully understand. What our interviews reveal is that workers’ autonomy concerns track something beyond having options or identifying with their roles. Even workers who formally controlled their workflows, and who were enthusiastic about AI, worried about the erosion of their standing as professionals.

What should we do about this? Protecting worker autonomy requires regulation that goes beyond existing health-and-safety frameworks to address the subtler threat to human agency. This, we argue, provides a distinctive justification for regulating workplace AI—one that existing regulatory frameworks miss. The question we should be asking is not only “will this job disappear?” but “what kind of work will remain—and will workers retain the agency over their own work that makes it worth doing?”

By Kate Vredenburgh and Lauren Wong

Kate Vredenburgh is an Associate Professor in the Department of Philosophy, Logic and Scientific Method at the London School of Economics. She researches questions across the philosophy of social science, political philosophy, and the philosophy of AI. From 2024-2028, she is investigating questions about AI and the future of work, thanks to a UKRI Future Leaders Fellowship.

Lauren Wong is the Category Manager for Commercial VR at Meta Reality Labs, responsible for the commercialisation of Meta's enterprise VR headsets and software.

This article has been published on the LSE Impact Blog.

Image credits: Ariyana Ahmad & The Bigger Picture / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/