JUDY: The question people always ask me is, is AI going to cause huge unemployment? It's the same question I was asked during the microelectronic revolution — the internet revolution, if you want to call it that. We've been through all of these huge technological changes, and somehow, in the popular imagination, there's always a sort of fear that these technologies are going to cause massive unemployment. And somehow, we don’t learn from history.
SOPHIE: Welcome to LSE iQ, the podcast where we ask social scientists and other experts to answer one intelligent question. I'm Sophie Mallett from LSE's research team, and in this episode I ask: Will AI free us from work?
But this month, we’re doing iQ a little differently — and I’ll explain why.
Earlier this year, in the depths of winter, on an incredibly rainy, wet, dark and very London day, I sat down with Judy Wajcman, who we heard from at the start of the episode. She is currently LSE’s Emeritus Professor of Sociology and a leading voice on technology and work.
I was there to ask her one question for a video we worked on this year: is AI really taking our jobs? (More on how you can watch that later.) But we ended up speaking about so much more. Judy gave me a rare look behind the scenes at how tech pitches in Silicon Valley actually go down. She revealed what really saves people time — spoiler alert, it's not tech. And she told me how a senior Apple executive once boasted to her about a tech feature that totally misses the point — and you might be using it every day, because I know I do.
So in today's episode, we're just going to let you listen in on that conversation. Come in out of the rain, into the room where the heating is cranked up. We started with the headlines.
JUDY: You know, the McKinseys of this world — those big consultancy firms — somehow relish putting out reports saying this many million jobs will be automated or lost. It creates panic in the population and gets a lot of traction because it’s simple. It’s just saying, "there will be automation."
But it’s much more complicated than that when you look in detail at what’s really happening with jobs and how they’re changing. People rarely look back at previous technological changes and draw lessons for the current period of time — and there are lots of lessons we can draw.
From my point of view, the very best economists admit this. They speculate about what the possibilities could be, but they conclude that it’s early days and we really don’t know.
The debate tends to take the form of: is it going to be automation in the sense of replacing jobs, or are these technologies going to complement jobs? So it's always this thing about replacement or augmentation — those are the terms that are used.
And when we look historically, what we see is that technological change has brought huge shifts. What happens, as far as I can see through my long history of looking at these things, is that some jobs are replaced, some are changed, let's say augmented, and lots of different kinds of jobs are created. All of those things go on, and at any point in time it's hard to know which of them is going to be the dominant one.
There are lots of examples one can think of. People always point to the fact that banking was automated — but actually, a lot of people in banks found there was a growth in other forms of employment. And that's been the case in lots of industries. Even if we take something like the gig economy and platform work, which we're all concerned about — the conditions of that work are not what they should be — all those kinds of work didn't exist before those platforms existed. So here we have a whole new area of work that we're studying, engaging with, trying to regulate and legislate about. But these are all forms of work that didn't exist in their current form.
One of the growth areas in Britain, and in most places, is that there's a shortage of data scientists, software engineers and machine learning professionals. And all of those jobs are actually new jobs that didn't exist in the form they do now.
So I think we need to think about all of those things — that some jobs are replaced and automated, but I’d say that the vast majority of jobs are changed.
SOPHIE: So Judy’s point here is clear: history shows that technology rarely wipes out work entirely. It reshapes it. But there’s no doubt that some jobs are being replaced.
Just last month, Amazon, one of the richest companies in the world, cut 14,000 corporate jobs and pointed to AI as a reason for the layoffs. Elon Musk, everyone's favourite tech spokesman, has even gone so far as to say: 'there will come a point where no job is needed — you can have a job if you want to, for personal satisfaction, but the AI will be able to do everything.'
So, in my conversation with Judy, I printed out a few quotes as talking points — all from the most famous tech bros.
SOPHIE: Let me know what you think of these quotes.
JUDY: I think it's complete nonsense. The issue is distributing work more fairly. I don't think we're going to have huge amounts of things automated so that we're just going to sit about. I don't even know where to start with this, actually, that's the truth. How can these people, who validate working long hours more than anyone, who say 'we do amazing, creative work, we do genius work, we want to work all the time because the work is so enjoyable and interesting and amazing', think about other people not having access to interesting, enjoyable work? What sort of notion is that?
To feel like you're worth something in society and have an identity, I'm afraid that in the society we live in, having a job, having some work, having a kind of function is incredibly important, and it shouldn't be denied to people. And it's a weird thing for those people to tell us to live differently.
SOPHIE: Judy doesn’t just study these trends from afar. She’s spent years inside the tech world, including time at Stanford. She’s conducted research into Silicon Valley culture itself. So when she talks about the most recent tech hype cycle — and this time it’s AI — she’s speaking from decades of experience.
But the problem for me as a producer is she has so much research. I'd spent a lot of those short, wintry days before the shoot bundled up in layers, wrapped up in Judy's research: watching videos, listening to podcasts and reading interviews, published books, chapters, articles and research papers. There was just a lot of it. But something kept coming up: a recurring focus on design – and who gets to design.
JUDY: My overall concern is that at the moment, a very small number of companies — we know the big five — have phenomenal power and control over the design of technologies. They're making decisions for us about what these technologies will look like. They're making decisions about, for example, a lot of social media. I don't like the word 'addictive', but it is certainly designed to capture our attention. It's absolutely designed to do that – to keep us 'on'. And one can imagine designing a lot of these things so that they weren't designed to do that.
I remember talking to a senior executive at Apple some years ago — Apple's one of the better companies — about the amount of time people were spending on these things. He took me out to lunch and said, "Well, we've got a new feature. We'll tell you how many hours you're spending on it. We'll have this feature now so every week you'll get a report saying how many hours you're spending on it." That was his idea of a solution to this problem. It's like the equivalent of the solution that says parents should have control of social media — as if the responsibility lies with individuals or parents, rather than with the designers of the technology. If there's one thing I've argued for my entire, now very lengthy, career looking at the sociology of science and technology, it is that we absolutely have to focus on design and the designers — not always be put in the position of a consumer making choices about what you're going to have, but absolutely look at who is designing the technology and what they're designing it for.
SOPHIE: So what are they designing it for? When I was travelling to the interview in the dark that morning, surrounded by film kit, cameras, tripods and lights in the back of the cab, looking out the window, there was one thing from Judy's research that I couldn't quite get my head around. In one of her talks, Judy spoke about an obsession with efficiency and optimisation, and about how a lot of the way technology is presented to us is like marketing.
But what does tech have to gain from presenting AI as a threat to our jobs? Like, how does that work as marketing? Surely we’d be less likely to use it if we thought it would replace us?
JUDY: Because the core of productivity is doing more in less time, right? Technology's always sold in terms of making us more productive. That's the case in the workplace — you spend a fortune on technologies to make workers more productive, so you make more money. I don't buy that story at all. But it's a handy story for people marketing those kinds of technologies.
SOPHIE: Can you talk more about that myth of saving time? Because we all know that from our own experience, we don’t have more time. So why does it benefit tech companies to say: ‘this is amazing, you’ll have more time?’
JUDY: When you ask people about the future, what they'd like, they often talk about having fantastic technologies to save time. And the tech companies feed off that by saying, "Oh, you've got a problem with time? We've got the solution. We'll automate your email replies, all of these things in the home, relationships, you won't even have to pick a restaurant anymore. We'll do all of these things for you to save time."
We know that doesn't work. The promise that more technology means more time has never been true. The really wealthy spend a lot of money employing other people to save them time. The best way to save time is to buy someone else's labour that is cheaper than yours.
In the autumn, the big companies do their sales pitch on the latest thing they’ve designed, their latest devices. If you listen to those pitches, they will often be about time.
And part of the presentation, which I find kind of ironic myself, is that often the narrative is that these things will save you time so you can use your own time wisely. It's always: 'These things are mundane.' But which tasks are seen as mundane, and which as valuable? There's a notion that some kinds of activities aren't really worth doing, that it would be better to automate them, and that this will leave you to do things with your time that are better, that are a wiser use of your time. And in that story is an amazing amount of value-laden assumptions about what are good activities to do, what are valuable activities to do, and what are the activities that it would be just as well to automate. We need to question a lot of those assumptions.
One of the things I wrote about in my book, and I was thinking about it a lot because my mother at the time was still alive but in a nursing home, and there was a lot of discussion then about using AI assistants in homes. She had a bit of dementia; this is a common thing for people my age. And I was writing about time at the time, one of my topics. I remember observing lots of people, particularly daughters, sitting in the nursing home like I was sitting there, not doing a lot but just kind of giving time, I don't know what you want to call it, just being there, which is using time in a way. That kind of "being there" time has got absolutely nothing to do with technology and can't be automated. It's incredibly precious time, but how does one even start thinking about what one would do with that time? It's a kind of time we don't see much discussion of.
I was at Stanford once, at a little workshop that was literally called "Coding for Care", believe it or not. At lunchtime, which is often the case for these sorts of things at Stanford, people come and do pitches for their products, and we had a series of people who came to pitch something. I remember these guys pitching basically a surveillance technology. Of course they didn't talk about it like that. They said they could be sitting in Palo Alto, or wherever it was at their company, and watching what their mother was doing at home, because they had fantastically good technology in her house, and so if she fell over or something they would immediately be able to see what was happening in the home. And a lot of people sitting there said: 'This is kind of a surveillance technology. Are you going to look at her in every room in the house? Even in the bathroom?' They were raising questions about this, and also about the fact that it was a completely individualised model of care. What about collective care? Is the only model that older people will be on their own in a house? Might there not be a better collective solution to this problem?
So even with something like that, it's about how you think about the problem, how you conceptualise the problem, before you think about what kinds of technologies would be solutions to it. I think a lot of the technologies we get at the moment are, in a way, looking to solve something that doesn't exist, or that shouldn't exist in the way that it currently does. They're looking for applications: there are technologies with capacities, so let's look for applications for them, rather than starting with genuine social needs and social problems and then designing the technologies in relation to those.
SOPHIE: Judy's point is really striking. So much of what we see in tech isn't about solving real social problems. It's about finding uses for capabilities that already exist. But that does not mean she dismisses AI altogether. In fact, she saw enormous promise in some areas, and that's where our conversation turned next.
JUDY: This technological revolution will do amazing things. Absolutely, do I want cancer diagnosis to improve? Do I want AI to be used for scans, where apparently it's much more accurate than humans? There are all kinds of medical diagnostic and energy applications, myriad ways in which these technologies will improve our lives. You know, absolutely.
My worry is that, you know, people think about these technologies as being the same thing and don't think about: ‘well, this technology is terrific for this. It'll be terrific for diagnostics. You know, it will be terrific for measuring energy.’
It won't be terrific for educating kids at school. I really don't think it will be terrific for that. I don’t think it will be terrific as companions, as substitute companions for us rather than having communities. I don't think it will be terrific to have automated robotics in nursing homes substituting for nurses. So we need to have a much more nuanced conversation. And that's always been the case with technology, you know, and I've always sort of said, as a labour scholar, I mean, what kinds of work do we want automated and what kinds of work don't we want automated?
You know, we wouldn't actually want to put our kids into a nursery knowing that it was only machines that were interacting with them. Nobody would think that was a great idea. But actually that is the end point of thinking that these AIs will be kind of companions. So I think we really need to think more carefully about what we want to use the technology for.
SOPHIE: What I took from this is that AI is not some homogenous monolith that we can describe as good or bad. It's a set of tools with very different strengths, different limits.
There's huge potential in areas like healthcare and energy, but I take her point that we should be cautious about assuming it can do everything. And that's why I wanted to put a bold claim to her. We were absolutely running out of time on the shoot, because the intense wind and rain attacking the windows was starting to get in the way of our sound, but I just had time to hand her another printed-out quote.
It was from Geoffrey Hinton, someone often called ‘the godfather of AI’, who in 2025 said: ‘Almost everybody I know who's an expert in AI believes that they will exceed human intelligence, it’s just a question of when.’ You’re shaking your head.
JUDY: I'm shaking my head. I mean, there's been a lot of discussion, again by the kind of tech bros, if I can call them that, about artificial intelligence getting to the point where it will be more intelligent than human beings. And actually, it's already kind of out of date, because the limits of these technologies are all too apparent to all of us.
At the moment there are problems with even designing algorithms that aren't gender-biased, race-biased, age-biased, culturally biased. There are so many problems with a lot of these technologies. There are so many so-called hallucinations, as they call them, which is to say these technologies are actually not accurate, so it's far from any notion of human intelligence. And then there's the notion of human intelligence itself. You know as a sociologist that this has been debated forever. What is intelligence? How culturally specific is it? A lot of the technical discussion sees intelligence as something that is individual rather than collective, when in fact it's something that we do together. And it's not simply functional: there are lots of different kinds of intelligence, like psychological intelligence and emotional intelligence.
And I always say to people, actually, these are not the people I think are the most intelligent in the world to be telling me that these machines are going to be smarter than us.
SOPHIE: If you'd like to watch Judy's video on whether AI is really taking our jobs, head to YouTube or any of our social channels. This episode was written and produced by me, Sophie Mallett, with script development by Anna Bevan and editing by Mike Wilkerson.
And if you enjoy iQ, please leave us a review to help others discover the podcast. Join us next month when Charlotte Kelloway asks, "Will the next world war be a cyber war?"
Will artificial intelligence cause huge unemployment? Will it free us from working? Will it replace us? In this special edition of LSE iQ, Sophie Mallett sits down with Professor Judy Wajcman, LSE's Emeritus Professor of Sociology and one of the world's leading voices on technology and society. Together, they explore one of the biggest questions of our time: what does artificial intelligence really mean for the future of work?
In this wide-ranging conversation, Judy shares what really saves people time, talks about the fear of job replacement, and warns of the dangers of letting the most powerful tech companies design the future.
From Silicon Valley boardrooms to everyday lives, Judy challenges us to think differently about progress, productivity, and what we truly value as work.
Contributors: Judy Wajcman
Research links:
From connection to optimisation
Feminism confronts AI: the gender relations of digitalisation
LSE iQ is a university podcast by the London School of Economics and Political Science.
Take the listener survey for the LSE Phelan US Centre's podcast, The Ballpark, and enter the prize draw for £250 in vouchers!
The LSE Phelan US Centre's podcast, The Ballpark, will be ten years old in 2026, and they want to hear from you to make it even better. The survey only takes 10-15 minutes, and you'll have the chance to enter a prize draw to win £250 in vouchers.
The Ballpark brings academic commentary to a wide audience, including students, policymakers and a global community of academics. Recent highlights include The US' changing relationship with NATO and Europe with Dr Celeste Wallander and an ongoing mini-series covering topics including AI and the workplace and the US-China AI race.
- Fill in the listener survey – it only takes 10-15 minutes – here: https://forms.office.com/e/Vcj8V8uGM1
- Voucher prize draw terms and conditions are available here: