Your Boss is a Machine: Protecting Worker Autonomy in an AI-Driven Economy
What is worker autonomy, and why is it morally valuable? How can AI hinder or promote worker autonomy? This project investigates questions of autonomy, good work, and the future of work in an economy permeated by AI.
Project Description
Funder: UKRI
Project Leader: Dr Kate Vredenburgh
Project Timeline: September 2024 – August 2028
Worker autonomy is under threat from AI. As AI becomes more general, and thus more autonomous, in the coming decades, it could replace tasks, amplify human skills, or produce economic and moral deskilling. The first two can enhance worker wellbeing, but economic and moral deskilling threatens to seriously undermine worker autonomy, especially for early-career workers.
Societies are at a critical juncture where recent advances in AI could enhance or seriously harm worker autonomy. There are, however, no obvious solutions to this threat. Moral theories of worker autonomy and social-scientific measurement instruments were developed before the era of AI, and there are major gaps in the regulation of algorithmic management.
Drawing on tools from philosophy, the social sciences, and law, the project develops a new moral theory of worker autonomy that can address the challenges posed by AI. The framework will be grounded in novel research on the impact of AI on worker autonomy in the UK, and will generate metrics to measure worker autonomy and evaluate regulatory interventions. This work will advance our understanding of worker autonomy and of legal instruments to promote it, alongside our ability to measure it.
A More Egalitarian Future of Work?
AI offers us an opportunity to rethink how we ought to work in the future. It has forced researchers and policymakers to confront facets of work beyond levels of employment, such as the quality of jobs, the precarity of work, and managers’ power over workers. But an important moral lens is often missing from these debates. AI is an opportunity to strive for more egalitarian workplaces, labour markets, and societies, both domestically and globally.
Will AI replace coders… How to read the next AI jobs hype cycle
The threat of AI replacing jobs in any number of sectors is often just over the horizon. Drawing on the history of frontier technologies and a series of interviews with coders under threat of AI replacement, Kate Vredenburgh and Lauren Wong set out what we should be asking about how AI is changing work.
Suspicious Minds Podcast
Kate Vredenburgh has been featured on multiple episodes of the Suspicious Minds Podcast, a documentary series that investigates the disturbing rise of AI as a trigger for delusional thinking.
Marco Meyer
Marco Meyer is the principal investigator of a research group at the University of Hamburg, which investigates topics in organizational and social epistemology. His research draws on philosophy, economics, psychology and data science to address ethical and political issues. He works on topics across political philosophy, social epistemology, and the philosophy of AI.
Lauren Wong is a digital product and commercial strategy leader with deep expertise in virtual reality and emerging technology. Lauren spent six years at Meta Reality Labs driving the global commercialization of VR across both consumer and enterprise markets. Most recently, she served as Category Manager for Commercial VR, where she built Meta's B2B SaaS model for enterprise VR and scaled the commercial VR business across 22 markets worldwide. Prior to Meta, Lauren led Strategy for Cartier's Retail Innovation Lab, where she focused on leveraging emerging technologies to create next-generation customer experiences. Lauren holds an MBA from New York University's Stern School of Business and a BA in Architecture with High Honors from Princeton University.
Vredenburgh, K. (2026). “Egalitarianism and the Future of Work.” In Contemporary Debates in the Philosophy of AI. Wiley-Blackwell: 229–243. DOI: 10.1002/9781394258840.
Vredenburgh, K. (2025). “Fairness and randomness in decision-making: the case of decision thresholds.” Synthese 206, 4. https://doi.org/10.1007/s11229-025-05091-7