According to a famous anecdote, when mechanically produced stockings were presented to Queen Elizabeth I in the 16th century, she rejected them, saying: “I have too much love for my poor people, who earn their bread by knitting, to give my money to promote an invention which will tend to their ruin by depriving them of employment”. Governments, politicians, rulers, and leaders throughout history have not always embraced technological innovation as a means of progress. Many 18th and 19th-century theorists argued that some technologies were designed to destroy jobs and make people poorer. Ricardo famously observed that machines could lead to a general decline in the welfare of the working class. Today we see the exact opposite, says Antonio Casilli, professor at the Polytechnic Institute of Paris. On his second day in office, Trump invited Sam Altman to the White House to present Stargate, a massive new AI infrastructure venture. China responded with DeepSeek, released at a fraction of the cost, yet the billions keep piling up in public discourse as companies compete over who will sit in the driver’s seat of the so-called AI revolution. And as Elon Musk takes on governmental bureaucracy, we see an alignment between governments and technology moguls that has never been observed before.
Casilli, whose new book “Waiting for Robots: The Hired Hands of Automation” has just been released by the University of Chicago Press, presented the findings of his research at the European Parliament last November. With his Digital Platform Labour (DiPLab) initiative, an interdisciplinary research group that looks into the labour behind AI, he recently published the first Policy Memo on the human cost of DeepSeek. The group argues that behind the buzz about Chinese progress, achieved at a fraction of the billions of dollars invested in the US, lies the hidden labour of millions of low-paid workers. For Casilli, the political economy of AI is the same whether in tier-three cities in the Chinese mainland or in the global south, where Microsoft and Amazon host their data factories. “Human robots” are behind the term artificial intelligence, says Casilli.
Q: You claim that the argument about DeepSeek's cost efficiency versus OpenAI is misleading. Can you tell us why you believe that?
A: The way we — my research team, DiPLab — understand AI is as a global market for a new type of labour. You have a front office with a few very specialised and highly paid engineers and a back office with millions — by some estimates, hundreds of millions — of people doing largely unrecognized, poorly paid work. We've studied extensively the geographical distribution of the databases used to train AI. These are all over the Northern Hemisphere: around Palo Alto (Meta), Virginia (Amazon), Texas (OpenAI), and in China around universities like Beijing University, which sit close to Chinese AI companies. Those companies create partnerships with the universities to access their data for training new models, which is what happened with DeepSeek.
Ongoing human labour for moderating prompts and evaluating outputs remains hidden in the accounting because it isn't classified as capital investment. So DeepSeek benefited from what the Chinese government provided: big data, university connections, and a huge workforce ready to annotate and prepare training data.
Q: Can you share a particular case study?
A: Machine learning requires teaching machines to recognize or reproduce human work. There are notable data annotation companies in China like DataTang and DataOcean. China has created data annotation hubs in peripheral areas, such as Haikou, a city on an island in southern China between Hong Kong and Vietnam. These hubs employ people to produce data for AI companies in coastal cities like Shenzhen, also known as the Chinese Silicon Valley, not far from Haikou. China's approach differs in that it mainly uses poorly paid data annotators from within the country, whereas OpenAI recruits these data workers in low-income countries in the Southern Hemisphere. China instead focuses on poorer “tier three” cities with slower economic growth, where people receiving poverty subsidies do data annotation at subsistence level. DeepSeek has benefited from this environment.
Q: However, this is not just the Chinese way of doing things. Where else do you conduct research and how large is the annotation industry?
A: With my colleagues at DiPLab, we work in countries of the global South like Kenya, Madagascar, Egypt, and Venezuela. We've documented disturbing practices, such as young women in Egypt annotating data for Chinese facial recognition systems, or people in Kenya filtering toxic content for OpenAI. In Venezuela, entire families work producing data, often using old computers distributed during the Chávez era. Globally, AI firms exploit cheaper labour in regions with weak labour protections—Kenya, Venezuela, and the Philippines—while wealthier nations host the corporations profiting from this system. Even India and China occupy dual roles as both exploiters and exploited. In 2021, the Oxford Internet Institute estimated that there were 163 million online platform workers, while the World Bank's 2023 report suggested 154-435 million global gig workers (6-12% of the global workforce). While precise numbers are debatable, the trend shows consistent growth. China even mandates 20% annual growth in data annotation through two different government directives. But then think about the market values: Stargate alone represents $500 billion, exceeding Elon Musk's wealth, and covers data centres, energy costs, data acquisition, and labour.
Q: You criticize the narrative that AI fosters productivity and social good, arguing it worsens inequality. Can you elaborate?
A: Digital labour involves three main families. First, there's the visible gig economy, like Uber drivers, who produce data while working and are poorly paid for on-demand work. Uber workers are a striking case because only half their time is spent driving. The rest involves interacting with an app that generates data used for various purposes, including training AI and developing self-driving car technologies. These drivers help teach autonomous vehicles by annotating visual data — identifying objects like trees, cars, and people crossing streets.
Second, there are the less visible data and crowd workers who annotate data for AI, earning extremely low wages — around 90 euros monthly in Madagascar or 400 euros in Kenya. Third, there's unpaid work by users, like completing reCAPTCHAs or providing feedback on AI outputs, which is essentially uncompensated digital labour that others are paid for elsewhere.
So, Jeff Bezos coined the term “artificial artificial intelligence” in 2005, revealing the irony of platforms like Amazon Mechanical Turk, where workers earn cents per task. Our 2024 EU Parliament study (https://hal.science/hal-04662589v1/) exposed entrenched inequalities, with data annotation work concentrated among precarious groups—migrants, women, and workers in Southern European countries. These roles typically mirror migration and colonial patterns: Turkish workers in Germany, Francophone Africans in France, and South Americans in Spain and Portugal. All AI solutions require some degree of human manipulation and operation, often done remotely. We meet people who "do the machine" — impersonating machine intelligence and simulating chatbots. We've worked with the European Parliament to bring these back-office workers from distant continents to Brussels, making their presence known to policymakers.
Q: You title your book after Samuel Beckett’s Waiting for Godot. The usual mantra when a new technology appears is that it will be a cataclysmic event for many professions. In the case of AI, should we wait for the job-replacement predictions to come true?
A: OpenAI researchers published a paper six months after launching ChatGPT, announcing that 46% of jobs would be impacted or replaced by AI. This mirrors predictions that economists have been pushing since 2013, when an Oxford study predicted that AI would replace 47% of US jobs by 2030. Now we're almost in 2030, and despite COVID-19, the climate crisis, and geopolitical crises, people are not being replaced by AI. So the problem is not that “Godot” is not coming. The issue is that somebody has a vested interest in keeping us waiting for some messianic technology to show up as a salvation or an extinction event.
Q: Can initiatives like the EU's open-source LLM counter the risk of becoming a “digital colony” of U.S. tech monopolies?
A: While the EU's push for homegrown AI addresses valid concerns about dependency, replicating the scale of U.S. or Chinese models demands massive investment and labour. Success depends on addressing the underlying inequalities — cheap labour, energy costs, and data extraction. Without structural changes, Europe risks replicating the same exploitative patterns it seeks to avoid.
Images from the Wikimedia Foundation