The Human Cost of Artificial Intelligence: Hidden Labor and Web3 Solutions

AI platforms rely heavily on invisible labor from developing countries, particularly in Africa, Latin America, and Asia. These workers earn meager wages, sometimes under $2 per hour, to review shocking or violent content used to train AI safety systems. These practices have raised global concern and prompted lawsuits demanding ethical reform in the AI industry. A prominent example is the lawsuit filed against Sama and OpenAI in Kenya, which exposed harsh working conditions and content moderators' exposure to horrific material that damaged their mental health (The Verge, 2023; Time, 2023).
The Hidden Face of Artificial Intelligence: Reliance on Human Labor

The artificial intelligence sector, now worth more than $500 billion, is reshaping industries from banking to healthcare. Yet these advanced technologies depend on armies of human workers performing tasks AI systems cannot fully handle: labeling data, filtering harmful content, and correcting machine errors. Without these essential workers, the systems would falter, revealing a stark paradox: AI, for all its progress, risks becoming a new digital frontier for labor abuses. If companies and innovators do not act, the promise of AI may unravel under the weight of its ethical contradictions.
It is tempting to believe that AI systems are self-sufficient, improving themselves through endless loops of data feedback and computation. The reality is more complex: AI does not clean or train itself. The scale of this "hidden labor" economy is staggering. Large freelance platforms employ millions of workers to label data, correct model errors, and purge violent or explicit content. Many of these workers live in Global South countries such as Kenya, India, and the Philippines, supporting an $8 billion industry that fuels the AI revolution (ILO, 2024).

These workers are often highly educated but take these jobs because better opportunities are scarce at home. They sign up believing they will contribute to cutting-edge technology, only to find themselves trapped in digital piecework that can require exposure to deeply disturbing content: child sexual abuse material, hate speech, or graphic images of violence (The Verge, 2023). The pay is low, psychological support for this work is rare, and job security is almost nonexistent, leading to serious mental health problems (Time, 2023).
Lawsuits and Growing Concerns About AI Ethics

Why haven't companies fixed this? Because it is cheap and easy to overlook, though it is becoming increasingly risky. Consumers and regulators have begun to question the ethics of AI supply chains. In 2023, for example, a lawsuit was filed in Kenya against Sama and OpenAI, maker of the popular chatbot ChatGPT, on behalf of former content moderators. The suit alleged that workers faced unsafe conditions and arbitrary dismissal after being forced to process extremely graphic and disturbing material, leaving some with PTSD, anxiety, and depression (The Verge, 2023; Time, 2023).
The AI industry also faces lawsuits over misclassifying workers as independent contractors rather than employees, depriving them of basic benefits and rights. In June 2025, lawsuits in the United States targeted the industry over these practices, including a prominent case against TransPerfect Translations in California for violating wage and hour laws (Independent Contractor Compliance, 2025). The EU AI Act and similar efforts worldwide set new expectations for transparency, fairness, and accountability. Companies that fail to address the human cost of AI risk reputational damage, regulatory fines, or worse: a collapse of trust in the systems they have built.
Web3: A Potential Solution for an Ethical AI Future

Web3 technology may be the solution the AI sector desperately needs. Its core promises of decentralization, transparency, and user empowerment directly address many shortcomings of AI's hidden labor system. Yet these tools remain largely unused by enterprise AI, a clear missed opportunity.
Decentralized Autonomous Organizations (DAOs) offer a way to build genuine transparency and fairness into AI supply chains. Unlike traditional freelance platforms, where decisions about pay, task assignment, and working conditions are made behind closed doors, DAOs make every decision visible. Every vote cast, every rule change, and every payment to a contributor is stored on a public ledger, creating an auditable trail that cannot be altered after the fact. Anyone, from participants to external auditors, can trace who decided what, when, and how. Immutable payment records eliminate the disputes that plague opaque freelance work, while public governance records keep power from concentrating in the hands of a few.
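The mechanics behind such an auditable trail can be sketched in a few lines. The Python example below is a simplified illustration, not a real blockchain: the `PublicLedger` and `LedgerEntry` names and their fields are invented for this sketch. Each entry stores the hash of its predecessor, so any retroactive edit breaks the chain and becomes detectable.

```python
import hashlib
import json
from dataclasses import dataclass

@dataclass
class LedgerEntry:
    """One public record: a payment, a vote, or a rule change."""
    kind: str        # e.g. "payment", "vote", "rule_change"
    data: dict       # entry payload (worker id, amount, proposal, ...)
    prev_hash: str   # hash of the previous entry, chaining the log

    def digest(self) -> str:
        payload = json.dumps(
            {"kind": self.kind, "data": self.data, "prev": self.prev_hash},
            sort_keys=True,
        )
        return hashlib.sha256(payload.encode()).hexdigest()

class PublicLedger:
    """Append-only log: anyone can read it, and tampering breaks the chain."""
    def __init__(self):
        self.entries: list[LedgerEntry] = []

    def append(self, kind: str, data: dict) -> str:
        prev = self.entries[-1].digest() if self.entries else "genesis"
        entry = LedgerEntry(kind, data, prev)
        self.entries.append(entry)
        return entry.digest()

    def verify(self) -> bool:
        prev = "genesis"
        for entry in self.entries:
            if entry.prev_hash != prev:
                return False
            prev = entry.digest()
        return True

ledger = PublicLedger()
ledger.append("payment", {"worker": "annotator-17", "usd": 45.0, "task": "labeling batch"})
ledger.append("vote", {"proposal": "raise minimum task rate", "for": 812, "against": 95})
assert ledger.verify()

# Any after-the-fact edit changes the first entry's hash,
# so the second entry's stored prev_hash no longer matches.
ledger.entries[0].data["usd"] = 2.0
assert not ledger.verify()
```

The same property, replicated across many independent nodes rather than one process, is what lets external auditors check a DAO's payment history without trusting the platform operator.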
Real-world examples are beginning to show what is possible. Some decentralized work platforms let freelancers collectively govern their wage structures and benefits, with all transactions and decisions recorded on-chain for full transparency. Others apply similar principles to research projects and contributors, codifying rules for compensation and project selection in smart contracts, leaving little room for hidden decisions or unfair practices. These models exist and work, but enterprise AI has so far shown little interest in adopting them.

Many corporate AI leaders cling to the idea that ethical supply chains are simply too expensive, a cost incompatible with the margins investors and clients demand. That is a myth Web3 technologies can finally dismantle. Web3's value is not limited to ethics; it offers efficiency gains traditional systems cannot match. Smart contracts automate payments and rewards, shrinking administrative overhead and cutting out intermediaries who add cost without adding value. Immutable blockchain records mean payment disputes, task verification, and contract enforcement happen with far less friction, saving time, legal costs, and operational headaches.
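To make "payment rules as code" concrete, here is a minimal Python sketch of an escrow-style contract. It is an illustration under stated assumptions, not a production smart contract: `EscrowContract`, the two-approval release threshold, and all names are invented for the example. Funds are locked up front and released automatically once enough reviewers approve a task, so no administrator can withhold or renegotiate a payout.

```python
from dataclasses import dataclass

@dataclass
class Task:
    task_id: str
    worker: str
    rate_usd: float
    approvals: int = 0
    paid: bool = False

class EscrowContract:
    """Simplified escrow: the payment rule is code, not policy.

    Funds are deposited when the contract is created, and a payout is
    released automatically once a task reaches the approval threshold.
    """
    REQUIRED_APPROVALS = 2  # assumed threshold for this sketch

    def __init__(self, funds_usd: float):
        self.funds = funds_usd
        self.tasks: dict[str, Task] = {}
        self.payouts: list[tuple[str, float]] = []  # public payment log

    def post_task(self, task_id: str, worker: str, rate_usd: float):
        if rate_usd > self.funds:
            raise ValueError("contract is underfunded")
        self.tasks[task_id] = Task(task_id, worker, rate_usd)

    def approve(self, task_id: str):
        task = self.tasks[task_id]
        task.approvals += 1
        # Release is automatic at the threshold; no human sign-off step.
        if task.approvals >= self.REQUIRED_APPROVALS and not task.paid:
            self.funds -= task.rate_usd
            task.paid = True
            self.payouts.append((task.worker, task.rate_usd))

contract = EscrowContract(funds_usd=500.0)
contract.post_task("t1", worker="annotator-17", rate_usd=40.0)
contract.approve("t1")
contract.approve("t1")
assert contract.payouts == [("annotator-17", 40.0)]
assert contract.funds == 460.0
```

On an actual chain this logic would run as contract bytecode and the payout log would be public by default; the point of the sketch is that disputes shrink when the compensation rule executes itself.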

Web3 is not without flaws, however. Decentralized systems can replicate bias if data and governance are not properly audited. Transparency alone does not make AI decisions interpretable. And DAOs risk plutocracy if voting power drifts toward a wealthy few. Still, the real risk lies in doing nothing. Companies that lead on ethical AI supply chains will not only avoid the coming backlash but earn the trust of customers, regulators, and employees. Those who keep looking away will eventually find that cleaning up the mess costs far more than fixing it now.

Web3 offers the clearest path to cleaning up the hidden AI mess. But the window for voluntary reform is closing rapidly. Companies can either lead this change or be dragged into it when the backlash hits. The choice will not be theirs for long.
