In the magical realm of artificial intelligence (AI), there’s a gremlin hiding under the glamour – the human grunt work of training, rating, and checking these algorithms. Companies like Google and OpenAI would have us believe that their AI systems mostly spring forth from enormous data sets. However, another story lurks in the shadows: the story of AI raters – the real people who, often under grim working conditions, make sense of AI’s messy outputs so the machine can appear to write like a human or predict the next word accurately.
The Illusion of Intelligence
One of the great paradoxes of AI is the human effort that underpins its illusion of intelligence. AI programs, despite their wizard-like capacities, lack the intuition and context that humans inherently possess. They can generate text like a seasoned author or produce a chocolate cake recipe, but they can’t evaluate their own outputs. For that, they rely on the humans who work as AI raters.
Downplaying the Human Element
AI pioneers, such as Google, Facebook, and OpenAI, have deployed human raters for nearly a decade to refine their AI algorithms. However, the human element in AI is often downplayed or outright ignored in their official narratives, as acknowledging human intervention risks shattering the illusion of machine intelligence. This selective amnesia also conveniently overlooks the less savory aspects of AI such as misinformation, hateful content, and labor exploitation.
A Miserly Reality
Despite their pivotal role, AI raters often face harsh working conditions for minimal pay, a story that mirrors the plight of social-media content moderators. Stress, low pay, inconsistent tasks, and tight deadlines characterize their working lives, and the surge in AI training data only amplifies these issues. Moreover, the tech industry has a history of swift retaliation against workers daring to advocate for better pay or working conditions.
The Discrepancy of Value
Ironically, while AI raters’ labor is systematically undervalued, the industry itself is booming: one RLHF and data-annotation company was valued at over $7 billion in 2021, and the market for labeling data used to train AI models is forecast to reach nearly $14 billion by 2030. Yet this flood of money rarely trickles down to the workers at the heart of this ghost work.
Source: The Atlantic