Meet Bob. He’s your everyday guy who spends hours filling out surveys on crowd-work platforms like Amazon’s Mechanical Turk. He’s part of a vast, invisible workforce that’s helping to shape our understanding of human behavior. But Bob has a secret weapon: ChatGPT, a generative AI tool that’s been his scribe, pouring out its digital heart on his behalf.
Bob’s not alone. From students to office workers, coders to dungeon masters, people are turning to AI tools like ChatGPT to optimize their work. It’s a trend that’s stirring up both applause and suspicion.
The AI Controversy in Crowd Work
Crowd workers are the latest group to face accusations of using AI as a shortcut. Some platforms are now adopting policies or technology designed to deter or detect the use of large language models like ChatGPT. But the issue isn't black and white: heavy-handed enforcement risks unfairly burdening workers who already face precarious conditions.
The AI Detection Race
Companies like CloudResearch are developing in-house ChatGPT detectors, while others argue that the onus should be on researchers to establish trust. But here's the kicker: AI detection tools are often inaccurate, and they may only grow less effective as text generators keep improving.
The Future of Crowd Work in the AI Era
The rise of AI in crowd work could have significant implications. It could shrink the volume of crowd work as researchers reconsider which kinds of studies they run online. It could also distort our understanding of the world if tainted data makes its way into published research, and could even warp future AI systems trained on that data.
But let’s not forget the human element. As Justin Sulik, a cognitive science researcher, puts it, “Building trust is a lot simpler than engaging in an AI arms race with more sophisticated algorithms to detect ever more sophisticated AI-generated text.”