Stepping into the AI Information Era
We’re stepping into a new era in which artificial intelligence is not just taking over mundane tasks but also becoming a significant creator of online content. Unfortunately, its output isn’t limited to useful insights or amusing stories. According to a recent study, AI is proving to be a more convincing manufacturer of disinformation than humans are.
The Power of AI in Disinformation
The study, conducted by Giovanni Spitale at the University of Zurich, found that people were 3% less likely to spot false tweets generated by AI than false tweets written by humans. Though this gap seems marginal, it is enough to raise alarm bells, given the growing problem of AI-generated disinformation.
Spitale shared his concerns: “The fact that AI-generated disinformation is not only cheaper and faster, but also more effective, gives me nightmares.” He predicts that this credibility gap could widen even further as more advanced models, such as OpenAI’s GPT-4, come into use.
The Experiment
The research team focused on topics that frequently attract disinformation, such as climate change and COVID-19. They asked OpenAI’s GPT-3 to generate both true and false tweets and compared these with a random sample of real tweets. When quizzed on each tweet’s accuracy, participants struggled more to spot the falsehoods in the AI-generated tweets.
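To make the design concrete, here is a minimal sketch of how recognition accuracy in such a quiz might be tallied across conditions (AI versus human author, true versus false content). The data and field names are hypothetical illustrations, not taken from the study.

```python
# Hypothetical sketch of scoring such a quiz. Each record notes a tweet's
# condition (AI or human author, true or false content) and whether the
# participant judged its accuracy correctly; field names are illustrative.
from collections import defaultdict

responses = [
    {"source": "ai",    "truthful": False, "judged_correctly": False},
    {"source": "human", "truthful": False, "judged_correctly": True},
    {"source": "ai",    "truthful": True,  "judged_correctly": True},
    # ...one record per participant-tweet pair
]

tally = defaultdict(lambda: [0, 0])  # condition -> [correct, total]
for r in responses:
    key = (r["source"], r["truthful"])
    tally[key][0] += int(r["judged_correctly"])
    tally[key][1] += 1

for (source, truthful), (correct, total) in sorted(tally.items()):
    kind = "true" if truthful else "false"
    print(f"{source}-written {kind} tweets: {correct}/{total} judged correctly")
```

A gap like the 3% reported in the study would show up here as a lower recognition rate for the AI-written false tweets than for the human-written ones.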
Why Is AI So Convincing?
Why are we more likely to fall for AI-written disinformation? The researchers suggest that the structured, concise nature of AI-generated text may make it easier for readers to process. As powerful AI tools become increasingly accessible, the risk of their being misused in disinformation campaigns rises, while the main countermeasure, AI text-detection tools, remains immature and far from fully accurate.
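To illustrate what such a detection tool looks like in practice, here is a minimal sketch that assumes Hugging Face’s transformers library and OpenAI’s publicly released GPT-2 output detector. The detector predates GPT-3, and its verdicts on short texts like tweets are known to be unreliable, which is exactly the limitation described above.

```python
# A minimal, hypothetical sketch of an AI text-detection tool, using
# Hugging Face's transformers library and OpenAI's publicly released
# GPT-2 output detector. Scores on short inputs like tweets are noisy,
# so a verdict here is a weak signal, not a reliable judgment.
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",
)

tweet = "Scientists confirm the new vaccine permanently alters human DNA."  # illustrative input

result = detector(tweet)[0]
# The model returns a label ("Real" or "Fake") with a confidence score.
print(f"{result['label']} ({result['score']:.2f})")
```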
OpenAI’s Stance
OpenAI, the creator of GPT-3, recognizes that its AI models could be misused for disinformation campaigns, which is strictly against its policies. In a report released earlier this year, it acknowledged that it is “all but impossible to ensure that large language models are never used to generate disinformation.”
However, OpenAI also calls for a balanced perspective on the impact of disinformation campaigns, emphasizing the need for more research to understand the populations most susceptible to AI-generated false content.
A Call for Vigilance
Jon Roozenbeek, a postdoctoral researcher at the University of Cambridge who studies misinformation, echoes this sentiment, saying it is too early to panic. Even if AI can generate more persuasive disinformation, hurdles such as platform moderation and automated detection systems can still limit its spread.
In a world where artificial intelligence is becoming an integral part of our daily lives, it’s crucial to stay vigilant and critically analyze the information we consume, whether it is human- or AI-generated.