Approaching AGI: Understanding Potential Dangers and Preparing for Advanced AI Threats

Oh, the Humanity: AI Looms as a Potential Existential Threat

The Center for AI Safety (CAIS) has issued a statement warning that artificial intelligence (AI) poses an extinction risk on par with nuclear war and global pandemics. Even Sam Altman, head honcho of ChatGPT creator OpenAI, has signed on. So, let’s take a look at the potential issues that have AI experts shaking in their boots.

In the Red Corner: AI Takeover

One potential risk is that AI could slip out of its creators’ control. Artificial general intelligence (AGI) refers to AI that is as smart as or smarter than humans across a wide range of tasks. Current AI systems, such as ChatGPT, don’t meet that bar; they’re built to make users feel like they’re chatting with a fellow human being. You know the drill: a virtual BFF.

Experts are divided on how to define AGI, but they agree that this potential technology presents dangers to humanity that need to be researched and regulated. David Krueger, an AI expert and assistant professor at Cambridge University, highlights military competition between nations as the most obvious example of these dangers.

A total-war scenario fought with AI systems smarter than people could easily spiral out of control and might end up wiping out humanity. Sounds like the plot of a dystopian sci-fi novel, doesn’t it?

In the Blue Corner: AI-Induced Mass Unemployment

There’s a growing consensus that AI poses a threat to some jobs. Abhishek Gupta, founder of the Montreal AI Ethics Institute, sees the prospect of AI-induced job losses as the most “realistic, immediate, and perhaps pressing” existential threat.

Losing jobs en masse means people lose their sense of purpose. Work might not be everything, but it does occupy a significant chunk of our lives. CEOs are getting candid about leveraging AI, too: IBM’s Arvind Krishna has announced plans to slow hiring for roles the technology could replace. Bold move, Arvind.

In the Green Corner: AI Bias

If AI systems are used to make wider societal decisions, systemic bias becomes a serious risk. There have been examples of bias in generative AI systems, such as early versions of ChatGPT; OpenAI has since added guardrails to help the chatbot avoid problematic answers.

Generative AI image models can produce harmful stereotypes, as demonstrated by tests run by Insider earlier this year. Undetected bias in AI systems making real-world decisions could have grave consequences, according to Gupta.

Training data is often predominantly in English, and funding to train AI models on other languages is limited. Janis Wong of The Alan Turing Institute points out that this imbalance leaves models less effective in some languages than in others.

So, what’s the upshot of all this AI doom and gloom? As humanity races towards a future filled with AI, it’s crucial to be aware of the potential hazards and address them through research, regulation, and collaboration. After all, we wouldn’t want to become a cautionary tale in some future robot’s history books.
