OpenAI’s Chess Match with the EU Over AI Act Legislation

In the intricate world of AI regulation, OpenAI, the company behind the popular language model ChatGPT, has been engaged in a high-stakes game of chess with the European Union. The prize? The rules under which AI will operate in the near future. Unearthed documents reveal that OpenAI has been working to massage the forthcoming AI Act into a more favorable shape. The firm’s lobbying efforts haven’t been for naught, either: several of its proposed amendments found their way into the legislation’s final draft.

The “High Risk” Designation Debate

Before the AI Act got the thumbs up, there was a hot debate about whether all general-purpose AI systems (GPAIs), such as OpenAI’s ChatGPT and DALL-E, should fall under the “high risk” category. That classification would bind OpenAI and other GPAI developers to stringent safety and transparency obligations. OpenAI, however, made it clear that it was no fan of the idea, arguing that only companies applying AI to high-risk use cases should be held to those rules.

In a white paper sent to the EU, OpenAI argued that although GPT-3 has capabilities that could be exploited for high-risk use cases, the system itself isn’t inherently high risk. It’s a bit like giving a chef a knife – the tool isn’t inherently dangerous, but its application could be. The problem, of course, is ensuring that everyone wielding the tool does so responsibly.

A Plea for Innovation and a Lack of Regulatory Suggestions

During a meeting with European Commission officials, OpenAI voiced concerns that its AI systems could be slapped with the high-risk label, suggesting that such a categorization would place a stranglehold on AI innovation. However, OpenAI stopped short of proposing alternative rules. It’s a bit like a motorist campaigning against the existing speed limit without suggesting a replacement.

Lobbying Efforts Pay Off, But Transparency Still Required

OpenAI’s lobbying efforts appear to have paid off: GPAIs are not automatically classified as high risk in the final draft of the AI Act. The draft does, however, require companies to conduct risk assessments and to disclose whether copyrighted material was used to train their AI models. OpenAI endorsed the inclusion of “foundation models” as a separate category in the AI Act, yet the sources of its own training data remain shrouded in mystery.

OpenAI’s CEO Sam Altman: A Dance of Duality

Sam Altman, the CEO of OpenAI, has taken a somewhat contradictory stance on regulation. On one hand, he publicly champions regulation and has highlighted the potential dangers of AI. On the other, he has hinted that OpenAI might pull the plug on its operations in the EU if compliance with the region’s incoming AI rules proves untenable. It’s a classic case of wanting to have your cake and eat it too.

OpenAI’s “Trust Us to Self-Regulate” Approach and The Road Ahead for the AI Act

OpenAI’s risk mitigation strategy for GPAIs is, to say the least, ambitious. In its white paper to the EU Commission, the company described its approach as “industry-leading”. But as Daniel Leufer, a senior policy analyst at Access Now, noted, OpenAI is effectively asking to self-regulate: it wants authorities to trust it to handle safety measures, yet shows far less enthusiasm when it comes to setting binding regulatory standards.

So, what’s next? The EU’s AI Act is not yet a done deal. The legislation now enters a final “trilogue” stage, in which the European Parliament, Council, and Commission iron out the details. A final nod of approval is expected by year’s end, but the act itself may take around two years to come into effect. We’ll be watching closely to see how this delicate dance between AI companies and regulators plays out.

Source: The Verge