Regulating AI: An Economist’s Perspective on Balancing Innovation and Risks

An Economist’s Guide to Taming AI: Taking the Slow Road

Oh, the irony. OpenAI’s CEO, Sam Altman, frets about the monster he’s been nurturing: the powerful machine learning models we lovingly call “AI.” They’ve worked their way into our daily lives, powering neat party tricks like predictive text and voice assistants, and there’s an increasingly fierce debate about how (or whether) we should regulate these beasts.

Altman, the guy at the helm of one of the largest producers of such models, tells lawmakers we need regulation to curb the risk of powerful models going rogue. But just how much risk are we talking about? Some “AI experts” put the existential risk at around 10%. That’s right, a 1 in 10 chance of utter disaster. Not exactly reassuring, is it? And that’s before we get to the smaller yet significant problems, like lawyers citing phony cases or news outlets publishing fabricated stories.

The Math of Regulation

Now MIT economics professor Daron Acemoglu and graduate student Todd Lensman are trying to bring some science to the regulation debate. They’ve come up with what they call the first economic model for regulating transformative technologies. Their initial hunch? Slow and steady wins the race: a machine learning tax, coupled with sector-specific restrictions on where the technology can be used, could produce the best outcomes.

In their model, a transformative technology is one that can boost productivity in any sector where it’s deployed but, and here’s the caveat, can also bring disaster if misused. They argue for a cautious, slow rollout of new tech, which lets us learn about the potential benefits and risks before the technology is adopted across the whole economy. And if the tech turns out to be riskier than we initially thought, a slow rollout makes the course correction far cheaper.
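To make that logic concrete, here’s a toy back-of-the-envelope sketch. It is emphatically not the Acemoglu–Lensman model itself: every number below (the 10% prior, the per-sector gain and loss, the sector counts) is a made-up assumption. It simply compares the expected payoff of adopting everywhere immediately against a limited rollout that learns the technology’s type before expanding.

```python
# Toy illustration (NOT the Acemoglu-Lensman model): expected payoff of
# adopting a transformative technology fast vs. slow, when limited early
# deployment reveals whether the technology is harmful.
# All numbers are invented assumptions for the sketch.

P_HARMFUL = 0.10   # prior probability the tech is dangerous (the "1 in 10" figure)
GAIN = 1.0         # productivity gain per sector per period if the tech is benign
LOSS = 8.0         # damage per adopting sector per period if the tech is harmful
N_SECTORS = 10     # sectors in the economy
N_SANDBOX = 2      # low-risk sectors allowed under a slow rollout

def fast_adoption() -> float:
    """Deploy in every sector for both periods, learning nothing first."""
    if_benign = 2 * N_SECTORS * GAIN
    if_harmful = 2 * N_SECTORS * -LOSS
    return (1 - P_HARMFUL) * if_benign + P_HARMFUL * if_harmful

def slow_adoption() -> float:
    """Deploy only in sandbox sectors in period 1; that experience reveals
    the tech's type, so in period 2 we adopt everywhere if it's benign and
    withdraw entirely if it's harmful."""
    if_benign = N_SANDBOX * GAIN + N_SECTORS * GAIN   # expand after good news
    if_harmful = N_SANDBOX * -LOSS + 0                # withdraw after bad news
    return (1 - P_HARMFUL) * if_benign + P_HARMFUL * if_harmful

if __name__ == "__main__":
    print(f"Expected payoff, fast adoption: {fast_adoption():+.2f}")
    print(f"Expected payoff, slow adoption: {slow_adoption():+.2f}")
```

Under these invented numbers the slow path wins handily (+9.20 versus +2.00), but shrink the loss or the prior probability of harm and fast adoption comes out ahead, which is exactly the disagreement aired in the sections below.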

Regulatory Sandboxes

The MIT economists propose combining a tax on transformative technologies with limits that confine their use to sectors where the risk of adoption is low. The sector-limiting part resembles a “regulatory sandbox,” a tool commonly used with new technologies, and would delay the adoption of machine learning in high-risk sectors until we understand it better.
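One way to read the tax half of the proposal, again as a hedged sketch with invented numbers and sector names rather than the paper’s actual mechanism, is as a Pigouvian-style levy: if the tax on using the technology is set to each sector’s expected harm, profit-seeking adopters stay out of precisely the high-risk sectors the sandbox would fence off.

```python
# Toy sketch of the tax-plus-sandbox idea (illustrative assumptions only):
# a per-use tax equal to each sector's expected harm makes private adoption
# decisions line up with the social calculus.

P_HARMFUL = 0.10
GAIN = 1.0  # private productivity gain per sector per period

# Hypothetical per-period damage if the tech turns out harmful, by sector.
sector_loss = {"retail": 2.0, "logistics": 4.0, "finance": 12.0, "healthcare": 30.0}

for sector, loss in sector_loss.items():
    expected_harm = P_HARMFUL * loss   # the externality a private adopter ignores
    tax = expected_harm                # Pigouvian-style tax internalizes it
    adopts = GAIN - tax > 0            # firm adopts only if the gain beats the tax
    print(f"{sector:>10}: expected harm {expected_harm:4.1f} -> "
          f"{'adopts' if adopts else 'stays out'}")
```

The appeal of this reading is that the regulator doesn’t have to ban anything outright; pricing the expected externality steers adoption toward the low-stakes sectors first.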

The Case for Accelerating Tech Adoption

But what if we have it all wrong? What if speeding up the adoption of transformative tech increases our knowledge about it, reducing risks instead? The authors aren’t ruling this out and suggest future research should explore this avenue.

George Mason University economist Tyler Cowen suggests that slow adoption might leave us vulnerable to rival nations like China, which could develop less safe or more threatening AI. This accelerationist argument comes up often in these discussions: embrace AI’s risky applications, such as weapon systems, or risk being left in the dust.

But even this point of view concedes that regulation is necessary. Advocates for AI safety argue we should stick to first principles: if AI could be misused for mass surveillance, the U.S. should pass laws to prevent that rather than racing to adopt dystopian tech just to be first in line.

Source: qz.com