Regulators should apply healthy skepticism to generative AI developments to ensure a competitive marketplace.

Steve Weber, Professor of the Graduate School, UC Berkeley School of Information

May 12, 2023


Generative artificial intelligence (AI) is developing at breakneck speed. After a couple of months of unalloyed enthusiasm, crucial questions about accuracy, bias, security, and regulation are now surfacing. Recently we've seen officials in Germany and Italy scrutinize or outright ban ChatGPT over security and privacy concerns. US regulators are moving toward a similar healthy skepticism.

A blanket regulation on particular applications of AI models might appeal to some as a way to constrain markets, but as Bill Gates recently said, "it won't solve [its] challenges." A better way for regulators to ensure that AI is developed and deployed safely, openly, and to real-world benefit is to keep markets robust by scrutinizing AI partnerships that lack transparency and other arrangements that seek to prevent fair competition. The standard to strive for is an innovative, transparent, and competitive marketplace that can bring life-changing technology to the masses in safe and responsible ways.

Microsoft's Footprint

The place to start is with the partnership between Microsoft and OpenAI. Microsoft grasped the potential of OpenAI's work long before the resounding success of ChatGPT's public launch. But Microsoft's 2019 deal with OpenAI was not a conventional financial investment. Instead, the initial billion dollars from Microsoft largely came in the form of Azure credits, a de facto subsidy that led to OpenAI being built on Microsoft's cloud, exclusively and rent-free.

This unusual partnership has created deep ties between Microsoft's and OpenAI's technology infrastructures and sets a clear path toward a technological walled garden. The agreement presents pressing questions for regulators: Should this partnership be viewed as a deft move to create a de facto subsidiary while avoiding antitrust scrutiny? If so, should the Federal Trade Commission step in immediately to examine the impact on the competitive landscape? And is telegraphing a walled-garden strategy enough to warrant investigation, and potential action, by regulators today to forestall future harm?

History suggests the answer to these questions should be yes. Digital technology over the past 40 years has followed a predictable cycle: a long period of slow, incremental evolution culminating in a threshold moment that changes the world. That pattern produced the World Wide Web in the 1990s and mobile phones in the 2000s, and it is playing out again today with AI. As AI enters a new phase of broad adoption and revolutionary applications, the biggest risk the technology itself cannot solve will very likely be anti-competitive business practices.

History also shows what will likely happen if regulators stand by. Large, first-mover firms will try to lock up foundational technologies and use market power to create long-term advantage. Microsoft wrote the playbook by bundling Internet Explorer into Windows, and it now appears ready to rerun that familiar play.

Equal Terms

If OpenAI can't run its most advanced models efficiently on non-Microsoft platforms, society will lose out. We want foundational technologies to be available on equal terms to innovators large and small, established and otherwise. We want companies to succeed wildly by using and building on foundational technologies, on the premise that innovation and competition create previously unimaginable products that benefit customers and society at large. We don't want one company serving as gatekeeper and hoarding foundational technology to limit innovation from competitors. More importantly, if we let a Microsoft AI walled garden be built, are we inviting other AI walled gardens to quickly follow (an Oracle walled garden, a Meta walled garden, a Google walled garden), limiting interoperability and stunting innovation? This is precisely the scenario that modern antitrust policy aims to prevent.

An optimist might object to this argument and point out that the early pathway of a foundational technology is notoriously hard to foresee. No one can prove at this moment that new entrants and open source alternatives won't erode OpenAI's lead or even pull ahead. But if that hopeful view turns out to be wrong, going back to undo the damage will be difficult, bordering on impossible. Hoping for the best isn't a good strategy in antitrust, any more than it is elsewhere.

Modern innovation often requires massively ambitious bets. It's one thing for a monolithic firm to invest billions in a startup with long-term research and development programs. It's another thing entirely to shape that investment into a captive relationship with a just-emerging foundational technology whose applications could define the innovation environment for decades.

Regulators are right to question the policies that guide AI ethics, fairness, and values. But one of the most effective ways to advance those goals is to ensure a broad, diverse, and competitive marketplace in which the key foundational technologies are open for equal access. That means taking steps now to prevent "walled gardens" from being built in the first place. Rather than scrambling for a cure too far down the road, regulators should step in now and ensure the Microsoft-OpenAI partnership isn't simply anti-competitive activity in clever disguise. Otherwise, a single company's profit could be set up to prevail over what promises to be a world-changing threshold moment.

About the Author(s)

Steve Weber

Professor of the Graduate School, UC Berkeley School of Information

Steve Weber works at the intersection of technology markets, intellectual property regimes, and international politics. He has published numerous books, including The Success of Open Source and, most recently, Bloc by Bloc: How to Build a Global Enterprise for the New Regional Order, and serves as Professor of the Graduate School, School of Information, UC Berkeley. He has worked with and received research funding from a number of technology firms, including Google and Microsoft.

