The recent alarmist demand for a six-month pause or even a militarily enforced shutdown in AI research – from people with experience, money and influence in the artificial intelligence industry – is founded on some fundamentally flawed thinking that could encourage the very destructive outcome for humanity that its proponents seek to avoid.
That the U.S. government is simultaneously orchestrating a crackdown on the crypto industry, a field of open-source innovation that develops the kind of cryptography and network coordination technologies needed to manage AI threats, makes this an especially dangerous moment for all of us.
The issue is not, in and of itself, that an out-of-control AI could evolve to kill us all. (We all know that. For decades, Hollywood has taught us that it is so.) No, the task is to ensure that the economics of AI don't intrinsically encourage that horrific result. We must prevent concentrated control of the inputs and outputs of AI machines from hindering our capacity to act together in the common interest. We need collective, collaborative software development that creates a computational antidote to these dystopian nightmares.
The answer does not lie in shutting down AI innovation and locking ChatGPT creator OpenAI, the industry leader that has taken the field to its current level of development, into pole position. On the contrary, that's the best way to ensure the nightmare comes true. Not because OpenAI is inherently evil – its professed interest in containing AI's threat to humanity is quite likely genuine and well intended – but because the business model it follows fosters such risks.
We know this from the debacle of Web2, the ad-driven, social platform-based economy in which the decentralized Web1 internet was re-centralized around an oligarchy of all-knowing, data-aggregating behemoths including Google, Facebook and Amazon. This exploitative system will go into overdrive if AI development occurs under the same monopoly-by-default structure. The solution is not to halt the research but to incentivize AI developers to devise ways to subvert that model.
We need a new model of decentralized ownership and consensus governance, one that's built on incentives for competitive innovation but has, within it, a self-correcting framework that drives that innovation toward the public good. Sufficiently decentralized ownership and control could prevent any single party from dictating AI development and ensure that, instead, the group as a whole opts for models favorable to the collective.
Overly utopian? Maybe. But if we're going to forge a common project based on open-source innovation and collective governance, the economic phenomena that look most like what we need are the ecosystems that have sprung up around blockchain protocols.
While OpenAI CEO Sam Altman has not, as yet, joined Tesla CEO and OpenAI investor Elon Musk as one of more than 25,000 signatories to an open letter calling for a six-month pause in AI development, many believe his company would be a direct beneficiary if that letter's demands were implemented. A pause would make it harder for any competitor to challenge OpenAI's dominance, effectively handing Altman's company control of AI development going forward.
We have a choice: Do we want AI to be captured by the same concentrated business models that took hold in Web2? Or is the decentralized ownership vision of Web3 the safer bet? I know which one I'd pick.