theregister.com

Ex-Googler Schmidt warns US: Try an AI 'Manhattan Project' and get maim'd

ANALYSIS Former Google chief Eric Schmidt says the US should refrain from pursuing a latter-day "Manhattan Project" to gain AI supremacy, as this will provoke preemptive cyber responses from rivals such as China that could lead to escalation.

Schmidt is one of three co-authors of a paper that likens artificial intelligence to nuclear weapons during the Cold War, and warns that the race to develop increasingly sophisticated AIs could disrupt the global balance of power and raise the odds of great power conflict.

The paper, "Superintelligence Strategy," posits that rapid advances in AI are poised to reshape nearly every aspect of society, but that governments see them as a means to military dominance, which will drive a "bitter race" to maximize AI capabilities.

The paper claims that the development of a "superintelligent" AI surpassing humans in nearly every domain would be the most precarious technological advancement since the atomic bomb.

The other authors are Dan Hendrycks, director of the Center for AI Safety, and Alexandr Wang, founder and CEO of Scale AI.

The unstated assumption is that the US is the country that will lead the way in AI development, while China would be the fearful aggressor making some kind of preemptive strike. It doesn't seem to have occurred to the authors that China recently surprised the world with AI capabilities it was not thought to possess, or that most cyber threats come from Russia.

Another unstated assumption is that "superintelligence" is actually possible at all, and not just some pipe dream.

Any state that succeeds in producing a superior AI poses a direct threat to the survival of its peers, and the paper's authors assert that states seeking to secure their own survival will be forced to sabotage such destabilizing AI projects as a form of deterrence. This might range from covert operations to degrade training runs to outright physical attacks disabling AI infrastructure.

Thus, the paper states, we are already approaching a dynamic similar to nuclear Mutual Assured Destruction (MAD) – such as the state of détente that developed during the Cold War – in which no power dares attempt an outright grab for strategic monopoly, as any such effort would invite a "debilitating" response.

Schmidt and company christen this Mutual Assured AI Malfunction (MAIM), a combination of words that seems likely to have been chosen for the sake of the acronym. Under MAIM, they posit, state-run AI projects are constrained by mutual threats of sabotage.

Yet AI technology also has the potential to deliver benefits across numerous areas of society, from medical breakthroughs to automation. Embracing AI's benefits is important for economic growth and progress in the modern world, the authors believe.

States grappling with the challenges can follow one of three strategies, according to the paper. The first is a hands-off approach with no restrictions on AI developers, chips, or models. Proponents of this strategy insist that the US government impose no limitations on AI companies, lest they curtail innovation and give China an advantage.

The second is a worldwide voluntary moratorium strategy to halt further AI advances, either immediately or once certain hazardous capabilities, such as hacking or autonomous operation, are detected.

Third is a monopoly strategy, where an international consortium along the lines of CERN in Europe would lead global AI development.

After outlining these three alternatives, the authors highlight a proposal from the US-China Economic and Security Review Commission (USCC) to pour US government funding into a kind of Manhattan Project to build superintelligence. This would invoke the Defense Production Act to channel resources into a remote site dedicated to developing a super AI, with the aim of securing a strategic monopoly.

Such a strategy would inevitably raise alarm, and China would not sit idle waiting to be dictated to by the US once it achieves superintelligence, the authors warn. This assumes that rivals would accept a lasting imbalance of power rather than act to prevent it, thereby undermining the very stability the strategy purports to secure.

The paper concludes that states should prioritize deterrence over winning the race for superintelligence. MAIM implies that any state seeking a strategic monopoly on AI power will face retaliatory responses from rivals, as well as non-proliferation agreements – similar to nuclear arms control – aimed at restricting AI chips and open-weight models to limit rogue actors.

"States that act with pragmatism instead of fatalism or denial may find themselves beneficiaries of a great surge in wealth. As AI diffuses across countless sectors, societies can raise living standards and individuals can improve their well-being however they see fit."

Fat chance of that happening. The US is far more likely to choose the hands-off strategy and let its tech sector do whatever it wants with no restrictions. At least we can console ourselves that any "superintelligence" is likely to be a long way away from realization. ®
