Elon Musk and Tech Leaders Call for Pause on ‘Out-of-Control’ AI Development Race

A group of more than 1,000 AI experts, technologists and business leaders has published an open letter urging AI labs to pause the training of systems “more powerful than GPT-4.”

The letter was published by the Future of Life Institute, a non-profit organization that focuses on mitigating risks associated with transformative technology. Tech leaders from around the world signed the letter, including Apple Co-Founder Steve Wozniak; SpaceX, Tesla and Twitter CEO Elon Musk; Stability AI CEO Emad Mostaque; Executive Director of the Center for Humane Technology Tristan Harris; and Yoshua Bengio, founder of AI research institute Mila.

The reason behind the plea is simple: advanced AI could bring about a profound change in life on Earth, one that should be planned for and managed with “commensurate care and resources.”

“Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control,” the letter said.

OpenAI, Microsoft, Google and other tech companies have been racing to develop and release new AI models. Nearly every week, it seems, a tech company announces a new advancement or product. But the letter argues it’s all happening far too quickly for ethical, regulatory and safety concerns to be properly addressed.

“We must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?”

The open letter calls for a six-month pause on training AI systems more powerful than GPT-4, arguing that such systems “should be developed only once we are confident that their effects will be positive and their risks will be manageable.” It recommends AI labs use the time to jointly develop safety protocols that can be audited by independent third parties. And if labs won’t pause voluntarily, the letter says, governments should step in and impose a moratorium.

The letter also suggests it’s long past time for lawmakers to get involved. From creating regulatory authorities dedicated to overseeing and tracking AI systems, to implementing tools that distinguish real from AI-generated content, to auditing and certifying AI models, the letter says developers should work alongside policymakers to “dramatically accelerate development of robust AI governance systems.”

It’s important to note that the letter isn’t calling for a total stop to all AI development. It’s just saying to pump the brakes: society needs a brief time-out to put proper guardrails in place so that the coming AI revolution can be safe and beneficial to all.

© 2023 RELEVANT Media Group, Inc. All Rights Reserved.
