Shall we pause? Will everyone? For six months? Is it dangerous? What difference will it make?
ARTIFICIAL INTELLIGENCE/TECH
Elon Musk and top AI researchers call for pause on ‘giant AI experiments’
An open letter says the current race dynamic in AI is dangerous, and calls for the creation of independent regulators to ensure future systems are safe to deploy.
By JAMES VINCENT in The Verge
Mar 29, 2023, 5:08 AM EDT
A number of well-known AI researchers — and Elon Musk — have signed an open letter calling on AI labs around the world to pause development of large-scale AI systems, citing fears over the “profound risks to society and humanity” they claim this software poses.
The letter, published by the nonprofit Future of Life Institute, notes that AI labs are currently locked in an “out-of-control race” to develop and deploy machine learning systems “that no one — not even their creators — can understand, predict, or reliably control.”
“Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4,” says the letter. “This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”
Signatories include author Yuval Noah Harari, Apple co-founder Steve Wozniak, Skype co-founder Jaan Tallinn, politician Andrew Yang, and a number of well-known AI researchers and CEOs, including Stuart Russell, Yoshua Bengio, Gary Marcus, and Emad Mostaque. The full list of signatories can be seen here, though new names should be treated with caution as there are reports of names being added to the list as a joke (e.g. OpenAI CEO Sam Altman, an individual who is partly responsible for the current race dynamic in AI).
The letter is unlikely to have any effect on the current climate in AI research, which has seen tech companies like Google and Microsoft rush to deploy new products, often sidelining previously avowed concerns over safety and ethics. But it is a sign of growing opposition to this “ship it now and fix it later” approach — opposition that could eventually reach the political domain and be taken up by actual legislators.
As noted in the letter, even OpenAI itself has expressed the potential need for “independent review” of future AI systems to ensure they meet safety standards. The signatories say that this time has now come.
“AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts,” they write. “These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.”