Elon Musk and technology experts call for a six-month pause on advanced AI development, warning that such systems pose a threat to society.
According to Elon Musk and other tech experts, AI could mark a profound shift in human history.
It turns out Elon Musk was not joking when he tweeted that the AI race could soon end civilization; he was quite serious. According to a new report, the Twitter CEO has signed an open letter urging a six-month halt to the development of advanced AI systems. AI, he and other tech experts argue, could mark a profound shift in human history. The letter says AI labs are currently locked in a fierce competition to outdo one another, an "out-of-control race" to build systems "that no one — not even their creators — can understand, predict, or reliably control."
The signatories therefore urge all AI labs to immediately pause the training of AI systems more powerful than GPT-4 for at least six months. The pause, they say, should be public and verifiable, and should include all key actors. "If such a pause cannot be enacted quickly, governments should step in and institute a moratorium," the letter reads. It adds that powerful AI systems should be developed only once there is confidence that their effects will be positive and their risks manageable.
In the letter, Musk and his fellow signatories suggest that AI labs and independent experts use the pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development, protocols that "are rigorously audited and overseen by independent outside experts." Systems adhering to these protocols should be safe beyond a reasonable doubt. Asking developers to briefly halt work on their most powerful AI products, however, does not mean stopping AI development as a whole; the letter frames it as merely stepping back from the perilous race toward ever-larger, unpredictable black-box models with novel capabilities.
"In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions that AI will cause, especially to democracy," the letter states.