Introduction to the Call for a Ban on AI Superintelligence
A diverse group of public figures, including Prince Harry, his wife Meghan, and prominent computer scientists, economists, artists, and conservative commentators, has come together to call for a ban on the development of AI "superintelligence" that they believe could threaten humanity. The group includes notable individuals such as Steve Bannon, Glenn Beck, and AI pioneers Stuart Russell, Yoshua Bengio, and Geoffrey Hinton.
The Letter and Its Signatories
The letter, released recently, takes direct aim at tech giants such as Google, OpenAI, and Meta Platforms, which are competing to develop a form of artificial intelligence designed to outperform humans at many tasks. The statement calls for a ban on the development of superintelligence until there is broad scientific consensus that it can be built safely and controllably, and until there is strong public support. In addition to Prince Harry and Meghan, notable signatories include Apple co-founder Steve Wozniak, British billionaire Richard Branson, and former Chairman of the Joint Chiefs of Staff Mike Mullen.
Concerns About AI Superintelligence
The letter highlights the potential risks of developing superintelligence, including human economic obsolescence and disempowerment; losses of freedom, civil liberties, dignity, and control; national security risks; and even the possible extinction of humanity. Prince Harry added a personal note, stating that "the future of AI should serve humanity, not replace it" and that "there are no second chances." Other signatories, such as Stuart Russell, emphasized the need for adequate safeguards to be put in place against the risks associated with superintelligence.
The Debate About AI Superintelligence
The letter is likely to fuel ongoing debates in the AI research community about the likelihood of superhuman AI, the technical paths to achieving it, and how dangerous it could be. Max Tegmark, president of the Future of Life Institute, noted that criticism of AI development has become mainstream and that the debate is no longer confined to "nerds" but has become a broader societal concern. The debate is complicated, however, by the fact that the same companies pursuing superintelligence are also developing AI products that can bring benefits to society.
Previous Calls for a Moratorium on AI Development
This is not the first call for a moratorium on AI development. In March 2023, a letter called on tech giants to temporarily suspend development of more powerful AI models. None of the major AI companies heeded that call, and some prominent figures, including Elon Musk, have since launched their own AI startups to compete in the field. Tegmark noted that he has written to the CEOs of all the major AI developers in the US but does not expect them to sign, citing the pressure they face to compete in the race to develop AI.
The Need for Government Intervention
Tegmark emphasized the need for governments to step in, stigmatize the race toward superintelligence, and regulate AI development. He noted that the current race is driven by the desire to be first to market and gain a competitive advantage, rather than by a desire to build AI that is safe and beneficial to society. The call for a ban on AI superintelligence is an attempt to shift the focus of the AI development community toward AI that is safe, controllable, and beneficial to humanity.