Editor’s Note: The following is a brief letter from Ray Kurzweil, cofounder and member of the board at Singularity Group, Singularity Hub’s parent company, in response to the Future of Life Institute’s recent letter, “Pause Giant AI Experiments: An Open Letter.”
The FLI letter addresses the risks of accelerating progress in AI and the ensuing race to commercialize the technology. It calls for a pause in the development of algorithms more powerful than OpenAI's GPT-4, the large language model behind the company's ChatGPT Plus and Microsoft's Bing chatbot. The letter has thousands of signatories, including deep learning pioneer Yoshua Bengio, University of California, Berkeley professor of computer science Stuart Russell, Stability AI CEO Emad Mostaque, Elon Musk, and many others, and has stirred vigorous debate in the AI community.
…
Regarding the open letter to "pause" research on AI "more powerful than GPT-4," this criterion is too vague to be practical. And the proposal faces a serious coordination problem: those that agree to a pause may fall far behind corporations or nations that disagree. There are tremendous benefits to advancing AI in critical fields such as medicine and health, education, the pursuit of renewable energy sources to replace fossil fuels, and scores of others. I didn't sign, because I believe we can address the signers' safety concerns in a more tailored way that doesn't compromise these vital lines of research.
I participated in the Asilomar AI Principles Conference in 2017 and was actively involved in developing guidelines for creating artificial intelligence in an ethical manner. So I know that safety is a critical issue. But more nuance is needed if we wish to unlock AI's profound advantages to health and productivity while avoiding the real perils.
— Ray Kurzweil
Inventor, best-selling author, and futurist