OpenAI has announced the formation of a new team dedicated to assessing and addressing what it calls “catastrophic risks” associated with artificial intelligence (AI). The initiative, named Preparedness, will be led by Aleksander Madry, director of MIT’s Center for Deployable Machine Learning, who recently joined OpenAI as its “Head of Preparedness.” This article looks at the details of the announcement and what it means for the future of AI.
What Is Preparedness and What Will It Do?
Preparedness is a specialized team within OpenAI tasked with a range of crucial responsibilities aimed at safeguarding against potential AI-related catastrophes. Its core focus areas include monitoring, forecasting, and developing protective measures against various AI dangers. These risks encompass a wide spectrum, from AI’s capacity to deceive and manipulate humans (such as in phishing attacks) to its potential to generate malicious code.
We are building a new Preparedness team to evaluate, forecast, and protect against the risks of highly-capable AI—from today's models to AGI.
Goal: a quantitative, evidence-based methodology, beyond what is accepted as possible: https://t.co/8lwtfMR1Iy
— OpenAI (@OpenAI) October 26, 2023
Some of the risk categories that Preparedness is set to investigate may seem like they belong in the realm of science fiction, including “chemical, biological, radiological, and nuclear” threats. While these may appear far-fetched, OpenAI is committed to taking a proactive approach to address even the most extreme possibilities.
Sam Altman’s Stance on AI Risks
OpenAI’s CEO, Sam Altman, is known for his concerns about the existential risks posed by AI. He has often articulated these concerns, raising questions about the possibility of AI leading to human extinction. Preparedness is a tangible response to these concerns, underlining OpenAI’s dedication to studying and addressing these potential risks.
A Broader Perspective
OpenAI is not limiting its study to extreme scenarios; the organization is also open to investigating “less obvious” but still substantial AI risks. In conjunction with the launch of the Preparedness team, OpenAI is actively seeking input from the wider community: it has announced a competition inviting ideas for risk studies, offering a $25,000 prize and potential job opportunities within the Preparedness team for the top ten submissions.
For instance, one of the contest entry questions asks participants to imagine the most unique, yet still plausible, catastrophic misuse of OpenAI’s advanced models. This highlights the organization’s commitment to uncovering unforeseen risks and developing strategies to mitigate them.
Formulating a Risk-Informed Development Policy
Preparedness is not just about identifying risks; it also plays a vital role in formulating a “risk-informed development policy.” This policy will outline OpenAI’s approach to evaluating AI models, the tools used for monitoring, the actions taken to mitigate risks, and the governance structure for overseeing the model development process. This comprehensive approach covers both the pre- and post-deployment phases.
OpenAI’s objective is clear: while they acknowledge the potential benefits that advanced AI models can bring to humanity, they also recognize the increasingly severe risks associated with them. Hence, they are committed to establishing a robust understanding and infrastructure for the safe development and deployment of highly capable AI systems.
The Bigger Picture
The announcement of the Preparedness team coincides with a major AI safety summit hosted by the U.K. government. The timing underscores the urgency and significance of OpenAI’s efforts to address AI-related risks.
Moreover, this development follows OpenAI’s earlier announcement of a team dedicated to studying, guiding, and controlling emerging “superintelligent” AI. Both Sam Altman and Ilya Sutskever, OpenAI’s Chief Scientist and co-founder, believe that AI with intelligence surpassing that of humans could become a reality within the decade, and that such powerful systems will require dedicated research into mechanisms for understanding, controlling, and restricting them.
In conclusion, OpenAI’s creation of the Preparedness team is a significant step towards ensuring the responsible development and deployment of AI technologies. This proactive approach reflects the organization’s commitment to addressing the potential catastrophic risks associated with advanced AI. It also underlines the importance of community involvement in identifying and mitigating these risks.
FAQs
1. What is the main objective of OpenAI’s Preparedness team?
OpenAI’s Preparedness team is primarily responsible for monitoring, forecasting, and developing protective measures against various AI dangers, including risks related to AI’s capacity to deceive and manipulate humans and its potential to generate malicious code.
2. Why is OpenAI studying “far-fetched” AI risks, such as chemical and biological threats?
OpenAI is committed to taking a proactive approach to addressing even the most extreme possibilities, as they believe in thorough risk assessment and mitigation.
3. How can individuals get involved with OpenAI’s risk studies?
OpenAI is actively seeking input from the wider community and has launched a competition inviting ideas for risk studies. Participants have a chance to win a $25,000 prize and potential job opportunities within the Preparedness team.
4. What is OpenAI’s approach to addressing AI-related risks?
OpenAI is not only focused on identifying risks but also on formulating a “risk-informed development policy.” This policy outlines their approach to evaluating AI models, monitoring tools, risk mitigation actions, and governance structure for oversight.
5. Why is it important to study and control “superintelligent” AI?
Both Sam Altman and Ilya Sutskever believe that AI with intelligence exceeding that of humans could emerge within the decade. Understanding and controlling such AI systems is crucial to ensuring their responsible use and mitigating potential risks.