OpenAI, the industry leader in artificial intelligence, is taking a new initiative to explore the growing risks of AI. The firm has created a new team to evaluate AI models and report on the associated risks.
Quick Take:
- OpenAI launches Preparedness to study and tackle the risks of new AI systems.
- Preparedness will monitor catastrophic AI risks and recommend a forward action plan for OpenAI.
- The firm has set up a prize to encourage community participation in AI risk studies.
The new team will track, evaluate, forecast, and protect against the potential “catastrophic risks” that accompany the expansion of AI. As per the announcement, the team is called Preparedness and will be led by Aleksander Madry.
Madry is also the director of MIT’s Center for Deployable Machine Learning. He will lead the team in investigating the dangers of AI systems, at a time when consumers and experts alike have raised concerns about the growing capabilities of AI tools.
Misused AI tools can enable malicious activity, posing a threat to humans. AI can also generate code seamlessly, which can aid phishing attacks in the digital world. The team is tasked with examining the chemical, radiological, biological, and nuclear threats posed by the technology.
Furthermore, Preparedness will look into AI’s role in cybersecurity and in autonomous replication and adaptation. There is a growing belief that AI systems can be misused by scammers and hackers around the globe, which has fueled calls to control the expansion of AI systems.
OpenAI Plans to Keep AI in Check
OpenAI revolutionized the AI landscape with the launch of ChatGPT. At the same time, the firm and its CEO, Sam Altman, have openly expressed their concerns about AI risks.
In a statement, OpenAI noted,
“We believe that frontier AI models, which will exceed the capabilities currently present in the most advanced existing models, have the potential to benefit all of humanity. We take seriously the full spectrum of safety risks related to AI, from the systems we have today to the furthest reaches of superintelligence. […] To support the safety of highly-capable AI systems, we are developing our approach to catastrophic risk preparedness.”
Altman has warned that AI could even lead to human extinction. At the same time, he says OpenAI will devote resources to studying these scenarios and finding ways to avert them. With the launch of Preparedness, the company is also encouraging the community to participate in AI risk studies.
The tech company has also launched a Preparedness Challenge, open to anyone who shares a potential AI threat and its possible solutions. OpenAI is offering $25,000 in API credits, plus the chance to join the Preparedness team, to the top 10 submissions.
Preparedness will also outline OpenAI’s approach to building AI model evaluations and monitoring tooling, and it will suggest ways to mitigate AI risks while structuring a long-term plan for the firm.