OpenAI has published the initial beta version of its Preparedness Framework, which outlines safety precautions for its AI models. The company commits to running consistent evaluations of its frontier models and reporting the findings in risk “scorecards” that will be continuously updated. Risks will be classified into four safety levels: low, medium, high, and critical. OpenAI is also restructuring its decision-making process, with a dedicated Preparedness team and a cross-functional Safety Advisory Group. Leadership will remain the decision-maker, but the Board of Directors will have the right to reverse its decisions. Other elements of the framework include developing protocols for added safety, collaborating with internal and external teams to track real-world misuse, and pioneering new research into measuring how risk evolves as models scale.