OpenAI, the company behind the wildly popular ChatGPT chatbot, has published a 27-page plan called the “Preparedness Framework.” Its goal: heading off the worst things that could happen with its most powerful AI models, from large-scale cyber chaos to AI helping build serious weapons.
Who Gets the Final Say on Safety
OpenAI’s leadership makes the call on releasing new AI models, but the board of directors has the final word and can overturn those decisions. Before anything even gets to that point, though, OpenAI runs a series of safety checks.
A dedicated “preparedness” team leads the effort, headed by Aleksander Madry, who is on leave from MIT. The team tracks risks and compiles scorecards that rate each model in every risk category as low, medium, high, or critical.
Safety First, Then Rollout
Under the plan, only models that score “medium” or lower after safety mitigations are applied can be deployed, and only those scoring “high” or lower can be developed further.
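To make those thresholds concrete, here is a minimal Python sketch of the gating logic, assuming a simple ordered risk scale. The names (RiskLevel, can_deploy, can_develop_further) are hypothetical and chosen for illustration; this is not OpenAI’s actual tooling.

```python
from enum import IntEnum


class RiskLevel(IntEnum):
    """Hypothetical ordered scale mirroring the framework's four categories."""
    LOW = 0
    MEDIUM = 1
    HIGH = 2
    CRITICAL = 3


def can_deploy(post_mitigation: RiskLevel) -> bool:
    # Deployment requires a post-mitigation score of "medium" or lower.
    return post_mitigation <= RiskLevel.MEDIUM


def can_develop_further(post_mitigation: RiskLevel) -> bool:
    # Continued development requires a score of "high" or lower.
    return post_mitigation <= RiskLevel.HIGH


# Example: a model rated "high" after mitigations may keep being
# developed, but it cannot ship.
assert can_develop_further(RiskLevel.HIGH) and not can_deploy(RiskLevel.HIGH)
```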
The document is still a work in progress, explicitly labeled as beta: OpenAI plans to refine it based on feedback. Flexibility is the point.
Board Drama and Governance Questions
OpenAI’s governance drew plenty of attention after CEO Sam Altman’s wild ride, fired and then reinstated within just five days. The episode raised questions about his power and how much say the board really has.
The current board is just getting started, and its lack of diversity has people talking. Some worry that companies regulating themselves won’t be enough, and they’re calling on lawmakers to step in and make sure AI gets developed safely.
Safety Talk in the Tech World
OpenAI’s safety push caps a year of nonstop debate about AI catastrophe. Top AI figures, including Altman and Google DeepMind’s Demis Hassabis, signed a high-profile statement calling for the risk of AI-driven catastrophe to be treated as a global priority alongside pandemics and nuclear threats.
While the statement got everyone talking, some critics argue that companies are using these far-off disaster scenarios to distract from the real harm AI tools are causing right now.
Wrapping Up
OpenAI now has a roadmap for tackling AI risks head-on. With a concrete plan and a promise of ongoing updates, the company aims to steer the development of powerful AI safely. But it’s not just about OpenAI: questions about governance, diversity, and the role lawmakers should play are still swirling around the AI world.