ChatGPT creator forms an “independent” safety board with the power to pause AI models

OpenAI's ChatGPT logo displayed on a background of pink and green stripes.
Safety is crucial for any company venturing into AI, and it’s particularly important for one of the top players in the game: OpenAI, the team behind ChatGPT. Now, following a detailed review of its safety and security practices, the company has created an independent safety board with the power to delay model releases.

But just how independent is it really?


OpenAI is transforming its Safety and Security Committee into an independent “Board oversight committee” with the authority to delay model launches over safety concerns, the company announced in a blog post. The change follows a recent 90-day review of OpenAI’s safety and security practices, during which the committee recommended establishing the independent body.

The committee, led by Zico Kolter and including Adam D’Angelo, Paul Nakasone, and Nicole Seligman, will receive briefings from company leadership on safety evaluations for major model launches. Together with the full board, it will oversee those launches and can delay a release if safety concerns arise. On top of that, OpenAI’s entire board of directors will receive regular updates on safety and security matters.

However, since the safety committee’s members also sit on OpenAI’s broader board of directors, it’s unclear how independent the committee truly is, or how that independence works in practice.

Beyond the new committee, OpenAI says it is working with external organizations such as Los Alamos National Laboratory, one of the top national labs in the US, to explore how AI can be safely used in lab settings to advance bioscientific research.

Additionally, it recently struck deals with the AI Safety Institutes in the US and UK to work together on researching new AI safety risks and establishing standards for trustworthy AI.



I believe strict regulations are essential for AI, especially regarding how companies train their models, so it is encouraging to see safety boards becoming more common. For example, Meta also has its Oversight Board, which reviews content policy decisions and issues rulings that Meta must adhere to. However, I think it would be better if safety boards were made up of people with no ties to the companies they oversee. What’s your take on this?

