Report: OpenAI reduces safety testing amid competition pressure

OpenAI has significantly shortened the safety testing period for its new AI models, prompting concerns about insufficient safeguards. According to a Financial Times report by Cristina Criddle, testers now have just days to evaluate models, compared to several months previously. Eight people familiar with OpenAI's testing processes said that evaluations have become less thorough as the $300 billion startup faces pressure to release new models quickly. For GPT-4, launched in 2023, testers had six months to run evaluations, while the upcoming o3 model may be released after less than a week of safety checks. One tester of o3 warned that as large language models become more capable, their potential for misuse increases, calling the accelerated timeline "reckless" and "a recipe for disaster."
