OpenAI’s new open source models receive a divided response

OpenAI has released two new open source language models, gpt-oss-120B and gpt-oss-20B, marking its first major open release since GPT-2 in 2019. According to a report by Carl Franzen for VentureBeat, the initial reactions from the AI community are sharply mixed.

Supporters praise the move as a significant step for open source AI in the West. Experts like Simon Willison highlighted the models’ efficiency and strong performance on reasoning, math, and coding tasks. The independent benchmarking firm Artificial Analysis called gpt-oss-120B the “most intelligent American open weights model.” The models are also notable for their accessibility: the larger is designed to run on a single enterprise-grade GPU, while the smaller can run on a consumer laptop.
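For readers who want to see what “runs on a consumer laptop” looks like in practice, below is a minimal sketch that loads the smaller model with the Hugging Face transformers text-generation pipeline. The checkpoint name openai/gpt-oss-20b and the hardware assumption (enough GPU or unified memory to hold the 20B weights) come from OpenAI’s release materials, not from the VentureBeat report, and the prompt is purely illustrative.

```python
# Minimal sketch: running gpt-oss-20B locally via Hugging Face transformers.
# Assumes the "openai/gpt-oss-20b" checkpoint and a recent transformers version
# with chat-style input support for text-generation pipelines.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",  # the smaller of the two released models
    torch_dtype="auto",          # let transformers pick a suitable dtype
    device_map="auto",           # spread layers across available devices
)

messages = [
    {"role": "user", "content": "Summarize the trade-offs of open-weight model releases."},
]

output = generator(messages, max_new_tokens=256)
# With chat-format input, generated_text is the conversation including the reply.
print(output[0]["generated_text"][-1]["content"])
```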

However, critics point to significant limitations. The models still lag behind leading open source alternatives from Chinese labs such as DeepSeek and Qwen. Several users reported that the models excel on technical benchmarks but fall short at creative writing and tasks requiring common sense. Some, like AI researcher Teknium, called the release a “legitimate nothing burger.” There is speculation that OpenAI trained the models predominantly on synthetic data to avoid copyright issues, which may have limited their real-world knowledge. Third-party tests also revealed low scores in multilingual reasoning and a high rate of refusals on certain user prompts, raising concerns about usability and bias.

Ultimately, the verdict remains split. While the release is a landmark for accessibility, its practical value and long-term impact will depend on what developers actually build with these models.
