Chinese AI startup DeepSeek has released a significant update to its open-source reasoning model, bringing it closer to competing with paid services from OpenAI and Google. The new DeepSeek-R1-0528 model shows substantial improvements in complex reasoning tasks across mathematics, science, and programming.
VentureBeat’s Carl Franzen reports that the updated model achieved 87.5% accuracy on the AIME 2025 test, up from 70% in the previous version. Coding performance also improved, with accuracy rising from 63.5% to 73.3% on the LiveCodeBench dataset.
DeepSeek is a spinoff of High-Flyer Capital Management, a Hangzhou-based Chinese quantitative hedge fund. The company made waves in January with its initial R1 model release. The latest version remains free and open source under the MIT License, allowing commercial use and customization.
The update introduces new features including JSON output support and function calling capabilities. The model also shows reduced hallucination rates for more reliable results. Users can access the model through DeepSeek’s website or download it via Hugging Face for local deployment.
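Function calling means the model can emit a structured request, typically JSON, that client code parses and routes to a local function. As a minimal sketch of the client side of that pattern (the tool name, arguments, and registry here are illustrative assumptions, not taken from DeepSeek's documentation):

```python
import json

# Hypothetical tool registry: maps tool names the model may call
# to local Python implementations. "get_weather" is an illustrative
# example, not an API from DeepSeek's docs.
TOOLS = {
    "get_weather": lambda city: f"Sunny in {city}",
}

def dispatch_tool_call(raw: str) -> str:
    """Parse a model-emitted tool call (JSON) and run the matching tool."""
    call = json.loads(raw)
    fn = TOOLS[call["name"]]
    args = call.get("arguments", {})
    return fn(**args)

# The kind of JSON a function-calling model might emit:
model_output = '{"name": "get_weather", "arguments": {"city": "Paris"}}'
print(dispatch_tool_call(model_output))  # prints "Sunny in Paris"
```

Reliable JSON output matters here: the dispatcher above fails outright if the model's response is not valid JSON, which is why structured-output support is a practical feature rather than a cosmetic one.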
DeepSeek has also released a smaller 8-billion-parameter version, DeepSeek-R1-0528-Qwen3-8B, for users with limited computing resources. This distilled model requires approximately 16 GB of GPU memory and can run on high-end consumer graphics cards.
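The 16 GB figure follows from simple arithmetic: 8 billion parameters at 2 bytes each in half precision (FP16/BF16), counting weights only. A quick back-of-the-envelope sketch:

```python
def vram_estimate_gb(params_billion: float, bytes_per_param: int = 2) -> float:
    """Rough weight-only VRAM estimate: parameters x bytes per parameter.

    Ignores activation memory and the KV cache, so real-world usage
    during inference is somewhat higher than this lower bound.
    """
    return params_billion * 1e9 * bytes_per_param / 1e9

print(vram_estimate_gb(8))  # 8B params in FP16 -> 16.0 GB of weights
```

By the same arithmetic, 8-bit quantization would roughly halve the weight footprint to about 8 GB, which is why quantized variants are popular on consumer hardware.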
Early feedback from developers has been positive, with users praising the model’s coding abilities. The release positions DeepSeek as a strong open-source alternative to proprietary models like OpenAI’s o3 and Google’s Gemini 2.5 Pro.