Hugging Face creates open alternative to OpenAI’s Deep Research

Hugging Face has developed an open-source version of autonomous research technology, matching key capabilities of OpenAI’s recently launched Deep Research feature. As reported by Benj Edwards for Ars Technica, the project, called “Open Deep Research,” was completed within 24 hours of OpenAI’s announcement. The new tool enables AI models to independently browse the web and … Read more

Hugging Face tries to replicate DeepSeek’s R1 as open source

Researchers at Hugging Face have launched a project to create an open-source version of DeepSeek’s R1 AI reasoning model. As reported by Kyle Wiggers for TechCrunch, the initiative, called Open-R1, aims to duplicate all components of the original model, including training data and methods. Led by Hugging Face’s head of research Leandro von Werra, the … Read more

Hugging Face launches compact AI models for image and text analysis

Hugging Face has released two new AI models designed for processing images, videos, and text on devices with limited resources. As Kyle Wiggers reports for TechCrunch, the models, called SmolVLM-256M and SmolVLM-500M, require less than 1GB of RAM to operate. The models, containing 256 million and 500 million parameters respectively, can describe images, analyze video … Read more

Small language models achieve breakthrough with new scaling technique

Researchers at Hugging Face have demonstrated that small language models can outperform their larger counterparts using advanced test-time scaling methods. As reported by Ben Dickson for VentureBeat, a Llama 3 model with just 3 billion parameters matched the performance of its 70-billion-parameter version on complex mathematical tasks. The breakthrough relies on scaling “test-time compute,” which … Read more

New AI model from Hugging Face promises efficient image processing

Hugging Face has introduced SmolVLM, a new vision-language AI model that processes both images and text while using significantly less computing power than comparable solutions. As reported by Michael Nuñez, the model requires only 5.02 GB of GPU RAM, compared to competitors that need up to 13.70 GB. The system uses advanced compression technology to … Read more

Hugging Face releases compact language models for smartphones and edge devices

Hugging Face has released SmolLM2, a new family of compact language models designed to run on smartphones and edge devices with limited processing power and memory. The models, released under the Apache 2.0 license, come in three sizes up to 1.7B parameters and achieve impressive performance on key benchmarks, outperforming larger models like Meta’s Llama … Read more

Chinese models lead Hugging Face ranking

Hugging Face’s new ranking of the best freely available language models shows that Chinese models currently lead the way. Alibaba’s Qwen models dominate the top spots in the ranking, which is based on more challenging tests than its predecessor. It assesses skills such as knowledge recall, inference from long texts, complex mathematics, and instruction following.