Nous Research launches AI model with optional reasoning mode

Nous Research has released DeepHermes-3, a new AI language model that allows users to switch between detailed reasoning and quick responses. As reported by Carl Franzen for VentureBeat, this 8-billion-parameter model builds on Meta’s Llama technology. Users can activate a special reasoning mode that makes the AI show its thought process before providing answers. … Read more
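
The article does not reproduce the exact system prompt Nous Research uses, so the prompt text below is a hypothetical placeholder. In practice, toggling such a mode typically amounts to swapping the system message sent with each chat request, roughly like this:

```python
# Hypothetical sketch: REASONING_PROMPT is assumed wording, not the
# official DeepHermes-3 prompt, which the article does not quote.
REASONING_PROMPT = (
    "You are a deep-thinking AI. Reason step by step inside <think> tags "
    "before giving your final answer."
)
DEFAULT_PROMPT = "You are a helpful assistant."

def build_messages(user_text: str, reasoning: bool) -> list[dict]:
    """Return a chat-format message list, selecting the system prompt
    based on whether reasoning mode is requested."""
    system = REASONING_PROMPT if reasoning else DEFAULT_PROMPT
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_text},
    ]
```

The same message list would then be passed to whatever chat API or local inference runtime serves the model.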

Mistral Small 3 rivals larger competitors

French startup Mistral AI has announced the release of Mistral Small 3, a 24-billion-parameter language model that the company claims matches the performance of models three times its size. According to Mistral AI, the new model achieves 81% accuracy on the MMLU benchmark while processing 150 tokens per second, making it comparable to Meta’s Llama 3.3 … Read more

DeepSeek-R1 brings significant cost reduction for enterprise AI

DeepSeek’s new AI reasoning model R1 could substantially reduce the costs of developing AI applications. According to an analysis by Ben Dickson in VentureBeat, DeepSeek-R1 offers similar capabilities to leading models at a fraction of the price. The model costs $2.19 per million output tokens, compared to OpenAI’s o1 at $60 per million tokens. When … Read more
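
The quoted prices imply a gap of roughly 27x on output tokens; the arithmetic is straightforward:

```python
# Output-token prices quoted in the article (USD per million tokens).
deepseek_r1_price = 2.19
openai_o1_price = 60.00

# How many times more o1 costs per output token.
ratio = openai_o1_price / deepseek_r1_price
print(f"o1 costs {ratio:.1f}x more per million output tokens")  # ~27.4x

# Illustration: generating 10 million output tokens with each model.
tokens_millions = 10
print(f"DeepSeek-R1: ${deepseek_r1_price * tokens_millions:.2f}")  # $21.90
print(f"OpenAI o1:   ${openai_o1_price * tokens_millions:.2f}")    # $600.00
```

Input-token prices, rate limits, and quality differences are not captured here, so this is a rough per-token comparison only.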

Hugging Face launches compact AI models for image and text analysis

Hugging Face has released two new AI models designed for processing images, videos, and text on devices with limited resources. As Kyle Wiggers reports for TechCrunch, the models called SmolVLM-256M and SmolVLM-500M require less than 1GB of RAM to operate. The models, containing 256 million and 500 million parameters respectively, can describe images, analyze video … Read more
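
A quick way to sanity-check the sub-1GB claim is a weight-only memory estimate: parameter count times bytes per parameter. The excerpt does not state the precision used, so fp16 (2 bytes per parameter) is assumed below:

```python
def weight_memory_gb(params: float, bytes_per_param: float) -> float:
    """Rough weight-only memory footprint in gigabytes."""
    return params * bytes_per_param / 1e9

# Assuming fp16 (2 bytes/parameter); the article does not state precision.
for name, params in [("SmolVLM-256M", 256e6), ("SmolVLM-500M", 500e6)]:
    print(f"{name}: ~{weight_memory_gb(params, 2):.2f} GB at fp16")
```

Note that the 500-million-parameter model's weights alone would already be about 1 GB at fp16, so the sub-1GB figure presumably relies on lower-precision quantization; runtime memory for activations would add further overhead.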

DeepSeek releases new reasoning models and introduces distilled versions

Chinese AI company DeepSeek has announced the release of its new reasoning-focused language models DeepSeek-R1-Zero and DeepSeek-R1, along with six smaller distilled versions. The main models, built on DeepSeek’s V3 architecture, feature 671 billion total parameters, of which 37 billion are activated per token, and a context length of 128,000 tokens. According to company statements, DeepSeek-R1 achieves performance … Read more
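
The gap between total and activated parameters reflects a mixture-of-experts design: only a small slice of the network runs for any given token. The reported figures make that slice concrete:

```python
total_params = 671e9   # total parameters reported for DeepSeek-R1
active_params = 37e9   # parameters activated per token

fraction = active_params / total_params
print(f"{fraction:.1%} of parameters are active per token")  # ~5.5%
```

This is why a 671-billion-parameter model can have per-token compute costs closer to those of a dense ~37B model.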

Diffbot launches new AI model with real-time fact checking

Diffbot, a Silicon Valley company, has introduced a new AI model that combines language generation with real-time fact verification. As reported by Michael Nuñez for VentureBeat, the system uses graph retrieval-augmented generation (GraphRAG) technology based on Meta’s Llama 3.3. The model connects to Diffbot’s Knowledge Graph, a database containing over one trillion facts that updates … Read more
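
The core GraphRAG idea can be sketched in a few lines: look up facts attached to entities mentioned in the query, then prepend them to the prompt so the model answers against verifiable data. This toy example illustrates the concept only and has nothing to do with Diffbot's actual implementation:

```python
# Toy knowledge graph: entity -> list of (relation, object) edges.
# Illustrative data, not from Diffbot's Knowledge Graph.
KNOWLEDGE_GRAPH = {
    "Eiffel Tower": [("located_in", "Paris"), ("height_m", "330")],
    "Paris": [("capital_of", "France")],
}

def retrieve_facts(query: str) -> list[str]:
    """Collect subject-relation-object triples for entities named in the query."""
    facts = []
    for entity, edges in KNOWLEDGE_GRAPH.items():
        if entity.lower() in query.lower():
            facts += [f"{entity} {rel} {obj}" for rel, obj in edges]
    return facts

def grounded_prompt(query: str) -> str:
    """Prepend retrieved facts so the model can ground its answer in them."""
    context = "\n".join(retrieve_facts(query)) or "(no matching facts)"
    return f"Facts:\n{context}\n\nQuestion: {query}"
```

A production system would replace the dictionary with graph queries against a live store and handle entity linking far more robustly.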

Nvidia announces new AI models and technologies at CES 2025

Nvidia has unveiled multiple new AI initiatives at CES 2025, centered on its Nemotron model families and Cosmos World Foundation Models. The company’s CEO Jensen Huang presented these developments during his opening keynote, introducing AI models designed to advance both enterprise and consumer applications. The Nemotron family includes language and vision models available as NIM … Read more

Nvidia introduces “desktop AI supercomputer” Project Digits for $3,000

At CES 2025 in Las Vegas, Nvidia announced Project Digits, a compact desktop AI supercomputer aimed at researchers, data scientists, and students. The device, scheduled for release in May 2025 at a price point of $3,000, represents the company’s effort to bring powerful AI computing capabilities to individual desks. At the core of Project Digits … Read more

Tested: DeepSeek-V3 matches top AI models at lower cost

A detailed analysis published by Sunil Kumar Dash reveals that DeepSeek’s latest AI model achieves performance comparable to leading closed-source models while offering significant cost advantages. The model outperforms existing open-source alternatives in mathematics and reasoning tasks, according to extensive benchmark testing. The analysis demonstrates that DeepSeek-V3 surpasses GPT-4 and Claude 3.5 Sonnet in mathematical … Read more

Developer shares guide for running AI models locally

A detailed guide for running large language models (LLMs) on personal computers has been published by software developer Abishek Muthian on his blog. The article provides a thorough overview of hardware requirements, essential tools, and recommended models for local LLM deployment. Muthian emphasizes that while he uses high-end hardware including a Core i9 CPU and … Read more