Alibaba releases new AI models challenging global tech leaders

Alibaba’s Qwen team has launched two significant AI models – Qwen2.5-VL and Qwen2.5-Max – that demonstrate advanced capabilities across a range of tasks. According to the company, these models can perform text and image analysis, control computers and mobile devices, and compete with established AI systems from OpenAI, Anthropic, and Google on multiple benchmarks. The Qwen2.5-VL model …

Read more

Hugging Face launches compact AI models for image and text analysis

Hugging Face has released two new AI models designed for processing images, videos, and text on devices with limited resources. As Kyle Wiggers reports for TechCrunch, the models, called SmolVLM-256M and SmolVLM-500M, require less than 1GB of RAM to operate. The models, containing 256 million and 500 million parameters respectively, can describe images, analyze video …

Read more

Anthropic’s faster AI model Claude 3.5 Haiku available to all users

Anthropic has made its latest AI model, Claude 3.5 Haiku, available to all users through its web and mobile chatbot platforms. According to VentureBeat reporter Carl Franzen, the model had previously been accessible only to developers via API since October 2024. The new model features a 200,000-token context window, surpassing the context capacity of OpenAI’s GPT-4. Third-party benchmarking organization …

Read more

OpenAI adds real-time video and screen sharing capabilities to ChatGPT

OpenAI has introduced real-time video analysis and screen sharing features to ChatGPT’s Advanced Voice Mode, marking a significant expansion of the AI chatbot’s capabilities. The new functions, announced during a livestream, allow ChatGPT Plus, Team, and Pro subscribers to interact with the AI through their phone cameras and share their device screens for real-time analysis …

Read more

Tests show strong performance of Google’s Gemini 2.0 Flash model

Independent developer Simon Willison has conducted extensive testing of Google’s newly announced Gemini 2.0 Flash model, documenting the results on his blog. The tests reveal significant capabilities in multimodal processing, spatial reasoning, and code execution. The model demonstrated exceptional accuracy in analyzing complex images, as shown in a detailed assessment of a crowded pelican photograph …

Read more

Google launches Gemini 2.0 AI model with expanded capabilities and agent features

Google has announced Gemini 2.0, its latest artificial intelligence model that introduces significant advances in multimodal capabilities and autonomous agent features. The experimental version, Gemini 2.0 Flash, is being released first to developers and trusted testers through Google’s AI platforms. According to Google, the new model can generate text, images, and multilingual audio while operating …

Read more

Amazon launches Nova family of AI models for text, image and video generation

Amazon Web Services has introduced Nova, a new family of artificial intelligence models designed for text, image and video generation. The announcement was made by CEO Andy Jassy at the AWS re:Invent conference. The Nova family consists of four text-generating models: Micro, Lite, Pro, and Premier. Micro, Lite, and Pro are immediately available to AWS …

Read more

AnyChat unifies access to multiple AI language models

AnyChat, a new development tool, enables seamless integration of multiple large language models (LLMs) through a single interface. Developer Ahsen Khaliq, machine learning growth lead at Gradio, created the platform to allow users to switch between models like ChatGPT, Google’s Gemini, Perplexity, Claude, and Meta’s LLaMA without being restricted to one provider, as reported by …

Read more

Mistral AI launches enhanced language model and ChatGPT competitor

French AI startup Mistral has unveiled Pixtral Large, a new 124-billion-parameter language model, alongside major updates to its Le Chat platform, reports Carl Franzen. The new model features advanced multimodal capabilities, including image processing and optical character recognition, while maintaining a significant context window of 128,000 tokens. The model is available for research purposes through …

Read more

Moondream raises $4.5M for compact yet powerful AI vision-language model

Moondream, a startup backed by Felicis Ventures, Microsoft’s M12 GitHub Fund, and Ascend, has emerged from stealth with $4.5 million in pre-seed funding. According to VentureBeat’s Michael Nuñez, the company has developed an open-source vision-language model that boasts 1.6 billion parameters but matches the performance of models four times its size. The model, which can …

Read more