Cohere releases Aya Vision, a multilingual vision model with open weights

Cohere’s research division has launched Aya Vision, an open-weight vision model supporting 23 languages. According to Carl Franzen’s report in VentureBeat, the model comes in 8-billion and 32-billion parameter versions and can analyze images, generate text, and translate visual content. Cohere claims Aya Vision outperforms larger models such as Llama 90B while requiring fewer computational resources. The model …

Read more

Microsoft introduces efficient Phi-4 for text, image, speech processing

Microsoft has unveiled two new AI models in its Phi series: Phi-4-multimodal with 5.6 billion parameters and Phi-4-mini with 3.8 billion parameters. These small language models (SLMs) deliver exceptional performance while requiring significantly less computing power than larger systems, challenging the notion that bigger AI models are always better. The Phi-4-multimodal model stands out for …

Read more

Alibaba releases new AI models challenging global tech leaders

Alibaba’s Qwen team has launched two significant AI models – Qwen2.5-VL and Qwen2.5-Max – that demonstrate advanced capabilities in various tasks. According to the company, these models can perform text and image analysis, control computers and mobile devices, and compete with established AI systems from OpenAI, Anthropic, and Google on multiple benchmarks. The Qwen2.5-VL model …

Read more

Hugging Face launches compact AI models for image and text analysis

Hugging Face has released two new AI models designed for processing images, videos, and text on devices with limited resources. As Kyle Wiggers reports for TechCrunch, the models called SmolVLM-256M and SmolVLM-500M require less than 1GB of RAM to operate. The models, containing 256 million and 500 million parameters respectively, can describe images, analyze video …

Read more

Anthropic’s faster AI model Claude 3.5 Haiku available to all users

Anthropic has made its latest AI model, Claude 3.5 Haiku, available to all users through its web and mobile chatbot platforms. According to VentureBeat reporter Carl Franzen, the model was previously accessible only to developers via API since October 2024. The new model features a 200,000-token context window, surpassing OpenAI’s GPT-4 capacity. Third-party benchmarking organization …

Read more

OpenAI adds real-time video and screen sharing capabilities to ChatGPT

OpenAI has introduced real-time video analysis and screen sharing features to ChatGPT’s Advanced Voice Mode, marking a significant expansion of the AI chatbot’s capabilities. The new functions, announced during a livestream, allow ChatGPT Plus, Team, and Pro subscribers to interact with the AI through their phone cameras and share their device screens for real-time analysis …

Read more

Tests show strong performance of Google’s Gemini 2.0 Flash model

Independent developer Simon Willison has conducted extensive testing of Google’s newly announced Gemini 2.0 Flash model, documenting the results on his blog. The tests reveal significant capabilities in multimodal processing, spatial reasoning, and code execution. The model demonstrated exceptional accuracy in analyzing complex images, as shown in a detailed assessment of a crowded pelican photograph …

Read more

Google launches Gemini 2.0 AI model with expanded capabilities and agent features

Google has announced Gemini 2.0, its latest artificial intelligence model that introduces significant advances in multimodal capabilities and autonomous agent features. The experimental version, Gemini 2.0 Flash, is being released first to developers and trusted testers through Google’s AI platforms. According to Google, the new model can generate text, images, and multilingual audio while operating …

Read more

Amazon launches Nova family of AI models for text, image and video generation

Amazon Web Services has introduced Nova, a new family of artificial intelligence models designed for text, image and video generation. The announcement was made by CEO Andy Jassy at the AWS re:Invent conference. The Nova family consists of four text-generating models: Micro, Lite, Pro, and Premier. Micro, Lite, and Pro are immediately available to AWS …

Read more

AnyChat unifies access to multiple AI language models

AnyChat, a new development tool, enables seamless integration of multiple large language models (LLMs) through a single interface. Developer Ahsen Khaliq, machine learning growth lead at Gradio, created the platform to allow users to switch between models like ChatGPT, Google’s Gemini, Perplexity, Claude, and Meta’s LLaMA without being restricted to one provider, as reported by …

Read more