Cohere For AI, the company’s research division, has launched Aya Vision, an open-weights vision model supporting 23 languages. According to Carl Franzen’s report in VentureBeat, the model comes in 8-billion- and 32-billion-parameter versions and can analyze images, generate text, and translate visual content. Aya Vision outperforms larger models such as Llama 90B while requiring fewer computational resources. The model is available on Cohere’s website, Hugging Face, and Kaggle under a Creative Commons Attribution-NonCommercial license, which prohibits commercial use; users can also access it through WhatsApp. Key capabilities include captioning images, answering visual questions, and translating image content across languages spoken by roughly half the world’s population. The model is part of Cohere’s broader Aya initiative for multilingual AI development.