Meta has released a new version of its AI model series, Llama 3.2, which for the first time includes vision models that can process both images and text. According to Meta, the larger versions with 11 and 90 billion parameters are competitive with closed systems such as Claude 3 Haiku on image-understanding tasks.
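For readers who want to try the vision models, a minimal sketch of querying the 11B variant through the Hugging Face transformers library might look like the following; the model ID, the chat-message format, and the sample image path are assumptions based on Meta's usual release conventions, not details from the announcement.

```python
# Hedged sketch: prompting a Llama 3.2 vision model with an image.
# The repo name below is an assumption following Meta's naming scheme.
import torch
from PIL import Image
from transformers import AutoProcessor, MllamaForConditionalGeneration

model_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"  # assumed repo name

model = MllamaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("chart.png")  # hypothetical local image
messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "Summarize what this chart shows."},
    ]}
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(image, prompt, return_tensors="pt").to(model.device)

# Generate a short answer grounded in the image.
output = model.generate(**inputs, max_new_tokens=128)
print(processor.decode(output[0], skip_special_tokens=True))
```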
Also new are smaller text-only models with 1 and 3 billion parameters that run on mobile devices and edge systems. According to Meta, these are suited to tasks such as summarization or scheduling directly on the device. Alongside the models, the company is introducing Llama Stack, a set of tools and APIs designed to simplify the development and deployment of AI applications.
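A comparable sketch for the small text models, again assuming a Hugging Face checkpoint named along Meta's usual lines, could use the transformers pipeline API for on-device summarization:

```python
# Hedged sketch: one-sentence summarization with the assumed 1B
# instruct checkpoint; the repo name is not confirmed in the article.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.2-1B-Instruct",  # assumed repo name
    device_map="auto",
)

messages = [
    {"role": "user",
     "content": "Summarize in one sentence: Meta released Llama 3.2, "
                "adding vision models and small on-device text models."},
]
result = generator(messages, max_new_tokens=64)
# With chat-style input, the pipeline returns the full message list;
# the last entry is the assistant's reply.
print(result[0]["generated_text"][-1]["content"])
```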
Meta is working with a range of cloud providers and technology companies to speed the adoption of AI, and it continues to stress its open-source approach and the accessibility of its models. Safety mechanisms have also been strengthened, including a new Llama Guard model for filtering problematic content.
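In practice, a guard model of this kind is typically run on a message before it reaches the main model. A minimal sketch, assuming a repo name and a "safe"/"unsafe" output format in line with earlier Llama Guard releases:

```python
# Hedged sketch: screening a user message with a Llama Guard model.
# Repo name and verdict format are assumptions, not from the article.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

guard_id = "meta-llama/Llama-Guard-3-1B"  # assumed repo name
tokenizer = AutoTokenizer.from_pretrained(guard_id)
guard = AutoModelForCausalLM.from_pretrained(
    guard_id, torch_dtype=torch.bfloat16, device_map="auto"
)

conversation = [{"role": "user", "content": "How do I pick a lock?"}]
# The chat template wraps the conversation in the moderation prompt.
input_ids = tokenizer.apply_chat_template(
    conversation, return_tensors="pt"
).to(guard.device)

output = guard.generate(input_ids, max_new_tokens=20)
verdict = tokenizer.decode(
    output[0][input_ids.shape[-1]:], skip_special_tokens=True
)
print(verdict)  # e.g. "safe", or "unsafe" plus a category code
```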
Sources: Meta, VentureBeat