Multimodal AI Reka Core announced

Reka, a San Francisco-based AI startup, has introduced Reka Core, a multimodal language model developed in under a year that the company claims matches or surpasses leading models from OpenAI, Google, and Anthropic. The model handles image, audio, and video inputs, supports 32 languages, and comes with a context window of 128,000 tokens, which could make Reka Core suitable for a wide range of use cases across industries.