Chinese AI company MiniMax has released MiniMax-M1, an open-source language model that can process up to one million tokens of context, enough to fit an entire collection of books in a single conversation, reports Carl Franzen for VentureBeat. The model is available for free commercial use under an Apache 2.0 license on platforms such as Hugging Face and GitHub. MiniMax-M1 comes in two variants, has 456 billion parameters, and was trained with a novel reinforcement learning technique. The company reports spending just $534,700 on the reinforcement learning phase of training, far less than the more than $100 million reportedly spent on proprietary models such as OpenAI's GPT-4. MiniMax-M1 performs strongly on coding and mathematics benchmarks, scoring 86% on the AIME 2024 mathematics competition, and offers businesses an efficient alternative to expensive proprietary AI models.