Essential AI has launched Rnj-1, an open-source large language model designed to excel at coding, mathematical reasoning, and scientific tasks. The model, named after Indian mathematician Srinivasa Ramanujan and pronounced “range-1,” marks the company’s first major contribution to the open-source AI ecosystem.
The team reports on the Essential AI blog that the 8 billion parameter model demonstrates exceptional performance in software engineering tasks. On SWE-bench, a benchmark that measures real-world programming abilities, Rnj-1 performs significantly better than similarly sized models and approaches the capabilities of much larger systems.
Essential AI developed two versions of the model. The base version serves as a foundation, while the instruction-tuned variant follows user commands and handles multi-step tasks. Both versions use the Gemma 3 architecture and support contexts of up to 32,000 tokens.
The model shows particular strength in writing and optimizing code. On algorithmic coding benchmarks like HumanEval+ and MBPP+, Rnj-1 competes with the strongest open models of similar size and sometimes outperforms GPT OSS 20B, which has more than twice as many parameters. The instruction-tuned version can use profiling tools to iteratively improve code efficiency, a capability typically reserved for larger models.
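To make "profiling-guided iteration" concrete, here is a minimal sketch of the general workflow (an illustration of the technique, not Essential AI's actual tooling): profile a naive implementation with Python's standard cProfile, identify the hotspot, and verify that an optimized rewrite preserves behavior.

```python
# Minimal sketch of profiling-guided optimization (illustrative only;
# not Essential AI's actual tooling). Profile a naive function, inspect
# the hotspot report, and check an optimized rewrite is equivalent.
import cProfile
import pstats
import io

def slow_sum_of_squares(n):
    # Naive version: explicit accumulation loop.
    total = 0
    for i in range(n):
        total += i * i
    return total

def fast_sum_of_squares(n):
    # Optimized rewrite: closed-form sum-of-squares formula.
    return (n - 1) * n * (2 * n - 1) // 6

def profile_call(fn, *args):
    """Run fn under cProfile and return (result, cumulative-time report)."""
    profiler = cProfile.Profile()
    profiler.enable()
    result = fn(*args)
    profiler.disable()
    buf = io.StringIO()
    pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(5)
    return result, buf.getvalue()

result, report = profile_call(slow_sum_of_squares, 100_000)
assert result == fast_sum_of_squares(100_000)  # rewrite preserves behavior
```

An agent with tool access can loop over exactly these steps: run the profiler, read the report, propose a faster version, and re-check equivalence before accepting it.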
In mathematical problem solving, Rnj-1 matches top open-source models on AIME’25, a challenging high school mathematics competition benchmark. The model also performs competitively on GPQA-Diamond, which tests knowledge in biology, physics, and chemistry with questions that remain difficult for non-experts even with internet access.
Essential AI’s development journey began in February when the company decided to focus primarily on model capabilities rather than product development. The team chose to prioritize pre-training over post-training, betting that strong foundational training would be necessary for downstream success. This approach contrasted with the industry trend following DeepSeek R1’s release, which emphasized reinforcement learning.
The company split its development into two phases throughout the year, using smaller models of between 200 million and 2 billion parameters for rapid experimentation. The team then validated promising results at larger scale with 8 billion parameter models. Essential AI reports training efficiency of roughly 50 percent of the hardware's peak achievable performance on AMD MI300X GPUs.
Infrastructure played a crucial role in the development. The team built a unified training framework supporting both TPU and GPU platforms across two cloud providers. They also developed an automated node recovery service that reduced wasted computation by two thirds.
The model maintains performance even when compressed to lower-precision formats. Essential AI reports that Rnj-1 retains output quality when quantized from BF16 down to FP8 and NVFP4, which significantly increases token throughput in prompt-heavy workloads.
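The underlying idea can be sketched with a generic scaled-quantization round trip (a pure-Python illustration; real FP8 and NVFP4 are hardware-supported floating-point formats, not the integer grid used here, and Essential AI's pipeline is not public):

```python
# Illustrative sketch of weight quantization (generic symmetric scaled
# quantization, NOT the actual FP8/NVFP4 formats). Weights are mapped
# to a small signed-integer grid via a per-tensor scale, then restored.
def quantize(weights, bits=8):
    """Symmetric per-tensor quantization to signed integers."""
    qmax = 2 ** (bits - 1) - 1
    scale = max(abs(w) for w in weights) / qmax or 1.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.12, -0.98, 0.45, 0.0, 0.77]
q, scale = quantize(weights, bits=8)
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
# Round-to-nearest error is bounded by half a quantization step.
assert max_err <= scale / 2 + 1e-12
```

Fewer bits mean a coarser grid and more error, which is why preserving quality down to 4-bit formats like NVFP4 is notable; the payoff is that smaller weights move through memory faster, raising token throughput.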
Essential AI positions itself as an advocate for open-source AI development. The company believes that mastery of the underlying technology represents a viable path to building useful and enduring AI companies. Both the base and instruction-tuned versions of Rnj-1 are available to the public with full model cards and usage instructions.