Prime Intellect has introduced INTELLECT-2, a 32B-parameter model it describes as the first of its size trained via globally distributed reinforcement learning. Rather than relying on a centralized GPU cluster, training drew on compute contributed by participants around the world. In a technical report, Prime Intellect details its custom-built infrastructure, including PRIME-RL for asynchronous reinforcement-learning training, TOPLOC for verifying rollouts produced by untrusted inference workers, and SHARDCAST for distributing updated policy weights, which together enable asynchronous training across heterogeneous networks. The company reports improved performance on mathematics and coding benchmarks compared to QwQ-32B, the base model INTELLECT-2 was trained from. Prime Intellect has open-sourced the model, along with its code and data, to advance research in decentralized AI training methods. Future plans include extending the approach to tool-assisted reasoning, crowdsourcing higher-quality training data, and further optimizing the infrastructure for building more advanced open models.
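
To make the asynchronous training pattern concrete, here is a minimal sketch in Python. All names (rollout_worker, trainer, MAX_STALENESS, the toy policy dictionary) are hypothetical and do not reflect Prime Intellect's PRIME-RL, TOPLOC, or SHARDCAST APIs; the sketch only illustrates the general idea of distributed workers generating rollouts against slightly stale policy versions while a central trainer updates and rebroadcasts weights.

```python
# Hypothetical sketch of asynchronous, staleness-tolerant RL training.
# Not Prime Intellect's implementation; names and constants are assumptions.
import queue
import random
import threading
import time

MAX_STALENESS = 2   # accept rollouts at most 2 policy versions old (assumption)
NUM_WORKERS = 4
TRAIN_STEPS = 10

rollout_queue: "queue.Queue[dict]" = queue.Queue()
policy_lock = threading.Lock()
policy = {"version": 0, "weights": 0.0}   # stand-in for real model weights
stop = threading.Event()


def rollout_worker(worker_id: int) -> None:
    """Generate rollouts using the latest policy snapshot this worker has seen."""
    while not stop.is_set():
        with policy_lock:
            snapshot = dict(policy)        # simulates pulling broadcast weights
        # Simulate slow, heterogeneous hardware with variable generation time.
        time.sleep(random.uniform(0.01, 0.05))
        rollout_queue.put({
            "worker": worker_id,
            "policy_version": snapshot["version"],
            "reward": random.random(),     # stand-in for a scored rollout
        })


def trainer() -> None:
    """Consume rollouts, drop overly stale ones, update and 'broadcast' weights."""
    for step in range(TRAIN_STEPS):
        batch = []
        while len(batch) < NUM_WORKERS:
            rollout = rollout_queue.get()
            staleness = policy["version"] - rollout["policy_version"]
            if staleness <= MAX_STALENESS:   # tolerate bounded off-policy data
                batch.append(rollout)
        avg_reward = sum(r["reward"] for r in batch) / len(batch)
        with policy_lock:
            policy["weights"] += 0.1 * avg_reward   # toy "gradient" update
            policy["version"] += 1                  # new version to broadcast
        print(f"step {step}: version={policy['version']} avg_reward={avg_reward:.3f}")
    stop.set()


workers = [threading.Thread(target=rollout_worker, args=(i,), daemon=True)
           for i in range(NUM_WORKERS)]
for w in workers:
    w.start()
trainer()
```

In this toy setup, bounding staleness is what lets slower or intermittently connected contributors keep submitting useful rollouts without blocking the training loop, which is the core property any asynchronous decentralized RL scheme of this kind needs.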