Alibaba Cloud has launched an upgraded version of its Qwen2.5-Turbo AI model that can now process contexts of up to one million tokens, equivalent to approximately 1.5 million Chinese characters or 10 full-length novels. The improved model scores 93.1 on the RULER long-context benchmark, surpassing GPT-4's score of 91.6. According to Alibaba, the new version also delivers significantly faster inference: by using sparse attention mechanisms, it cuts the time to first token on a one-million-token context from 4.9 minutes to 68 seconds. The company keeps its existing pricing while offering the enhanced capabilities for tasks such as novel comprehension, code assistance, and research paper analysis. The model retains strong performance on shorter sequences, matching GPT-4o-mini's capabilities while handling far longer contexts.
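
For readers who want a sense of what a million-token request looks like in practice, the sketch below passes an entire book-length text in a single call. It assumes the model is exposed through Alibaba Cloud's OpenAI-compatible DashScope endpoint under the identifier `qwen-turbo-latest`; neither the endpoint URL nor the model name is confirmed by the announcement, so treat both as placeholders.

```python
# Minimal sketch: sending a book-length document to Qwen2.5-Turbo in one request.
# Assumptions not stated in the article: the OpenAI-compatible DashScope endpoint,
# the model identifier "qwen-turbo-latest", and an API key in DASHSCOPE_API_KEY.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DASHSCOPE_API_KEY"],
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",  # assumed endpoint
)

# With a one-million-token window, the whole file can go into a single prompt
# instead of being chunked and summarized piecewise.
with open("novel.txt", encoding="utf-8") as f:
    novel = f.read()

response = client.chat.completions.create(
    model="qwen-turbo-latest",  # assumed identifier for Qwen2.5-Turbo
    messages=[
        {"role": "system", "content": "You are a careful literary analyst."},
        {"role": "user", "content": novel + "\n\nSummarize the plot and main characters."},
    ],
)
print(response.choices[0].message.content)
```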