ServiceNow has launched Fast-LLM, an open-source framework that speeds up artificial intelligence model training by 20%. As reported by Sean Michael Kerner for VentureBeat, the technology has already proven itself in training ServiceNow’s StarCoder 2 language model. Fast-LLM introduces two key innovations: “Breadth-First Pipeline Parallelism,” which reorders computation across pipeline stages, and improved memory management that reduces fragmentation during training. The framework works as a drop-in replacement in PyTorch environments and requires minimal configuration changes to existing AI training pipelines. According to Nicolas Chapados, VP of research at ServiceNow, the technology can significantly reduce the cost and environmental impact of large-scale AI training operations. The company aims to develop Fast-LLM further through community contributions, following its successful open-source approach with StarCoder.
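To give a rough intuition for what a “breadth-first” ordering means here, the sketch below contrasts two ways of scheduling micro-batches across pipeline stages. This is an illustrative simplification, not Fast-LLM’s actual scheduler or API: the function names and the flat schedule representation are invented for this example, and real pipeline schedulers also interleave forward/backward passes and communication.

```python
# Illustrative sketch only (assumption): contrasts micro-batch orderings in
# pipeline-parallel training. Fast-LLM's real scheduler is more sophisticated.

def depth_first_schedule(num_stages: int, num_microbatches: int):
    """Each micro-batch is pushed through all stages before the next begins.

    Returns a flat list of (stage, microbatch) work items in execution order.
    """
    return [(stage, mb)
            for mb in range(num_microbatches)
            for stage in range(num_stages)]


def breadth_first_schedule(num_stages: int, num_microbatches: int):
    """Each stage processes every micro-batch before work advances a stage.

    Grouping all micro-batches per stage is the intuition behind a
    breadth-first ordering: it gives the scheduler more back-to-back work per
    stage to overlap with communication, rather than bouncing between stages.
    """
    return [(stage, mb)
            for stage in range(num_stages)
            for mb in range(num_microbatches)]


if __name__ == "__main__":
    # Two stages, three micro-batches: same work items, different order.
    print(depth_first_schedule(2, 3))
    print(breadth_first_schedule(2, 3))
```

With two stages and three micro-batches, the depth-first order alternates between stages, while the breadth-first order keeps each stage busy with consecutive micro-batches before moving on; the same total work is done either way.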