Nous Research has launched a project to train a 15-billion-parameter large language model on computers distributed across the internet rather than in a traditional data center. As Carl Franzen reports for VentureBeat, the company is livestreaming the pre-training run on its website, showing real-time evaluation benchmarks and the locations of participating hardware across the U.S. and Europe. The run uses Nous DisTrO, the company's proprietary technology that cuts inter-GPU communication bandwidth requirements by up to 10,000 times, making it feasible to train AI models over standard internet connections. Partners include Oracle, Lambda Labs, Northern Data Group, Crusoe Cloud, and the Andromeda Cluster. If successful, the approach could open large-scale AI training to smaller organizations and researchers who lack access to expensive supercomputers.
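DisTrO's actual algorithm has not been published, so the following is only a generic sketch of how distributed training can shrink inter-GPU communication: instead of exchanging full gradients, each worker sends a compressed representation (here, simple top-k sparsification) and the receiver reconstructs an approximate gradient. The function names and the top-k approach are illustrative assumptions, not Nous Research's method.

```python
import numpy as np

def topk_compress(grad, k):
    # Illustrative only: keep the k largest-magnitude gradient entries,
    # so a worker transmits k (index, value) pairs instead of the full vector.
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    return idx, grad[idx]

def topk_decompress(idx, values, size):
    # Rebuild an approximate gradient with zeros everywhere else.
    out = np.zeros(size)
    out[idx] = values
    return out

rng = np.random.default_rng(0)
grad = rng.standard_normal(1_000_000)  # stand-in for one layer's gradient
k = 100  # send 100 of 1,000,000 entries: a 10,000x reduction in values sent
idx, vals = topk_compress(grad, k)
recovered = topk_decompress(idx, vals, grad.size)
print(f"entries sent reduced by {grad.size // k}x")
```

This only illustrates why bandwidth savings of that order are plausible; real systems layer error feedback, quantization, and optimizer-aware tricks on top of such compression.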
Nous Research trains AI model using distributed internet computers
Tags: Nous Research