Geoffrey Hinton, Jensen Huang, and Fei-Fei Li were instrumental in launching the deep learning revolution, despite facing skepticism from colleagues, Timothy B. Lee writes. Hinton spent decades promoting neural networks and helped develop and popularize the backpropagation algorithm for training them efficiently, as detailed in Cade Metz’s book “Genius Makers.” Huang, CEO of Nvidia, recognized the potential of GPUs for non-graphics applications and launched the CUDA platform in 2006, which later made it practical to train neural networks far faster than on CPUs. Li created ImageNet, a vast labeled dataset of over 14 million images, which provided the training data needed to demonstrate the power of large neural networks.
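To give a flavor of the backpropagation idea mentioned above, here is a minimal sketch in plain NumPy: a tiny one-hidden-layer network learning XOR by pushing error gradients backward through the chain rule. The network size, learning rate, and task are illustrative choices, not anything from the article.

```python
import numpy as np

# Toy backpropagation demo: a one-hidden-layer network learning XOR.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8))   # input -> hidden weights
b1 = np.zeros((1, 8))
W2 = rng.normal(size=(8, 1))   # hidden -> output weights
b2 = np.zeros((1, 1))

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5

for step in range(5000):
    # Forward pass: compute the network's predictions layer by layer.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the error gradient from the output layer
    # back toward the input using the chain rule (the core of backprop).
    dp = (p - y) * p * (1 - p)           # gradient at the output pre-activation
    dW2 = h.T @ dp
    db2 = dp.sum(axis=0, keepdims=True)
    dh = (dp @ W2.T) * h * (1 - h)       # gradient flowing into the hidden layer
    dW1 = X.T @ dh
    db1 = dh.sum(axis=0, keepdims=True)

    # Gradient descent update.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(np.round(p, 2))  # typically close to the XOR targets [[0], [1], [1], [0]]
```

The same forward/backward pattern, scaled up to millions of parameters and run on GPUs, is what made networks like AlexNet trainable.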
In 2012, a neural network called AlexNet, trained on ImageNet using Nvidia GPUs, won the ImageNet competition by an unprecedented margin. This groundbreaking result, presented by Alex Krizhevsky and dubbed “an unequivocal turning point” by Yann LeCun, vindicated the work of Hinton, Huang, and Li. It also established Nvidia GPUs as the industry standard for training neural networks, paving the way for the company’s trillion-dollar valuation today. The success of AlexNet demonstrates the importance of pursuing unorthodox ideas, and shows how transformative the convergence of neural networks, big data, and GPU computing proved to be.
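Readers who want to experiment with the model behind that 2012 result can load a variant of the AlexNet architecture, with ImageNet-pretrained weights, from the torchvision library. A minimal sketch, assuming torchvision 0.13 or newer; the random tensor stands in for a real preprocessed image:

```python
import torch
from torchvision.models import alexnet, AlexNet_Weights

# Load an AlexNet variant with ImageNet-pretrained weights
# (downloads the weights on first use).
model = alexnet(weights=AlexNet_Weights.IMAGENET1K_V1)
model.eval()

# AlexNet expects 224x224 RGB inputs; random data is used here as a placeholder.
dummy_batch = torch.randn(1, 3, 224, 224)

with torch.no_grad():
    logits = model(dummy_batch)  # shape (1, 1000): one score per ImageNet class

print(logits.shape)
print("Predicted class index:", logits.argmax(dim=1).item())
```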