Steven Adler, a former safety researcher at OpenAI, has expressed serious concerns about the rapid advancement of artificial intelligence technology. In a recent series of posts on X (formerly Twitter), as reported by Dan Milmo in The Guardian, Adler described the current pace of AI development as “terrifying” and questioned humanity’s long-term survival prospects.
During his four-year tenure at OpenAI, Adler led safety research for new product launches and long-term AI systems. He warned in particular about the risks of the race to develop artificial general intelligence (AGI) – systems designed to match or exceed human intelligence across all tasks. According to Adler, no research laboratory has yet solved the critical challenge of AI alignment – ensuring that AI systems reliably follow human values – and the faster development proceeds, the less likely it is that proper safety solutions will be found in time. He also highlighted the industry's problematic dynamics: even companies that aim to develop AGI responsibly face pressure from competitors who might take shortcuts.
His concerns echo those of AI pioneer Geoffrey Hinton, though others, such as Meta’s chief AI scientist Yann LeCun, take a far less alarmed view. Adler’s warnings came as the Chinese company DeepSeek announced an AI model that competes with OpenAI’s technology, further intensifying the global AI race. He called for proper safety regulations to address these challenges.