The United States and United Kingdom have declined to sign an international declaration on AI safety at the Paris AI Action Summit, while the European Union has withdrawn its planned AI liability directive. These developments signal a significant shift in the global approach to AI regulation.
At the Paris summit, US Vice President JD Vance emphasized America’s determination to maintain its dominance in AI technology while warning against “overly precautionary” regulations. The declaration, signed by approximately 60 countries including China, India, and Germany, called for ensuring AI systems are “safe, secure and trustworthy.”
Key Developments:
- The US and UK refused to sign the summit declaration, marking a departure from their previous positions
- The European Commission withdrew its AI liability directive following criticism from the US
- France announced €109 billion in planned AI investments, while the European Commission unveiled a €200 billion InvestAI initiative
- Competition with China intensified following DeepSeek’s breakthrough in AI development
The EU’s withdrawal of its AI liability directive, announced late on February 11, came directly after Vance’s criticism of European regulatory approaches. The Commission cited “no foreseeable agreement” as the reason for withdrawing the directive, suggesting a potential shift toward prioritizing competitiveness over strict regulation.
The summit revealed growing tensions between regulatory approaches and innovation goals. While European leaders announced substantial investments in AI development, the event highlighted the challenges of balancing safety concerns with technological advancement.
Industry leaders, including executives from OpenAI, Anthropic, and Google DeepMind, participated in the summit, with discussions focusing more on AI’s potential benefits than on the safety risks emphasized at previous gatherings.
Sources: Ars Technica, New York Times, Euractiv