Google removes AI weapons and surveillance restrictions from ethics policy

Google has revised its AI ethics guidelines, eliminating previous restrictions on using AI for weapons and surveillance applications. According to reporting by Nitasha Tiku and Gerrit De Vynck in The Washington Post, the company removed the “Applications we will not pursue” section that had been in place since 2018.

Google’s AI chief Demis Hassabis and Senior Vice President James Manyika explained the change in a blog post, arguing that democracies should lead AI development and that companies should support national security efforts. The update brings Google in line with other major AI companies such as OpenAI and Anthropic, which have recently formed partnerships with defense contractors.

The original restrictions were implemented after employee protests against Project Maven, a Pentagon contract for analyzing drone footage. The new guidelines maintain commitments to human oversight and testing to prevent harmful outcomes, while emphasizing collaboration between companies and governments sharing democratic values.

Industry experts note this shift reflects the growing importance of AI in military applications and changing dynamics in U.S.-China technology competition. The policy change also comes amid increasing partnerships between tech companies and defense agencies, with competitors Microsoft and Amazon already maintaining strong Pentagon relationships.
