On Tuesday, Google dropped from its "AI Principles" page its pledges not to pursue ventures that "cause overall harm," applications for "weapons" or "surveillance," or pursuits that conflict with "international law and human rights."
Alongside the rescinded principles, a pair of Google executives published a blog post on Tuesday casting the new policy as a response to an "increasingly complex geopolitical landscape."
According to Wired, the AI principles were first introduced in 2018 in an effort to quell internal protests at the company over a contract to work on a United States drone program. The protests were apparently successful: Google declined to renew the contract and announced the principles.
On Tuesday, those principles were set aside. In the blog post, James Manyika, Google's senior vice president for research, labs, technology, and society, and Demis Hassabis, CEO of Google DeepMind, wrote that "we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security."
Google employees have since expressed concern about the reversal. Parul Koul, a Google software engineer and president of the Alphabet Workers Union-CWA, told Wired, "It's deeply concerning to see Google drop its commitment to the ethical use of AI technology without input from its employees or the broader public, despite long-standing employee sentiment that the company should not be in the business of war."