Google’s recent decision to remove its prohibition on using artificial intelligence (AI) for weapons and surveillance has drawn sharp criticism from Amnesty International. Matt Mahmoudi, Researcher and Adviser on AI and Human Rights, warned that the move sets a dangerous precedent and increases the risk of human rights violations.

Concerns Over AI in Warfare and Surveillance
Amnesty International has long documented the threats posed by AI-powered technologies, particularly when used for mass surveillance, societal control, and automated military systems. According to Mahmoudi, Google’s policy reversal could lead to AI being integrated into lethal autonomous weapons, semi-automated drone strikes, and mass surveillance programs, enabling serious privacy violations and undermining international human rights standards.
“AI-powered technologies could fuel surveillance and lethal killing systems at a vast scale, potentially leading to mass violations and infringing on the fundamental right to privacy,” said Mahmoudi. He further emphasized that Google’s previous commitment to restricting AI applications in weapons development was an acknowledgment of these risks—one that has now been abandoned.
Amnesty is urging Google to reinstate its commitment to ethical AI development and refrain from producing or selling systems that could facilitate human rights abuses. The organization also calls on governments to implement binding regulations to oversee the deployment of AI in military and surveillance applications.
“The facade of self-regulation perpetuated by tech companies must not distract us from the urgent need to create robust legislation that protects human rights,” Mahmoudi stated.
Background on Google’s Policy Shift
On Tuesday, Google quietly removed its pledge to avoid developing AI technologies that “cause overall harm,” including those used in weapons and surveillance systems. The company justified the change by stating that businesses and governments must collaborate on AI that supports national security.
Amnesty International has previously criticized Google’s surveillance-based business model, citing its incompatibility with the right to privacy. In 2019, the organization published research highlighting how facial recognition systems amplify racial discrimination and threaten freedom of expression, thought, and protest rights.
With Google shifting its AI ethics stance, concerns are growing that the development of AI-driven military and surveillance technologies will accelerate, raising urgent questions about accountability and regulation.