More than 600 Google employees have urged Alphabet CEO Sundar Pichai to prevent the company’s AI systems from being used in classified military operations. Despite this internal opposition, Google reportedly signed an agreement with the US Department of Defense allowing its AI models to be used for “any lawful government purpose,” including sensitive military activities. The contract resembles deals recently struck by OpenAI and xAI with the Pentagon. Although the agreement states that the AI is not intended for domestic mass surveillance or for autonomous weapons operating without human oversight, Google agreed to defer to the government’s determination of what constitutes lawful use and to help modify safety settings on request. A Google spokesperson reaffirmed the company’s commitment against AI use for mass surveillance or autonomous weaponry without human supervision, describing the provision of API access to commercial models as a responsible contribution to national security. Employees, however, remain concerned that classified applications could proceed without transparency or their awareness, potentially enabling unethical or harmful uses such as lethal autonomous weapons.
The controversy recalls past internal protests, most notably over Project Maven in 2018, a Pentagon contract Google ultimately chose not to renew. Google’s stance has since evolved: the company removed from its AI principles its earlier prohibitions on developing AI for weapons and for certain surveillance applications. Executives have recently argued that democracies should lead AI development collaboratively to protect people, encourage growth, and bolster national security. The employee letter closes by urging Pichai to uphold the company’s founding values by rejecting classified workloads.
The ainewsarticles.com article above is a brief synopsis; the full original article is available via the source link.