Google has removed its longstanding prohibition on the use of artificial intelligence in weapons and surveillance systems. Former employees and industry experts say the move could reshape Silicon Valley’s approach to AI safety.
The change, implemented quietly this week, eliminates portions of Google’s AI Principles that explicitly prohibited the company from developing AI for weapons or surveillance. Those principles, established in 2018, had served as an industry benchmark for responsible AI development.
“The last bastion has disappeared. There’s no hold barred,” said Tracy Pizzo Frey, who spent five years implementing Google’s AI Principles as Senior Director of Outbound Product Management, Engagements and Responsible AI at Google Cloud, in a BlueSky post. “Google really stood out in this level of transparency about what it would build.”

The revised principles remove four specific prohibitions: technologies likely to cause overall harm, weapons applications, surveillance systems, and technologies that violate international law and human rights. Google now says it will “mitigate any unintended or harmful consequences” and align itself with “widely-accepted principles of international law, and human rights.”
Google relaxes AI ethics: What does this mean for military and surveillance tech?
The timing of the shift is particularly sensitive, as artificial intelligence capabilities advance rapidly and debates over appropriate guardrails intensify. That timing has led some to question Google’s motives, even though the company says the changes were long in development.
“We’re in a situation where there isn’t much trust in big technology, and every move which even appears to remove safeguards creates more mistrust,” Pizzo Frey said in an interview with VentureBeat. She said that establishing clear ethical boundaries had been crucial to building trustworthy AI systems during her time at Google.
The original principles were unveiled in 2018 amid employee protests over Project Maven, a Pentagon contract that involved AI for drone footage analysis. Google declined to renew that contract, but the new changes may signal openness to similar military partnerships. The revised framework retains some elements of Google’s prior ethical commitments but emphasizes risk management instead. This approach aligns with industry standards such as the NIST AI Risk Management Framework, though critics argue it offers fewer concrete restrictions on potentially harmful applications.
“Even if the rigor is not the same, ethics considerations are no less important for creating good AI,” Pizzo Frey said, arguing that ethical review improves AI products’ accessibility and effectiveness.
From Project Maven to policy switch: the road to Google’s AI ethics overhaul
According to industry observers, the policy change may influence how other companies approach AI ethics. Google’s original principles set a precedent for corporate self-regulation in AI development, and many enterprises looked to Google for guidance on responsible AI implementation.
The modification of Google’s AI Principles reflects broader tensions between rapid innovation and ethical restraint in the tech industry. As competition in AI intensifies, companies face growing pressure to balance responsible development against market demands.
Pizzo Frey expressed concern over the pressure from the AI industry to release products quickly, without a thorough evaluation of the potential consequences.
The ethical dilemma of big tech: Will Google’s AI-policy shift set a standard for the industry?
The revision raises questions about Google’s internal decision-making process and how employees will navigate ethical considerations in the absence of explicit prohibitions. During her tenure, Pizzo Frey established review processes at Google that brought diverse perspectives together to evaluate AI applications.
Google says its commitment to responsible AI is unchanged, but the removal of specific prohibitions marks a significant departure from its previous leadership role in establishing clear ethical boundaries for AI applications. As artificial intelligence advances, the industry is watching to see how the change might shape the broader landscape of AI development and regulation.