
Silicon Valley spooks AI safety advocates


Silicon Valley leaders, including White House AI and Crypto Czar David Sacks and OpenAI chief strategy officer Jason Kwon, caused a stir online this week with their comments about groups promoting AI safety. In separate instances, they claimed that certain AI safety advocates are not as virtuous as they appear, acting either in their own interest or at the behest of billionaire puppet masters.

AI safety groups that spoke with TechCrunch say the allegations from Sacks and OpenAI are Silicon Valley's latest attempt to intimidate its critics, and not the first. In 2024, some venture capital firms spread rumors that SB 1047, a California AI safety bill, would send startup founders to jail. The Brookings Institution labeled the rumor one of many "misrepresentations" of the bill, but Governor Gavin Newsom ultimately vetoed it anyway.

Whether or not Sacks and OpenAI intended to intimidate their critics, their actions have scared AI safety advocates. Many of the nonprofit leaders TechCrunch contacted in the past week asked to remain anonymous to spare their organizations from retaliation.

This controversy highlights Silicon Valley's growing tension between building AI responsibly and building it into a massive consumer product, a topic that my colleagues Kirsten Korosec, Anthony Ha, and I explore on this week's Equity podcast. We also discuss a new AI safety law passed in California that regulates chatbots, and OpenAI's approach to erotica on ChatGPT.

On Tuesday, Sacks wrote a post on X claiming that Anthropic, which has raised concerns about AI's potential to contribute to unemployment, cyberattacks, and catastrophic harms to society, is simply fearmongering to get laws passed that will benefit itself and drown smaller startups in paperwork. Anthropic is the only major AI lab to have endorsed California's Senate Bill 53 (SB 53), which sets safety reporting requirements for large AI companies. SB 53 was signed into law by Governor Gavin Newsom last month.

Sacks was responding to a viral essay by Jack Clark, a co-founder of Anthropic, about his fears regarding AI. Clark had delivered the essay as a talk at the Curve AI safety conference in Berkeley a few weeks earlier. Sacks did not see it in that light.

Anthropic runs a sophisticated regulatory capture campaign based on fear mongering. It is primarily responsible for the state-wide regulatory frenzy which is damaging to the startup ecosystem. https://t.co/C5RuJbVi4P

— David Sacks (@DavidSacks) October 14, 2025

Sacks said Anthropic is pursuing a "sophisticated regulatory capture strategy." It's worth noting, however, that a truly sophisticated strategy probably wouldn't involve making an enemy of the federal government.

Also this week, OpenAI's chief strategy officer, Jason Kwon, wrote a post on X explaining why the company sent subpoenas (legal orders demanding documents or testimony) to AI safety nonprofits such as Encode, which advocates for responsible AI policy. Kwon said that after Elon Musk sued OpenAI, claiming that the company had strayed from its nonprofit mission, OpenAI grew suspicious that several organizations also raised objections to its restructuring. Encode filed a brief in support of Musk's lawsuit, and other nonprofits spoke out publicly against OpenAI's restructuring.

The story is much more complex than this.

Everyone knows that we are actively defending Elon in a suit where he is attempting to damage OpenAI for the sake of his own financial gain.

Encode, the organization behind the lawsuit, where @_NathanCalvin serves as general counsel, was one… https://t.co/DiBJmEwtE4

— Jason Kwon (@jasonkwon) October 10, 2025

"This raised transparency concerns about who was funding them and whether there was any coordination," Kwon said.

NBC News reported this week that OpenAI issued broad subpoenas to Encode and six other nonprofits, asking for their communications related to two of OpenAI's biggest opponents: Musk and Meta CEO Mark Zuckerberg. OpenAI also asked Encode for communications related to its support of SB 53.

A prominent AI safety leader told TechCrunch there is a growing divide between OpenAI's research organization and its government affairs team. While OpenAI's safety researchers regularly publish reports disclosing the risks of AI systems, its policy unit lobbied against SB 53, saying it preferred uniform rules at the federal level. Joshua Achiam, OpenAI's head of mission alignment, spoke out against his company sending subpoenas to nonprofits in a post on X this week.

"At what is possibly a risk to my whole career, I will say: this doesn't seem great," Achiam said.

Brendan Steinhauser, CEO of the Alliance for Secure AI, a nonprofit that promotes AI safety (and which has not been subpoenaed by OpenAI), told TechCrunch that OpenAI appears to believe its critics are part of a Musk-led conspiracy. He argues that this is not the case, and that much of the AI safety community is quite critical of xAI's safety practices, or lack thereof.

Steinhauser said OpenAI is using the subpoenas to intimidate its critics and dissuade other nonprofits from speaking out. "I think Sacks is concerned that the [AI safety] movement is growing and that people want these companies to be held accountable," he said. Another response came in a social media post claiming AI safety advocates are out of touch, and calling on AI safety organizations to talk to "people who are in the real world using AI in their homes, organizations, and businesses."

According to a recent Pew study, roughly half of Americans are more concerned than excited about AI, though it's unclear what exactly worries them. Another recent study dug into the details and found that American voters care more about job losses and deepfakes than about the catastrophic risks that the AI safety movement largely focuses on.

Addressing these safety concerns could come at the expense of the AI industry's rapid growth, a tradeoff that worries many in Silicon Valley. With AI investment propping up a large part of the American economy, the fear of overregulation is understandable.

However, after years of unregulated AI progress, the AI safety movement appears to be gaining real momentum heading into 2026. Silicon Valley's attempts to fight back against safety-focused groups may be a sign that those efforts are working.
