The 2024 U.S. elections focused on traditional issues such as the economy and immigration, but their quiet impact on AI policy could prove even more transformative. Without a single debate question or major campaign pledge about AI, voters tipped the balance in favor of those who advocate rapid AI development with minimal regulatory hurdles. This acceleration is profound: It signals a new era of AI policy that prioritizes innovation over caution, and a decisive shift in the debate over AI's risks and rewards.
Many assume that a Trump administration, given the President-elect's pro-business stance, will favor those who develop and market AI and other advanced technologies. His party platform says little about AI. It does, however, emphasize repealing AI regulation, taking aim in particular at what it calls "radical leftist ideas" in the outgoing administration's existing executive orders. In their place, the platform supports AI development rooted in free speech and "human flourishing," calling for policies that enable AI innovation.
Early indications, based on appointments to leading government positions, underscore this direction. A larger story is unfolding, however: the resolution of a heated debate over AI's future.
A heated debate
Since the release of ChatGPT in November 2022, there has been an intense debate between those who want AI development to accelerate and those who want to slow it down.
In March 2023, the Future of Life Institute spearheaded an open letter calling for a six-month pause in the development of the most advanced AI systems. It came in response to OpenAI's GPT-4 large language model (LLM), released several months after ChatGPT.
More than 1,000 technology leaders, researchers, and politicians initially signed the letter, including Elon Musk, Apple co-founder Steve Wozniak, 2020 Presidential candidate Andrew Yang, Lex Fridman, and AI pioneers Yoshua Bengio and Stuart Russell. The number of signees eventually grew to more than 33,000. Collectively, these people became known as "doomers," a term that captured their concerns about the potential existential risks from AI.
But not everyone agreed. OpenAI CEO Sam Altman did not sign. Neither did Bill Gates and many others. Their reasons for declining varied, though many voiced concerns about the potential harms of AI. Conversations multiplied about the danger of AI running amok and causing disaster, and it became common for those in the AI field to express their assessment of the likelihood of doom using the shorthand p(doom). Nevertheless, work on AI did not stop.
For the record, in June 2023 my p(doom) was 5%. That might seem low, but it was not zero. I felt that the major AI labs were sincere in their efforts to rigorously test new models before release and to provide significant guardrails for their use.
Many observers concerned about AI have rated existential risk higher than 5%, some much higher. AI safety researcher Roman Yampolskiy has estimated the probability of AI ending humanity at over 99%. That said, a study published early this year, representing the views of more than 2,700 AI researchers, found that the median prediction for very bad outcomes, such as human extinction, was 5%. Would you board a flight if it had a 5% probability of crashing? This is the dilemma facing AI researchers and policymakers.
Must go faster
Others have openly dismissed worries about AI, pointing instead to what they perceive as the technology's huge upside. These include Andrew Ng, who founded and led the Google Brain project, and Pedro Domingos, a professor of computer science and engineering at the University of Washington and author of "The Master Algorithm." They have argued that AI is part of the solution. Ng, in particular, argues that there are genuine existential threats, such as climate change and future pandemics, and that AI can help mitigate them.
Ng argued that AI development should not be paused but should instead move faster. This utopian view of technology has been echoed by others collectively known as "effective accelerationists," or "e/acc" for short. They argue that technology, and especially AI, is not the problem but the solution to most, if not all, of the world's issues. Garry Tan, CEO of startup accelerator Y Combinator, along with other prominent Silicon Valley leaders, added the term "e/acc" to their usernames on X to show alignment with this vision. New York Times reporter Kevin Roose captured the essence of the accelerationists by describing their "all-gas, no-brakes approach."
A Substack newsletter from a couple of years ago described the principles underlying effective accelerationism, summing them up at the end of the article alongside a comment from OpenAI CEO Sam Altman.
AI acceleration ahead
The 2024 election results may be seen as a pivotal moment, allowing the accelerationist vision to shape U.S. AI policy for the next several years. The President-elect has appointed David Sacks to the role of "AI czar."
Sacks, a vocal critic of AI regulation and a proponent of market-driven innovation, brings his experience as a technology investor to the position. He is one of the leading voices in the AI industry, and much of what he has said about AI aligns with the accelerationist views expressed by the incoming party platform.
In 2023, Sacks responded to the Biden administration's AI executive order by tweeting: "The U.S. is hopelessly broke, but we do have one unmatched asset as a nation: Cutting-edge AI innovation driven by a free and unregulated software development market." His appointment signals a shift in AI policy toward industry self-regulation and rapid innovation.
Elections have consequences.
It is doubtful that most voters gave AI policy implications much thought when they cast their ballots. Nevertheless, in a very tangible way, the accelerationists have won the election, potentially sidelining those who advocate a more cautious federal approach to mitigating AI's long-term risks.
As the accelerationists chart the path forward, the stakes are higher than ever. Whether this era brings unprecedented progress or unintended disaster remains to be seen. As AI development accelerates, the need for informed public discussion and vigilant oversight becomes ever more important. How we navigate this era will determine not only technological progress but also our collective destiny.
To counteract a lack of action at the federal level, one or more states may adopt various regulations, as has already happened to some extent in California and Colorado. California's AI bills, for example, focus on transparency requirements, while Colorado addresses AI discrimination in hiring practices, offering models for state-level governance. All eyes will also be on the voluntary testing and self-imposed guardrails at leading AI model developers such as Anthropic, Google, OpenAI and others.
To summarize, the accelerationist victory means fewer restrictions on AI innovation. This increased speed may indeed lead to faster innovation, but it also raises the risk of unintended consequences. I am now revising my p(doom) to 10%. What's yours?

Gary Grossman, EVP of technology at Edelman, is global lead of the Edelman AI Center of Excellence.