How AGI became the most consequential conspiracy theory of our time

Can you sense it in the air?

Whispers abound: it might arrive within a year, or perhaps in five. This breakthrough promises to revolutionize everything: curing diseases, reversing climate change, and ushering in an era of unprecedented prosperity. It’s said to hold the key to solving humanity’s most daunting challenges in ways we can barely envision, fundamentally transforming what it means to be human.

But what if this vision is overly optimistic? There are also dire warnings that it could trigger catastrophic consequences, even threaten our very existence.

Regardless of the timeline or outcome, one thing is clear: a monumental shift is imminent.

We’re not talking about religious prophecies, extraterrestrial ascensions, or political conspiracies. The subject is artificial general intelligence (AGI): a theoretical future technology capable of performing any intellectual task a human can.


This article explores how the surge in AGI discourse is reshaping perceptions of science and technology in the modern era.


In tech circles, especially in innovation hubs like Silicon Valley, AGI is often discussed with a near-mystical reverence. Ilya Sutskever, cofounder and former chief scientist of OpenAI, reportedly led his team in chants of “Feel the AGI!”, a testament to the almost spiritual fervor surrounding the concept. In 2024, Sutskever departed OpenAI, whose mission is to ensure AGI benefits all humanity, to launch Safe Superintelligence, a startup focused on preventing or controlling a potential rogue AGI. The term “superintelligence” (an enhanced form of AGI) has become the new buzzword as conversations about AGI enter mainstream tech discourse.

Sutskever’s journey embodies the complex emotions many AGI proponents experience: he helped build the foundations of a technology he now finds deeply unsettling. “This will be a watershed moment: before and after,” he remarked shortly before leaving OpenAI. When asked why he shifted his focus to containment, he candidly admitted, “I’m doing it out of self-preservation. Ensuring any superintelligence doesn’t go rogue is absolutely critical.”

He is far from alone in harboring grand, sometimes apocalyptic visions.

Every era has its prophets who believe they are witnessing a pivotal transformation: an epochal event that will divide history into before and after.

For us, that transformative event is the anticipated arrival of AGI. As Shannon Vallor, a technology ethics scholar at the University of Edinburgh, notes, society has cycled through eras: the computer age, the internet age, and now the AI age. “It’s common to be told that the next big thing is just around the corner,” she says. “What’s different with AGI is that it doesn’t yet exist.”

That’s why “feeling the AGI” transcends typical hype. There’s something more peculiar at play. In my view, AGI resembles a conspiracy theory, perhaps the most impactful one of our time.

Having covered AI for over a decade, I’ve witnessed AGI evolve from a fringe fantasy to the central narrative driving a multi-billion-dollar industry. What was once a speculative dream now underpins the valuations of some of the world’s most valuable companies and influences global markets. It justifies massive investments in data centers and infrastructure purportedly necessary to realize this vision. AI companies are selling this dream hard.

Consider the rhetoric from industry leaders: Dario Amodei, CEO of Anthropic, claims AGI will be as intelligent as a “nation of geniuses.” Demis Hassabis of Google DeepMind envisions it sparking “an era of unparalleled human flourishing, enabling space colonization.” Sam Altman of OpenAI promises it will “dramatically boost prosperity and encourage people to enjoy life more and have more children.” These are bold promises.

Yet, the flip side is equally stark. When not painting utopias, these leaders warn of existential risks. In 2023, Amodei, Hassabis, and Altman jointly stated that mitigating AI-driven extinction risks should be a global priority on par with pandemics and nuclear war. Elon Musk has estimated a 20% chance that AI could annihilate humanity.

Katja Grace, lead researcher at AI Impacts, observes, “Superintelligence was once taboo to mention publicly, but now tech CEOs openly discuss building it. They joke about the risks, but the underlying tension is palpable.”

It all sounds a bit like a conspiracy theory. Such theories typically require a flexible narrative that can absorb contradictions, a promised utopia accessible only to those who uncover hidden truths, and a hope for salvation from worldly suffering.

AGI ticks these boxes. While it’s not a conspiracy in the strict sense, examining its parallels with conspiratorial thinking helps clarify its nature: a techno-utopian or techno-dystopian fever dream deeply entwined with cultural beliefs that resist easy dismissal.

This isn’t mere speculation. The AGI narrative dominates tech and economic discourse. Understanding its origins, allure, and influence is essential to making sense of today’s AI landscape.

Admittedly, likening AGI to a conspiracy theory is provocative and will unsettle many. But follow me down this path, and I’ll illuminate the underlying dynamics.

From Fringe to Mainstream: The Rise of AGI in Silicon Valley

The Allure of a Catchy Concept

Conspiracy theories often begin on society’s margins: small groups piecing together “evidence” or vigilantly watching the skies. Some, however, gain traction, becoming accepted by influential figures and even shaping policy. UFOs, once dismissed as fringe, now receive government attention. Vaccine skepticism, a far more dangerous example, has influenced public health decisions. AGI has followed a similar trajectory.

Back in 2007, AI was largely confined to narrow applications like recommendation algorithms used by Amazon and Netflix (then still mailing DVDs). The idea of a machine with human-like intelligence was considered fanciful.

Ben Goertzel, an AI researcher, had grander ambitions. In the 1990s, he founded Webmind, aiming to create a digital “baby brain” trained on the early internet. The startup failed, but Goertzel remained a key figure in a fringe community dreaming of versatile, human-level AI.

Seeking a term to distinguish this vision from conventional AI, a former Webmind employee, Shane Legg, coined “Artificial General Intelligence.” The phrase had a compelling ring.

Legg later co-founded DeepMind with Demis Hassabis and Mustafa Suleyman. Initially, mainstream researchers scoffed at AGI talk. Ilya Sutskever recalls AGI being a “dirty word.” Andrew Ng, founder of Google Brain, dismissed it as “loony.”

So how did AGI move from fringe to forefront? Goertzel attributes it to several factors. First, the annual Conference on Artificial General Intelligence, launched in 2008, helped legitimize the field by aligning with established AI conferences and attracting students.

Second, DeepMind’s founders occasionally referenced AGI, lending corporate credibility. Third, connections between early AGI advocates and influential investors like Peter Thiel helped channel resources and attention. Goertzel recalls trying to convince Thiel of AGI’s potential, while Thiel also heard warnings of its dangers from thinkers like Eliezer Yudkowsky.

The Emergence of the Doomsayers

Eliezer Yudkowsky, a pivotal figure in AGI discourse, diverges from Goertzel’s optimism. He estimates a 99.5% chance that AGI’s arrival will be catastrophic.

In 2000, Yudkowsky co-founded the Singularity Institute for Artificial Intelligence (later the Machine Intelligence Research Institute), focusing on preventing disastrous outcomes. Peter Thiel was an early supporter.

Yudkowsky’s warnings initially fell on deaf ears, as the notion of a superintelligent AI was pure science fiction. But in 2014, philosopher Nick Bostrom’s book Superintelligence popularized these concerns, influencing tech leaders like Bill Gates and Elon Musk.

“Bostrom packaged Eliezer’s ideas in a way that was palatable to the mainstream,” Goertzel explains. “This gave AGI a stamp of legitimacy beyond fringe speculation.”


Yudkowsky’s ideas have permeated AI culture, especially through online communities like LessWrong, where many current AI engineers first encountered his warnings.

His influence extends to a new generation of AI skeptics, such as David Krueger, a University of Montreal researcher who believes superhuman AI will inevitably cause human extinction unless development halts immediately.

Yudkowsky’s recent book, If Anyone Builds It, Everyone Dies, co-authored with Nate Soares, argues for an international ban on AGI development, even suggesting nuclear retaliation to enforce it. The book, a New York Times bestseller, has endorsements from national security experts and celebrities alike, amplifying his message.

Yudkowsky also played a key role in connecting Peter Thiel to DeepMind’s founders, leading to significant investments. Thiel and Elon Musk helped establish OpenAI in 2015, aiming to build safe AGI. OpenAI’s CEO Sam Altman credits Yudkowsky with accelerating AGI interest, though Thiel later grew wary of the “AI safety” faction’s influence.

Today, OpenAI is valued at around $500 billion, cementing AGI’s place at the heart of the tech industry.

AGI has transitioned from fringe fantasy to mainstream obsession.

The AGI Mythos: A Modern Tech Conspiracy

Though the term “AGI” is relatively recent, the mythos surrounding it dates back to the dawn of computing: a blend of audacity and marketing.

Alan Turing was pondering machine intelligence just five years after ENIAC’s creation in 1945, predicting that machines might soon surpass human intellect, collaborate to enhance one another, and eventually take control.

In 1955, John McCarthy and colleagues coined “artificial intelligence” in a grant proposal for a summer research workshop at Dartmouth, ambitiously aiming to create machines capable of language, abstraction, problem-solving, and self-improvement: goals far beyond the reach of the era’s primitive computers.

This foundational myth, that a machine could one day match or exceed human intelligence, is less a technological roadmap and more a dream untethered from current reality. Recognizing this reveals striking parallels with conspiracy thinking.

The Elusive Nature of AGI

Debating AGI often feels like arguing with a fervent believer in a shifting conspiracy theory. Every counterargument is met with a reframing that preserves belief rather than invites evidence-based discussion.

Despite massive investments and hype, no one truly knows how to build AGI. Definitions vary wildly: What does “human-level intelligence” mean? Which cognitive abilities count? How broad must the machine’s skills be?

Christopher Symons, chief AI scientist at Lirio, notes, “There’s no concrete definition. Human intelligence is diverse and context-dependent.”

He asks, “What exactly are we trying to build?”

In 2023, Google DeepMind researchers, including Legg, surveyed competing definitions of AGI and found little consensus: some emphasized learning, others economic productivity, and some even physical embodiment.

Legg admits the vagueness was intentional: “I saw AGI more as a research field than a specific artifact.”

So, will we recognize AGI when it appears? Some claim we already have.

In 2023, Microsoft researchers described early experiments with OpenAI’s GPT-4 as showing “sparks of AGI,” astonishing many and making the concept seem more plausible.

Yet Goertzel remains skeptical, saying, “It surprises me that some experts believe these models are human-level AGI. But you can’t prove they’re not.”

Here lies the crux: the inability to disprove AGI’s imminence fuels belief despite scant evidence. Vallor warns, “The narrative of inevitable AGI has led to many departures from reality.”

Predictions about AGI’s arrival resemble numerology, with deadlines repeatedly missed and adjusted without consequence.

For example, the much-anticipated GPT-5 release in 2025 failed to deliver a leap toward AGI, yet believers simply pushed their timelines further out.

Jeremy Cohen, a researcher who studies conspiracy thinking, calls this “imperfect evidence gathering”: the selective embrace of supportive data and dismissal of contradictions that is a hallmark of conspiratorial thought.

Cohen draws parallels between AGI belief and groups like People Unlimited, whose members claimed to be immortal and, when members nonetheless died, rationalized the deaths as personal failings.

He sees “magical thinking” embedded in AGI’s popular imagination, echoing religious and conspiratorial narratives.

The Insider’s Secret

To outsiders, AGI discussions can seem like cryptic insider knowledge. Researchers casually referencing AGI appear to possess hidden insights, though none have clearly articulated what those are.

Conspiracy theories often revolve around uncovering concealed truths, and AGI discourse shares this trait.

In 2024, Leopold Aschenbrenner, a former OpenAI employee turned investor, published a 165-page manifesto titled “Situational Awareness,” emphasizing that one either “sees” the truth of AGI’s arrival or remains blind to it. Facts are secondary to intuition.

Goertzel echoes this sentiment, noting that before every major technological breakthrough, skeptics abound, but “most people only believe what they see.”

For believers like Krueger, skepticism is the true faith: “Denying AGI’s inevitability is the real dogma.”

Belief in AGI offers a sense of purpose and significance. Vallor explains, “The idea of birthing machine gods appeals to the ego and offers transcendence.”

Krueger observes that some AI researchers view their work as akin to parenthood, though ironically, many do not have children themselves.

AGI as Savior or Destroyer

Cohen compares modern AGI beliefs to the New Age movement of the 1970s and ’80s, which promised spiritual awakening and utopia through mystical practices.

Today’s tech industry replaces crystals with computing power but retains a similar millenarian vision: a transformative event that will solve humanity’s problems.

This event, often called the Singularity, envisions an intelligence explosion where AI rapidly self-improves beyond human control, marking a point of no return.


Grace describes it as “an event horizon in technological progress.”

Vallor notes this faith in technology has supplanted faith in human agency. Unlike New Age optimism, which centered on human potential, AGI belief places salvation solely in machines.

For many, this is a comforting narrative. “Other avenues for improving human life seem exhausted,” Vallor says. “AGI rekindles hope for a better future.”

At its extreme, AGI becomes a deity offering relief from suffering.

Kelly Joyce, a sociologist studying technology’s cultural impact, sees AGI hype as part of a recurring pattern of tech overpromising. “We get swept up every time,” she says. “Technology is treated like a god, and it’s hard to challenge that belief.”

AGI’s Impact on the Tech Industry

The allure of a machine that can do anything a human can is powerful, but the AGI narrative has tangible consequences. It distorts perceptions of the current AI boom, potentially diverting resources from practical applications to speculative pursuits.

It also fosters complacency, encouraging the belief that machines will solve complex global problems, reducing the urgency for human-led solutions involving cooperation, compromise, and investment.

Consider the scale of investment: in 2025, OpenAI and Nvidia announced a partnership potentially worth up to $100 billion, under which Nvidia will supply data-center systems drawing at least 10 gigawatts of power, more than a typical nuclear power plant produces, to feed ChatGPT’s insatiable computational needs. Shortly after, OpenAI partnered with AMD for an additional six gigawatts.

Altman claimed on CNBC that without such infrastructure, society would face impossible trade-offs, like choosing between curing cancer and providing free education.

Christopher Symons calls this a “missed opportunity,” lamenting the focus on nebulous AGI goals over immediate, solvable problems.

He adds, “With so much funding, companies don’t need to prioritize practical projects.”

Krueger agrees, calling the AGI obsession “nonsense” and a distraction from real-world challenges like healthcare improvement.

Policy experts like Tina Law worry that existential AI risks overshadow urgent issues like inequality and labor impacts. “Hype benefits tech firms,” she says. “Framing AGI as inevitable discourages resistance and critical engagement.”

Milton Mueller of Georgia Tech warns that framing the race to AGI as a new atomic-bomb race fuels dangerous geopolitical thinking, distorting foreign policy and security strategies.

Companies and governments have incentives to perpetuate the AGI myth, claiming they will be first to achieve it, even though no consensus exists on what that means or when it will happen.

Ultimately, it’s not about utopia or apocalypse: it’s about profit and power.

Unmasking the AGI Conspiracy

Returning to the conspiracy analogy, one hallmark is the belief in a hidden elite manipulating events. While AGI proponents don’t accuse shadowy cabals of suppressing the truth, the tech giants driving AGI development arguably serve as the puppet masters of this narrative.

Silicon Valley’s leaders pour vast resources into AGI, which serves their interests by attracting talent and investment.

As one AI executive put it, AGI must always be “six months to a year away” to maintain recruitment appeal and investor enthusiasm.

Vallor explains, “If OpenAI openly said they were building a machine to increase corporate power, public support would wane.”

Creating a godlike AI confers godlike status on its creators. Krueger notes a pervasive belief in Silicon Valley that whoever builds AGI first will wield unprecedented global influence.

“They’re selling a vision of godlike power, and it’s working because they have so much influence,” he says.

Goertzel, meanwhile, misses the days when AGI research was a niche pursuit requiring stubborn vision. “Now it’s almost a default career path,” he says, half-joking that he’d rather work on something less mainstream, though he acknowledges the importance of finishing AGI.

Yet, the question remains: what exactly are they building? The AGI myth rests on a flawed premise that intelligence is a quantifiable commodity that can be scaled indefinitely with data and computation.

In reality, intelligence is multifaceted and context-dependent. Brilliant minds excel in some areas and falter in others. Some of the smartest people are skeptical of AGI’s imminent arrival.

It’s worth pondering what the next grand techno-myth will be once AGI’s spell fades.

Goertzel recently attended a San Francisco event on AI consciousness and parapsychology, remarking, “That’s where AGI was 20 years ago; everyone thought it was crazy.”
