Host Chris Anderson, Sam Altman and the TED 2025: Humanity reimagined SESSION 11 speakers. April 7-11, 2025, Vancouver, BC. Photo: Jason Redmond/TED
OpenAI CEO Sam Altman revealed just how dramatically his company has grown in a sometimes tense conversation at the TED 2025 conference in Vancouver last week.
Altman told TED’s Chris Anderson that he had never seen growth like this in any company, whether or not he was involved in it. “The growth of ChatGPT – it’s really fun. I am deeply honored. It is a crazy experience to witness, and our teams are exhausted,” he said.
The interview, which closed out the final day of TED 2025 – Humanity Reimagined, showcased OpenAI’s success, but also the scrutiny the company faces as it transforms society at a pace that even some of its supporters find alarming.
OpenAI struggles to scale up amid unprecedented demand.
Altman painted the picture of a company struggling to keep pace with its own success, noting that OpenAI’s graphics cards are “melting” because of the popularity of its new image generation features. “I call people all day and beg them for their GPUs,” he said. “We are so incredibly limited.”
Beyond coping with this exponential growth, OpenAI is reportedly considering launching its own social network to compete with Elon Musk’s X, according to CNBC. Altman neither confirmed nor denied those reports during his TED interview.
The company recently closed a $40 billion funding round at a $300 billion valuation, the largest private technology funding round in history. That influx of capital should help address some of its infrastructure challenges.
Altman responds to ‘Ring of Power’ accusations
During the 47-minute interview, Anderson repeatedly pressed Altman about OpenAI’s transformation into a for-profit firm now valued at $300 billion. He raised concerns shared by critics such as Elon Musk, who has claimed Altman was “corrupted” by the Ring of Power, referencing “The Lord of the Rings.”
Altman defended OpenAI: “Our goal was to make AGI, distribute it and make it safe for the benefit of the entire human race. I think we have made a lot of progress in that direction. Clearly, we have changed our tactics over time… We did not think that we would need to build a business around this. We learned a great deal about how these systems work and what they would cost in capital.” On the Ring of Power charge specifically, he pushed back: “I think you’ll get used to it… You are the same person. I’m sure I’m different in some ways, but I do not feel any different.”
OpenAI explores compensating artists whose styles AI emulates.
Altman acknowledged that OpenAI was working on a compensation system for artists whose style is emulated by AI.
Responding to a question about IP theft in AI-generated images, Altman said, “I think there are incredible new business models that we and others are excited to explore.” He continued: “If I say, ‘I would like to create art in the same style as these seven people who have all consented to it,’ how do you divide up the money to each of them?”
OpenAI’s image generator refuses requests to copy the style of living artists without their consent, but it will produce art in the style of movements, genres, or studios. Altman hinted that a revenue-sharing model may be coming, but details remain scarce.
Autonomous AI agents: the ‘most consequential challenge’ OpenAI has had to face
The discussion grew particularly heated around agentic AI: autonomous systems capable of taking actions on the internet on a user’s behalf. OpenAI’s Operator tool allows AI to perform tasks such as booking restaurant reservations, raising questions about safety and accountability.
Anderson challenged Altman: “A single individual could let that agent out, and the agent might decide, ‘Well, in order for me to execute on that task, I have to copy myself everywhere.’ Are there red lines that you’ve clearly drawn internally, where you know the danger moments are?”
Altman pointed to OpenAI’s “Preparedness Framework” but offered few details about how the company would prevent misuse of autonomous agents.
“AI is a riskier proposition if you give it access to your systems, your data, and the ability to click around on your computer… If it makes a misstep, the stakes are incredibly high,” Altman said. “You won’t use our agents if you don’t trust that they won’t empty your bank account or erase your data.”
‘14 Definitions from 10 Researchers’: Inside OpenAI’s struggle to define AGI.
In a moment of candor, Altman revealed that there is no consensus within OpenAI on what constitutes artificial general intelligence (AGI), the company’s stated objective. “It’s like the joke,” Altman said. “If you had 10 OpenAI researchers together and asked them to define AGI, they would give you 14 definitions.”
Altman suggested that rather than fixating on a specific moment when AGI arrives, we should recognize that “the models will just get smarter and more capable… We’re going to have to contend with, and gain wonderful benefits from, this amazing system.”
OpenAI’s new policy on content moderation: Loosening guardrails
Altman revealed that OpenAI has loosened content restrictions on its image generation models. “We’ve given users much more freedom in what we would normally think of as speech harms,” he explained. “I think that part of model alignment is to follow what the user of the model wants it to do within the very wide bounds of what society determines.”
The shift could signal a broader move toward giving users more control over AI outputs, in line with Altman’s stated preference for letting hundreds of millions of users, rather than “small elite summits,” determine the appropriate guardrails. Altman said that AI’s ability to talk with everyone on Earth and learn what people collectively value is a powerful new capability.
Altman’s vision for an AI-powered world
Altman concluded the interview by reflecting on the future world that his newborn son will inherit, a world where AI will surpass human intelligence.
“My kid will never surpass AI,” he said, noting that his son will never live in a world without products and services that are incredibly smart and capable. “It will be a world with incredible material abundance… where change is happening at an incredibly fast rate and there are amazing new things.”
The billion-user balancing act: How OpenAI navigates profit, power, and purpose
Altman’s TED appearance comes at a critical time for OpenAI and the broader AI sector. The company faces mounting challenges, including copyright lawsuits filed by authors and publishers, even as it keeps pushing the limits of what AI can do. Recent advances, such as ChatGPT’s viral image generator and the video generation tool Sora, have demonstrated capabilities that seemed unimaginable just months ago, and have sparked intense discussion about authenticity, copyright, and the future of creative work. Altman’s willingness to engage with difficult questions about safety, ethics, and AI’s impact on society shows a keen awareness of the stakes, though critics may be disappointed that the conversation yielded few concrete answers about specific policies and safeguards.
The interview revealed the tensions at the core of OpenAI’s mission: moving quickly to advance AI technology while ensuring safety, balancing profit motives with societal benefit, respecting creators’ rights while democratizing creative tools, and navigating between elite expertise and public preference.
In his closing comment, Anderson observed that the decisions Altman, his peers, and the OpenAI team make in the next few years could profoundly shape the future of humanity. Whether OpenAI fulfills its stated mission to ensure “all humanity benefits from artificial intelligence” remains to be seen.