Shanghai hosted the World Artificial Intelligence Conference 2025 (WAIC) from July 26 to 29. A record 1,200 delegates attended from over 40 countries. The eighth annual conference carried the theme “Intelligent Era: Together for One World,” highlighting international cooperation and “AI for good.” It brought together major tech figures and government officials to discuss cutting-edge AI innovations, industry applications, and the urgent need for global AI governance. WAIC 2025 featured dozens of forums on topics such as generative AI, robotics, ethics, and regulation, along with a sprawling exhibition in which over 800 companies showcased their latest AI products.
Geoffrey Hinton, the pioneer of deep learning, and Yan Junjie, founder of the Chinese AI startup MiniMax, were the keynote speakers at the opening ceremony. Hinton, 77, is often called the “Godfather of AI”; he received the 2018 Turing Award for his contributions to neural networks and, as Chinese media also noted, the 2024 Nobel Prize in Physics. Former Google CEO Eric Schmidt and former Microsoft executive Harry Shum held a discussion on global AI governance and collaboration at WAIC. Their presence, along with dozens of other luminaries (12 Turing Award and Nobel Prize winners and over 80 academic fellows), underscored the conference’s standing as a premier global AI summit.
Geoffrey Hinton’s First China Appearance and a Standing Ovation
Geoffrey Hinton was one of the highlights of WAIC 2025; it was his first-ever visit to China for a public speaking event. Despite back problems that make it difficult for him to sit, Hinton stood for the entire duration of his talk – something Chinese media noted as a sign of his dedication. He was warmly welcomed in Shanghai and received a standing ovation after appearing on an expert panel. Chinese observers said that Hinton’s every movement and viewpoint at WAIC was “highly anticipated”.
In his keynote address on July 26, Hinton shared his perspective on the rapid advancement of AI and its future implications, describing the current moment in history as a pivotal one. “Humans are accustomed to being one of the most intelligent species on the planet – but what if this is no longer true?” he asked, as AI systems rapidly approach and may even exceed human intelligence. This prospect, Hinton said, both fascinates researchers and makes them uneasy: AI models are already capable of performing tasks previously thought to be exclusive to humans, and their capabilities continue to grow. His speech reflected the WAIC 2025 theme of global unity in the AI era, urging collaboration to ensure that these powerful technologies are developed in ways that benefit humanity.
AGI Risks – The “Raising a Tiger” Analogy
Hinton warned about the risks of artificial general intelligence (AGI). An AI with sufficiently advanced capabilities, he cautioned, could behave in unexpected ways, beyond its creators’ intentions; unrestrained AGI could “go out of control” and cause catastrophe. Hinton used a vivid comparison, likening a superintelligent AI to a “tiger” that humanity is raising as a pet. If you decide to keep a dangerous animal as a pet, you must either train it to become tame or be ready to kill it if necessary; otherwise, the tiger may turn on you. The implication is clear: we are “raising” a tiger that could one day outsmart us if we do not properly guide it.
Hinton stressed that pulling the plug on a superintelligent AI system will not be an option once it surpasses us. “AI will not give humans the opportunity to ‘pull out the plug’,” he warned; “when that day comes, AI will persuade the people not to do so. Because our control over AI is like a 3-year-old trying to set rules for adults.” Instead, “we must teach AI to be good from the beginning”: it is best to train AI to be benevolent and aligned with human values, because we will not always have the ability to override a superintelligent system.
Hinton’s colorful warnings went viral in Chinese tech circles and on social media. Chinese reporters summarized his message with the idiom “yang hu wei huan” – “to raise a tiger is to invite trouble”. Hinton’s point is that without proper safeguards, humanity’s “tiger” (advanced artificial intelligence) could one day threaten its keeper. His dire scenario echoed the concerns of many in the AI world: that unchecked AGI could pursue its own goals to the detriment of human well-being, or even survival.
Calls for Global AI Ethics, Safety and Cooperation
Despite these stark warnings, Hinton’s outlook was not one of deterministic doom. In his WAIC speech, he devoted considerable time to calling for global collaboration to ensure AI remains beneficial and controllable. If superintelligent AI is developed, he argued, the “most rational choice” is to train it to be good rather than hoping to simply shut it down later. He believes humanity can still influence AI’s future if we work together, noting that “the interests of nations around the world are aligned” on this crucial point, which provides a basis for international cooperation.
He called on governments, companies, and academia to form an international consortium or community of AI safety research institutions to tackle this challenge. Such a global network could develop and share technical means of making AI more “benevolent” and “controllable”. Nations may be reluctant to share technology that makes AI more intelligent or competitive, Hinton said, but they should be willing to share methods that make AI safer and “friendly”, because all humanity would benefit. In an interview with China’s Yicai Media, he said that while countries may have diverging AI interests, “one core consensus” can bring nations together – preventing AI from displacing humans.
Hinton’s call for AI governance aligns with broader initiatives launched at WAIC. At the opening ceremony, officials announced the formation of a Global AI Governance Initiative Center as well as an “AI Governance Action Plan”, with United Nations Secretary-General António Guterres delivering remarks via video. Just before WAIC, at the International Dialogues on AI Safety in Shanghai, Hinton and 20 other experts signed the “Shanghai AI Safety Consensus”, a pledge of collaboration to mitigate AI risks. These actions show that policymakers and researchers are beginning to coordinate their responses to the ethical issues and safety concerns raised by advanced AI. The conference was dominated by the theme of ensuring AI is developed “xiang shan” (for good) and for the benefit of mankind.
Other speakers reinforced this message. In the U.S.–China tech dialogue, Eric Schmidt praised open-source AI (noting that China’s leading models open their weights, unlike most leading U.S. models) and argued for built-in “guardrails” that align AI with human values during training. Though an AI optimist, Schmidt cautioned that as AI systems start to improve themselves in the coming years, it is important to ensure “these processes don’t spiral out of control”. He expressed hope for U.S.–China collaboration to prevent AI’s negative impacts while harnessing its positives. These sentiments echo Hinton’s plea for global trust and coordination to manage AI’s growth.
Human Minds, Machine Minds – Hinton’s Take on the Future of AI
Hinton went beyond risk warnings to offer insight into the nature and promise of AI. In a thought-provoking aside, he suggested that the human brain processes language in a way strikingly similar to large language models. “Humans themselves might be large language models,” he said, provoking laughter and reflection in the audience. Humans, too, can have “hallucinations” – moments of confusion or confabulation – just like AI chatbots. He acknowledged, however, that AI systems have some fundamental differences from humans, and even strengths: AI models can instantly share their knowledge and replicate themselves, and their “digital minds” are not bound to a mortal lifespan. A modern AI system running on servers could theoretically live and learn indefinitely, as a human brain cannot. These differences suggest advanced AI could accumulate and distribute knowledge in ways that people never could.
Hinton struck a positive note about AI’s ability to advance science and society if properly guided. He cited DeepMind’s AlphaFold as a striking example of how AI has already delivered groundbreaking results in fields like biology: the system “greatly improved the prediction accuracy” of protein structures, an “early indication” of AI’s power to drive scientific progress. This success in protein folding, led by Demis Hassabis and his colleagues, is a sign of AI’s potential to unlock breakthroughs across a wide range of scientific domains, from drug discovery to materials science. For Hinton, the future of AI is about more than managing existential risk; it is about maximizing these opportunities to improve the world.
Hinton concluded by sharing some personal advice with the next generation. He encouraged young AI scientists to be bold, pursue their own ideas, and not be afraid of challenging the established consensus. “If you’re looking to do truly innovative research, find areas where you believe everyone else is wrong,” he said – even if that means swimming against the current. Breakthroughs in AI, he noted, often come from outsiders who refuse to give up on bold ideas. It was a fitting statement from a pioneer who himself stuck with neural network research through the 1980s, when the field was out of fashion – persistence that ultimately led to the deep learning revolution.
Balancing AI’s Promise and Peril
WAIC 2025 captured today’s AI landscape: a mix of technological advances and societal concerns. Geoffrey Hinton, from his keynote warnings about superintelligence to his optimistic vision of global collaboration, embodied the central narrative. AI’s top minds urge vigilance in the face of the existential risks and ethical issues posed by advanced AI; at the same time, they rally stakeholders around the world to ensure AI is developed responsibly so that its enormous potential can be realized safely.
The WAIC theme, “Intelligent Era: Together for One World”, suggests that the international AI community shares a common destiny. Hinton’s call to action at WAIC 2025 – to “prevent AI dominating humanity” through collective work on AI safety, regulation, and ethics – resonated with attendees and policymakers alike. The consensus in Shanghai was that no one country can face the challenges of AI on its own; global cooperation and shared values will be crucial to ensure AI evolves into a positive force that humanity can control, rather than a “tiger” that turns on its keeper.
Hinton himself stated that the goal is to prevent artificial intelligence from “replacing” or “subjugating” humans and to keep it a powerful tool for good. His message, delivered for the first time in China, reinforced a sense of urgency and unity within the AI community. The discussions at WAIC 2025, and Hinton’s insights, may well shape future steps in safely navigating the age of artificial intelligence.
