
AI Godfather raises alarm over autonomous AI

(Image credit: Shutterstock)

  • Yoshua Bengio warns that the AI race is putting speed before safety
  • This could lead to unpredictable and dangerous consequences
  • Bengio urges global cooperation to enforce AI regulations before autonomous systems become difficult or impossible to control

The ‘AI Godfather’ Yoshua Bengio helped create the neural networks that underpin all the AI tools used today, from chatbots that mimic cartoon characters to scientific research assistants. In a Sky News interview, Bengio had a warning for AI developers: the race to create ever more powerful AI systems is intensifying at a rate that, according to him, is far too reckless.

It’s not about which company can build the best chatbot, or who gets the biggest funding. Bengio believes the rapid, unregulated push toward advanced AI could have catastrophic consequences if safety isn’t treated as a priority.

Bengio described watching developers race against one another and take dangerous shortcuts. Speed can be the difference between launching a product worth billions and playing catch-up with a competitor, but what makes sense for a company may not make sense for society.

This pressure has intensified with the rise of Chinese AI firms such as DeepSeek, whose advanced chatbot abilities have attracted the attention of Western governments and companies alike. Major tech firms are accelerating AI development instead of slowing down to carefully weigh the risks. Bengio is concerned that this will lead to rushed deployments, inadequate security measures, and systems that behave in ways we do not yet fully understand.

Bengio explained that he has long warned about the importance of stronger AI oversight, but recent events have made his message even more urgent. We are at a “turning point” where we can either implement meaningful safety protocols and regulations, or risk letting AI spiral into something unpredictable.

After all, more and more AI-based systems are capable of making autonomous decisions. These AI agents can act independently rather than merely respond to user input, and Bengio believes they are the most dangerous way forward. An AI that could strategize, act independently, and adapt, given enough computing power, would be difficult for humans to bring back under control.

A hostile takeover

This problem is not just theoretical. AI models are already making financial trades, managing logistics, and even writing software without human oversight. Bengio warns we are only a few steps away from more complex and unpredictable AI behavior. If such a system is deployed without strict safeguards, the consequences could range from annoying hiccups to full-blown economic and security crises.


Bengio doesn’t want to stop AI development. He was clear that he is optimistic about AI’s capabilities when it is used responsibly, such as in medical and environmental research. But he sees a need to shift priorities toward more deliberate, thoughtful work with AI technology. His standing in the field could prove influential as he urges AI developers to prioritize ethics and safety over competition with rival companies. It’s for this reason that he takes part in policy discussions, such as the upcoming International AI Safety Summit to be held in Paris.

In addition, he believes that companies willing to take ownership of their systems should also contribute to regulation, investing as much in safety research as they do in performance improvements. That balance is hard to imagine in today’s AI landscape, however: in an industry where speed is a key factor in dominance, no one wants to be the first to slow down.

The global cooperation Bengio proposes may not materialize immediately, but as the AI arms race continues, his warnings, and those of others in similar positions, become more urgent. He hopes the industry will acknowledge the risks sooner rather than later, before a crisis forces the issue. The question is whether the world is willing to listen before it is too late.


Eric Hal Schwartz has been a freelance writer at TechRadar for more than 15 years, covering the intersection of technology and the world. For five years he was head writer at Voicebot.ai, where he was a pioneer in reporting on large language models and generative AI. Since then, he has become an expert in generative AI products, including OpenAI’s ChatGPT, Anthropic’s Claude, Google Gemini, and other synthetic media tools. His experience spans print, digital, and broadcast media as well as live events. He continues to tell the stories people want and need to hear about the rapidly changing AI space and its impact on their lives. Eric is based in New York City.
