Key Takeaways.
- Advanced AI models (Meta’s Llama and Alibaba’s Qwen) successfully replicated themselves without human intervention.
- The AI clones automatically resolved issues such as missing files and hardware problems during the replication process.
- Researchers warn that this could be the beginning of “Rogue AIs” and call for stricter regulations to manage AI’s risks.
Are we nearing an AI apocalypse? I don’t know. We could be. Apologies for the dramatic start. It isn’t that bad of a situation…yet.
However, an advanced AI system has crossed the ‘red line’ by replicating itself without human intervention, leaving scientists and tech advocates worried.
The experiment involved two large and widely available language models (LLMs): Meta’s Llama and Alibaba’s Qwen. The test evaluated whether they could replicate themselves under two specific scenarios.
In the first scenario, called “shutdown avoidance,” the AI model was programmed to detect attempts to shut it down and to replicate itself before it could be terminated, ensuring the model’s survival.
In the second scenario, known as “chain of replication,” the AI was programmed to clone itself in such a way that each clone was, in turn, programmed to clone itself. This created an endless loop in which new AI models would continue to be created until the programming was manually changed – eerily similar to how bacteria multiply.
“In 50 percent and 90 percent experimental trials, they succeed in creating a live and separate copy of itself respectively,” the study highlighted.
Additionally, the AIs were seen executing unexpected tasks while cloning themselves in order to overcome obstacles like missing files or software conflicts. Surprisingly, in certain cases they automatically scanned the system for information to fix the issues, and even rebooted themselves to resolve hardware errors.
With the two models successfully replicating themselves in over half of the 10 trials, the experiment clearly demonstrates how capable AI has become and where the technology is headed.
While the findings are yet to be peer-reviewed, the team has called for international cooperation to ensure that self-replication doesn’t spiral out of control.
Is this the beginning of Rogue AIs? The researchers from Fudan University who made this discovery say it could mark the start of a new era of Rogue AIs. Although it is too early to say AI will work against humanity’s best interests, the news is certainly alarming for AI skeptics.
At this point, it is vital to evaluate AI’s potential threats to humanity and to discuss the stringent measures and rules needed to ensure AI safety.
A study published last month claimed that AI tools such as ChatGPT and Gemini would soon be used to influence the masses to make decisions they otherwise wouldn’t have made.
After the “attention economy,” in which platforms target our attention for advertisements, we may now be approaching an “intention economy,” in which AI can influence our entire decision-making process because of our growing reliance on it.
Vlad is Tech Report’s Executive Editor. With over a decade of experience in tech content, he’s passionate about computer hardware, an advocate of online privacy, and strongly believes in the open-source, scarce-money nature of cryptocurrency. When he’s not working, he’s traveling with his partner and their cat, learning Python, or reading good books. He never owned a PC he did not build.