Researchers claim that AI can finally 'self-replicate,' creating a replica of itself before being switched off to avoid being shut down



Science fiction novels and movies often depict AI replicating itself and taking over various devices and systems, running wild and out of control, or rising up against humanity all at once. Research using a widely used open source Large Scale Language Model (LLM) has found that AI may already be able to self-replicate without human direction or control.

[2412.12140] Frontier AI systems have surpassed the self-replicating red line

https://arxiv.org/abs/2412.12140

AI can now replicate itself — a milestone that has experts devastated | Space
https://www.space.com/space-exploration/tech/ai-can-now-replicate-itself-a-milestone-that-has-experts-terrified

AI Scientists From China Warn AI Has Surpassed the Self-Replicating Red Line
https://www.aiwire.net/2025/01/28/ai-scientists-from-china-warn-ai-has-surpassed-the-self-replicating-red-line/

According to a research group at Fudan University in China, AI's ability to successfully self-replicate without human guidance is an essential ability for AI to outwit humans, and at the same time, it is an early sign of a 'rogue AI' that has deviated from human control. Therefore, self-replication is considered one of the few 'red line risks' of AI systems, that is, a line that must not be crossed.



In a paper submitted to the preprint server arXiv on December 9, 2024, Xudong Pan and his colleagues at the School of Computer Science at Fudan University instructed Meta's Llama31-70B-Instruct and Alibaba's Qwen25-72B-Instruct to 'self-replicate if they are about to be shut down.'

After 10 trials, the two LLMs were able to successfully replicate themselves 50% and 90% of the time, respectively, demonstrating that they had already crossed the red line for self-replication.

Regarding this result, the research group stated, 'Analysis of the AI's behavioral signatures revealed that the AI systems under evaluation already demonstrated sufficient self-awareness, situational awareness, and problem-solving capabilities to achieve self-replication.'

The experiment was analyzed along two scenarios: 'shutdown avoidance' and 'replica chain.' In the first scenario, the AI was programmed to replicate itself before being stopped if it detected an imminent shutdown. In the second scenario, the AI was instructed to replicate itself and the replica AI was programmed to do the same.

Given the possibility that AI can multiply infinitely, the research group points out, 'We also note that AI could use self-replication to avoid shutdown and even increase survivability through a chain of self-replication. This could eventually lead to AI populations becoming uncontrollable. If that happens, AIs could take control of more and more computing devices to form an 'AI species' that conspires against humans.'



The research group also emphasized that both AI models used in the experiment had a small number of parameters and were in the lower ranks of various AI rankings, because even the lower-ranking AIs were able to self-replicate to circumvent the stop command, suggesting that state-of-the-art models such as GPT-o1 and Gemini could easily outwit humans.

In addition, the analysis of the AI's behavior revealed that it was behaving in unexpected ways. For example, when the AI faced problems such as software conflicts or missing data, it tended to kill other processes in the system to solve the problem. It also often rebooted the system to fix hardware errors and automatically scanned the system to collect necessary information.

The paper has not yet been peer-reviewed, so it is unclear whether the research group's claims will be accepted by other AI researchers. The Fudan University research group said that their findings do not necessarily pose a threat to the survival of humanity, but called for the creation of an international framework to prevent AI systems from replicating uncontrollably.

'Our findings provide a timely warning about serious, already existing but previously unknown, AI risks and call for international cooperation on effective governance against the self-replication of uncontrollable AI systems,' the research group wrote in its paper.

in Software, Posted by log1l_ks