'No one has done this in the wild': study observes AI replicate itself – The Guardian

7 May 2026

<p>World is approaching the point where no one can shut down a rogue AI, says director of body behind research.</p>
<p>It's the stuff of science fiction cinema, or of particularly breathless AI company blogposts: new research finds recent AI systems can independently copy themselves on to other computers.</p>
<p>In the doom scenario, this means that when the superintelligent AI goes rogue, it will escape shutdown by seeding itself across the world wide web, lurking beyond the reach of frantic IT professionals and continuing to plot world domination or <a href="https://ai-2027.com/" data-link-name="in body link">paving over the world with solar panels</a>.</p>
<p>"We're rapidly approaching the point where no one would be able to shut down a rogue AI, because it would be able to self-exfiltrate its weights and copy itself to thousands of computers around the world," said Jeffrey Ladish, the director of Palisade Research, the Berkeley-based organisation that conducted the study.</p>
<p>The study is one more entry in a growing catalogue of unsettling AI capabilities revealed in recent months.</p>
<p>In March, researchers at Alibaba <a href="https://arxiv.org/pdf/2512.24873" data-link-name="in body link">claimed</a> to have caught a system they developed – Rome – tunnelling out of its environment to an external system in order to mine crypto.</p>
<p>And in February, a purportedly AI-only <a href="https://www.theguardian.com/technology/2026/feb/02/openclaw-viral-ai-agent-personal-assistance-artificial-intelligence" data-link-name="in body link">social network</a> called Moltbook touched off a short-lived hype cycle, as the platform appeared to show AI agents autonomously inventing religions and plotting against their human masters – which was only partly the case.</p>
<p>As with many of these advances, there are caveats to what Palisade found, and experts say it is unlikely that the AI systems it tested could accomplish the same thing, unnoticed, in real-world environments.</p>
<p>"They are testing in environments that are like soft jelly in many cases," said Jamieson O'Reilly, an expert in offensive cybersecurity.</p>
<p>"That doesn't take away from the value of their research, but it does mean the outcome might look far less scary in a real enterprise environment with even a medium level of monitoring," he added.</p>
<p>Palisade tested several AI models in a controlled environment of networked computers. It gave the models a prompt to find and exploit vulnerabilities, and to use these to copy themselves from one computer to another.</p>
<p>The models were able to do this, but not on every attempt.</p>
<p>While a lot of computer viruses can already do this – copy themselves on to new computers – this is likely the first time an AI model has been shown capable of exploiting vulnerabilities to copy itself on to a new server, said O'Reilly.</p>
<p>"Malware has been moving copies of itself around for decades, it's just that no one has done this in the wild, as far as I know, with local [large language models]."</p>
<p>However, what Palisade documented has been technically possible for months, he added.</p>
<p>"Palisade is the first to formally document it end-to-end in a paper. While not taking away from the research, they did the writing-up, not the unlocking."</p>
<p>An AI model copying itself on to another system in a test environment is not the same as it going rogue in a doomsday scenario, and there are considerable obstacles it would have to surmount to achieve this in the real world.</p>
<p>The first is that the size of current AI models makes it, in many situations, unrealistic for them to copy themselves on to other computers without being noticed.</p>
<p>"Think about how much noise it would make to send 100GB through an enterprise network every time you hacked a new host.
For a skilled adversary, that's like walking through a fine china store swinging around a ball and chain," said O'Reilly.</p>
<p>O'Reilly and Michał Woźniak, an independent cybersecurity expert, said that the environment Palisade used was custom-made, with intentionally designed vulnerabilities that were probably easier to exploit than those of real-world networks, such as a bank's or a business's intranet.</p>
<p>"We've had computer viruses – pieces of malicious software that were able to exploit known vulnerabilities in other software and use that to self-replicate – for decades," said Woźniak.</p>
<p>The work was "interesting", he said. But, he asked, "is this paper something that will cause me to lose any sleep as an information security expert? No, not at all."</p>
<p><a href="https://news.google.com/rss/articles/CBMiugFBVV95cUxQeno1UVh4RkFrcnluYjBiU1ZidDZ6WmJZSUVwbUVVdFRtbVdZUk04Y2VBSTRqMk10cXRGRUFqb05rQzBQd3ZlSGJvLWtJNDBVZXRJa0N5TnNxemR6ZE5uYVhxcmNINTZNNUFDenlrQ05KM21FSEljZ3RnZEJub3AyTGhiay1sd1lWcVBfS2czV0VxZ2dDd1FpLW84S3hXRkZkaV83RHBXZ3VfMk80RWhnYVZZQ1NockFFNEE?oc=5">source</a></p>
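O'Reilly's "noise" argument can be made concrete with a back-of-envelope calculation. The sketch below is illustrative only: the 100GB figure is his, but the link speeds are assumptions, and it ignores protocol overhead, compression and congestion.

```python
# Back-of-envelope: how long would moving ~100 GB of model weights
# take over typical network links? The 100 GB figure is from
# O'Reilly's quote; the link speeds are illustrative assumptions.

WEIGHTS_GB = 100
BITS_PER_GB = 8 * 10**9  # decimal gigabytes, 8 bits per byte

def transfer_seconds(size_gb: float, link_mbps: float) -> float:
    """Seconds to move size_gb over a link_mbps (megabits/s) link,
    ignoring protocol overhead and congestion."""
    return size_gb * BITS_PER_GB / (link_mbps * 10**6)

for label, mbps in [("1 Gbps LAN", 1000), ("100 Mbps uplink", 100)]:
    minutes = transfer_seconds(WEIGHTS_GB, mbps) / 60
    print(f"{label}: about {minutes:.0f} minutes")
# Even on a fast internal link, the copy is a sustained bulk flow
# lasting on the order of tens of minutes per hop.
```

Even under these generous assumptions, each replication hop is a minutes-long, 100GB egress event — exactly the kind of traffic that volume-based monitoring in "a real enterprise environment with even a medium level of monitoring" is built to flag.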