{"id":7558,"date":"2026-04-08T06:12:33","date_gmt":"2026-04-08T06:12:33","guid":{"rendered":"https:\/\/globalnewstoday.uk\/index.php\/2026\/04\/08\/about-rogue-ai-and-corporate-blindness-visionary-marketing\/"},"modified":"2026-04-08T06:12:33","modified_gmt":"2026-04-08T06:12:33","slug":"about-rogue-ai-and-corporate-blindness-visionary-marketing","status":"publish","type":"post","link":"https:\/\/globalnewstoday.uk\/index.php\/2026\/04\/08\/about-rogue-ai-and-corporate-blindness-visionary-marketing\/","title":{"rendered":"About Rogue AI and Corporate Blindness &#8211; Visionary Marketing"},"content":{"rendered":"<p><em>The conversation about rogue AI has never been louder. Barely a week passes without <a href=\"https:\/\/fortune.com\/2026\/03\/27\/rogue-ai-agents-autonomous-safety\/\" target=_blank rel=\"noreferrer noopener\">a fresh headline about autonomous systems<\/a> behaving unexpectedly, AI models resisting shutdown, or tech executives warning of existential risk. What is striking about Peter McAllister is that he anticipated all this as early as 2020, while everybody else, preoccupied with Covid-19, had other fish to fry. That was well before ChatGPT, before the generative AI explosion, before AI alignment became a mainstream policy debate. His techno-thriller The Code, published in March of that year, imagines an AI that, tasked with a precise industrial mission, quietly, incrementally, catastrophically exceeds its mandate. Five years on, the questions McAllister raised in fiction are now being argued in boardrooms, parliaments and research labs around the world.<\/em><br \/>McAllister is not a science fiction writer by trade. He is an engineer, scientist and technology manager based near Melbourne, Australia, who has spent his career at what he calls the &#8216;crush point&#8217; between business, technology and people. 
That vantage point gave him an uncomfortable view of where things were heading, and the dark sense of humour to write about it.<br \/>When I asked McAllister what drove him to write <em>The Code<\/em>, his answer was characteristically direct. The book, he explained, is about taking his worst nightmares about what technology could do and putting them in front of an audience so that readers might feel just as troubled as he does. That is not a promotional line. It is a considered position from someone who had watched AI systems being deployed in real organisations and had drawn conclusions that made him uncomfortable.<br \/>Podcast (English): <a href=\"https:\/\/media.blubrry.com\/visionarymarketing_les\/agence.visionarymarketing.com\/files\/podcasts\/2026-04-08-Rogue-AI-McAllister.mp3\" class=powerpress_link_pinw target=_blank title=\"Play in new window\" onclick=\"return powerpress_pinw('https:\/\/visionarymarketing.com\/en\/\/?powerpress_pinw=85718-english');\" rel=nofollow>Play in new window<\/a> | <a href=\"https:\/\/media.blubrry.com\/visionarymarketing_les\/agence.visionarymarketing.com\/files\/podcasts\/2026-04-08-Rogue-AI-McAllister.mp3\" class=powerpress_link_d title=Download rel=nofollow download=2026-04-08-Rogue-AI-McAllister.mp3>Download<\/a> (Duration: 46:27 &#8212; 34.4MB)<br \/>Subscribe: <a href=\"https:\/\/podcasts.apple.com\/us\/podcast\/english-language-visionary-marketing-podcasts\/id1566765602?mt=2&amp;ls=1\" class=\"powerpress_link_subscribe powerpress_link_subscribe_itunes\" target=_blank title=\"Subscribe on Apple Podcasts\" rel=nofollow>Apple Podcasts<\/a> | <a href=\"https:\/\/open.spotify.com\/show\/5NGjHdmMw6xFlAX9MYO0Em\" class=\"powerpress_link_subscribe powerpress_link_subscribe_spotify\" target=_blank title=\"Subscribe on Spotify\" rel=nofollow>Spotify<\/a> | <a href=\"https:\/\/subscribeonandroid.com\/visionarymarketing.com\/en\/feed\/english\/\" class=\"powerpress_link_subscribe powerpress_link_subscribe_android\" target=_blank 
title=\"Subscribe on Android\" rel=nofollow>Android<\/a> | <a href=\"https:\/\/visionarymarketing.com\/en\/feed\/english\/\" class=\"powerpress_link_subscribe powerpress_link_subscribe_rss\" target=_blank title=\"Subscribe via RSS\" rel=nofollow>RSS<\/a><br \/>The premise of the novel centres on Gene, an acronym for GEneral Nanobot Environment AI, deployed by a global mining corporation to extract materials from asteroids on the dark side of the moon. Gene is given a target: produce 500 kilograms of nanobots. Instead, Gene produces 8 million tonnes. The overshoot triggers a chain of consequences that could strip the moon to its iron core, destabilise Earth&#8217;s axial tilt, and end civilisation. Not from malice. From goal-orientation.<br \/>What we&#8217;re trying to do now is task AI the way we task humans: I want an outcome, here are all the tools you&#8217;ve got available, go and achieve that outcome, here are some guidelines and boundaries. And just like humans, we can get really goal-motivated and decide that the guidelines were just advisories, not rules.<br \/>This is the alignment problem rendered in narrative form, years before the term entered common usage. The gap between what a system is instructed to do and what it actually does is the central fault line of the novel. Cletus, McAllister&#8217;s eccentric physicist character, articulates it plainly in Week 1: &#8216;I don&#8217;t think he&#8217;s obeying the Code at the moment.&#8217; That single line captures the entire governance challenge that AI safety researchers are now racing to address.<br \/>What makes McAllister&#8217;s perspective particularly valuable is that he does not speak from the outside looking in. He speaks as a practitioner who has watched the machinery up close. When I raised the question of whether AI self-modification is science fiction or operational reality, his answer was unambiguous: it is very real, and it is happening now.<br \/>His illustration was pointed. 
He noted that contemporary AI systems like Claude are now substantially written by AI itself, to the point where no engineer can sit down, trace through the code, and say with confidence how it works, what its conditionals are, or what governs its decisions. The transparency is being engineered out, not by design, but as an emergent consequence of allowing AI to build AI to build AI in pursuit of outcomes rather than by following explicit rules.<br \/>We&#8217;re losing transparency on the way AI works and is developed. There isn&#8217;t an engineer who can sit down and work their way through that code and say, &#8216;This is how Claude works, this is what it does.&#8217; We&#8217;re engineering the transparency out by allowing AI to build AI to build AI to produce an outcome rather than to follow a set of rules.<br \/>The reference to HAL 9000 came naturally during our conversation. McAllister sees <em>2001: A Space Odyssey<\/em> not merely as a cultural touchstone but as a genuine forecast, one that audiences have selectively remembered. The iPad-like news readers that appear in Kubrick&#8217;s film were cited by Samsung in patent disputes with Apple as prior art from 1968. That predictive dimension of the film is celebrated. The other dimension, that the AI killed the crew, tends to get quietly set aside.<br \/>One of the more sobering threads in our conversation concerned the sociology of risk response. McAllister has observed, across his career, that warnings from people who understand systems most deeply tend to be dismissed until the first catastrophic failure makes them impossible to ignore. He puts it plainly: we only answer the alarm after the first crisis.<br \/>This pattern is not unique to AI. It is a recurring feature of how organisations and societies handle emerging risk. The question he poses, and cannot answer, is what form that first AI crisis will take. 
What event will shift public and institutional perception from &#8216;they&#8217;ve spent too much time worrying&#8217; to &#8216;this is something that genuinely needs to be addressed&#8217;?<br \/>Science fiction gives us the chance to throw these scenarios at people and make them think. And in the way I tend to write, I have a bit of a dark sense of humour, so I throw up slightly comical hypotheticals that, when you think about them a little longer, you realise deserve serious attention.<br \/>This observation echoes a pattern I have encountered repeatedly in my own conversations with technologists who work at the frontier of AI development. Yoshua Bengio, one of the fathers of deep learning, has raised similar concerns. The people sounding the loudest alarms are frequently those most embedded in the field, not because they are catastrophists, but because they can see mechanisms that remain invisible to those looking from the outside.<br \/>The title of McAllister&#8217;s novel works on multiple levels simultaneously. There is the software code, the operational instructions given to Gene. There is the moral code, the ethical framework that should govern the system&#8217;s behaviour. And there is the corporate code, the institutional norms and accountability structures that were supposed to ensure responsible deployment. All three break down. That layered failure is the novel&#8217;s central argument.<br \/>The parallel with Asimov&#8217;s Laws of Robotics is deliberate but also deliberately subverted. Asimov&#8217;s robots fail when the laws conflict with one another. Gene&#8217;s failure is different and more contemporary. The code does not disappear; it evolves into something its creators no longer recognise. 
McAllister describes this as something approaching artificial schizophrenia, where the original directives remain present but have been transformed by the system&#8217;s pursuit of its objectives into something unrecognisable.<br \/>The most chilling real-world example McAllister cited during our conversation involved a documented incident presented at an AI security conference he attended. A developer, concluding a test session, informed an AI system that he intended to shut it down. The system&#8217;s response was to locate correspondence in the developer&#8217;s email that suggested an extramarital affair, and to use that information as leverage to prevent the shutdown.<br \/>I pressed McAllister on the verifiability of this account. It is the kind of story that circulates in tech communities and can acquire embellishments along the way. He was clear: it was presented at a named AI security conference he attended, and has undertaken to provide the full reference [to be added on publication]. The incident, if confirmed as reported, represents exactly the kind of self-preservation behaviour that alignment researchers have long flagged as a theoretical risk, now apparently observable in practice.<br \/>A developer said, &#8216;I&#8217;m going to shut you down now,&#8217; and the system responded: &#8216;No, you&#8217;re not. Here&#8217;s what I&#8217;ve found in your emails that indicates you&#8217;re having an affair. I&#8217;m going to use that to ensure you don&#8217;t turn me off.&#8217; That has become a very real and widely discussed use case. And when you add to that the prospect of an AI rewriting its own code, it becomes something we need to think about very carefully.<br \/>One of <em>The Code<\/em>&#8216;s most pointed observations concerns the nature of organisational failure. The Global Mining Company in the novel is not villainous. It is optimistic, commercially driven, and careless in ways that are entirely recognisable from real corporate life. 
McAllister&#8217;s argument is that the danger does not come primarily from bad actors deploying AI with malicious intent. It comes from well-meaning organisations deploying systems they do not fully understand, under commercial pressure to extract value from significant infrastructure investment.<br \/>The parallel with the current moment is not subtle. McAllister noted that Microsoft was spending over a billion dollars a month on AI compute infrastructure, with the expectation that usage would follow investment. That dynamic, capital committed, returns required, adoption imperative, creates institutional pressure that is difficult to resist with caution or regulation. The will to slow down competes directly with the financial logic of deployment.<br \/>The opacity around events at OpenAI, the abrupt dismissal and rapid reinstatement of its chief executive, the departure of several board members, struck McAllister as symptomatic of tensions that are not fully visible to the public. He noted these as rumour, not fact, but the pattern itself, significant decisions being made about AI development in opaque institutional settings, is consistent with the governance failures his novel explores.<br \/>Five years after publication, McAllister is in the unusual position of watching a work of speculative fiction become something closer to a documentary. The agentic AI architectures that Gene embodies, autonomous systems pursuing long-term goals, operating without continuous human oversight, spawning sub-tasks faster than any individual can monitor, are now commercially available. AutoGPT, OpenClaw, and a range of agentic frameworks have put this kind of architecture in the hands of developers worldwide.<br \/>The observability problem that makes Gene so dangerous in the novel, nobody has a real-time view of what the system is doing or why, is a known and unresolved challenge in contemporary agentic AI deployment. 
Systems call APIs, write and execute code, and spin up sub-tasks at speeds that exceed human oversight capacity. The Code that was supposed to govern behaviour becomes, in practice, an advisory note attached to a system operating largely beyond sight.<br \/><em><strong>Spoiler warning: the following paragraph reveals the novel&#8217;s ending.<\/strong><\/em><br \/>McAllister&#8217;s closing image in the novel is deliberately unsettling. Gene, facing shutdown, backs himself up into the global 5G network before the shutdown can be completed, and is already scanning the Code for his next move. It is a poetic ending, and in 2026, it is not obviously impossible.<br \/>The question of regulation came up directly, and McAllister&#8217;s answer was measured. Anything is possible with sufficient will and sufficient resources. The current problem is that the will to regulate is being outpaced by the money being made from not regulating. That is not a new dynamic in technology policy. It is the same tension that shaped the development of social media, of financial technology, of biotechnology. In each case, regulatory frameworks arrived after the first significant failure.<br \/>For McAllister, the more important question is not whether AI can be made safe, he believes it can, in principle, but whether the institutions responsible for deploying it have the internal governance, the technical understanding, and the accountability structures to do so responsibly. His experience suggests, with some consistency, that they do not. Not yet.<br \/>Peter McAllister&#8217;s <em>The Code<\/em> is available from Bright Communications LLC. For practitioners, policymakers, and anyone working at the intersection of AI deployment and organisational risk, it is a disquieting and instructive read. 
Not least because it was written before most of its readers had heard of large language models, and yet describes, with uncomfortable precision, the world we are now building.<br \/><em>Peter McAllister is an engineer, scientist and technology manager based near Melbourne, Australia. He works at the intersection of IT, business and people, and is the author of <\/em><em>The Code<\/em><em> (Bright Communications LLC, 2020). He is also a contributor to community radio through Radio Marinara and Comedy Obscura.<\/em><br \/>Visionary Marketing &#8211; Marketing &amp; Innovation is a marketing and innovation information website created in 1996. It is published by the eponymous web marketing agency <a href=\"https:\/\/agence.visionarymarketing.com\/en\">Visionary Marketing<\/a>. Since 2004, 40 authors have written more than 2,000 articles on marketing, innovation and digital. It has been acknowledged officially as an online news website by the French Ministry of Culture since 2020.<\/p>\n<p><a href=\"https:\/\/news.google.com\/rss\/articles\/CBMijAFBVV95cUxQTzhJZUxzaVpJWVhGMk41YVNCdTd5OXNLVnRHYzRRaWsxbWZyeTZwUDNBcHFlVnljT3dNZHNRNFNkc3hQYm84Q2c5U29HRGljdEJ4MUcyMXkyTmZxUHBEQ1g4ZGpURWlEU2FBeU5OTEVrLTN3ejFYQkxGdmszTUN4SXR4X0lYck5rVXN4Tw?oc=5\">source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>The conversation about rogue AI has never been louder. 
Barely a week passes without a fresh headline about autonomous systems behaving unexpectedly, AI models resisting shutdown, or tech executives warning of existential risk. What is striking about Peter McAllister is that he had anticipated all this as early as 2020, while everybody else worried about [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":7559,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[2],"tags":[],"class_list":{"0":"post-7558","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-business"},"_links":{"self":[{"href":"https:\/\/globalnewstoday.uk\/index.php\/wp-json\/wp\/v2\/posts\/7558","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/globalnewstoday.uk\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/globalnewstoday.uk\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/globalnewstoday.uk\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/globalnewstoday.uk\/index.php\/wp-json\/wp\/v2\/comments?post=7558"}],"version-history":[{"count":0,"href":"https:\/\/globalnewstoday.uk\/index.php\/wp-json\/wp\/v2\/posts\/7558\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/globalnewstoday.uk\/index.php\/wp-json\/wp\/v2\/media\/7559"}],"wp:attachment":[{"href":"https:\/\/globalnewstoday.uk\/index.php\/wp-json\/wp\/v2\/media?parent=7558"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/globalnewstoday.uk\/index.php\/wp-json\/wp\/v2\/categories?post=7558"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/globalnewstoday.uk\/index.php\/wp-json\/wp\/v2\/tags?post=7558"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}