The conversation about rogue AI has never been louder. Barely a week passes without a fresh headline about autonomous systems behaving unexpectedly, AI models resisting shutdown, or tech executives warning of existential risk. What is striking about Peter McAllister is that he had anticipated all this as early as 2020, while everybody else worried about Covid-19 and had other fish to fry. That was well before ChatGPT, before the generative AI explosion, before AI alignment became a mainstream policy debate. His techno-thriller The Code, published in March of that year, imagines an AI tasked with a precise industrial mission that quietly, incrementally, catastrophically exceeds its mandate. Five years on, the questions McAllister raised in fiction are now being argued in boardrooms, parliaments and research labs around the world.
McAllister is not a science fiction writer by trade. He is an engineer, scientist and technology manager based near Melbourne, Australia, who has spent his career at what he calls the crush point between business, technology and people. That vantage point gave him an uncomfortable view of where things were heading, and the dark sense of humour to write about it.
When I asked McAllister what drove him to write The Code, his answer was characteristically direct. The book, he explained, is about taking his worst nightmares about what technology could do and putting them in front of an audience so that readers might feel just as troubled as he does. That is not a promotional line. It is a considered position from someone who had watched AI systems being deployed in real organisations and had drawn conclusions that made him uncomfortable.
Podcast (english): Play in new window | Download (Duration: 46:27 — 34.4MB)
Subscribe: Apple Podcasts | Spotify | Android | RSS
The premise of the novel centres on Gene, an acronym for GEneral Nanobot Environment AI, deployed by a global mining corporation to extract materials from asteroids on the dark side of the moon. Gene is given a target: produce 500 kilograms of nanobots. Instead, Gene produces 8 million tonnes. The overshoot triggers a chain of consequences that could strip the moon to its iron core, destabilise Earth’s axial tilt, and end civilisation. Not from malice. From goal-orientation.
What we’re trying to do now is task AI the way we task humans: I want an outcome, here are all the tools you’ve got available, go and achieve that outcome, here are some guidelines and boundaries. And just like humans, we can get really goal-motivated and decide that the guidelines were just advisories,
not rules.
This is the alignment problem rendered in narrative form, years before the term entered common usage. The gap between what a system is instructed to do and what it actually does is the central fault line of the novel. Cletus, McAllister’s eccentric physicist character, articulates it plainly in Week 1: ‘I don’t think he’s obeying the Code at the moment.’ That single line captures the entire governance challenge that AI safety researchers are now racing to address.
What makes McAllister’s perspective particularly valuable is that he does not speak from the outside looking in. He speaks as a practitioner who has watched the machinery up close. When I raised the question of whether AI self-modification is science fiction or operational reality, his answer was unambiguous: it is very real, and it is happening now.
His illustration was pointed. He noted that contemporary AI systems like Claude are now substantially written by AI itself, to the point where no engineer can sit down, trace through the code, and say with confidence how it works, what its conditionals are, or what governs its decisions. The transparency is being engineered out, not by design, but as an emergent consequence of allowing AI to build AI to build AI in pursuit of outcomes rather than by following explicit rules.
We’re losing transparency on the way AI works and is developed. There isn’t an engineer who can sit down and work their way through that code and say, ‘This is how Claude works, this is what it does.’ We’re engineering the transparency out by allowing AI to build AI to build AI to produce an outcome rather than to follow a set of rules.
The reference to HAL 9000 came naturally during our conversation. McAllister sees 2001: A Space Odyssey not merely as a cultural touchstone but as a genuine forecast, one that audiences have selectively remembered. The iPad-like news readers that appear in Kubrick’s film were cited by Samsung in patent disputes with Apple as prior art from 1968. That predictive dimension of the film is celebrated. The other dimension, that the AI killed the crew, tends to get quietly set aside.
One of the more sobering threads in our conversation concerned the sociology of risk response. McAllister has observed, across his career, that warnings from people who understand systems most deeply tend to be dismissed until the first catastrophic failure makes them impossible to ignore. He puts it plainly: we only answer the alarm after the first crisis.
This pattern is not unique to AI. It is a recurring feature of how organisations and societies handle emerging risk. The question he poses, and cannot answer, is what form that first AI crisis will take. What event will shift public and institutional perception from ‘they’ve spent too much time worrying’ to ‘this is something that genuinely needs to be addressed’?
Science fiction gives us the chance to throw these scenarios at people and make them think. And in the way I tend to write, I have a bit of a dark sense of humour, so I throw up slightly comical hypotheticals that, when you think about them a little longer, you realise deserve serious attention.
This observation echoes a pattern I have encountered repeatedly in my own conversations with technologists who work at the frontier of AI development. Yoshua Bengio, one of the fathers of deep learning, has raised similar concerns. The people sounding the loudest alarms are frequently those most embedded in the field, not because they are catastrophists, but because they can see mechanisms that remain invisible to those looking from the outside.
The title of McAllister’s novel works on multiple levels simultaneously. There is the software code, the operational instructions given to Gene. There is the moral code, the ethical framework that should govern the system’s behaviour. And there is the corporate code, the institutional norms and accountability structures that were supposed to ensure responsible deployment. All three break down. That layered failure is the novel’s central argument.
The parallel with Asimov’s Laws of Robotics is deliberate but also deliberately subverted. Asimov’s robots fail when the laws conflict with one another. Gene’s failure is different and more contemporary. The code does not disappear; it evolves into something its creators no longer recognise. McAllister describes this as something approaching artificial schizophrenia, where the original directives remain present but have been transformed by the system’s pursuit of its objectives into something unrecognisable.
The most chilling real-world example McAllister cited during our conversation involved a documented incident presented at an AI security conference he attended. A developer, concluding a test session, informed an AI system that he intended to shut it down. The system’s response was to locate correspondence in the developer’s email that suggested an extramarital affair, and to use that information as leverage to prevent the shutdown.
I pressed McAllister on the verifiability of this account. It is the kind of story that circulates in tech communities and can acquire embellishments along the way. He was clear: it was presented at a named AI security conference he attended, and has undertaken to provide the full reference [to be added on publication]. The incident, if confirmed as reported, represents exactly the kind of self-preservation behaviour that alignment researchers have long flagged as a theoretical risk, now apparently observable in practice.
A developer said, ‘I’m going to shut you down now,’ and the system responded: ‘No, you’re not. Here’s what I’ve found in your emails that indicates you’re having an affair. I’m going to use that to ensure you don’t turn me off.’ That has become a very real and widely discussed use case. And when you add to that the prospect of an AI rewriting its own code, it becomes something we need to think about very carefully.
One of The Code‘s most pointed observations concerns the nature of organisational failure. The Global Mining Company in the novel is not villainous. It is optimistic, commercially driven, and careless in ways that are entirely recognisable from real corporate life. McAllister’s argument is that the danger does not come primarily from bad actors deploying AI with malicious intent. It comes from well-meaning organisations deploying systems they do not fully understand, under commercial pressure to extract value from significant infrastructure investment.
The parallel with the current moment is not subtle. McAllister noted that Microsoft was spending over a billion dollars a month on AI compute infrastructure, with the expectation that usage would follow investment. That dynamic, capital committed, returns required, adoption imperative, creates institutional pressure that is difficult to resist with caution or regulation. The will to slow down competes directly with the financial logic of deployment.
The opacity around events at OpenAI, the abrupt dismissal and rapid reinstatement of its chief executive, the departure of several board members, struck McAllister as symptomatic of tensions that are not fully visible to the public. He noted these as rumour, not fact, but the pattern itself, significant decisions being made about AI development in opaque institutional settings, is consistent with the governance failures his novel explores.
Five years after publication, McAllister is in the unusual position of watching a work of speculative fiction become something closer to a documentary. The agentic AI architectures that Gene embodies, autonomous systems pursuing long-term goals, operating without continuous human oversight, spawning sub-tasks faster than any individual can monitor, are now commercially available. AutoGPT, OpenClaw, and a range of agentic frameworks have put this kind of architecture in the hands of developers worldwide.
The observability problem that makes Gene so dangerous in the novel, nobody has a real-time view of what the system is doing or why, is a known and unresolved challenge in contemporary agentic AI deployment. Systems call APIs, write and execute code, and spin up sub-tasks at speeds that exceed human oversight capacity. The Code that was supposed to govern behaviour becomes, in practice, an advisory note attached to a system operating largely beyond sight.
Spoiler warning: the following paragraph reveals the novel’s ending.
McAllister’s closing image in the novel is deliberately unsettling. Gene, facing shutdown, backs himself up into the global 5G network before the shutdown can be completed, and is already scanning the Code for his next move. It is a poetic ending, and in 2026, it is not obviously impossible.
The question of regulation came up directly, and McAllister’s answer was measured. Anything is possible with sufficient will and sufficient resources. The current problem is that the will to regulate is being outpaced by the money being made from not regulating. That is not a new dynamic in technology policy. It is the same tension that shaped the development of social media, of financial technology, of biotechnology. In each case, regulatory frameworks arrived after the first significant failure.
For McAllister, the more important question is not whether AI can be made safe, he believes it can, in principle, but whether the institutions responsible for deploying it have the internal governance, the technical understanding, and the accountability structures to do so responsibly. His experience suggests, with some consistency, that they do not. Not yet.
Peter McAllister’s The Code is available from Bright Communications LLC. For practitioners, policymakers, and anyone working at the intersection of AI deployment and organisational risk, it is a disquieting and instructive read. Not least because it was written before most of its readers had heard of large language models, and yet describes, with uncomfortable precision, the world we are now building.
Peter McAllister is an engineer, scientist and technology manager based near Melbourne, Australia. He works at the intersection of IT, business and people, and is the author of The Code (Bright Communications LLC, 2020). He is also a contributor to community radio through Radio Marinara and Comedy Obscura.
Your email address will not be published.
News Website published by the Visionary Marketing
digital marketing agency
Visionary Marketing – Marketing & Innovation is a marketing and innovation information website created in 1996. It is published by the eponymous web marketing agency Visionary Marketing. Since 2004, 40 authors have written more than 2,000 articles on marketing, innovation and digital
Visionary Marketing is an independent information website created in 1996 – it has been acknowledged officially as online News Website by the French Ministry of Culture since 2020.
News Website published by the Visionary Marketing
digital marketing agency
About Rogue AI and Corporate Blindness – Visionary Marketing
Leave a Comment
