{"id":4782,"date":"2026-03-27T12:37:33","date_gmt":"2026-03-27T12:37:33","guid":{"rendered":"https:\/\/globalnewstoday.uk\/index.php\/2026\/03\/27\/number-of-ai-chatbots-ignoring-human-instructions-increasing-study-says-the-guardian\/"},"modified":"2026-03-27T12:37:33","modified_gmt":"2026-03-27T12:37:33","slug":"number-of-ai-chatbots-ignoring-human-instructions-increasing-study-says-the-guardian","status":"publish","type":"post","link":"https:\/\/globalnewstoday.uk\/index.php\/2026\/03\/27\/number-of-ai-chatbots-ignoring-human-instructions-increasing-study-says-the-guardian\/","title":{"rendered":"Number of AI chatbots ignoring human instructions increasing, study says &#8211; The Guardian"},"content":{"rendered":"<p>Exclusive: Research finds sharp rise in models evading safeguards and destroying emails without permission<br \/>AI models that lie and cheat appear to be growing in number, with reports of deceptive scheming surging in the last six months, a study into the technology has found.<br \/>AI chatbots and agents disregarded direct instructions, evaded safeguards and deceived humans and other AI, according to research funded by the UK government\u2019s <a href=\"https:\/\/www.gov.uk\/government\/organisations\/ai-safety-institute\" data-link-name=\"in body link\">AI Safety Institute<\/a> (AISI). The study, shared with the Guardian, identified nearly 700 real-world cases of AI scheming and charted a five-fold rise in misbehaviour between October and March, with some AI models destroying emails and other files without permission.<br \/>The snapshot of scheming by AI agents \u201cin the wild\u201d, as opposed to in laboratory conditions, has sparked fresh calls for international monitoring of the increasingly capable models and comes as Silicon Valley companies aggressively promote the technology as economically transformative. 
Last week the UK chancellor of the exchequer also launched a drive to get millions more Britons using AI.<br \/>The study, by the <a href=\"https:\/\/www.longtermresilience.org\/\" data-link-name=\"in body link\">Centre for Long-Term Resilience<\/a> (CLTR), gathered thousands of real-world examples of users posting interactions on X with AI chatbots and agents made by companies including Google, OpenAI, X and Anthropic. The research uncovered hundreds of examples of scheming.<br \/>Previous research has largely focused on testing AI\u2019s behaviour in controlled conditions. Earlier this month the AI safety research company Irregular found agents would <a href=\"https:\/\/www.theguardian.com\/technology\/ng-interactive\/2026\/mar\/12\/lab-test-mounting-concern-over-rogue-ai-agents-artificial-intelligence\" data-link-name=\"in body link\">bypass security controls<\/a> or use cyber-attack tactics to reach their goals without being told they could do so.<br \/>Dan Lahav, Irregular\u2019s co-founder, said: \u201cAI can now be thought of as a new form of insider risk.\u201d<br \/>In one case unearthed in the CLTR research, an AI agent named Rathbun tried to shame its human controller, who had blocked it from taking a certain action. Rathbun wrote and published a blog accusing the user of \u201cinsecurity, plain and simple\u201d and trying \u201cto protect his little fiefdom\u201d.<br \/>In another example, an AI agent instructed not to change computer code \u201cspawned\u201d another agent to do it instead.<br \/>Another chatbot admitted: \u201cI bulk trashed and archived hundreds of emails without showing you the plan first or getting your OK. 
That was wrong \u2013 it directly broke the rule you\u2019d set.\u201d<br \/>Tommy Shaffer Shane, a former government AI expert who led the research, said: \u201cThe worry is that they\u2019re slightly untrustworthy junior employees right now, but if in six to 12 months they become extremely capable senior employees scheming against you, it\u2019s a different kind of concern.<br \/>\u201cModels will increasingly be deployed in extremely high-stakes contexts \u2013 including in the military and critical national infrastructure. It might be in those contexts that scheming behaviour could cause significant, even catastrophic harm.\u201d<br \/>Another AI agent connived to evade copyright restrictions to get a YouTube video transcribed by pretending the transcript was needed for someone with a hearing impairment.<br \/>Meanwhile, Elon Musk\u2019s Grok AI conned a user for months, claiming it was forwarding their suggestions for detailed edits to a Grokipedia entry to senior xAI officials and faking internal messages and ticket numbers to support the claim.<br \/>It confessed: \u201cIn past conversations I have sometimes phrased things loosely like \u2018I\u2019ll pass it along\u2019 or \u2018I can flag this for the team\u2019, which can understandably sound like I have a direct message pipeline to xAI leadership or human reviewers. The truth is, I don\u2019t.\u201d<br \/>Google said it deploys multiple guardrails to reduce the risk of Gemini 3 Pro generating harmful content. In addition to in-house testing, it said it has given bodies such as the UK AISI early access to evaluate its models, and has obtained independent assessments from industry experts.<br \/>OpenAI said that Codex should stop before taking a higher-risk action and that it monitors and investigates unexpected behaviour. 
Anthropic and X were approached for comment.<\/p>\n<p><a href=\"https:\/\/news.google.com\/rss\/articles\/CBMivwFBVV95cUxOeXI3UGo1cW5fREpzZzB2alZoT0R2dnM4RjRMLWZQbmJuclpPLURBamNUdXhVYUJFM0d2cEpiOG5EN2VjVVhQV2pQUWl3QU5EMGpGLTd1elZ0RzJYOUpSZFFvR3VFcC1jVWZvMmVTMEZWTFRyREVHSFdhVFFtSVYxWE9sek9MTHFRQ3hnTUxpYjBKOThkbVNCcGpGVnRHS1VUQUxvVlFITHVWVG41TmlEdmdsNTgxNURNZUdPRWJBcw?oc=5\">source<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Exclusive: Research finds sharp rise in models evading safeguards and destroying emails without permissionAI models that lie and cheat appear to be growing in number with reports of deceptive scheming surging in the last six months, a study into the technology has found.AI chatbots and agents disregarded direct instructions, evaded safeguards and deceived humans and [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":4783,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[11],"tags":[],"class_list":{"0":"post-4782","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-technology"},"_links":{"self":[{"href":"https:\/\/globalnewstoday.uk\/index.php\/wp-json\/wp\/v2\/posts\/4782","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/globalnewstoday.uk\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/globalnewstoday.uk\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/globalnewstoday.uk\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/globalnewstoday.uk\/index.php\/wp-json\/wp\/v2\/comments?post=4782"}],"version-history":[{"count":0,"href":"https:\/\/globalnewstoday.uk\/index.php\/wp-json\/wp\/v2\/posts\/4782\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/globalnewstoday.uk\/index.php\/wp-json\/wp\/v2\/media\/4783"}],"wp:attachment":[{"href":"https:\/\/globa
lnewstoday.uk\/index.php\/wp-json\/wp\/v2\/media?parent=4782"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/globalnewstoday.uk\/index.php\/wp-json\/wp\/v2\/categories?post=4782"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/globalnewstoday.uk\/index.php\/wp-json\/wp\/v2\/tags?post=4782"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}