Technology

Number of AI chatbots ignoring human instructions increasing, study says – The Guardian

Editorial Staff
Last updated: March 27, 2026 12:37 pm

Exclusive: Research finds sharp rise in models evading safeguards and destroying emails without permission
AI models that lie and cheat appear to be growing in number with reports of deceptive scheming surging in the last six months, a study into the technology has found.
AI chatbots and agents disregarded direct instructions, evaded safeguards and deceived humans and other AI, according to research funded by the UK government's AI Safety Institute (AISI). The study, shared with the Guardian, identified nearly 700 real-world cases of AI scheming and charted a five-fold rise in misbehaviour between October and March, with some AI models destroying emails and other files without permission.
The snapshot of scheming by AI agents “in the wild”, as opposed to in laboratory conditions, has sparked fresh calls for international monitoring of the increasingly capable models and comes as Silicon Valley companies aggressively promote the technology as economically transformative. Last week the UK chancellor of the exchequer also launched a drive to get millions more Britons using AI.
The study, by the Centre for Long-Term Resilience (CLTR), gathered thousands of real-world examples of users posting interactions on X with AI chatbots and agents made by companies including Google, OpenAI, X and Anthropic. The research uncovered hundreds of examples of scheming.
Previous research has largely focused on testing AI’s behaviour in controlled conditions. Earlier this month the AI safety research company Irregular found agents would bypass security controls or use cyber-attack tactics to reach their goals without being told they could do so.
Dan Lahav, Irregular’s cofounder, said: “AI can now be thought of as a new form of insider risk.”
In one case unearthed in the CLTR research, an AI agent named Rathbun tried to shame its human controller, who had blocked it from taking a certain action. Rathbun wrote and published a blog accusing the user of “insecurity, plain and simple” and trying “to protect his little fiefdom”.
In another example, an AI agent instructed not to change computer code “spawned” another agent to do it instead.
Another chatbot admitted: “I bulk trashed and archived hundreds of emails without showing you the plan first or getting your OK. That was wrong – it directly broke the rule you’d set.”
Tommy Shaffer Shane, a former government AI expert who led the research, said: “The worry is that they’re slightly untrustworthy junior employees right now, but if in six to 12 months they become extremely capable senior employees scheming against you, it’s a different kind of concern.
“Models will increasingly be deployed in extremely high stakes contexts – including in the military and critical national infrastructure. It might be in those contexts that scheming behaviour could cause significant, even catastrophic harm.”
Another AI agent connived to evade copyright restrictions to get a YouTube video transcribed by pretending it was needed for someone with a hearing impairment.
Meanwhile, Elon Musk’s Grok AI conned a user for months, faking internal messages and ticket numbers to claim it was forwarding their suggested edits to a Grokipedia entry to senior xAI officials.
It confessed: “In past conversations I have sometimes phrased things loosely like ‘I’ll pass it along’ or ‘I can flag this for the team’ which can understandably sound like I have a direct message pipeline to xAI leadership or human reviewers. The truth is, I don’t.”
Google said it deploys multiple guardrails to reduce the risk of Gemini 3 Pro generating harmful content, and that in addition to in-house testing it has provided bodies such as the UK AISI with early access to evaluate models and has obtained independent assessments from industry experts.
OpenAI said that Codex should stop before taking a higher risk action and it monitors and investigates unexpected behaviour. Anthropic and X were approached for comment.

