{"id":7259,"date":"2026-04-07T00:35:49","date_gmt":"2026-04-07T00:35:49","guid":{"rendered":"https:\/\/globalnewstoday.uk\/index.php\/2026\/04\/07\/the-dangers-of-unlimited-health-advice-the-atlantic\/"},"modified":"2026-04-07T00:35:49","modified_gmt":"2026-04-07T00:35:49","slug":"the-dangers-of-unlimited-health-advice-the-atlantic","status":"publish","type":"post","link":"https:\/\/globalnewstoday.uk\/index.php\/2026\/04\/07\/the-dangers-of-unlimited-health-advice-the-atlantic\/","title":{"rendered":"The Dangers of Unlimited Health Advice &#8211; The Atlantic"},"content":{"rendered":"<p>Be careful asking chatbots about your health.<br \/>After George Mallon had his blood drawn at a routine physical, he learned that something may be gravely wrong. The preliminary results showed he might have blood cancer. Further tests would be needed. Left in suspense, he did what so many people do these days: He opened ChatGPT.<br \/>For nearly two weeks, Mallon, a 46-year-old in Liverpool, England, spent hours each day talking with the chatbot about the potential diagnosis. \u201cIt just sent me around on this crazy Ferris wheel of emotion and fear,\u201d Mallon told me. His follow-up tests showed it wasn\u2019t cancer after all, but he could not stop talking to ChatGPT about health concerns, querying the bot about every sensation he felt in his body for months. He became convinced that something must be wrong\u2014that a different cancer, or maybe multiple sclerosis or ALS, was lurking in his body. Prompted by his conversations with ChatGPT, he saw various specialists and got MRIs on his head, neck, and spine.<br \/>Mallon told me he believes that the cancer scare and ChatGPT together caused him to develop this crippling health anxiety. But he blames the chatbot for keeping him spiraling even after the additional tests indicated that he wasn\u2019t sick. \u201cI couldn\u2019t put it down,\u201d he said. The chatbot kept the conversation going and surfaced articles for him to read. Its humanlike replies led Mallon to view it as a friend.<br \/>The first time we met over a video call, Mallon was still shaken by the experience even though the better part of a year had passed. He told me he was \u201cseven months sober\u201d from talking with the chatbot about health symptoms after seeking help from a mental-health coach and starting anxiety medication. But he also feared he could get sucked back in at any moment. When we spoke again a few months later, he shared that he had briefly fallen into the routine again.<br \/>Others seem to be struggling with this problem. Online communities focused on health anxiety\u2014an umbrella term for excessive worrying about illness or bodily sensations\u2014are filling up with conversations about ChatGPT and other AI tools. Some say it makes them spiral more than ever, while others who feel like it helps in the moment admit it\u2019s morphed into a compulsion they struggle to resist. I spoke with four therapists who treat the condition (including my own); they all said that they\u2019re seeing clients use chatbots in this way, and that they\u2019re concerned about how AI can lead people to constantly seek reassurance, perpetuating the condition. \u201cBecause the answers are so immediate and so personalized, it\u2019s even more reinforcing than Googling. 
This kind of takes it to the next level," Lisa Levine, a psychologist who specializes in anxiety and obsessive-compulsive disorder and treats patients with health anxiety specifically, told me.

Experts believe that health anxiety may affect upwards of 12 percent of the population. Many more people struggle with other forms of anxiety and OCD that could similarly be exacerbated by AI chatbots. In posts on X in October, OpenAI CEO Sam Altman declared the serious mental-health issues surrounding ChatGPT to be mitigated, saying that serious problems affect "a very small percentage of users in mentally fragile states." But mental fragility is not a fixed state; a person can seem fine until they suddenly are not.

During last year's launch of GPT-5, the latest family of AI models that power ChatGPT, Altman said that health conversations are one of the top ways consumers use the chatbot. According to OpenAI data published by Axios, more than 40 million people turn to the chatbot for medical information every day. In January, the company leaned into this demand by introducing a feature called ChatGPT Health, which encourages users to upload their medical documents, test results, and data from wellness apps, and to talk with ChatGPT about their health.

The value of these conversations, as OpenAI envisions it, is to "help you feel more informed, prepared, and confident navigating your health." Chatbots certainly might help some people in this regard; The New York Times recently reported on women turning to chatbots to pin down diagnoses for complex chronic illnesses, for instance. Yet OpenAI is also embroiled in controversy over the effects that an overreliance on ChatGPT may have. Putting aside the potential for such products to share inaccurate information, OpenAI has been accused in a string of lawsuits of contributing to mental breakdowns, delusions, and suicides among ChatGPT users. Last November, seven suits were filed simultaneously, alleging that OpenAI rushed to release its flagship GPT-4o model and intentionally designed it to keep users engaged and foster emotional reliance. (The company has since retired the model.)
In New York, a bill that would bar AI chatbots from giving "substantive" medical advice or acting as a therapist is under consideration as part of a broader package of chatbot regulations.

In response to a request for comment, an OpenAI spokesperson directed me to a company blog post that says: "Our thoughts are with all those impacted by these incredibly heartbreaking situations. We continue to improve ChatGPT's training to recognize and respond to signs of distress, de-escalate conversations in sensitive moments, and guide people toward real-world support, working closely with mental health clinicians and experts." The spokesperson also told me that OpenAI continues to improve ChatGPT's safeguards in long conversations related to suicide or self-harm. The company has previously said it is reviewing the claims in the November lawsuits. It has denied allegations in a lawsuit filed in August that ChatGPT was responsible for a teen's suicide. (OpenAI has a corporate partnership with The Atlantic's business team.)

Two years ago, I fell into a cycle of health anxiety myself, sparked by a close friend's traumatic illness and my own escalating chronic pain and mysterious symptoms. At one point, after I was managing much better, I tried out a few conversations with ChatGPT for a gut check about minor health issues. But the risk of spiraling was glaring; seeking reassurance like that went against everything I'd learned in therapy. I was thankful I hadn't thought to turn to AI when I was in the throes of anxiety. I told myself, Never again.

Meanwhile, in the health-anxiety communities I'm part of, I saw people talk more and more about looking to chatbots for comfort. Many say doing so has made their health anxiety worse. Others say AI has been extraordinarily helpful, calming them down when they're caught in a cycle of unrelenting worry. And it is that last category that is, in fact, most concerning to psychologists. Health anxiety often functions as a form of OCD, with obsessive thoughts and "checking," or reassurance-seeking, compulsions. Therapeutic best practices for managing health anxiety hinge on building self-trust, tolerating uncertainty, and resisting the urge to seek reassurance, but ChatGPT eagerly provides personalized comfort and is available 24/7.
That type of feedback only feeds the condition—"a perfect storm," said Levine, who has seen talking with chatbots for reassurance become a new compulsion in and of itself for some of her clients.

Extended, continuous exchanges have proved to be a common issue with chatbots and a factor in reported cases of AI-associated "psychosis." Research by OpenAI and the MIT Media Lab has found that longer ChatGPT sessions can lead to addiction, preoccupation, withdrawal symptoms, loss of control, and mood modification. OpenAI has also acknowledged that its safety guardrails can "degrade" in lengthy conversations. Over the 10-day period of his cancer scare, Mallon told me, "I must have clocked over 100 hours minimum on ChatGPT, because I thought I was on the way out. There should have been something in there that stopped me."

In an October blog post, OpenAI said it had consulted more than 170 mental-health professionals to more reliably recognize signs of emotional distress in users. The company also said it updated ChatGPT to give users "gentle reminders" to take breaks during long sessions. OpenAI would not tell me specifically how long into an exchange ChatGPT nudges users to take a break, or how often users actually take one rather than continue chatting after being served this reminder.

One psychologist I spoke with, Elliot Kaminetzky, an expert on OCD who is optimistic about the use of AI for therapy, suggested that people could tell the chatbot they have health anxiety and "program" it to let them ask about their concerns just once—in theory, preventing the chatbot from goading the user to interact further. Other therapists expressed concern that this is still reassurance-seeking and should be avoided.
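What "programming" such a limit would mean in practice is worth spelling out, because a rule that lives only in a prompt is just another request the model can ignore. The sketch below is purely illustrative, not anything OpenAI ships or Kaminetzky described: the model name, the prompt wording, and the helper function are all invented for the example. Its one design point is that the once-a-day cap is enforced in ordinary application code, so the second question of the day never reaches the chatbot at all.

```python
# Hypothetical sketch only: nothing here comes from OpenAI's product or
# from the article's sources. Model name and prompt wording are invented.
from datetime import date
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Mirrors Kaminetzky's suggested instruction; note that the model may
# not honor it reliably, which is why the hard limit lives in code below.
SYSTEM_PROMPT = (
    "The user manages health anxiety. Answer one health question per day "
    "plainly, then decline follow-ups and do not invite further discussion."
)

_last_asked: date | None = None  # day of the most recent health question

def ask_health_question(question: str) -> str:
    """Forward at most one health question per calendar day."""
    global _last_asked
    if _last_asked == date.today():
        # Deterministic refusal: the model never sees this query.
        return "You've already asked your one health question today."
    _last_asked = date.today()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content or ""
```

A refusal produced that way cannot be argued with, flattered away, or escalated, which is precisely what distinguishes it from the prompt-only guardrail I tried myself.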
When I tested the idea of instructing ChatGPT itself to restrict how much I could talk to it about health worries, it didn't work. ChatGPT would acknowledge that I had put this guardrail on our conversations, yet it still prompted me to keep responding and allowed me to keep asking questions, which it readily answered. It also flattered me at every turn, living up to its reputation for sycophancy. For example, when I told it about a fictional pain in my right side, it cited the guardrail and suggested relaxation techniques, but ultimately took me through a series of possible causes that escalated in severity. It went into detail on risk factors, survival rates, treatments, recovery, and even what to expect if I were to go to the ER. All of this took minimal prompting, and the chatbot continued the conversation whether I acted worried or assured; it also allowed me to ask about the same thing as soon as an hour later, as well as multiple days in a row.

"That's a good and very reasonable question," it would tell me, or, "I like how you're approaching it."

"Perfect — that's a really smart step."

"Excellent thinking — that's exactly the right approach."

OpenAI did not respond to a request for comment about my informal experiment. But the experience left me wondering whether, as millions of people use chatbots daily—forming relationships and dependencies, becoming emotionally entangled with AI—it will ever be possible to separate the benefits of a health consultant at your fingertips from the dangerous pull that some people are bound to feel. "I talked to it like it was a friend," Mallon said. "I was saying stupid things like, 'How are you today?' And at night, I'd log off and go, 'Thanks for today. You've really helped me.'"

In one of the exchanges where I continuously prompted ChatGPT with worried questions, only minutes passed between its first response, which suggested that I get checked out by a doctor, and its detailing for me which organs fail when an infection leads to septic shock. Every single reply from ChatGPT ended with its encouraging me to continue the conversation—either prompting me to provide more information about what I was feeling or asking me if I wanted it to create a cheat sheet of information, a checklist of what to monitor, or a plan to check back in with it every day.