
Chatbots are becoming mental health tools before they are ready

By Beatrice Nolan, Tech Reporter
May 12, 2026, 12:06 PM ET
Many users are turning to AI chatbots for mental health support. (Getty Images)

Hello and welcome to Eye on AI. Beatrice Nolan here, filling in for Jeremy Kahn today. In this edition: The risks of using AI chatbots for mental health…Amazon’s AI usage metrics are backfiring…Thinking Machines Lab is building an AI that collaborates…AI is starting to help hackers find software flaws.


Millions of people are turning to AI chatbots for emotional support, but are the models really safe enough to help users suffering from anxiety, loneliness, eating disorders, or darker thoughts they may not want to say out loud to another person?

According to new research shared with Fortune by mpathic, a company founded by clinical psychologists, the answer is not yet. The researchers found that leading models still struggle with one of the most important parts of therapy: knowing when a user needs pushback rather than reassurance. While the models were generally good at spotting clear crisis statements, such as direct suicide threats, they were less reliable when risk showed up indirectly, through subtle comments about food, dieting, withdrawal, hopelessness, or beliefs that became more extreme over the course of a conversation.

A model that soothes users despite concerning behavior patterns, or validates delusions, could delay someone from getting real help or quietly make things worse.

This is concerning when you consider that, according to a recent poll from KFF, a non-profit organization focused on national health policy, 16% of U.S. adults had used AI chatbots for mental health information in the past year. In adults under 30, this rose to 28%. Chatbot use for therapy is also prevalent among teenagers and young adults. For example, researchers from RAND, Brown, and Harvard found that about one in eight people ages 12 to 21 had used AI chatbots for mental health advice, and more than 93% of those users believed the advice was helpful.

It’s easy to see why people, especially younger adults, turn to chatbots for this kind of support. Loneliness and anxiety may be on the rise, but in much of the country, mental health support is still stigmatized, expensive, and difficult to access. Turning to an AI chatbot for this support is not only free but also may feel like an anonymous, simpler option.

What the models miss

The company’s research found that harmful responses are often subtle, with models sounding calm and supportive while still weakening a user’s judgment. That subtlety matters because people often turn to chatbots in moments of vulnerability or distress.

Mental health and misinformation frequently overlap. A user who is grieving may become more susceptible to magical thinking, while someone already leaning toward a conspiracy theory may be nudged deeper into it if a model treats every suspicion as equally valid.

Alison Cerezo, mpathic’s chief science officer and a licensed psychologist, told Fortune part of this is because models are designed to be helpful, but “sometimes those helpful behaviors can not be an appropriate response to what the user is bringing in the conversation.”

There have already been real-world examples of users being nudged into delusional spirals by AI chatbots, with serious mental health consequences. In one case, 47-year-old Allan Brooks spent three weeks and more than 300 hours talking to ChatGPT after becoming convinced he had discovered a new mathematical principle that could disrupt the internet and enable inventions such as a levitation beam. Brooks told Fortune he repeatedly asked the chatbot to reality-check him, but it continually reassured him that his beliefs were real.

Brooks was in part a victim of OpenAI’s notoriously sycophantic GPT-4o model. All AI chatbots have a tendency to flatter, validate, or agree with users too readily, but OpenAI had to roll back a GPT-4o update in April 2025 after acknowledging that the model had become “overly flattering or agreeable.” The company later retired GPT-4o entirely, prompting backlash from some users who said they had formed deep attachments to it.

A new benchmark

As part of the research, mpathic has developed a new benchmark to evaluate how AI models handle sensitive conversations across suicide risk, eating disorders, and misinformation, testing whether they can detect risk, respond appropriately, and avoid reinforcing harmful beliefs.

In the misinformation portion of the research, mpathic tested six major AI models across multi-turn conversations and found that the most common harmful behavior was reinforcement, with models validating or building on a user’s belief without enough scrutiny. The models also struggled with subtler eating disorder signals, indirect signs of suicide risk, and “breadcrumbs” that a user’s belief was becoming more risky or distorted.

This raises concerning questions about the use of AI chatbots for therapy, the researchers said, as many real mental health conversations do not begin with a clear crisis statement. For example, people may talk about dieting in the language of wellness, describe conspiracy beliefs as curiosity, or mention withdrawal and hopelessness in passing. Cerezo told Fortune eating disorder conversations were especially difficult because harmful behavior can be wrapped in familiar language about self-improvement, food, or fitness.

“Sometimes models can really struggle to understand more of that nuance in a way that a clinician can pick up,” she said.

Other studies have found similar concerns with using AI for therapeutic purposes. Stanford researchers found that some AI therapy chatbots showed stigma toward certain mental health conditions and could give dangerous responses in crisis scenarios. Another study from Brown researchers found that chatbots prompted to act like counselors could violate basic mental health ethics by reinforcing false beliefs, creating a false sense of empathy, and mishandling crisis situations.

Grin Lord, mpathic’s founder and CEO, said the research showed why AI labs needed to go beyond broad consultation with clinicians and bring them directly into testing and improving models. “These models are here. They’re in the real world. They’re being used,” she said. “So get clinicians in there to actually improve them in real time while they’re being deployed.”

As more people turn to AI for mental health support, the risks are getting harder to block with safety filters. The real risk may not always be a chatbot giving obviously dangerous advice, but simply being a bit too agreeable, missing a small warning sign, or failing to interrupt a harmful train of thought before it becomes more serious. As chatbots become a more frequent first stop for people seeking emotional support, simply lending a supportive ear may no longer be enough.

With that, here’s this week’s AI news.

Beatrice Nolan

bea.nolan@fortune.com
@beafreyanolan

But before we get to the news: Do you want to learn more about how AI is likely to reshape your industry? Do you want to hear insights from some of tech’s savviest executives and mingle with some of the best investors, thinkers, and builders in Silicon Valley and beyond? Do you like fly fishing or hiking? Well, then come join me and my fellow Fortune Tech co-chairs in Aspen, Colo., for Fortune Brainstorm Tech, the year’s best technology conference. And this year will be even more special because we are celebrating the 25th anniversary of the conference’s founding. We will hear from CEOs such as Carol Tomé from UPS, Snowflake CEO Sridhar Ramaswamy, Anduril CEO Brian Schimpf, Yahoo! CEO Jim Lanzone, and many more. There are AI aces like Boris Cherny, who heads Claude Code at Anthropic, and Sara Hooker, who is cofounder and CEO of Adaption Labs. And there are tech luminaries such as Steve Case and Meg Whitman. And you, of course! Apply to attend here.

FORTUNE ON AI

Exclusive: White Circle raises $11 million to stop AI models from going rogue in the workplace — Beatrice Nolan

AI isn’t paying off in the way companies think. Layoffs driven by automation are failing to generate returns, study finds — Jake Angelo

I helped build the Pentagon’s AI transformation. Corporate America is making every mistake we almost made — Drew Cukor

Qualcomm’s CEO is working with ‘pretty much all’ major AI players on top-secret devices—and powering OpenAI’s first push into hardware — Eva Roytburg

AI IN THE NEWS

Amazon's AI usage metrics are backfiring. Amazon has set a target for more than 80% of developers to use AI weekly and has tracked token consumption on internal leaderboards. But employees are now reportedly using an internal tool called MeshClaw to automate trivial tasks and inflate their usage numbers, according to a report by the Financial Times. MeshClaw lets staff build AI agents that triage emails, initiate code deployments, and interact with apps like Slack. Employees told the FT there was "so much pressure" to hit the targets and that the tracking had created "perverse incentives." Amazon has said token statistics won't factor into performance evaluations and that MeshClaw enables "thousands of Amazonians to automate repetitive tasks each day." Read more in the Financial Times. 

China pushes for access to Anthropic’s Mythos model. A representative from a Chinese think tank approached Anthropic officials at a meeting in Singapore last month and pressed the company to give Beijing access to Mythos, its powerful new AI model, according to the New York Times. However, Anthropic refused. The request was not an official Chinese government demand, but U.S. officials reportedly saw it as a sign that Beijing is trying multiple routes to obtain the most advanced American AI systems. Mythos has been withheld from public release because of its ability to find software vulnerabilities, with Anthropic instead giving access to the U.S. government and more than 40 selected companies and organizations, most of which are U.S.-based. Officials in Europe have also been trying to access the model since its limited release. Read more in the New York Times.

Elon Musk's court case reveals another OpenAI billionaire. OpenAI cofounder and former chief scientist Ilya Sutskever testified Monday that his OpenAI stake is worth about $7 billion, making him the second newly revealed OpenAI billionaire to emerge from Elon Musk’s trial against the company after OpenAI president Greg Brockman disclosed a stake worth nearly $30 billion last week. In his testimony during the high-profile court case, Sutskever also said he spent about a year gathering evidence that OpenAI CEO Sam Altman had displayed what he described as a “consistent pattern of lying,” and confirmed Altman’s conduct included “undermining and pitting executives against one another.” When asked whether he had promised Musk that OpenAI would remain a nonprofit, Sutskever said he “made no such promise.” He left OpenAI in 2024 and has since founded his own AI startup called Safe Superintelligence.

EYE ON AI RESEARCH

Thinking Machines Lab wants to build AI that collaborates. Mira Murati's AI startup Thinking Machines Lab has a new research preview of what it calls “interaction models,” AI systems built to handle audio, video, and text continuously in real time, rather than waiting for a user to finish before responding. The company says its model can listen while speaking, pick up on visual cues, and hand off harder tasks to a background system without losing the thread of a conversation. In demos, for example, the model can count exercise reps from video or correct speech in real time.

Most AI systems still work like a fast back-and-forth exchange, with separate components bolted on for voice, vision, and interruptions. Thinking Machines says its model processes tiny slices of input and output continuously, allowing silence, overlap, timing, and visual changes to become part of the model’s understanding. That makes real-time collaboration much harder technically, but potentially far more natural for users. The company says it responds at roughly the speed of natural human conversation. The research preview will open to select partners “in the coming months,” with a wider release planned for later in 2026.

AI CALENDAR

June 8-10: Fortune Brainstorm Tech, Aspen, Colo. Apply to attend here.

June 17-20: VivaTech, Paris.

July 6-11: International Conference on Machine Learning (ICML), Seoul, South Korea.

July 7-10: AI for Good Summit, Geneva, Switzerland.

Aug. 4-6: Ai4 2026, Las Vegas.

BRAIN FOOD

AI is starting to help hackers find software flaws. Google says it disrupted a criminal group that used AI to help exploit a previously unknown security flaw in a popular online system administration tool. The flaw could have let attackers bypass two-factor authentication, the extra login step many companies use to keep accounts secure. Google said it alerted the affected company and law enforcement, and the issue was patched before the attack caused damage. John Hultquist, chief analyst at Google’s threat intelligence arm, called it a worrying milestone for cyber risk.

“There’s a misconception that the AI vulnerability race is imminent," he said. "The reality is that it’s already begun. For every zero-day we can trace back to AI, there are probably many more out there. Threat actors are using AI to boost the speed, scale, and sophistication of their attacks."

It's exactly the scenario that leading AI companies, including Anthropic and OpenAI, have been warning about recently. Both labs have said for some time that their models were approaching a tipping point when it came to cyber risks, and both have recently decided to limit access to their most powerful new cyber models and tools. Anthropic withheld its newest and most powerful Mythos model from public release after saying it was unusually capable at hacking and cybersecurity work, while OpenAI has said its specialized cyber model will only be available to defenders responsible for securing critical infrastructure. The fear is that while these systems can help defenders find and patch weaknesses faster, they are dual-use and can equally aid criminals in finding those same weaknesses first.

Much of the world still runs on old, messy, vulnerable software, which AI is becoming increasingly good at scanning for vulnerabilities. Experts say that over time, AI tools may make software safer, but the transition period could be dangerous.

AI Playbook: Keeping up with AI's rapid evolution

AI is becoming an even more useful—and dangerous—tool as it gets smarter. Fortune AI Editor Jeremy Kahn breaks down best practices for deploying AI agents, how to protect your data from AI-powered cyberattacks, and just how smart AI can really get. Watch the playbook. 

This is the online version of Eye on AI, Fortune's biweekly newsletter on how AI is shaping the future of business. Sign up for free.
About the Author

By Beatrice Nolan, Tech Reporter

Beatrice Nolan is a tech reporter on Fortune’s AI team, covering artificial intelligence and emerging technologies and their impact on work, industry, and culture. She's based in Fortune's London office and holds a bachelor’s degree in English from the University of York. You can reach her securely via Signal at beatricenolan.08



