Hello and welcome to Eye on AI. It’s Beatrice Nolan here, filling in today for AI reporter Sharon Goldman. In this edition…OpenAI debuts its own highly-capable cybersecurity model…Anthropic launches Claude Opus 4.7…a startup wants to use AI to evaluate journalism in a way that freedom of the press advocates fear will chill reporting that relies on whistleblowers.
On Friday, a man hurled a Molotov cocktail at the gate in front of OpenAI CEO Sam Altman’s house. Daniel Moreno-Gama, 20, from Spring, Texas, was arrested on suspicion of the attack an hour later. At the time, he was outside OpenAI’s headquarters, allegedly trying to smash his way in with a chair.
On Sunday, two more people were arrested after a gun was fired near Altman’s property (it remains unclear whether that shooting was targeting Altman in any way).
Online, some have laid the blame for the attacks at the door of so-called “AI doomers”—those who believe AI poses an existential threat to society. And while it’s true the man accused of attacking Altman’s home had a manifesto warning of humanity’s “extinction” at the hands of AI, it’s also true that a less extreme, but extremely broad-based, anti-AI sentiment has been building for years.
People are increasingly aware of, and concerned about, the technology’s environmental impacts, its automation of jobs, and its use in warfare. Then there are the cases of psychological harm linked to the technology, which have already generated a wave of lawsuits blaming it for multiple deaths, including those of teenagers. Some people, particularly those who grew up during the rise of social media, also worry about becoming addicted to, or too reliant on, AI tools.
Part of this is a messaging problem, one that is often fueled by the AI labs themselves. For years, tech executives have touted AI as a dangerous technology: it could help people perpetrate cyberattacks or build bioweapons, and would almost certainly lead to mass unemployment. Oh, and it also just might lead to human extinction. Just last week, Anthropic announced its “Mythos” model, which it said was too dangerous to be in public hands. (In this case, that fear might be justified. But fear, it turns out, is also pretty effective marketing—it’s hard to think of another consumer product whose makers have so consistently warned the public that it might destroy civilization.)
Either way, it seems the public has been listening.
Low poll numbers
A March NBC News poll found just 26% of voters hold positive views of AI, versus 46% who hold negative ones—only the Democratic Party and Iran were less popular.
Anti-AI sentiment is particularly sharp among the younger generation, who are already dealing with a tough job market. A Gallup poll published last week found Gen Z excitement about AI collapsed from 36% to 22% in a single year, while anger rose from 22% to 31%—driven, Gallup said, by fears the technology is killing off entry-level jobs.
How much AI is actually to blame for the tough labor market facing recent graduates, and how much it is simply a convenient excuse for layoffs and hiring slowdowns in a difficult economic climate, remains debated. But after years of executives citing the technology for headcount reductions, the public seems to have accepted the narrative.
The negative environmental consequences of AI also resonate with the public. Between April and June 2025 alone, 20 proposed data center projects worth a combined $98 billion were blocked or delayed due to local resistance. Communities have raised concerns over the strain on local energy grids, rising electricity bills, and the vast amounts of water required to cool the facilities, not to mention the dust and light pollution created during construction. (The water consumption of most AI data centers may not be as high as some initial estimates asserted, but the idea that AI consumes vast quantities of water has stuck in the public imagination. It is also true that in some places data centers have, in fact, hurt local water supplies, and that the entire life cycle of AI chip production consumes a lot of water.) This anger has grown loud enough to shift legislative agendas, with New York State recently proposing a three-year moratorium on new data center permits.
Altman may be paying a price for being the most visible face of the AI industry. Ask most people outside major hubs to name an AI company, and the answer is almost always OpenAI (or, “that company that made ChatGPT”)—if they can name one at all. Notably, the attacks on Altman were not the first security incident at OpenAI. In November, employees were told to shelter in place after a man threatened to carry out attacks on staff at its San Francisco offices.
AI insiders belatedly admit they have an image problem
Even within the labs, some employees are starting to acknowledge their companies might have a marketing problem. Roon—widely believed to be a pseudonym for OpenAI researcher Tarun Gogineni—posted this on X earlier in the week:
“The ai labs, in competing with each other, are burning huge amounts of the commons on public trust in ai to win minor points against the others. their lobbyists, pr machines, lawsuits. it’s the very opposite of what marxist class struggle analysis would tell you”
While the labs have done a pretty good job of making AI feel ubiquitous, they’ve done a far worse one of making it feel worthwhile to everyday people. Most people understand that AI can help you write emails faster or optimize some workflows, but far fewer know it’s being used to accelerate drug discovery (and to be fair to the public, no drug created with AI has yet made it to market, although dozens are now in the pipeline), model climate change, or diagnose rare diseases.
Until that changes, the gap between what the industry believes it’s building and what the public thinks it’s getting will keep widening.
With that, here’s more AI news.
Beatrice Nolan
bea.nolan@fortune.com
@beafreyanolan
FORTUNE ON AI
Anthropic is facing a wave of user backlash over reports of performance issues with its Claude AI chatbot—by Beatrice Nolan
Pause AI and Stop AI: Meet the anti-AI groups facing questions after the attack on Sam Altman—by Sharon Goldman
Exclusive: Artemis raises $70M to help fight AI-powered attacks with AI—by Sharon Goldman
AI IN THE NEWS
OpenAI launches a specialist cybersecurity model. The company has released GPT-5.4-Cyber, a model designed to autonomously identify software vulnerabilities, to a select group of vetted customers through a trusted access program it established in February. The launch comes a week after Anthropic announced it was rolling out its powerful Mythos model to a select group of companies. Anthropic has said Mythos has already detected thousands of severe vulnerabilities across every major operating system and web browser, some undetected for decades. OpenAI, which says an earlier product called Codex Security helped fix more than 3,000 critical flaws since March, trained GPT-5.4-Cyber with fewer restrictions than its standard models to boost capability. Read more in Reuters.
Anthropic shifts enterprise Claude pricing to usage-based billing. Anthropic has overhauled pricing for Claude Enterprise, moving large customers from a flat-rate model—up to $200 per user per month—to a hybrid structure combining a $20 base fee with consumption-based charges on top. The change was prompted by surging demand for Claude Code and Claude Cowork, its agentic workplace tool. Some heavy users could see bills double or triple, according to The Information. Anthropic says the old model was causing usage interruptions for high-volume customers while others paid for capacity they never used. The shift mirrors moves by Salesforce, ServiceNow, and AI coding rivals Replit and Cursor, all of which have turned to consumption pricing as inference costs bite into margins. Anthropic's annualized revenue hit $30 billion as of early April.
This Thiel-backed startup wants AI to fact-check journalists. A new startup called Objection, backed by Peter Thiel and Balaji Srinivasan, is using AI to adjudicate the accuracy of published journalism—for a $2,000 fee per challenge. Founded by Aron D'Souza, who helped lead the lawsuit that bankrupted Gawker, the platform scores reporting via an "Honor Index" built from evidence weighed by a jury of large language models from OpenAI, Anthropic, xAI, Mistral, and Google. Anonymous sources rank near the bottom of its evidence hierarchy, a feature critics say will chill whistleblowing. The platform launched this week with seed funding and is already active on X, flagging stories in real time while investigations are pending. Read more in TechCrunch.
EYE ON AI NUMBERS
4.7
That's the number Anthropic has given its latest model, Claude Opus 4.7, released Thursday. The company says the upgrade is notably better at software engineering than previous models—and those were already not too shabby—with particular gains on the most difficult tasks. The model's "vision" is also substantially better, processing images at more than three times the resolution of prior Claude models. Beyond code, Anthropic is pitching Opus 4.7 as stronger all-around for complex, long-running tasks. It scores higher on finance and legal benchmarks, produces fewer errors on document reasoning, and is better at following instructions precisely, though Anthropic warns that its more literal interpretation of prompts may require users to re-tune their existing system prompts.
The release also marks the first real-world test of Anthropic's new cyber safeguards, designed to automatically detect and block high-risk security requests—part of the company's stated plan to eventually release its more powerful, and more restricted, Mythos model to the public. Anthropic says security professionals who want to use Opus 4.7 for legitimate purposes, such as penetration testing, can apply to join Anthropic's new Cyber Verification Program.
AI CALENDAR
June 8-10: Fortune Brainstorm Tech, Aspen, Colo. Apply to attend here.
July 6-11: International Conference on Machine Learning (ICML), Seoul, South Korea.
July 7-10: AI for Good Summit, Geneva, Switzerland.
Aug. 4-6: Ai4, Las Vegas.