
OpenAI updated its safety framework—but no longer sees mass manipulation and disinformation as a critical risk

By Sharon Goldman and Jeremy Kahn
April 16, 2025, 3:09 PM ET
Sam Altman, CEO of OpenAI, whose AI agent has set a new standard of performance on Humanity’s Last Exam. Nathan Laine—Bloomberg/Getty Images

OpenAI said it will stop assessing its AI models, prior to release, for the risk that they could persuade or manipulate people, a capability that could help swing elections or fuel highly effective propaganda campaigns.

The company said it would now address those risks through its terms of service, restricting the use of its AI models in political campaigns and lobbying, and monitoring how people are using the models once they are released for signs of violations.

OpenAI also said it would consider releasing AI models that it judged to be “high risk” as long as it has taken appropriate steps to reduce those dangers—and would even consider releasing a model that presented what it called “critical risk” if a rival AI lab had already released a similar model. Previously, OpenAI had said it would not release any AI model that presented more than a “medium risk.”

The changes in policy were laid out in an update to OpenAI’s “Preparedness Framework” yesterday. That framework details how the company monitors the AI models it is building for potentially catastrophic dangers—everything from the possibility that the models will help someone create a biological weapon, to their ability to assist hackers, to the possibility that the models will self-improve and escape human control.

The policy changes split AI safety and security experts. Several took to social media to commend OpenAI for voluntarily releasing the updated framework, noting improvements such as clearer risk categories and a stronger emphasis on emerging threats like autonomous replication and safeguard evasion. 

However, others voiced concerns, including Steven Adler, a former OpenAI safety researcher who criticized the fact that the updated framework no longer requires safety tests of fine-tuned models. “OpenAI is quietly reducing its safety commitments,” he wrote on X. Still, he emphasized that he appreciated OpenAI’s efforts: “I’m overall happy to see the Preparedness Framework updated,” he said. “This was likely a lot of work, and wasn’t strictly required.”

Some critics highlighted the removal of persuasion from the dangers the Preparedness Framework addresses. 

“OpenAI appears to be shifting its approach,” said Shyam Krishna, a research leader in AI policy and governance at RAND Europe. “Instead of treating persuasion as a core risk category, it may now be addressed either as a higher-level societal and regulatory issue or integrated into OpenAI’s existing guidelines on model development and usage restrictions.” It remains to be seen how this will play out in areas like politics, he added, where AI’s persuasive capabilities are “still a contested issue.”

Courtney Radsch, a senior fellow working on AI ethics at Brookings, the Center for International Governance Innovation, and the Center for Democracy and Technology, went further, calling the framework in a message to Fortune “another example of the technology sector’s hubris.” She emphasized that the decision to downgrade persuasion “ignores context – for example, persuasion may be existentially dangerous to individuals such as children or those with low AI literacy or in authoritarian states and societies.”

Oren Etzioni, former CEO of the Allen Institute for AI and founder of TrueMedia, which offers tools to fight AI-manipulated content, also expressed concern. “Downgrading deception strikes me as a mistake given the increasing persuasive power of LLMs,” he said in an email. “One has to wonder whether OpenAI is simply focused on chasing revenues with minimal regard for societal impact.”

However, one AI safety researcher not affiliated with OpenAI told Fortune that it seems reasonable to simply address any risks from disinformation or other malicious persuasion uses through OpenAI’s terms of service. The researcher, who asked to remain anonymous because he is not permitted to speak publicly without authorization from his current employer, added that persuasion/manipulation risk is difficult to evaluate in pre-deployment testing. In addition, he pointed out that this category of risk is more amorphous and ambivalent compared to other critical risks, such as the risk AI will help someone perpetrate a chemical or biological weapons attack or will help someone in a cyberattack.

Notably, some members of the European Parliament have also voiced concern that the latest draft of the proposed code of practice for complying with the EU AI Act downgraded mandatory testing of AI models for their potential to spread disinformation and undermine democracy to a voluntary consideration.

Studies have found AI chatbots to be highly persuasive, although this capability itself is not necessarily dangerous. Researchers at Cornell University and MIT, for instance, found that dialogues with chatbots were effective at getting people to question conspiracy theories.

Another criticism of OpenAI’s updated framework centered on a line where OpenAI states: “If another frontier AI developer releases a high-risk system without comparable safeguards, we may adjust our requirements.”

Max Tegmark, the president of the Future of Life Institute, a non-profit that seeks to address existential risks, including threats from advanced AI systems, said in a statement to Fortune that “the race to the bottom is speeding up. These companies are openly racing to build uncontrollable artificial general intelligence—smarter-than-human AI systems designed to replace humans—despite admitting the massive risks this poses to our workers, our families, our national security, even our continued existence.”

“They’re basically signaling that none of what they say about AI safety is carved in stone,” said longtime OpenAI critic Gary Marcus, who in a LinkedIn message said the line forewarns a race to the bottom. “What really governs their decisions is competitive pressure—not safety. Little by little, they’ve been eroding everything they once promised. And with their proposed new social media platform, they’re signaling a shift toward becoming a for-profit surveillance company selling private data—rather than a nonprofit focused on benefiting humanity.”

Overall, it is useful that companies like OpenAI are sharing their thinking around their risk management practices openly, Miranda Bogen, director of the AI governance lab at the Center for Democracy & Technology, told Fortune in an email. 

That said, she added she is concerned about moving the goalposts. “It would be a troubling trend if, just as AI systems seem to be inching up on particular risks, those risks themselves get deprioritized within the guidelines companies are setting for themselves,” she said. 

She also criticized the framework’s focus on “frontier” models, when OpenAI and other companies have used technical definitions of that term as an excuse not to publish safety evaluations of recent, powerful models. (For example, OpenAI released its GPT-4.1 model yesterday without a safety report, saying that it was not a frontier model.) In other cases, companies have either failed to publish safety reports or been slow to do so, publishing them months after the model has been released.

“Between these sorts of issues and an emerging pattern among AI developers where new models are being launched well before or entirely without the documentation that companies themselves promised to release, it’s clear that voluntary commitments only go so far,” she said.

Update, April 16: This story has been updated to include comments from Future of Life Institute President Max Tegmark.

About the Authors

Sharon Goldman, AI Reporter
Sharon Goldman is an AI reporter at Fortune and co-authors Eye on AI, Fortune’s flagship AI newsletter. She has written about digital and enterprise tech for over a decade.

Jeremy Kahn, Editor, AI
Jeremy Kahn is the AI editor at Fortune, spearheading the publication’s coverage of artificial intelligence. He also co-authors Eye on AI, Fortune’s flagship AI newsletter.
