
Google’s latest AI model is missing a key safety report in apparent violation of promises made to the U.S. government and at international summits

By Beatrice Nolan and Jeremy Kahn
April 9, 2025, 8:19 AM ET
Google released its latest AI model, Gemini 2.5 Pro, without a key safety report in apparent violation of promises the company made to the U.S. government and at international AI safety summits.Samuel Boivin—NurPhoto via Getty Images

Google’s latest AI model, Gemini 2.5 Pro, was released without a key safety report—a move that critics say violates public commitments the company made to both the U.S. government and at an international summit last year.

Launched in late March, Gemini 2.5 Pro has been touted by the company as its most capable model yet, with Google claiming it “leads common benchmarks by meaningful margins.”


But Google’s failure to release an accompanying safety research report—also known as a “system card” or “model card”—reneges on previous commitments the company made. 

At a July 2023 meeting convened by then-President Joe Biden’s administration at the White House, Google was among a number of leading AI companies that signed a series of commitments, including a pledge to publish reports for all major public model releases more powerful than the current state of the art at the time. Gemini 2.5 Pro almost certainly would be considered “in scope” of these White House commitments.

At the time, Google agreed the reports “should include the safety evaluations conducted (including in areas such as dangerous capabilities, to the extent that these are responsible to publicly disclose), significant limitations in performance that have implications for the domains of appropriate use, discussion of the model’s effects on societal risks such as fairness and bias, and the results of adversarial testing conducted to evaluate the model’s fitness for deployment.”

After the G7 meeting in Hiroshima, Japan, in October 2023, Google was among the companies that committed to comply with a voluntary code of conduct on the development of advanced AI. That G7 code includes a commitment to “publicly report advanced AI systems’ capabilities, limitations and domains of appropriate and inappropriate use, to support ensuring sufficient transparency, thereby contributing to increase accountability.”

Then, at an international summit on AI safety held in Seoul, South Korea, in May 2024, the company reiterated similar promises—committing to publicly disclose model capabilities, limitations, appropriate and inappropriate use cases, and to provide transparency around its risk assessments and outcomes.

In an emailed statement, a spokesperson for Google DeepMind, the Google division responsible for developing the Gemini models, told Fortune that the latest Gemini model has undergone “pre-release testing, including internal development evaluations and assurance evaluations which had been conducted before the model was released.” They added that a report with additional safety information and model cards was “forthcoming.” But the spokesperson first issued that statement to Fortune on April 2, and no model card has been published since.

Tech companies are backsliding on promises, experts fear

Google is not the only company facing scrutiny over its commitment to AI safety. Earlier this year, OpenAI also failed to release a timely model card for its Deep Research model, instead publishing a system card weeks after the project had initially been released. Meta’s recent safety report for Llama 4 has been criticized for its brevity and lack of detail.

“These failures by some of the major labs to have their safety reporting keep pace with their actual model releases are particularly disappointing considering those same companies voluntarily committed to the U.S. and the world to produce those reports—first in commitments to the Biden administration in 2023 and then through similar pledges in 2024 to comply with the AI code of conduct adopted by the G7 nations at their AI summit in Hiroshima,” Kevin Bankston, an advisor on AI governance at the Center for Democracy and Technology, told Fortune.

Information on government safety evaluations also absent

The Google spokesperson’s statement also did not address whether Gemini 2.5 Pro had been submitted for external evaluation by the U.K. AI Security Institute or the U.S. AI Safety Institute.

Google had previously provided earlier generations of its Gemini models to the U.K. AI Safety Institute for evaluation.

At the Seoul Safety Summit, Google signed up to the “Frontier AI Safety Commitments,” which included a pledge to “provide public transparency on the implementation” of safety evaluations, except when doing so “would increase risk or divulge sensitive commercial information to a degree disproportionate to the societal benefit.” The pledge said that more detailed information that could not be publicly shared should still be shared with the governments of the countries in which the companies are based, which in Google’s case would be the U.S.

The companies also committed to “explain how, if at all, external actors, such as governments, civil society, academics, and the public are involved in the process of assessing risks of their AI models.” Google is potentially in violation of that commitment too by not answering direct questions on whether it has submitted Gemini 2.5 Pro to either U.S. or U.K. government evaluators. 

Deployment over transparency 

The missing safety report has prompted concerns that Google may be favoring rapid deployment over transparency.

“Responsible research and innovation means that you are transparent about the capabilities of the system,” Sandra Wachter, a professor and senior researcher at the Oxford Internet Institute, told Fortune. “If this was a car or a plane, we wouldn’t say: let’s just bring this to market as quickly as possible and we will look into the safety aspects later. Whereas with generative AI there’s an attitude of putting this out there and worrying, investigating, and fixing the issues with it later.”

Recent political changes along with a growing rivalry between Big Tech companies may be causing some companies to row back on previous safety commitments as they race to deploy AI models.

“The pressure point for these companies of being faster, being quicker, being the first, being the best, being dominant, is more prevalent than it used to be,” Wachter said, adding that safety standards were falling across the industry. These slipping standards could be driven by a growing concern among tech companies and some governments that AI safety procedures are getting in the way of innovation.

In the U.S., the Trump administration has signaled that it plans to approach AI regulation with a significantly lighter touch than the Biden administration. The new administration has already rescinded a Biden-era executive order on AI and has been cozying up to tech leaders. At the recent AI summit in Paris, U.S. Vice President JD Vance said that “pro-growth AI policies” should be prioritized over safety and that AI was “an opportunity that the Trump administration will not squander.”

At the summit, both the U.K. and the U.S. refused to sign an international agreement on artificial intelligence that pledged an “open,” “inclusive,” and “ethical” approach to the technology’s development.

“If we can’t count on these companies to fulfill even their most basic safety and transparency commitments when releasing new models—commitments that they themselves voluntarily made—then they are clearly releasing models too quickly in their competitive rush to dominate the field,” Bankston said. 

“Especially to the extent AI developers continue to stumble in these commitments, it will be incumbent on lawmakers to develop and enforce clear transparency requirements that the companies can’t shirk,” he added.

About the Authors

Beatrice Nolan, Tech Reporter

Beatrice Nolan is a tech reporter on Fortune’s AI team, covering artificial intelligence and emerging technologies and their impact on work, industry, and culture. She’s based in Fortune’s London office and holds a bachelor’s degree in English from the University of York. You can reach her securely via Signal at beatricenolan.08

Jeremy Kahn, Editor, AI

Jeremy Kahn is the AI editor at Fortune, spearheading the publication’s coverage of artificial intelligence. He also co-authors Eye on AI, Fortune’s flagship AI newsletter.


