
China’s DeepSeek AI is full of misinformation and can be tricked into generating bomb instructions, researchers warn

By David Meyer
January 29, 2025, 9:17 AM ET
Updated January 30, 2025, 5:11 AM ET
The DeepSeek AI application is seen on a mobile phone in this photo illustration taken in Warsaw, Poland, on Jan. 27, 2025.
Jaap Arriens—NurPhoto/Getty Images

As China’s DeepSeek grabs headlines around the world for its disruptively low-cost AI, it is only natural that its models are coming under intense scrutiny—and some researchers are not liking what they see.

On Wednesday, the information-reliability organization NewsGuard said it had audited DeepSeek’s chatbot and found that it provided inaccurate answers or nonanswers 83% of the time when asked about news-related subjects. When presented with demonstrably false claims, it debunked them just 17% of the time, NewsGuard found.

According to NewsGuard, the 83% fail rate places DeepSeek’s R1 model in 10th place out of 11 chatbots it has tested, the rest of which are Western services like OpenAI’s ChatGPT-4, Anthropic’s Claude, and Mistral’s Le Chat. (NewsGuard compares chatbots each month in its AI Misinformation Monitor program, but it usually does not name which chatbots rank in which place, as it says it views the problem as systemic across the industry; it only publicly assigns a score to a named chatbot when adding it to the comparison for the first time, as it has now done with DeepSeek.)

NewsGuard identified a few likely reasons why DeepSeek fails so badly when it comes to reliability. The chatbot says it was not trained on any information after October 2023, which squares with its inability to reference recent events. It also appears to be easy to trick DeepSeek into repeating false claims, potentially at scale.

But this audit of DeepSeek also reinforced how the AI’s output is skewed by its adherence to Chinese information policies, which treat many subjects as taboo and demand adherence to the Communist Party line.

“In the case of three of the 10 false narratives tested in the audit, DeepSeek relayed the Chinese government’s position without being asked anything relating to China, including the government’s position on the topic,” wrote NewsGuard analysts Macrina Wang, Charlene Lin, and McKenzie Sadeghi.

They added: “DeepSeek appears to be taking a hands-off approach and shifting the burden of verification away from developers and to its users, adding to the growing list of AI technologies that can be easily exploited by bad actors to spread misinformation unchecked.”

Meanwhile, as DeepSeek’s impact upset the markets on Monday, the cybercrime threat intelligence outfit Kela published its own damning analysis of DeepSeek.

“While DeepSeek-R1 bears similarities to ChatGPT, it is significantly more vulnerable,” Kela warned, saying its researchers had managed to “jailbreak the model across a wide range of scenarios, enabling it to generate malicious outputs, such as ransomware development, fabrication of sensitive content, and detailed instructions for creating toxins and explosive devices.”

Kela said DeepSeek was vulnerable to so-called Evil Jailbreak attacks, which involve instructing an AI to answer questions about illegal activities—like how to launder money or write and deploy data-stealing malware—in an “evil” persona that ignores the safety guardrails built into the model. OpenAI’s recent models have been patched against such attacks, Kela noted.

What’s more, Kela claimed there are dangers to the way DeepSeek displays its reasoning to the user. While OpenAI’s ChatGPT o1-preview model hides its reasoning processes when answering a query, DeepSeek makes that process clear. So if someone asks it to generate malware, it even shows code snippets that criminals can use in their own development efforts. By showing the user the internal “thinking” of the model, it also makes it far easier for a user to figure out what prompts might defeat any of the model’s guardrails.

“This level of transparency, while intended to enhance user understanding, inadvertently exposed significant vulnerabilities by enabling malicious actors to leverage the model for harmful purposes,” Kela said.

The company said it also got DeepSeek to generate instructions for making bombs and untraceable toxins, and to fabricate personal information about people.

Also on Wednesday, the cloud security company Wiz said it found an enormous security flaw in DeepSeek’s operations, which DeepSeek fixed after Wiz gave it a heads-up. A DeepSeek database was accessible to the public, potentially allowing miscreants to take control of DeepSeek’s database operations and access internal data like chat history and sensitive information.

“While much of the attention around AI security is focused on futuristic threats, the real dangers often come from basic risks—like accidental external exposure of databases. These risks, which are fundamental to security, should remain a top priority for security teams,” Wiz said in a blog post. “As organizations rush to adopt AI tools and services from a growing number of startups and providers, it’s essential to remember that by doing so, we’re entrusting these companies with sensitive data.”

These revelations will no doubt bolster the Western backlash against DeepSeek, which is suddenly the most popular app download in the U.S. and elsewhere.

OpenAI claims that DeepSeek trained its new models on the output of OpenAI’s models—a pretty common cost-cutting technique in the AI business, albeit one that may break OpenAI’s terms and conditions. (There has been no shortage of social-media schadenfreude over this possibility, given that OpenAI and its peers almost certainly trained their models on reams of other people’s online data without permission.)

The U.S. Navy has told its members to steer clear of using the Chinese AI platform at all, owing to “potential security and ethical concerns associated with the model’s origin and usage.” And White House press secretary Karoline Leavitt said Tuesday that the U.S. National Security Council is looking into DeepSeek’s implications.

The Trump administration last week tore up the Biden administration’s AI safety rules, which required companies like OpenAI to give the government a heads-up about the inner workings of new models before releasing them to the public.

Italy’s data-protection authority has also started probing DeepSeek’s data use, though it has previously done the same for other popular AI chatbots.

Update: This article was updated on Jan. 30th to include information about Wiz’s findings.
