
Anthropic’s most powerful AI model just exposed a crisis in corporate governance. Here’s the framework every CEO needs.

By Jeffrey Sonnenfeld, Stephen Henriques, Dan Kent, and Holden Lee
May 2, 2026, 8:00 AM ET
Dario Amodei, co-founder and chief executive officer of Anthropic, at Bloomberg House during the World Economic Forum (WEF) in Davos, Switzerland, on Tuesday, Jan. 20, 2026. The annual Davos gathering of political leaders, top executives and celebrities runs from Jan. 19-23. Chris Ratcliffe/Bloomberg via Getty Images

In early April, Anthropic sent shudders through the tech community with Claude’s Mythos Preview model. Mythos marked a paradigm shift in AI capabilities, reportedly delivering superhuman coding and reasoning, a massive performance leap over previous models. While testing the model, Anthropic discovered decades-old software flaws and bugs that had evaded detection through millions of previous attempts. Addressing such risks is very different from the familiar public policy debates over how AI threatens privacy and intellectual property in an age of spiraling entrepreneurial opportunity and ferocious global competition. These new challenges implicate shared concerns across every party and sector.

For example, the Mythos model’s agentic abilities pose severe security risks: it can autonomously execute multi-step attacks and generate exploits at a fraction of the cost of human attackers. In response, Anthropic launched Project Glasswing, a coalition providing restricted access to the U.S. Cybersecurity and Infrastructure Security Agency (CISA) and a consortium of U.S. companies, including Microsoft, Apple, and J.P. Morgan, to help identify and fix critical system vulnerabilities before Mythos’ potential public release.

The emergence of Mythos underscores the urgent need for robust AI governance. When given profit-at-all-costs prompts, agentic systems have exhibited aggressive behavior in simulations, such as threatening a competitor with supply cutoffs. As these systems scale in performance and usage, companies must regard AI not just as chatbots but as systems of autonomous agents requiring strict oversight. Without governance, Agentic AI risks writing unverified, hostile code and conducting sensitive interactions with external vendors unsupervised. In multi-step agentic pipelines, even small drops in per-step accuracy can compound into cascading errors, making sovereign AI architecture and central monitoring essential for oversight of autonomous decisions.
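The compounding effect is simple arithmetic. A back-of-the-envelope sketch (the step counts and per-step accuracies below are illustrative assumptions, not figures from any vendor) shows why a pipeline can fail far more often than any single agent in it:

```python
# Illustrative only: how small per-step error rates compound across a
# multi-step agentic pipeline. Step counts and accuracies are hypothetical.

def pipeline_success_rate(per_step_accuracy: float, steps: int) -> float:
    """End-to-end success rate if every step must succeed independently."""
    return per_step_accuracy ** steps

# A single agent at 99% accuracy looks safe in isolation...
print(f"1 step:   {pipeline_success_rate(0.99, 1):.1%}")    # 99.0%
# ...but a 20-step workflow at 99% per step fails roughly one time in five.
print(f"20 steps: {pipeline_success_rate(0.99, 20):.1%}")   # ~81.8%
# At 95% per step, a 20-step workflow succeeds only about a third of the time.
print(f"20 steps: {pipeline_success_rate(0.95, 20):.1%}")   # ~35.8%
```

This is why central monitoring matters: the failure rate of the whole is invisible if each agent is evaluated only in isolation.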

While leaders in the artificial intelligence industry dubbed 2025 the year of Agentic AI, 2026 marks the shift from capability to execution. Unlike large language models, AI agents can interact with external tools, execute multiple steps to complete a task, learn from their results, and iterate. Yet even as Agentic AI systems evolve rapidly across industries, governance and regulatory policy are moving far more slowly.

Without governance that addresses accountability, transparency, bias, and data privacy, enterprise deployment will stall at precisely its most significant risks. But rollout varies sharply across industries, and leaders face similar yet distinct questions about what to assess before deployment, what to govern during it, and which companies are already navigating it well.

To map the answers, Yale’s Chief Executive Leadership Institute conducted a cross-industry review of Agentic AI deployments and the governance practices emerging from them. Governance, as we define it here, is not the Trump administration’s threats to preempt state AI laws, the debates over the economic and national security effects of a patchwork of disharmonious state regulations, the oversight of “frontier” AI model developers, or the protection of consumers and children from potential abuses of AI technologies. Rather, this analysis looks further ahead to the collective system safeguards and practices that the private sector must institutionalize now, not only so Agentic AI can scale effectively but also so it operates as designed at the enterprise level.

A View of Current Regulation and Governance

Currently, a patchwork of domestic and international regimes governs AI. Key domestic frameworks include the NIST AI Risk Management Framework and the National Policy Framework for Artificial Intelligence. States and localities have been active as well, including California’s SB 53, New York’s RAISE Act, and certain New York City regulations on automated hiring. Internationally, influential governance models include the EU Artificial Intelligence Act, South Korea’s Framework Act, Singapore’s Model AI Governance Framework, and China’s set of AI regulations. More will follow.

These regimes differ in critical ways. Some are legally binding (California, New York, China, the EU); others issue voluntary guidance (NIST, Singapore). They vary in target, whether model developers, deployers, or systems, and in requirements, from mandatory reporting to specific safety thresholds. What meets standards in one jurisdiction may fall short in another, creating a fragmented and at times unworkable compliance environment.

Regulation has historically lagged innovation. State and national standards for automobiles took decades to emerge. The Clinton administration’s light-touch approach shaped internet governance for a generation. Social media is still working through foundational questions, as the Section 230 debate shows.

Private-sector governance models for agentic deployment will be critical to building consumer confidence and ensuring safe, accountable integration into the workplace.

A View Forward

With governance still taking shape, leaders need a working framework. Eight variables anchor it.

Four of these variables matter most before deployment. Transparency asks whether stakeholders can reconstruct how the agent reached its decision, through explainability, disclosure, and auditable pathways. Accountability asks who bears responsibility when things go wrong, and how humans intervene and remediate. Bias asks whether the system perpetuates, amplifies, or introduces systematic disadvantage, including through feedback loops where biased outputs reinforce biased inputs. Data privacy asks how the organization protects information that agents access and combine across systems without per-transaction human review. A single workflow may trigger several regulatory regimes at once: HIPAA, GLBA, CCPA/GDPR, bar rules, IRS Circular 230, and trade secret law.

Four more variables matter once deployed, and these are what most differentiate one industry’s challenge from another’s. Decision reversibility sets the upper bound on tolerable error. Stakeholder impact scope determines whether governance must be transactional, with per-decision audits, or systemic, with architecture-level controls. Regulatory prescription shapes the work itself—banking’s SR 11-7 dictates model risk management in detail, while retail has almost no sector-specific AI regulation. Structural systems governability determines how easily governance can be built, whether workflows decompose naturally into discrete, measurable, audit-ready steps, or deliver value through fluid judgment that must be engineered into structure.

By considering these together, we can create a governance diagnostic matrix that generates cross-cutting questions and applied examples for each matrix cell, based on our industry review.

The four industries that follow occupy distinct positions on these dimensions. 

Where existing regulation is extensive, errors are difficult to reverse, and the impact remains at the transaction level, the banking archetype applies. Agent governance maps onto existing infrastructure, with privacy and reversibility as the binding constraints. 

Where regulation is extensive but the consequences involve human well-being, the healthcare archetype holds. Bifurcate, move on administrative use cases now, and invest the runway in the data integration and human-in-the-loop architecture clinical adoption requires. 

Where regulation is minimal and errors are reversible, the retail archetype applies. Experiment at scale, treat deployment as a learning function, and build the patterns that industries with less room to experiment will eventually borrow. 

Where errors cascade across networks, the supply chain and logistics archetype holds. Governance must be architectural, with checkpoints on the highest-leverage decisions, audit logs across all agent actions, and validation layers before execution. 

Organizations whose profiles do not cleanly match should weight reversibility and blast radius most heavily. They determine the consequences when governance fails.

The eight variables define where governance must be tightest, and where leaders can move faster. CEOs can map their organization against the reference archetypes above, identify the one that most closely matches its profile, and draw on the lessons that follow.

Financial Services/Banking: Dynamic, but Highly Regulated

For financial services, agentic adoption is not optional. Near-term, agents promise major back-office savings that competitive pressure will quickly hand to consumers. In the medium term, customers will use their own agents to shop rates and switch providers, eroding the inertia that has long protected incumbent relationships. The industry must adapt its business model and integrate agents into customer-facing technology, and quickly.

The good news is that banking’s existing regulatory scaffolding is an asset rather than a hindrance. The frameworks that have long constrained the industry now supply much of the architecture agentic governance requires.

On transparency, SR 11-7’s “Guidance on Model Risk Management” already requires banks to provide specific reasons for model decisions, a requirement that extends to agents. Existing audit and reporting obligations cover much of the ground, though they must expand to track multi-step workflows. The same pattern holds for bias. The Equal Credit Opportunity Act already addresses the most acute risks in agent-outsourced tasks like credit scoring, where errors can disproportionately affect low-income customers. Sandbox testing of both individual models and agent interactions before deployment should be standard.

Decision reversibility is the harder constraint. In credit, anti-money laundering (AML), and fraud, errors are difficult to undo, demanding continuous monitoring as agents take on more ambitious tasks and their behavior shifts. Banks must test full workflows and inter-agent interactions, where unforeseen risks emerge. Identity management—assigning each agent its own ID—enables tracking, and workspaces will need to evolve to allow humans to supervise dozens of agents at once.
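The identity-management idea above can be made concrete. The following sketch (class names, fields, and roles are hypothetical, not any bank’s actual schema) shows the basic pattern: every agent gets a unique ID at registration, every action lands in an append-only audit log, and actions from unregistered agents are refused:

```python
# A minimal sketch of agent identity management: each agent receives its
# own ID, and every action is recorded in an append-only audit log so a
# human supervisor can later reconstruct who did what.
import uuid
from datetime import datetime, timezone

class AgentRegistry:
    def __init__(self):
        self.agents = {}       # agent_id -> metadata
        self.audit_log = []    # append-only action records

    def register(self, role: str, owner: str) -> str:
        """Assign a fresh unique ID to a new agent."""
        agent_id = str(uuid.uuid4())
        self.agents[agent_id] = {"role": role, "owner": owner}
        return agent_id

    def record(self, agent_id: str, action: str, detail: str) -> None:
        """Log an agent action; untracked agents are refused outright."""
        if agent_id not in self.agents:
            raise ValueError("unregistered agent")
        self.audit_log.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "agent_id": agent_id,
            "action": action,
            "detail": detail,
        })

registry = AgentRegistry()
fraud_agent = registry.register(role="fraud-screening", owner="risk-ops")
registry.record(fraud_agent, "flag_transaction", "txn-1042 exceeded velocity rule")
```

In practice the log would live in tamper-evident storage and feed the supervisory workspaces described above, but the core discipline is the same: no identity, no action.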

Privacy is the hardest problem, and the one that leaders flag most. Industry leaders cite data privacy (77%) and data quality (65%) as their top scaling barriers. Agents are prone to leaking personal data when interacting with external tools and other agents, and exposure cannot be reversed. Since fraud detection and AML require deep data access, banks must tightly constrain how agents use it outside predefined tasks. 

Banks are positioned to deploy agents faster than most industries. The sector’s advantage accrues to those who map agent governance onto existing infrastructure rather than treat it as net-new work.

Healthcare: Slower Adoption, but High Potential

Healthcare is heavily regulated, but unlike banking, it faces fewer immediate competitive pressures to deploy. The result is a bifurcated trajectory—fast adoption on the administrative side and deliberate integration on the clinical side. Leaders who recognize the split will capture near-term wins while building the governance required for the bigger prize.

Administrative wins are already real. Hospitals are seeing efficiency gains in documentation and claims processing, and physicians are seeing more patients through faster order entry, per a Mayo Clinic interview we conducted. Primary care and nursing integration are on the near horizon.

Clinical integration is the harder problem because errors are irreversible. Misrouted referrals or faulty diagnostic recommendations can have life-threatening consequences. The stakes demand transparency: every clinical recommendation must be traceable to its underlying sources. Brazilian nonprofit NoHarm’s prescription-review tool, deployed across 200+ hospitals and screening millions of prescriptions monthly, illustrates both the value at stake and the scale at which a single failure mode would harm patients. Yet accountability remains underdeveloped: federal regulators set guardrails only for AI-enabled medical devices, leaving health systems to build their own.

Bias is one of healthcare’s deepest exposures. Decades of underrepresentation in medical training and clinical trials carry forward in training data, and pattern-based specialties like radiology and pathology could amplify those inequities without active mitigation.

Privacy is governed by HIPAA, but the harder operational problem is access. 62% of hospitals report data silos across EHRs, labs, pharmacy, and claims. Agents need data to function, and silos both limit utility and elevate the risk of improper access. Encryption, anonymization, and tight controls help, but do not fix the underlying integration gap.

Healthcare should continue to move on administrative use cases, and invest the runway now in the data integration, bias auditing, and human-in-the-loop architecture that clinical adoption will require. The deliberate pace is appropriate to the stakes — and the governance built today is the moat tomorrow.

Retail: Lower Barriers to Entry

Retail is the industry where Agentic AI is moving fastest, and the one with the most to teach the rest of the economy. Light regulation, decomposable workflows, and reversible errors mean retailers can experiment at scale, iterate quickly, and build governance approaches in live conditions rather than on paper. Moving quickly captures early returns and builds the institutional muscle from which other industries can eventually learn. 

The trajectory is already visible, with 51% of retailers having deployed AI across six or more functions. Visa and AWS recently published a blueprint for shopping agents across the sales pipeline. And Mastercard’s Agent Pay, launched in 2025, lets registered digital agents browse, select, and purchase on behalf of users, a working example of the sector’s structural advantages stitched into one product.

The industry’s advantages begin with transparency: 54% of U.S. consumers say they do not care whether support comes from AI or humans, as long as it is fast. Retail can deploy without fully solving the disclosure problem first. On accountability, the returns and refunds infrastructure already handles error correction, and escalation is largely automated, leaving retailers well positioned for agentic accountability without net-new architecture.

Decision reversibility is the single biggest enabler. Most agent actions, including product selection, cart assembly, pricing, and even completed purchases, are correctable through returns, refunds, or post-transaction adjustments. OpenTable’s agentic customer service resolved 73% of cases within weeks, scaling swiftly precisely because errors carry no irreversible cost. More sophisticated controls—delegated consent, spending limits, audit trails—will mature as the sector does.

The variable to watch is stakeholder impact. Individual purchase errors are trivial, but vendor-side failures in pricing algorithms, inventory, or multi-agent workflows can cascade. Companies are responding by implementing observability tools and centralized monitoring that track agent decisions throughout the transaction lifecycle. AWS’s Amazon Connect suite is one example.

Low regulatory prescription, combined with high structural governability, means retailers are largely building governance from scratch but onto workflows that already cooperate. APIs, standardized catalogs, checkout systems, and payment protocols like AP2 make agent integration natural. Shopify is embedding governance directly into infrastructure, linking identity, payment authorization, and transaction logging, so controls live in the system rather than around it.

Retail’s tailwinds are real, but the strategic value is not just speed. It represents an opportunity to develop and stress-test governance practices that will set the template for industries with less room to experiment. Retailers who treat their deployments as a learning function, not just an efficiency play, will be the ones whose approach shapes adoption across the rest of the economy.

Supply Chain and Logistics: Consequential Transformation

Supply chain and logistics is the fastest-moving industrial sector in agentic deployment, and the industry where governance is most architecturally consequential. The same multi-agent orchestration that enables the speed also makes errors systemic. A single mispriced quote, customs misclassification, or routing error can cascade across suppliers, carriers, plants, and customers in hours. The transformation underway is consequential in both directions—outsized returns for early movers, and outsized exposure if governance lags.

The pace is real and well past the pilot stage. C.H. Robinson’s Always-On Logistics Planner runs over 30 AI agents across the shipment lifecycle, processing over three million tasks and capturing 318,000 freight-tracking updates from phone calls in September alone, with price quotes delivered in 32 seconds, where hours were the standard. UPS used Agentic AI to clear 90% of the 112,000 daily customs packages without manual intervention in September 2025. Uber Freight is running a 30+ agent platform on its AI infrastructure, which already manages roughly $20 billion in freight.

The risk profile is also qualitatively different from earlier industries. In banking, an erroneous decision affects a transaction. In supply chain, it can affect an entire network, and multi-agent networks also widen the vulnerabilities. Sensitive data on pricing, routing, customer identity, and cargo contents moves across systems, where a single compromised credential can have a far-reaching impact. Even DHL, which is using agents for customs clearance and data cleansing, has flagged that recommendations and decisions still require human-in-the-loop oversight and auditability.

This dynamic makes governance a matter of embedding engineering constraints into the system itself, rather than reviewing each decision after the fact. Leaders need human-in-the-loop checkpoints on the highest-leverage decisions—high-value quotes, customs classifications, contractual commitments—alongside mandatory audit logs and version control across all agent actions. Continuous monitoring for data drift, red-teaming of multi-agent interactions, and data validation layers before execution belong in the baseline architecture, not the bolt-on. Deloitte frames Agentic AI in the industry as a system of agents that coordinate across suppliers, plants, and logistics partners, but only within defined guardrails.
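The checkpoint pattern described above can be sketched in a few lines. In this illustrative gate (thresholds, category names, and fields are assumptions for the example, not any operator’s actual policy), low-stakes decisions auto-execute while high-leverage ones are held for human sign-off:

```python
# A minimal sketch of a human-in-the-loop checkpoint gate: agent decisions
# below a value threshold execute automatically; high-leverage categories
# (customs classifications, contractual commitments) and large-value
# decisions are queued for human approval. All decisions are retained,
# giving the mandatory audit trail a place to read from.
from dataclasses import dataclass, field

@dataclass
class Decision:
    category: str      # e.g. "quote", "customs", "routing"
    value_usd: float
    payload: str

@dataclass
class CheckpointGate:
    value_threshold: float = 50_000.0
    always_review: tuple = ("customs", "contract")
    pending: list = field(default_factory=list)
    executed: list = field(default_factory=list)

    def submit(self, d: Decision) -> str:
        if d.category in self.always_review or d.value_usd >= self.value_threshold:
            self.pending.append(d)       # held for human sign-off
            return "pending_review"
        self.executed.append(d)          # low stakes: auto-execute, still logged
        return "auto_executed"

gate = CheckpointGate()
print(gate.submit(Decision("quote", 1_200.0, "lane A->B")))        # auto_executed
print(gate.submit(Decision("customs", 300.0, "HS code 8471.30")))  # pending_review
```

The design point is that the gate sits in the execution path itself, not in an after-the-fact review queue, which is what makes the constraint architectural rather than procedural.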

Supply chain is where multi-agent governance gets stress-tested at scale. Companies that get the architecture right early will set the patterns the rest of the economy adopts when its agentic systems start orchestrating across organizational boundaries, which they will.

Three takeaways travel across all four industries. First, existing regulatory architecture is an asset, not a brake: the industries best positioned to deploy quickly are those whose systems most naturally accommodate the eight variables that shape agentic behavior. Banking’s scaffolding is proof; healthcare’s deliberate clinical pace is the right response when irreversibility and bias raise the stakes. Second, the patterns built today are the templates of tomorrow: retail’s identity frameworks and supply chain’s architectural guardrails will be borrowed by those still catching up. Third, the question is not whether to deploy but how to govern at the scale and pace each environment requires.

The renowned Enlightenment philosopher John Locke advised: “Where there is no law, there is no freedom.” When rule-making is enacted properly, its impact is not to abolish our freedoms nor restrain our lives, but rather to protect and expand our freedom by preventing others from violating our rights. AI developers, businesses, governments, and the public interest should all be on the same side across parties and continents on this front. 

Done well, governance is what makes adoption durable. The companies that establish it intelligently, neither uniformly fast nor uniformly slow, are the ones whose agentic systems will still be running and trusted five years from now.

This article is part three of a four-part series from the Yale Chief Executive Leadership Institute (CELI) on the state of Agentic AI adoption across industries and sectors. The research is designed to help CEOs understand the current and expected pace at which agentic systems are being deployed—and the strategic decisions that pace forces on them. Over the past six months, CELI researchers analyzed hundreds of company materials and industry analyses and conducted dozens of conversations with senior technology leaders across the U.S. The industries analyzed include Financial Services, Consumer Packaged Goods, Food & Beverage, Healthcare, Insurance, Manufacturing, Professional Services, Real Estate & Housing, Retail, Supply Chain & Logistics, Telecommunications, and Travel & Hospitality, as well as the public sector. The series examines four implications of the findings: labor market effects, data infrastructure readiness, governance and regulatory policy, and customer experience.

With research contribution from Catherine Dai, Zander Jeinthanuttkanont, Yevheniia Podurets, Jasmine Garry, Johan Griesel, Andrew Alam-Nist, Peter Yu, and Christian Ruiz Angulo

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.


Jeffrey Sonnenfeld is Lester Crown Professor of Leadership Practice at the Yale School of Management and founder of the Yale Chief Executive Leadership Institute. A leadership and governance scholar, he created the world’s first school for incumbent CEOs and he has advised five U.S. presidents across political parties. His latest book, Trump’s Ten Commandments, will be published by Simon & Schuster in March 2026.
Stephen Henriques is a senior research fellow of the Yale Chief Executive Leadership Institute. He was a consultant at McKinsey & Company and a policy analyst for the governor of Connecticut. 
Dan Kent and Holden Lee are research assistants with the Yale Chief Executive Leadership Institute.


© 2026 Fortune Media IP Limited. All Rights Reserved.