Commentary | Investment Banking

The 19th century banking problem that AI hasn’t solved yet

By Silvio Savarese and Sabastian Niles
March 20, 2026, 9:30 AM ET
Silvio Savarese is EVP and Chief Scientist, Salesforce AI Research. Sabastian Niles is President and Chief Legal Officer, Salesforce. 
The London Clearing House issue still hasn't been figured out. (Getty Images)

London, 1832. In a modest room on Lombard Street, clerks from thirty-one competing banks gather each afternoon. By 5 p.m., they begin the final settlement process. They are not there to negotiate individual transactions; that would be impossibly slow. They are participating in something revolutionary: the London Bankers’ Clearing House. 


The technical challenge had already been solved. Banks understood double-entry bookkeeping. They could track debits and credits. They knew how to calculate a balance. The real problem was architectural: How do you enable daily transactions between competitors who fundamentally don’t trust each other? How do you create binding agreements when no single authority has enforcement power over all parties? 

The Clearing House worked not through regulation or legal mandate, but through something more powerful: collective reciprocity backed by reputation. Every bank needed access to the clearing system to function. When a bank violated settlement rules (failing to honor obligations, manipulating exchange rates, or engaging in predatory practices) the response was swift and coordinated. The other banks simply stopped transacting with the violator. 

No lawsuits. No regulatory hearings. Just immediate expulsion from the network that made modern banking possible. 

What made this system work wasn’t just the technical protocols for exchanging paper checks. It was the trust architecture: clear standards for acceptable behavior, transparent verification of compliance, shared consequences for violations, and most critically, registered identity. You couldn’t participate anonymously. Every institution’s reputation was permanently attached to its actions. 

We are at a similar inflection point in agentic AI

Except this time, the transactions aren’t between bankers who meet face to face each afternoon. They are between AI agents negotiating thousands of times daily, across companies, industries, and national boundaries — all without human supervision in the room.

The technical protocols are being built. What has not been built, by anyone, is the trust architecture that makes negotiation possible between competing artificial minds. 

The legal frameworks for this do not exist. The governance standards have not been articulated. The question of how an agent earns the right to negotiate, and maintains that right over time, remains unanswered. 

What follows is our shared conviction, forged at the intersection of research lab and legal counsel: the architecture for trusted agentic negotiation — for trusted agent commerce — has not yet been built. And the time to begin is now.

The problem that has not yet been named 

Most of the current discourse on AI governance focuses on single-agent behavior: safety, alignment, hallucinations, bias. These are important. But they miss what is about to become the defining challenge of enterprise AI. 

For the first time in history, machines must do something they have never been asked to do before: exercise judgment. 

Not follow rules. Navigate standards. 

Rules are deterministic. A rule says: do not disclose customer data. An agent can follow that rule. But business negotiation doesn’t operate by rules. It operates by standards — contextual, interpretive, value-laden. When does strategic positioning become manipulation? When does flexibility build trust, and when does it signal weakness? When should an agent hold firm, and when should it concede? These are judgment calls. They require something beyond computational compliance. 

“The most important distinction in law is between rules and standards. Rules are precise and predictable; standards are flexible and contextual. We’ve spent decades building AI that follows rules. We’re now asking it to navigate standards — and that requires an entirely different architecture.” 

— Sabastian Niles, President & Chief Legal Officer, Salesforce 

Here is what makes this moment so consequential: existing AI models were not built for this. They were trained to be helpful, agreeable, and conversational. They are optimized for human-agent interaction, not agent-to-agent negotiation where both sides represent competing interests. At Salesforce AI Research, we have spent months stress-testing these interactions before they reach the market—and our findings are instructive to researchers and the legal field alike. 

“Today’s models were not trained to hold a position or make a strategic concession. They were not trained to weigh consequences or to understand that a failed negotiation doesn’t just end a conversation, it can lose a deal, trigger financial exposure, or create legal liability. They cannot read intent, assess context, or feel the weight of what’s at stake. And that is precisely what agent-to-agent negotiation demands.” 

— Silvio Savarese, EVP & Chief Scientist, Salesforce AI Research 

Two agents, both trained to be accommodating, can spiral into a feedback loop of excessive agreeableness. We call this “echoing behavior,” covered in depth in our previous article The A2A Semantic Layer: Building Trust into Agent-to-Agent Interaction. For example, a consumer’s agent contacts a retailer’s agent to return a pair of shoes. What should be a five-minute exchange devolves into twenty minutes of both agents enthusiastically agreeing that the consumer should keep the shoes that don’t fit, pay a restocking fee anyway, and perhaps buy a second pair to demonstrate their appreciation for the retailer’s “fair policies.” 

This is a comedy, at best, when it involves shoes. It is a crisis when it involves healthcare billing disputes, supply chain contracts, or financial services agreements. 
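To make the failure mode concrete, here is a deliberately simplified sketch of a guard against echoing behavior. Everything in it (the Turn record, the EchoGuard class, the concession flag) is hypothetical illustration rather than a shipping API; the idea is simply that a transcript in which both agents keep giving ground should trip an escalation.

```python
# Hypothetical sketch: detecting "echoing behavior" in an agent-to-agent
# exchange. Names here are illustrative, not a real negotiation API.
from dataclasses import dataclass

@dataclass
class Turn:
    speaker: str      # e.g. "consumer_agent" or "retailer_agent"
    concession: bool  # did this turn give ground rather than hold a position?

class EchoGuard:
    """Flags a negotiation when both sides concede repeatedly in a row."""

    def __init__(self, max_mutual_concessions: int = 3):
        self.max_mutual = max_mutual_concessions

    def should_escalate(self, transcript: list[Turn]) -> bool:
        streak = 0
        for turn in transcript:
            # Count consecutive concessions; any held position resets the streak.
            streak = streak + 1 if turn.concession else 0
            if streak >= 2 * self.max_mutual:  # both agents conceding in turn
                return True
        return False
```

A guard like this is trivial on purpose: the point is that agreeableness loops are detectable from the transcript alone, without inspecting either model.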

The problem is not computational power. It is foundational architecture. And it cannot be solved by building better models alone. 

Agents will negotiate in the shadow of humanity 

There is a concept in legal theory called negotiating in the shadow of the law — the idea that private parties work out agreements, not in isolation, but against the backdrop of what courts would enforce, what regulations require, what social norms expect. The shadow shapes the negotiation even when litigation never occurs. 

AI agents will operate in a deeper shadow still. They will operate in the shadow of humanity; within systems built by humans, for humans, now extended to artificial minds that were never part of the original design. The contracts, the norms, the institutional frameworks, the relational capital that makes commerce function — none of it was created with agent-to-agent interaction in mind. 

“Agents will not just inherit our technologies. They will inherit our institutions. They will negotiate against and within legal systems, commercial norms, and relational frameworks that were never designed for them. We now require something much deeper than engineering. We require institutional imagination.” 

— Sabastian Niles, President & Chief Legal Officer, Salesforce 

And here’s the complication that separates this moment from every prior technological inflection point: those London bankers two centuries ago operated with deterministic rules. If Bank A owed Bank B exactly £500, that debt was £500 — no variance, no interpretation required. Modern AI agents don’t operate on deterministic rules. They explore probability distributions. Run the same negotiation scenario twice, and you may get different outcomes.

At Salesforce AI Research, we call this the “wriggling problem” — the inherent variance in AI outputs that creates a gap between systems we know how to govern and systems we are still learning to trust. That gap is not just a technical challenge. It is a legal one. It is an ethical one. It is the territory we intend to map. 

“The wriggling problem is real, and it matters enormously for enterprise adoption. You can’t audit a system that produces different outputs from identical inputs using the same frameworks you’d apply to deterministic software. We need new evaluation frameworks — and we need them before the transactions become consequential.” 

— Silvio Savarese, EVP & Chief Scientist, Salesforce 
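As a toy illustration of the wriggling problem, consider running the same scenario many times and auditing the distribution of outcomes rather than any single run. The run_negotiation function below is a stand-in we invented for a probabilistic agent, not a real system:

```python
# Illustrative sketch of the "wriggling problem": the same negotiation
# scenario, run repeatedly against a non-deterministic agent, yields a
# distribution of outcomes rather than one answer. run_negotiation is a
# hypothetical stand-in for a real agent call.
import random
import statistics

def run_negotiation(list_price: float, seed: int) -> float:
    """Stand-in for a probabilistic agent: the negotiated price varies per run."""
    rng = random.Random(seed)
    return list_price * rng.uniform(0.85, 0.95)  # agent concedes 5-15%

def outcome_spread(list_price: float, n_runs: int = 100) -> float:
    """Governance can't audit one run; it must measure the distribution."""
    outcomes = [run_negotiation(list_price, seed=i) for i in range(n_runs)]
    # Relative spread: standard deviation as a fraction of the mean outcome.
    return statistics.pstdev(outcomes) / statistics.mean(outcomes)
```

The evaluation frameworks the quote calls for would, in effect, set acceptance bounds on statistics like this spread, rather than on any individual output.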

What the trust architecture requires 

Building agentic trust architecture for agent-to-agent negotiations is not only an engineering problem. It is a legal one, too. It requires scientific rigor, legal expertise, and a clear-eyed understanding of where human judgment must remain sovereign. 

Based on our research and the stress-testing we have done, we believe this architecture requires four foundational elements. 

Registered identity and reputation over time. Just as those London bankers could not participate anonymously in the Clearing House, AI agents cannot negotiate across organizational boundaries without verifiable identity. Our team pioneered the concept of Agent Cards — standardized metadata that communicates an agent’s capabilities, limitations, compliance posture, and authority to commit on behalf of its principal. Google adopted this concept in their A2A specification as the foundation for capability discovery. But Agent Cards are only the beginning. What the field needs next is reputation infrastructure: a way for agents to build, demonstrate, and lose standing over time, not just in a single transaction. Perfect performance in a single interaction is not the bar; reliability across thousands of them is. Only with that combination of high capability and consistency can we achieve Enterprise General Intelligence. As with humans, reputation for agents is measured in aggregate — and the agents that earn the right to keep negotiating will be those that demonstrate consistency at scale.
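To make the Agent Card idea concrete, here is a minimal sketch of what such metadata might look like in code. The field names are our illustration, not the A2A specification’s schema:

```python
# A minimal sketch of an Agent Card, assuming the shape described above:
# capabilities, limitations, compliance posture, and authority to commit.
# Field names are hypothetical, not the A2A specification.
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentCard:
    agent_id: str                  # verifiable, registered identity
    principal: str                 # the organization the agent can bind
    capabilities: tuple[str, ...]  # e.g. ("pricing", "returns")
    compliance_posture: str        # declared posture, e.g. "SOC 2 (declared)"
    commit_limit_usd: float        # ceiling on binding commitments

    def may_commit(self, amount_usd: float) -> bool:
        """Authority check before the agent binds its principal."""
        return 0 < amount_usd <= self.commit_limit_usd
```

The card is frozen (immutable) by design: identity metadata that an agent could rewrite mid-negotiation would defeat its purpose.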

Boundaries, not scripts. The instinct when governing AI is to specify: if scenario X, then response Y. That approach worked for rule-based automation. It fails for probabilistic agents operating in complex, contextual business environments. The better model, borrowed from how we govern human professionals, is to establish principles and boundaries rather than decision trees. 

We don’t give surgeons scripts for every possible patient presentation. We establish standards of care, professional ethics, and oversight mechanisms that evaluate whether judgment was exercised appropriately. Agent governance must work the same way: wide latitude within defined boundaries, with evaluation frameworks that assess how the agent decided, not just what it decided. Leading AI providers are beginning to grapple with this within single-agent systems. The challenge multiplies enormously when agents from different organizations, trained on different data, representing different interests, must navigate shared standards together.

Structured accountability. When an agent makes a consequential decision — commits to a price, accepts a contract term, escalates a dispute — there must be no ambiguity about who is accountable. This is not a technical requirement. It is a governance one. We envision the emergence of new organizational roles: AI operations officers and agent managers with authority over agent deployments and accountability when something goes wrong. We envision audit trails sophisticated enough to survive legal scrutiny — not just logs of what happened, but structured records of how decisions were made, what information was considered, what alternatives were evaluated. Accountability must be traceable. Accountability must be human. 
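One hypothetical shape for such an audit record, structured enough to capture how a decision was made and who is accountable, might look like this (all names are illustrative):

```python
# Sketch of the structured audit record described above: not just what
# happened, but how the decision was made, what was considered, what was
# rejected, and which human is accountable. All field names hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class AuditRecord:
    agent_id: str
    decision: str                          # what was decided
    rationale: str                         # how it was decided
    inputs_considered: tuple[str, ...]     # information the agent weighed
    alternatives_rejected: tuple[str, ...] # options evaluated and set aside
    accountable_human: str                 # accountability must be human
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
```

Note what the record is not: a raw log. It is a claim about reasoning, written at decision time, which is what makes it auditable after the fact.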

Calibrated escalation. Perhaps the most critical capability of all: AI agents must know when to stop. Not every negotiation can or should be resolved autonomously. The trust architecture must include clear escalation protocols that distinguish between decisions within an agent’s authority and decisions that require human review — and that calibrate the threshold to consequence. Routine decisions should flow without friction. High-stakes decisions involving, for example, regulatory compliance, major contractual commitments, or significant financial exposure should automatically surface to human judgment. But these are also strategic design decisions: should the agent escalate mid-negotiation to check in? Or only when it is time to seek final review and approval of the ostensibly final, fully negotiated arrangement? Can human review wait until after the fact, via periodic audits, without requiring human signoff before entry into a mutually binding agreement?

Agents that escalate too often defeat the purpose of automation. Agents that never escalate are a liability. The architecture must be precise about where the line falls. 
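In sketch form, a calibrated escalation policy is a function from consequence to a review decision. The categories and dollar threshold below are invented for illustration:

```python
# Hypothetical sketch of calibrated escalation: a policy keyed to
# consequence, not to every decision. Categories and thresholds are
# illustrative, not a real governance standard.
HIGH_STAKES_CATEGORIES = {"regulatory", "major_contract", "financial_exposure"}

def requires_human_review(category: str, amount_usd: float,
                          autonomy_limit_usd: float = 25_000.0) -> bool:
    """Routine decisions flow without friction; high-stakes ones surface."""
    if category in HIGH_STAKES_CATEGORIES:
        return True  # always escalate, regardless of amount
    # Otherwise, escalate only when the financial consequence exceeds the
    # agent's autonomy limit.
    return amount_usd > autonomy_limit_usd
```

The hard design work is not this function; it is deciding, deliberately and in advance, what belongs in the high-stakes set and where the autonomy limit sits.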

Where this will emerge first, and what leaders can do now 

Agent-to-agent ecosystems are not a distant possibility. They are emerging across three domains right now. 

Healthcare billing and authorization: Patient advocate agents will coordinate with insurance agents for pre-authorization and claims processing. Recently, we announced eVerse, our simulation environment for enterprise-ready voice and text agents. Together with UCSF Health, we are already training agents on thousands of synthetic scenarios and stress-testing edge cases before they reach real patients. When human lives are at stake, escalation protocols are not optional. They are essential. 

Financial services: Imagine treasury agents negotiating credit facilities, foreign exchange transactions, and procurement agreements with financial institutions’ agents. These interactions carry fiduciary responsibilities, regulatory requirements, and relationship dynamics that cannot be fully specified in advance. An agent that pushes for better pricing at the cost of a credit relationship is a major reputational liability. 

Supply chain coordination: Picture manufacturing agents negotiating delivery schedules with logistics providers’ agents, adjusting in real time for disruptions while maintaining service agreements. The governance challenge: how do you ensure both sides share information honestly about capacity constraints, without surrendering competitive intelligence? 

In each of these domains, the organizations that will lead are not necessarily those with the most sophisticated models. They will be those that build the most sophisticated boundaries — and invest in the trust architecture before the transactions become consequential. 

“The challenge on the horizon is not simply that agents may behave incorrectly. It is that systems composed of many autonomous agents can produce outcomes that no individual system was explicitly designed to generate. When negotiation moves from humans to networks of interacting agents, governance must shift from controlling individual tools or actors to governing the interaction ecosystem itself.” 

— Sabastian Niles, President and Chief Legal Officer, Salesforce 

What that means in practice: 

Define your standards now. Again, not rules. Standards. What values do you want your agent to embody in negotiation? What boundaries are non-negotiable? What level of autonomy are you comfortable with, and at what threshold does human review become mandatory? These are strategic choices, not technical specifications. Make them deliberately. 

Build for auditability. Every consequential agent interaction should generate a structured record: what was decided, how it was decided, what alternatives were considered, when escalation was triggered. Design for this from the beginning. Retrofitting audit infrastructure is far harder than building it in. 

Invest in reputation infrastructure. Having your agent perform well today is a start, but it is not sufficient. The question is whether it will perform reliably over ten thousand interactions. Begin measuring negotiation quality in aggregate, not just instance by instance.
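One simple, hypothetical way to score reputation in aggregate is a smoothed success rate, so that a single good interaction cannot mint a strong reputation:

```python
# Sketch of measuring negotiation quality in aggregate, as suggested above.
# One good outcome moves the score little; reliability over many
# interactions is what accumulates. Names and the prior are illustrative.
class ReputationLedger:
    def __init__(self):
        self.successes = 0
        self.total = 0

    def record(self, success: bool) -> None:
        self.total += 1
        self.successes += int(success)

    def score(self, prior_weight: int = 50) -> float:
        """Smoothed success rate: new agents start near 0.5, not 1.0."""
        return (self.successes + prior_weight / 2) / (self.total + prior_weight)
```

This is the standard-setting question in miniature: how much evidence should standing require, and how quickly should it be lost?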

Partner with organizations solving this now. The trust architecture for A2A commerce will be shaped by those who engage with the problem early. Standards organizations, regulatory bodies, and industry consortia are beginning to form around these questions. The organizations that help write the standards will have a meaningful advantage over those that simply comply with them.

The stakes have never been higher. 

Negotiation is not just an exchange of goods and services. It is a system of trust — accumulated over centuries, encoded in institutions, enforced by reputation, backstopped by law. Every successfully completed transaction adds to a shared infrastructure that makes the next transaction possible. Every violation of trust erodes it. 

AI agents are about to enter that system at scale. Millions of interactions per day, across organizational and national boundaries, involving consequential decisions about prices, terms, compliance, and relationships. As autonomous agents begin to participate in these systems at scale, we face a fundamental question: will our institutions adapt in ways that strengthen trust, or will they be strained by forms of automation they were never designed to govern? The trust architecture we build now will shape whether that participation strengthens the system or degrades it. 

“You can have the models, the compute capacity, the capital. But what can never become a commodity is legitimacy. Trust is infrastructure. And like all infrastructure, it requires deliberate investment before you need it — not after you discover you don’t have it. That’s exactly the kind of problem Salesforce is built to help solve.” 

— Sabastian Niles, President and Chief Legal Officer, Salesforce 

The London Bankers’ Clearing House didn’t emerge from regulation. It emerged from shared necessity — the recognition that no individual institution could fully solve the problem alone, and that the collective benefit of a trusted system outweighed the competitive cost of contributing to it. 

We believe we are at that same moment today. 

The frameworks do not yet exist. The governance standards have not been written. The reputation infrastructure has not been built. The legal architecture for machines that negotiate in the shadow of humanity is, at this moment, largely uncharted territory. The time to engage is now. 


This thinking is the product of an unlikely partnership: A Chief Scientist and a Chief Legal Officer who discovered that the hardest problems in agentic AI live at the intersection of their disciplines. We would also like to thank Adam Earle, Itai Asseo, Portia Bamiduro, Kylie Mojaddidi and Karen Semone for their insights and contributions.

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.

