What 50 Years of Financial Regulation Teaches Us About Governing AI
Welcome to The Regulated Machine. This is our first formal post! We’re not launching with fanfare, just starting to put ideas on paper. We’ll settle into a regular cadence as we go. For now, here’s something worth thinking about.
Every few months, another think tank publishes a framework for “responsible AI governance” like they’ve discovered fire. Risk tiers! Audit trails! Stress testing! Independent oversight!
Congratulations. You’ve just described the Basel Accords.
The financial services industry has spent half a century building, and iterating on, regulatory frameworks that manage exactly the kind of risks AI now presents: opaque models making high-stakes decisions, systemic risk from interconnected systems, and the tension between innovation and consumer protection. The parallels aren’t vague. They’re structural.
Risk tiers already exist
The EU AI Act’s four-tier risk classification gets treated like a novel invention. It’s not. Basel I established risk-weighted asset categories in 1988. The concept is identical: not all risks are equal, so not all activities deserve the same regulatory scrutiny.
Basel assigned different capital requirements based on asset risk class. The EU AI Act assigns different compliance obligations based on how much harm an AI system can cause. Same architecture, different domain.
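To make the mapping concrete, here's a minimal sketch of Basel-style tiering applied to an AI inventory, in Python. The tier names and the obligations attached to each are illustrative placeholders, not the EU AI Act's actual text:

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Illustrative obligations per tier -- loosely modeled on the Act's structure,
# not a restatement of the regulation itself.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["do not deploy"],
    RiskTier.HIGH: ["conformity assessment", "audit trail", "human oversight", "ongoing monitoring"],
    RiskTier.LIMITED: ["transparency notice to users"],
    RiskTier.MINIMAL: ["voluntary code of conduct"],
}

@dataclass
class AISystem:
    name: str
    tier: RiskTier

def compliance_plan(system: AISystem) -> list[str]:
    """Return the obligations attached to a system's risk tier."""
    return OBLIGATIONS[system.tier]

credit_model = AISystem("retail credit scoring", RiskTier.HIGH)
print(compliance_plan(credit_model))
```

The point isn't the code; it's that the lookup table is the whole regulatory architecture. Classify once, and the obligations follow mechanically, exactly the way capital requirements follow from an asset's risk weight.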
The U.S. Treasury made this connection explicit in February 2026, releasing a Financial Services AI Risk Management Framework that directly adapts NIST’s AI Risk Management Framework to financial services. The document reads like a remix of existing banking supervision principles, because that’s exactly what it is.
Audit trails aren’t a new idea
SOX has required public companies to maintain detailed audit trails over financial reporting since 2002. Every material decision needs documentation. Every control needs testing. Every executive signs off personally.
Now look at what AI governance frameworks demand: explainability, traceability, documentation of data sources and model decisions. SOX already solved this problem for financial reporting. The requirement that “the AI said so” isn’t sufficient documentation? That’s just Section 404 wearing a different hat.
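In practice that's unglamorous plumbing: every automated decision gets a record of the model version, the inputs, the output, and a named human owner. A rough sketch, with invented field names and an append-only JSON Lines file standing in for whatever system of record you actually use:

```python
import hashlib
import json
from datetime import datetime, timezone

def log_decision(model_id: str, model_version: str, inputs: dict,
                 output: dict, owner: str, path: str = "audit_log.jsonl") -> str:
    """Append one decision record to an append-only audit log (JSON Lines)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        "inputs": inputs,                # what the model saw
        "output": output,                # what the model decided
        "accountable_owner": owner,      # the human who signs off
    }
    # Hash the record so later tampering is detectable.
    record["record_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record["record_hash"]

log_decision(
    "credit_scoring", "2.3.1",
    inputs={"applicant_id": "A-1042", "features_ref": "applications/A-1042.json"},
    output={"decision": "decline", "score": 0.31, "top_factors": ["dti", "delinquencies"]},
    owner="head_of_retail_credit",
)
```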
The GAO’s 2025 report on AI in financial services found that federal regulators are already applying existing model risk management frameworks to AI systems. They didn’t need new legislation. SR 11-7, the Fed’s model risk management guidance from 2011, covers model validation, ongoing monitoring, and outcome analysis. It maps almost perfectly to AI model governance.
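Outcome analysis, in the SR 11-7 sense, translates almost word for word: compare what the model predicted with what actually happened, on a schedule, and escalate when the gap widens. A toy sketch, with a made-up tolerance threshold:

```python
import numpy as np

def outcome_analysis(predicted_default_prob: np.ndarray,
                     actual_defaults: np.ndarray,
                     tolerance: float = 0.02) -> dict:
    """Back-test predictions against realized outcomes, SR 11-7 style."""
    predicted_rate = float(predicted_default_prob.mean())
    realized_rate = float(actual_defaults.mean())
    gap = realized_rate - predicted_rate
    return {
        "predicted_rate": predicted_rate,
        "realized_rate": realized_rate,
        "gap": gap,
        # Placeholder threshold; real ones come from the model risk policy.
        "escalate": abs(gap) > tolerance,
    }

report = outcome_analysis(
    predicted_default_prob=np.array([0.05, 0.12, 0.03, 0.40, 0.08]),
    actual_defaults=np.array([0, 0, 0, 1, 0]),
)
print(report)
```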
Stress testing works
After 2008, Dodd-Frank mandated stress tests for systemically important financial institutions. Banks now run annual scenarios covering severe recession, market shock, and counterparty failure, and they have to prove they can survive each one.
AI systems need the same treatment. What happens when your model encounters distribution shift? Adversarial inputs? Data pipeline failures? The Bank for International Settlements has already flagged that AI model opacity could contribute to systemic risk in financial markets, echoing the same concerns that drove post-crisis stress testing requirements.
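A stress run for a model doesn't need to be exotic. Here's a minimal sketch assuming a toy scoring model; the scenarios and the pass threshold are invented for illustration, not pulled from any supervisory standard:

```python
import numpy as np

# Toy scoring model: approval probability from [income, tenure, utilization].
def toy_model(X: np.ndarray) -> np.ndarray:
    weights = np.array([0.5, 0.3, -0.4])
    return 1 / (1 + np.exp(-(X @ weights)))

def stress_test(model, X, scenarios, max_degradation=0.10):
    """Run the model under each adverse scenario and flag excessive drift
    in its average score. The 10% threshold is a placeholder, not a standard."""
    baseline = model(X).mean()
    results = {}
    for name, transform in scenarios.items():
        stressed = model(transform(X.copy())).mean()
        degradation = abs(stressed - baseline)
        results[name] = {
            "avg_score": round(float(stressed), 3),
            "degradation": round(float(degradation), 3),
            "passed": bool(degradation <= max_degradation),
        }
    return results

X = np.random.default_rng(0).normal(size=(1000, 3))
scenarios = {
    # Distribution shift: incomes fall sharply.
    "severe_recession": lambda X: X * np.array([0.5, 1.0, 1.0]),
    # Data pipeline failure: one feature column arrives as zeros.
    "pipeline_failure": lambda X: np.concatenate([X[:, :2], np.zeros((len(X), 1))], axis=1),
}
print(stress_test(toy_model, X, scenarios))
```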
The methodology exists. The infrastructure exists. The question is whether AI governance bodies will adopt it or insist on building from scratch.
Capital requirements = skin in the game
Basel’s capital requirements force banks to hold capital proportional to their risk exposure. You want to take big risks, you need big buffers. This principle has no direct analog in AI governance yet. It needs one.
What if companies deploying high-risk AI systems were required to maintain insurance reserves or post bonds proportional to potential harm? It would solve two problems at once: creating financial incentives for careful deployment and ensuring funds exist when things go wrong.
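Operationally, this could look like expected-loss provisioning in banking: probability of a harmful failure, times exposure, times severity, plus a buffer. Every number and parameter below is hypothetical:

```python
def required_reserve(annual_incident_prob: float,
                     users_exposed: int,
                     harm_per_user: float,
                     buffer_multiplier: float = 1.5) -> float:
    """Expected-loss-style reserve: probability x exposure x severity, plus a buffer.

    All inputs are illustrative; a real regime would prescribe how each is estimated.
    """
    expected_loss = annual_incident_prob * users_exposed * harm_per_user
    return expected_loss * buffer_multiplier

# Hypothetical high-risk deployment: 2% annual chance of a harmful failure,
# 500,000 affected users, $40 average remediation cost per user.
print(f"${required_reserve(0.02, 500_000, 40.0):,.0f}")  # -> $600,000
```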
The governance committee pattern
Financial institutions have risk committees, audit committees, and compliance functions that operate independently from business lines. The people who check the work aren’t the people who do the work.
The AI governance world is slowly arriving at the same conclusion. The FCA in the UK launched an AI Lab specifically to test models and set supervisory expectations. Financial regulators are participating in cross-border coordination through the Basel Committee on Banking Supervision. These patterns (independent oversight, cross-jurisdictional coordination, specialized supervisory bodies) are all borrowed from decades of financial regulation.
What this means for practitioners
If you’re building AI governance programs in 2026, stop starting from zero. The financial regulatory stack offers a proven blueprint.
Risk-tier your AI applications the way Basel risk-weights assets. Build audit trails the way SOX requires for financial controls. Stress test your models the way Dodd-Frank stress tests banks. Create independent oversight the way every major financial institution separates risk from revenue.
The AI governance conversation would move a lot faster if fewer people were trying to be original and more were willing to copy what works.
Enjoyed this? Subscribe to our newsletter — we write about AI in regulated industries, for people who actually work in them.
