
🛟 State Fragmentation Has Become an AI Power Play for Banks

Smart institutions such as JPMorgan, Citi, and Morgan Stanley are using uneven geographic regulation to move faster, not slower. Here’s how.

👋 Welcome, thank you for joining us here :)

The AI Check In is your weekly power play to navigate AI governance and strategy in banking and finance.

Not yet a subscriber?
Join hundreds of finance professionals staying ahead of AI regulation every week.

What to expect this edition:

  • 🛟 Need to Know: Which states matter in the U.S. AI regulatory map, and why?

  • 🥷 Deep Dive: Strategic advantage in a fragmented AI landscape

  • ⚔️ Instruments of Mastery: LexisNexis regulatory compliance AI

  • 📈 Trends to Watch: How fragmentation reshapes financial AI deployment across compliance havens

Rewrite the rules. Outmaneuver competitors. Stay two steps ahead of regulators.

Let the games begin.

🛟 Need to Know: Which states matter in the U.S. AI regulatory map, and why?

With no federal AI governance framework, individual U.S. states are enacting their own rules, creating a regulatory patchwork with real cost and compliance consequences for national banks.

California, New York, and Illinois are setting the pace for AI regulation. California’s CPRA includes provisions for algorithmic transparency and consumer rights. New York’s Department of Financial Services has issued formal guidance on fairness and explainability in AI underwriting. Illinois enforces transparency in AI-based hiring decisions through its Artificial Intelligence Video Interview Act. Together, these states form the frontline of financial AI oversight in the U.S.

Since the global financial crisis, compliance costs for banks have risen by more than 60%, with jurisdictional fragmentation—particularly around AI—adding to the burden.

Operational fatigue is setting in. Operating across multiple states now requires banks to reconcile overlapping or conflicting rules, retrain teams, and revalidate AI systems on an ongoing basis. For CFOs and risk leads, this strain is no longer theoretical—it must now be actively managed.

But inaction isn’t an option. State attorneys general in New York, California, and Illinois have signaled increased scrutiny of opaque AI deployments in consumer lending and underwriting. Expect rising pressure from auditors in 2025.

🥊 Your Move:

  1. Track State-Level Mandates: Assign a team to monitor regulatory developments in California, New York, and Illinois, where financial AI oversight is most advanced.

  2. Establish Executive Oversight: Set up a cross-jurisdiction AI governance committee to oversee ethical deployment and regulatory alignment. Watch out for internal resistance from product teams or even vendors.

  3. Integrate Algorithm Impact Assessments: Require bias, fairness, and explainability reviews for all AI deployments, benchmarked to the most stringent state-level requirements.
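To make point 3 concrete, here is a minimal sketch of one check that an algorithm impact assessment might include: a demographic parity comparison across applicant groups for a credit-approval model. Everything in it—the data, the function name, and the 0.2 tolerance (loosely echoing the familiar "four-fifths" rule of thumb)—is an illustrative assumption, not a reference implementation of any state’s requirements.

```python
# Illustrative sketch only: data, names, and threshold are assumptions,
# not a reference implementation of any regulator's mandated test.

def demographic_parity_gap(approvals, groups):
    """Return per-group approval rates and the largest pairwise rate gap."""
    rates = {}
    for g in set(groups):
        decisions = [a for a, grp in zip(approvals, groups) if grp == g]
        rates[g] = sum(decisions) / len(decisions)
    gap = max(rates.values()) - min(rates.values())
    return rates, gap

# Hypothetical model decisions (1 = approved) and applicant group labels.
approvals = [1, 1, 1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rates, gap = demographic_parity_gap(approvals, groups)

# Flag the model for human review if the gap exceeds an internal
# tolerance (assumed to be 0.2 here).
flagged = gap > 0.2
print(rates, gap, flagged)
```

In practice a review benchmarked to the most stringent state-level requirements would cover far more than a single metric—explainability artifacts, opt-out handling, and documentation among them—but a small, auditable check like this is a reasonable starting unit for a governance committee to standardize on.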

🥷 Deep Dive: Strategic Advantage in a Fragmented AI Landscape

Leveraging uneven rules for speed, scale, and regulatory cover

In the absence of comprehensive federal AI legislation, individual U.S. states have enacted their own regulations, leading to a complex and varied landscape for financial institutions. This patchwork affects how banks and financial services companies deploy AI technologies across jurisdictions.

California continues to lead with the most detailed AI governance proposals under the CPRA. Proposed rules targeting automated decision-making technologies (ADMT) would require financial institutions to conduct algorithmic impact assessments, provide opt-out mechanisms for consumers, and maintain explainability for significant decisions — a rising bar for compliance teams in customer-facing operations.

New York has taken an enforcement-oriented approach. DFS Circular Letter No. 7 directs insurers and financial entities to embed fairness, transparency, and bias mitigation into AI systems used in underwriting and pricing. In addition, NYDFS’s 2024 cybersecurity bulletin highlights new obligations around AI-specific risk assessments and staff training, signaling that governance gaps will face direct scrutiny.

Illinois regulates AI in hiring through the Artificial Intelligence Video Interview Act (AIVIA), which mandates consent and transparency. Recent amendments to the Illinois Human Rights Act prohibit AI-driven discrimination and require AI use disclosures in employment decisions.

Colorado and Virginia have added to the patchwork: Colorado’s SB 24-205 (the Colorado AI Act) imposes duties on developers and deployers of high-risk AI systems to guard against algorithmic discrimination, while Virginia’s Consumer Data Protection Act includes transparency and profiling opt-out rights around automated decision-making.

Strategic Jurisdiction Arbitrage

Some financial institutions are exploring geographic restructuring of AI operations to align with differing regulatory environments.
