🛟 AI Explainability: When to Invest and When to Ignore
Discover how banks are tiering explainability by risk, customer impact, and regulatory exposure. Plus: The CoT prompting risk no one talks about and how to spot it in production.
👋 Welcome, thank you for joining us here :)
The AI Check In is your weekly power play to navigate AI governance and strategy in banking and finance.
What to expect this edition:
🛟 Need to Know: Explainability Is Now a Governance Signal
🥷 Deep Dive: AI Explainability vs. Profitability - When to Invest and When to Ignore
⚔️ Instruments of Mastery: Fiddler AI - Operationalizing Model Monitoring and Explainability
📈 Trends to Watch: When AI Reasoning Goes Off Track
Rewrite the rules. Outmaneuver competitors. Stay two steps ahead of regulators.
Let the games begin.
🛡️ Enjoying AI Check In? Forward this to a colleague who needs to stay two steps ahead.
📬 Not subscribed yet? Sign up here to receive future editions directly.
🛟 Need to Know: Explainability Is Now a Governance Signal

AI explainability is no longer a formal regulatory requirement in the U.S., but it remains under active supervisory interest. In May 2022, the OCC affirmed that banks remain responsible for AI-driven decisions—including those made by third-party vendors. In 2023 testimony, the agency noted that explainability remains a supervisory consideration in credit, fraud, and model risk contexts.
There is no explicit mandate under U.S. law for explainability. However, regulatory expectations are embedded in model risk guidance (SR 11-7), consumer protection enforcement (CFPB), and fair lending standards (e.g., ECOA, UDAAP). These frameworks require documentation, auditability, and defensibility of model outputs—particularly where customer outcomes are affected.
Globally, the EU AI Act (approved 2024) established mandatory explainability for high-risk applications like lending and insurance. This has implications for multinational U.S. institutions, which are now standardizing explainability controls across geographies.
For U.S. boards and CFOs, explainability is a governance signal rather than compliance optics: it should reduce audit friction, defuse model risk events, and protect reputation.
🥊 Your Move:
1. Codify explainability triggers: Update your model governance framework to specify when explainability is required, especially in credit, fraud, or pricing models.
2. Strengthen third-party oversight: Require vendors to provide model documentation, input/output behavior summaries, and rationale explanations for important decisions.
3. Equip the board and audit committee: Provide concise briefings on where explainability is applied, why it matters operationally and legally, and where it is being intentionally deprioritized.
🥷 Deep Dive: AI Explainability vs. Profitability — When to Invest and When to Ignore

As regulatory enforcement has softened, financial institutions are reevaluating their approach to explainable AI (XAI). Without hard mandates from U.S. regulators, banks are shifting from universal explainability to selective deployment.
Explainability serves three core functions:
1. It enables internal risk and compliance teams to monitor for bias, performance drift, or unintended correlations.
2. It supports regulatory alignment by producing documentation and rationale that meet the standards set by SR 11-7, CFPB guidance, and internal model validation teams.
3. It provides reputational insulation by equipping frontline teams, legal departments, and customer service with justifiable logic when decisions are contested.
The costs of explainability are material. Interpretable models may reduce predictive performance. Building and maintaining explanation pipelines requires infrastructure, validation labor, and governance design.
Banks are triaging explainability budgets based on strategic ROI, deploying only where it mitigates measurable legal, operational, or reputational risk.
Most institutions now segment model types into three broad tiers:
High-risk and mandated: Credit underwriting, fraud detection, pricing optimization
Internal only: Treasury forecasting, liquidity planning, marketing attribution
Low-risk or experimental: Early pilots, back-office process automation
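A minimal sketch of how this three-tier segmentation might be encoded in a governance rulebook. The rule logic, thresholds, and use-case labels are illustrative assumptions, not any institution's actual policy:

```python
# Illustrative only: tier names mirror the segmentation above; the rules are assumptions.

def assign_explainability_tier(use_case: str, customer_facing: bool, regulated: bool) -> str:
    """Map a model's use case and exposure to one of three governance tiers."""
    high_risk = {"credit_underwriting", "fraud_detection", "pricing_optimization"}
    internal = {"treasury_forecasting", "liquidity_planning", "marketing_attribution"}

    if use_case in high_risk or (customer_facing and regulated):
        return "high_risk_mandated"      # full explainability required
    if use_case in internal:
        return "internal_only"           # documentation, no customer-facing rationale
    return "low_risk_experimental"       # pilots and back-office automation

print(assign_explainability_tier("credit_underwriting", customer_facing=True, regulated=True))
# -> high_risk_mandated
```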
Capital One: Embedded explainability in credit and fraud
Capital One has operationalized explainability within high-risk use cases like credit scoring and fraud detection.
The bank’s AI governance structure emphasizes “controllability”: models must not only produce accurate outputs but also expose decision logic in ways that audit, compliance, and legal teams can independently review.
Its machine learning pipelines integrate SHAP and LIME to expose how features such as credit utilization, income stability, and transaction anomalies contribute to model outputs. These techniques are implemented to allow challenger comparisons, audit traceability, and scorecard overlays.
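To make the mechanics concrete, here is a minimal sketch of SHAP and LIME attribution on a tree-based credit model. The model, synthetic data, and feature names (credit_utilization, income_stability, txn_anomaly_score) are illustrative assumptions, not Capital One's actual pipeline:

```python
# Illustrative sketch: synthetic data stands in for a real credit portfolio.
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
feature_names = ["credit_utilization", "income_stability", "txn_anomaly_score"]
X = rng.random((1000, 3))
# Synthetic "default" label loosely driven by utilization and transaction anomalies
y = ((0.6 * X[:, 0] + 0.4 * X[:, 2] - 0.3 * X[:, 1]
      + 0.1 * rng.standard_normal(1000)) > 0.4).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# SHAP: per-applicant attributions for audit traceability and challenger comparison
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])   # one row of attributions per applicant
print(dict(zip(feature_names, shap_values[0].round(3))))

# LIME: customer-level rationale for a contested decision
lime_explainer = LimeTabularExplainer(
    X, feature_names=feature_names,
    class_names=["approve", "decline"], mode="classification")
lime_exp = lime_explainer.explain_instance(X[0], model.predict_proba, num_features=3)
print(lime_exp.as_list())   # e.g. [("credit_utilization > 0.62", 0.21), ...]
```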
JPMorgan Chase: Centralized explainability across 450+ use cases
JPMorgan maintains over 450 active AI use cases across lines of business. To ensure consistency and transparency, it has established an Explainable AI Center of Excellence. This group develops and deploys fairness tools, model governance frameworks, and post-hoc analysis protocols to ensure that high-impact decisions are auditable.
The bank is also piloting Layered Chain-of-Thought (L-CoT) prompting for large language models. This approach structures AI-generated reasoning into hierarchies of logic, enabling model validation teams to dissect LLM output paths and trace inconsistencies or hallucinations.
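JPMorgan's L-CoT implementation details are not public, so the sketch below only shows the general idea: a prompt template that forces the model to answer in labeled reasoning layers, plus a parser and validator that let reviewers inspect each layer separately. The layer names and the dispute case are invented for illustration, and a canned string stands in for a real model call:

```python
import re

LAYERED_COT_TEMPLATE = """You are reviewing a transaction dispute.
Answer using exactly these labeled layers:
[L1 FACTS] restate only the facts given.
[L2 POLICY] cite the policy rules that apply.
[L3 REASONING] apply the rules to the facts, step by step.
[L4 CONCLUSION] one-sentence decision.

Case: {case}"""

REQUIRED_LAYERS = ["L1 FACTS", "L2 POLICY", "L3 REASONING", "L4 CONCLUSION"]

def parse_layers(response: str) -> dict:
    """Split a layered response into {layer_name: text} for downstream review."""
    pattern = r"\[(" + "|".join(map(re.escape, REQUIRED_LAYERS)) + r")\]"
    parts = re.split(pattern, response)
    # parts = [preamble, name1, text1, name2, text2, ...]
    return {parts[i]: parts[i + 1].strip() for i in range(1, len(parts) - 1, 2)}

def validate_layers(layers: dict) -> list:
    """Flag missing or empty layers so validators can trace gaps in the reasoning path."""
    return [name for name in REQUIRED_LAYERS if not layers.get(name)]

# Canned response in place of a real LLM call
response = ("[L1 FACTS] Customer reports an unrecognized $480 charge. "
            "[L2 POLICY] Provisional credit rules apply to disputes filed within 60 days. "
            "[L3 REASONING] The claim was filed within the window, so provisional credit is due. "
            "[L4 CONCLUSION] Issue provisional credit and open an investigation.")
layers = parse_layers(response)
print(validate_layers(layers))   # [] -> every layer present and non-empty
```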
Practical Techniques in Use
The following techniques are currently in production across U.S. banks and fintechs. Each serves a distinct function depending on model type, user group, and decision impact, and institutions are aligning them with use case maturity and regulatory exposure:
| Technique | Primary Use Case | Visibility Scope | Tooling in Use |
|---|---|---|---|
| SHAP | Credit scoring | Global | Capital One, Fiddler AI |
| LIME | Customer-level appeals | Local | Capital One |
| Layered CoT prompting | LLM output traceability | Hierarchical | JPMorgan Chase |
| Slice & Explain | Segment diagnostics | Targeted | Fiddler AI |
| Challenger models | Compliance assurance | Global | Internal tooling |
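The challenger-model pattern in the last row amounts to scoring an interpretable challenger alongside the champion on the same holdout and reporting performance plus decision agreement. The sketch below shows that pattern on synthetic data; the models and cutoff are placeholders, not any bank's internal tooling:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.random((2000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.2 * rng.standard_normal(2000) > 0.9).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

champion = GradientBoostingClassifier().fit(X_tr, y_tr)         # complex incumbent
challenger = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)  # interpretable challenger

champ_scores = champion.predict_proba(X_te)[:, 1]
chall_scores = challenger.predict_proba(X_te)[:, 1]

print("champion AUC:  ", round(roc_auc_score(y_te, champ_scores), 3))
print("challenger AUC:", round(roc_auc_score(y_te, chall_scores), 3))

# Decision agreement at a common cutoff: a compliance-facing view of how often
# the interpretable model would have reached the same outcome.
agreement = np.mean((champ_scores > 0.5) == (chall_scores > 0.5))
print("decision agreement:", round(float(agreement), 3))
```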
Governance and organizational maturity
Tiered governance frameworks define when explainability is required based on model risk, regulatory linkage, customer exposure, and internal audit cadence. A typical governance matrix might flag credit models for real-time rationale generation, but exempt marketing attribution models that don’t affect customer outcomes.
Common maturity gaps appear when ownership is unclear. Risk teams may require explainability that data science can’t deliver, or model developers may push complex architectures without visibility into compliance needs. The result is explainability bottlenecks or, worse, models pushed into production without governance alignment.
The emerging best practice is to deploy a centralized dashboard that shows which models:
Require explainability
Include explainability methods
Are exempt from explanation
Are subject to regular challenger testing
This structure enables CFOs, risk officers, and audit committees to oversee model portfolios without technical fluency in AI. It also provides a defensible basis for why certain models are treated differently under governance frameworks.
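A minimal sketch of that centralized dashboard idea: a model inventory recording the four governance flags above, rolled up into a summary a board or audit committee could read. The field names and example records are assumptions, not a specific vendor schema:

```python
from dataclasses import dataclass

@dataclass
class ModelRecord:
    name: str
    tier: str                      # e.g. "high_risk_mandated", "internal_only"
    requires_explainability: bool
    explainability_methods: tuple  # e.g. ("SHAP", "challenger")
    exempt: bool
    challenger_testing: bool

inventory = [
    ModelRecord("credit_underwriting_v3", "high_risk_mandated", True, ("SHAP", "LIME"), False, True),
    ModelRecord("marketing_attribution_v1", "internal_only", False, (), True, False),
]

def dashboard_summary(records):
    """Roll the inventory up into the four views described above."""
    return {
        "require_explainability": [m.name for m in records if m.requires_explainability],
        "have_methods_in_place": [m.name for m in records if m.explainability_methods],
        "exempt": [m.name for m in records if m.exempt],
        "challenger_tested": [m.name for m in records if m.challenger_testing],
    }

print(dashboard_summary(inventory))
```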