🛟 Execution AI is Moving Capital but Skipping Governance
For an increasing number of banks, agentic AI is already placing trades across FX, equities, and liquidity. Oversight still relies on human review and Excel-era controls. Examiners want circuit breakers and real-time logs. The gap is growing. Will you exploit it or get caught in it?
Hello, Abbie Widin here,
Rewrite the rules. Outmaneuver competitors. Stay two steps ahead of regulators.
AI Check In delivers sharp, globally relevant intelligence on AI governance, financial risk, and capital automation. AI CI briefs strategy, risk, and compliance leaders at the world’s most influential banks, including JPMorgan, Citi, and Bank of America.
What to expect this edition:
🛟 Need to Know: Execution AI Crosses the Governance Line
🥷 Deep Dive: When AI Executes: Operational Mechanics, Client Risk & Governance Gaps
🔭 Trends to Watch: Oversight of Execution AI
⚔️ Vendor Spotlight: Numerix
The AI race is on. Let the games begin.
🛡️ Enjoying AI Check In? Forward this to a colleague who needs to stay two steps ahead.
📬 Not subscribed yet? Sign up here to receive future editions directly.
🛟 Need to Know: Execution AI Crosses the Governance Line

Execution AI is placing trades today.
Execution AI systems directly influence trades in U.S. capital markets and are now placing, timing, and adjusting positions without real-time human input. Yet most banks still govern them as advisory tools. These models often bypass the controls designed for pricing or forecasting engines. They are not classified as material models, nor are they required to have override protocols, structured logs, or autonomy tiering.
This creates a mismatch between operational autonomy and governance expectations. Execution models may act on relatively small trades, but they do so at high frequency and across multiple asset classes. The risk stems from velocity and agency more than from capital size.
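To make that concrete, here is a minimal sketch, with invented tiers and thresholds rather than anything drawn from SR 11-7 or regulatory text, of what a materiality test that weighs autonomy and velocity alongside notional size could look like:

```python
# Illustrative only: a materiality tier that weighs autonomy and execution
# velocity alongside notional size. All thresholds are invented.
from dataclasses import dataclass

@dataclass
class ExecutionModel:
    name: str
    autonomy: int            # 0 = advisory, 1 = human-approved, 2 = autonomous
    orders_per_minute: float
    avg_notional_usd: float

def materiality_tier(m: ExecutionModel) -> str:
    """Tier by agency and velocity, not just dollar thresholds."""
    usd_per_minute = m.orders_per_minute * m.avg_notional_usd
    if m.autonomy == 2 and usd_per_minute > 10_000_000:
        return "Tier 1: material model, full validation + runtime controls"
    if m.autonomy >= 1 and usd_per_minute > 1_000_000:
        return "Tier 2: enhanced monitoring and override testing"
    return "Tier 3: standard periodic review"

# A bot firing small orders at high frequency outranks a large monthly trade.
bot = ExecutionModel("fx_micro_exec", autonomy=2,
                     orders_per_minute=600, avg_notional_usd=50_000)
print(materiality_tier(bot))  # Tier 1 despite a $50k average ticket
```

A dollar-only threshold would wave that bot through; a velocity-aware one does not.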
As of 2025, JPMorgan operates more than 450 AI systems in production, many embedded in execution functions. RBC Capital Markets uses reinforcement learning in live equity trades via its Aiden platform. Most firms have not yet implemented autonomy-based tiering, runtime supervision, or pre-trade kill switches. See the Deep Dive below for more details.
These systems shape liquidity, pricing, and exposure in real time. The governance gap is live.
🥊 Your Move:
Redefine model materiality. Factor in execution speed and autonomy—not just dollar thresholds.
Map AI systems by function. Require visibility into all models with execution authority.
Apply SR 11-7 standards to execution AI. Demand logging, override capability, and runtime explainability.
🥷 Deep Dive: When AI Executes: Operational Mechanics, Client Risk & Governance Gaps

In Q1 2025, multiple AI-driven quant funds, including strategies from China Southern, AQR, and reportedly Renaissance Technologies, suffered abrupt drawdowns or halted execution systems entirely.
Common failure modes included brittle edge-case behavior, overfitted execution logic, and feedback loops that failed to adapt under macro volatility. These were governance breakdowns, where assumptions went unchallenged until the capital had already moved.
By contrast, Bank of America disclosed a more constrained approach. In its Q2 2025 earnings call, the bank confirmed it uses adaptive AI models to support equity execution, responding to real-time shifts in liquidity and spreads, but with deliberate limits on autonomy. Circuit breakers are embedded to halt model behavior under stress. AI enhances speed, but doesn’t take control.
That structure of interrupt paths, input traceability, and embedded override protocols increasingly defines best practice as execution AI spreads across asset classes.
While the recent failures occurred on the buy-side, the underlying vulnerabilities apply equally to bank-run systems. As AI agents move from analysis to autonomous execution, the locus of risk has shifted from advice to action. Governance hasn’t caught up.

🏦 JPMorgan LOXM: Machine Learning in Live Execution

JPMorgan has publicly disclosed over 450 in-production AI systems, many embedded in trading workflows. Among these, algorithmic execution tools such as LOXM use machine learning to optimize price impact and reduce slippage. LOXM reportedly outperformed manual and earlier automated methods in trials.
Operational mechanics involve:
Market simulators generating synthetic execution scenarios.
RL agents processing live data to refine trade schedules (e.g., VWAP, TWAP).
These agents feed orders directly into trading systems, often with minimal human intervention at runtime. While governed under JPMorgan’s MRGR framework, little is publicly known about pre-trade kill switches, real-time overrides, or explainability at execution time. That opacity creates governance blind spots, especially under rapid volatility.
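None of JPMorgan's actual runtime controls are public, so here is only a minimal sketch of the kind of pre-trade kill-switch gate examiners increasingly expect, with hypothetical names and limits throughout:

```python
# Minimal sketch of a pre-trade kill-switch gate. This does not describe
# JPMorgan's MRGR controls; class names and limits are hypothetical.
import time

class KillSwitchGate:
    """Sits between an execution agent and the order router."""

    def __init__(self, max_orders_per_sec: int, max_slippage_bps: float):
        self.max_orders_per_sec = max_orders_per_sec
        self.max_slippage_bps = max_slippage_bps
        self.halted = False
        self._stamps: list[float] = []

    def allow(self, observed_slippage_bps: float) -> bool:
        """Return True if the next order may pass; trip the switch otherwise."""
        if self.halted:
            return False
        now = time.monotonic()
        self._stamps = [t for t in self._stamps if now - t < 1.0]
        self._stamps.append(now)
        if len(self._stamps) > self.max_orders_per_sec:
            self.halted = True   # velocity breach: halt and escalate to a human
        if observed_slippage_bps > self.max_slippage_bps:
            self.halted = True   # execution-quality breach
        return not self.halted

gate = KillSwitchGate(max_orders_per_sec=50, max_slippage_bps=25.0)
if not gate.allow(observed_slippage_bps=4.2):
    print("halted: route to human desk")  # interrupt path, not silent failure
```

The design point: the gate fails closed. Once tripped, nothing flows until a human resets it.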
📈 RBC Aiden: Reinforcement Learning Deployed Live

RBC’s Aiden platform, developed by RBC Borealis, uses deep RL for equity execution. It explores numerous execution permutations to minimize slippage, reportedly surpassing benchmarks like VWAP and Arrival. Post-trade, Aiden generates explainability reports for internal and client review.
Mechanics involve continuous learning from live fills, but RL’s exploratory nature introduces edge-case risks. For instance:
Does Aiden adapt unknowingly to liquidity gaps?
Can it inadvertently generate signals that resemble manipulation?
Are its reward structures aligned with compliance and client outcomes?
The absence of known override procedures or run-time blockers raises concerns about agent accountability in volatile conditions.
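The reward-alignment question can also be made concrete. Aiden's real objective function is not public; this is only a toy sketch, with invented penalty terms, of how compliance signals might enter the reward itself rather than sit outside it:

```python
# Sketch of compliance-aware reward shaping. Aiden's real reward function is
# not public; this only illustrates compliance entering the objective itself.

def shaped_reward(slippage_bps: float,
                  manipulation_score: float,   # surveillance-model output in [0, 1]
                  traded_into_liquidity_gap: bool,
                  lam: float = 10.0) -> float:
    reward = -slippage_bps                     # base objective: minimize slippage
    reward -= lam * manipulation_score         # penalize manipulation-like patterns
    if traded_into_liquidity_gap:
        reward -= 5.0                          # discourage exploiting thin books
    return reward

# Same fill quality, very different reward once compliance terms bite:
print(shaped_reward(2.0, 0.0, False))   # -2.0
print(shaped_reward(2.0, 0.6, True))    # -13.0
```

An agent rewarded only on slippage has no reason to care about the other two questions above.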
🎯 BlackRock Aladdin: Feedback Loops At Scale

BlackRock’s Aladdin platform supports $21.6 trillion in assets. Though Aladdin doesn’t execute trades directly, it influences allocations, risk models, and routing decisions, effectively directing behavior across buy-side platforms.
This creates a model-into-market feedback loop:
Aladdin directs systematic allocations.
Allocations move market prices.
Price movements feed back into Aladdin’s inputs.
While BlackRock is addressing “crowding” risk and improving model decoupling, this dynamic introduces rapid information reflexivity, especially in stressed markets. Without real-time drift detection or human review of allocation shifts, the system becomes an active market influencer without execution governance.
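Real-time drift detection need not be exotic. A minimal sketch, assuming nothing more than a rolling history of daily allocation shifts (thresholds invented; Aladdin's internals are not public), of an outlier gate that could hold a reflexive rebalance for human review:

```python
# Sketch of a drift gate on daily allocation shifts. Purely illustrative;
# Aladdin's internals are not public, and the thresholds here are invented.
from collections import deque
import statistics

class AllocationDriftMonitor:
    def __init__(self, window: int = 250, z_threshold: float = 4.0):
        self.history: deque = deque(maxlen=window)
        self.z_threshold = z_threshold

    def update(self, shift_pct: float) -> bool:
        """Return True if today's allocation shift is an outlier vs. history."""
        drifted = False
        if len(self.history) >= 30:            # need a baseline first
            mu = statistics.mean(self.history)
            sigma = statistics.pstdev(self.history) or 1e-9
            drifted = abs(shift_pct - mu) / sigma > self.z_threshold
        self.history.append(shift_pct)
        return drifted   # True -> hold the rebalance for human review
```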
🏛 Nasdaq: Surveillance and AI Risk Toolkits

Nasdaq has deployed generative AI in its SMARTS surveillance platform, helping analysts detect market abuse patterns more efficiently. According to the company, investigation times in U.S. equity markets have dropped by approximately 33 percent due to enhanced interpretation of complex alert data.
In addition, Nasdaq launched an AI-powered XVA Accelerator, capable of performing up to one trillion risk calculations per day. This tool is designed to improve performance across valuation adjustments such as CVA, DVA, and FVA. Integrated into the Calypso platform, it supports compliance with Basel III Endgame and real-time stress testing requirements.
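To see why that throughput matters, recall the textbook credit valuation adjustment: CVA ≈ (1 − R) × Σ EPE(t) × ΔPD(t) × DF(t). Even a naive Monte Carlo estimate for a single toy trade burns hundreds of thousands of draws, as this sketch (all parameters invented) shows:

```python
# Naive Monte Carlo CVA for one toy trade. Real XVA engines run this across
# thousands of trades, netting sets, and scenarios, hence the trillion-
# calculation headline. All parameters below are toy values.
import math, random

def cva_estimate(paths: int = 10_000, steps: int = 40, dt: float = 0.25,
                 recovery: float = 0.4, hazard: float = 0.02,
                 rate: float = 0.03) -> float:
    cva = 0.0
    for t in range(1, steps + 1):
        # Expected positive exposure at time t (toy Brownian exposure profile)
        epe = sum(max(random.gauss(0.0, math.sqrt(t * dt)), 0.0)
                  for _ in range(paths)) / paths
        pd = math.exp(-hazard * (t - 1) * dt) - math.exp(-hazard * t * dt)
        df = math.exp(-rate * t * dt)
        cva += (1 - recovery) * epe * pd * df
    return cva

print(f"toy CVA: {cva_estimate():.4f}")  # 400k draws for a single trade
```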
While Nasdaq has not yet publicly disclosed AI-to-AI collision testing, the proliferation of autonomous trading and surveillance systems raises concerns about unintended interactions. For example, one AI agent's trading activity could trigger another system’s surveillance alerts.
These scenarios are an emerging governance challenge.
⚠️ Failure Modes in Practice
Four recurring operational vulnerabilities appear across use cases:
Recursive feedback loops: Models train on outcomes they caused, deepening systemic risk and market reflexivity (simulated in the sketch after this list).
Edge-case brittleness: Systems validated in simulated markets often fail under real-world volatility or illiquidity.
Lack of runtime overrides: Once activated, agents rarely include automated kill switches or human veto points mid-execution.
Opaque client impact: Clients see execution outcomes, such as price, liquidity, and timing, but not the causal logic or escalation paths behind them.
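The first failure mode is easy to demonstrate. Here is a toy simulation, with invented coefficients, of a model that updates on price moves its own orders caused:

```python
# Toy feedback loop: a model that updates on price moves its own orders
# caused. Same noise either way; only the feedback coefficient differs.
import random

def run(feedback: float, steps: int = 100, seed: int = 7) -> float:
    random.seed(seed)
    signal = 0.0
    for _ in range(steps):
        impact = feedback * signal              # the model's own flow moves price
        noise = random.gauss(0.0, 0.1)          # exogenous market noise
        signal = 0.9 * signal + impact + noise  # model retrains on the total move
    return abs(signal)

print(f"no feedback:   {run(0.00):.2f}")  # signal stays near the noise floor
print(f"with feedback: {run(0.12):.2f}")  # same noise, self-reinforcing drift
```

With zero feedback the signal mean-reverts; with even modest feedback, the same noise compounds into drift the model treats as genuine alpha.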
🥊 Your Move:
Treat execution-owned models as high-risk infrastructure. Apply SR 11‑7 style validation, tiering, and oversight to any AI that places or routes orders.
Build real-time supervision into execution flows. Require override mechanisms triggered by drift indicators, anomaly gates, or time-based kill switches.
Implement client-facing execution transparency. Demand that vendor or internal systems deliver rationales, drift alerts, and escalation options to clients before settlement.
🔭 Trends to Watch: Oversight of Execution AI
Autonomy Tiering for Execution AI
Current model frameworks (e.g., SR 11-7) assume human supervision. Regulators such as the SEC, FINRA, and OCC are moving toward classifying execution AI by autonomy, asset class, speed, and materiality. Upcoming expectations include decision logs, kill switches, on-trade explainability, and full governance tiering, similar to FAA aircraft protocols.
Velocity Replacing Capital in Materiality Tests
A bot executing multiple $10M micro-trades in milliseconds can now pose more risk than a $500M monthly trade. Supervisory focus is shifting to execution speed, not just trade size, requiring thresholds for latency and velocity-based triggers.
Execution Logs as Evidence
Regulators are sharpening their focus on timestamped logs that capture model inputs, decision paths, retraining versions, and override actions. Expect these control logs to be treated as audit-grade, capable of supporting enforcement and compliance reviews.
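What does audit-grade mean in practice? A minimal sketch, with invented field names, of a hash-chained log entry that ties each order to its model version, inputs, decision, and any override:

```python
# Sketch of an audit-grade, hash-chained execution log entry. Field names
# are invented; the point is tying each order to inputs, model version,
# decision path, and any override in a tamper-evident record.
import hashlib, json
from datetime import datetime, timezone

def log_decision(order_id: str, model_version: str, inputs: dict,
                 decision: str, override: str, prev_hash: str) -> dict:
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "order_id": order_id,
        "model_version": model_version,  # ties the trade to a retraining cycle
        "inputs": inputs,                # features the model acted on
        "decision": decision,            # e.g., "route", "hold", "split"
        "override": override,            # "" or the intervening control/person
        "prev_hash": prev_hash,          # hash chain makes edits detectable
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    return entry

rec = log_decision("ORD-2025-0001", "exec-rl-v42",
                   {"spread_bps": 1.8, "book_depth": 12_000},
                   "route", "", prev_hash="GENESIS")
print(rec["hash"][:16])  # chain this into the next entry's prev_hash
```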
🥊 Your Move:
Classify execution agents by autonomy, asset type, speed, and oversight level.
Adopt latency-aware materiality rules focused on how fast AI models can allocate risk.
Build legal-grade logging infrastructure to capture model logic, overrides, and runtime decisions.
⚔️ Vendor Spotlight: Numerix

Numerix provides pricing, XVA, and real-time risk analytics across derivatives, structured products, and capital markets portfolios. Its AI-enhanced modules are used by U.S. banks for model calibration under market stress and intraday volatility. Numerix supports dynamic scenario generation and sensitivity analysis, integrated into trading and risk platforms.
AI capabilities are embedded in calibration tools and pre-trade pricing engines. However, execution-stage explainability, override mechanisms, and live monitoring are not fully documented in public sources. As banks incorporate these tools into front-office environments, oversight gaps can emerge without clear model lineage, retraining disclosures, or runtime audit logs.
🥊 Your Move:
Request vendor documentation on model autonomy, override capability, and retraining schedules.
Align internal governance reviews with Numerix version upgrades and stress scenarios.
Audit how Numerix outputs are used in execution decisions, not just pre-trade analytics.
🔮 Next Week
LLMs (with or without wrappers) are getting dumber, and neither risk nor compliance has any idea.
We’ll break down how model drift, vendor opacity, and silent summarization failures are creating real risk inside U.S. banks.
Yours,

💌 Was this newsletter shared with you? Join banking and finance leaders receiving weekly AI governance briefings.
📘 Subscribe to AI Check In here.
⚡ Share the edge — forward this edition to a colleague interested in AI, risk, or innovation initiatives.
Disclaimer
This newsletter is for informational and educational purposes only and should not be considered financial, legal, or investment advice. Some content may include satire or strategic perspectives, which are not intended as actionable guidance. Readers should always consult a qualified professional before making decisions based on the material presented.