Synthetic Disinformation Is Now a Major Banking Risk

Viral AI rumors, hijacked regulator accounts, and deepfake CEO calls are already moving markets and deposits. This briefing reveals how U.S. banks are defending against synthetic attacks and calls out the urgent steps every bank and board should take next.

Hello, Abbie Widin here,

AI Check In delivers sharp, globally relevant intelligence on AI governance, financial risk, and capital automation. It already briefs strategy, risk, and compliance leaders at the world's most influential banks, including JPMorgan, Citi, and Bank of America.

What to expect this edition:

  • 🛟 Need to Know: Board Accountability in the Age of Synthetic Attacks

  • 🥷 Deep Dive: Anatomy of Real Synthetic Threats in Banking

  • 📈 Trends to Watch: Synthetic Finance Threats

  • ⚔️ Vendor Spotlight: Synthetic Defense Tools in Banking

Rewrite the rules. Outmaneuver competitors. Stay two steps ahead of regulators.

Let the games begin.

🛡️ Enjoying AI Check In? Forward this to a colleague who needs to stay two steps ahead.

📬 Not subscribed yet? Sign up here to receive future editions directly.

🛟 Need to Know: Board Accountability in the Age of Synthetic Attacks

Synthetic disinformation has become a direct governance issue for U.S. banks. Responsibility is shifting to CROs, CISOs, and, in some banks, new AI oversight committees. Boards must clearly document escalation paths and override triggers.

  • A Wall Street Journal investigation (Aug 2025) reported 105,000 deepfake attacks in just three months of 2024, with losses of over $200 million; many involved cloned CEO voices or videos used to authorize transfers.

  • In January 2024, the SEC’s official X account was hijacked, posting a false Bitcoin ETF approval that briefly moved crypto markets before being corrected.

  • A Reuters-backed UK study (Feb 2025) found that £10 of targeted social ads could move £1 million in deposits. A parallel ABA survey found more than 60% of customers would consider withdrawing funds after exposure to AI-generated fake headlines.

Regulators have responded: FinCEN issued alerts in 2024 on deepfake fraud schemes, while the NSA, FBI, and CISA have released guidance urging boards to plan for detection, escalation, and communication integrity. FFIEC social media guidance likewise classifies these channels as explicit enterprise risks. Insurers, meanwhile, are beginning to adjust cyber and crime coverage terms to account for synthetic media fraud, with exclusions and higher premiums emerging in some policies.

Internationally, the EU's DORA (effective Jan 2025) explicitly covers ICT and disinformation resilience. Singapore's MAS has issued AI risk advisories, and Australia's APRA sets out third-party resilience expectations in CPS 230 and CPG 230.

🥊 Your Move

  • Harden official pipelines with hardware security keys, dual approvals, and liveness checks for executive communications (a minimal sketch follows this list).

  • Classify investor relations as critical infrastructure, treating wires, analyst calls, and corporate feeds with resilience standards equal to payments systems.

  • Conduct board-level tabletop exercises simulating fake news or deepfake briefings, testing crisis comms and liquidity response in the first 15 minutes. Review insurance policies for synthetic media fraud exclusions and coverage gaps.
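
For the first bullet, here is a minimal sketch of what a dual-approval gate on outbound wires can look like. It is illustrative only: the names (WireInstruction, verify_hardware_key) are hypothetical, and the hardware-key check is a placeholder for real FIDO2/WebAuthn verification inside a bank's identity stack.

```python
# Illustrative dual-approval gate for outbound wire instructions.
# All names here are hypothetical; a real control would sit behind the
# bank's payment and identity systems, not a standalone script.
from dataclasses import dataclass, field

@dataclass
class WireInstruction:
    amount: float
    beneficiary: str
    approvals: set[str] = field(default_factory=set)  # IDs of verified approvers

def verify_hardware_key(user_id: str, signature: str) -> bool:
    """Placeholder for FIDO2/WebAuthn verification of the approver's security key."""
    return bool(user_id) and bool(signature)  # stub: accepts any non-empty input

def approve(wire: WireInstruction, user_id: str, signature: str) -> None:
    if not verify_hardware_key(user_id, signature):
        raise PermissionError(f"hardware-key check failed for {user_id}")
    wire.approvals.add(user_id)

def release(wire: WireInstruction) -> None:
    # Core control: at least two *distinct* approvers before funds move.
    if len(wire.approvals) < 2:
        raise PermissionError("dual approval required before release")
    print(f"released {wire.amount} to {wire.beneficiary}")
```

The point of the control: a cloned CEO voice can pressure one employee, but it cannot satisfy a gate that demands two distinct, key-verified approvers before funds move.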

🥷 Deep Dive: Anatomy of Real Synthetic Threats in Banking

Synthetic media attacks are escalating across banking and finance. Case studies show the breadth of attack surfaces and the urgency of board oversight.

Deepfake-related attacks are accelerating. Many target finance teams with voice or video impersonations of executives to authorize transfers. In April 2025, Hong Kong police dismantled a fraud ring using deepfake audio and images to open accounts, with losses of $193 million. In another widely cited case, an employee at Arup was deceived by a deepfake video call with multiple fake executives and transferred $25.6 million.

Regulators are escalating their response.

In November 2024, FinCEN issued Alert FIN-2024-Alert004, asking institutions to reference the key term "FIN-2024-DEEPFAKEFRAUD" in suspicious activity reports tied to deepfake fraud schemes.
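
For filers, the mechanics are simple: reference the key term in the SAR. A minimal sketch, assuming a simplified internal SAR structure (actual filings go through the BSA E-Filing System and its own schema):

```python
# Illustrative SAR tagging per FinCEN Alert FIN-2024-Alert004.
# The dict structure is a simplified assumption, not the BSA E-Filing schema.
DEEPFAKE_KEY_TERM = "FIN-2024-DEEPFAKEFRAUD"

def build_sar(narrative: str, suspected_deepfake: bool) -> dict:
    sar = {"filing_type": "SAR", "filing_note": "", "narrative": narrative}
    if suspected_deepfake:
        # Reference the key term so FinCEN can aggregate deepfake-related filings.
        sar["filing_note"] = DEEPFAKE_KEY_TERM
        sar["narrative"] = f"{DEEPFAKE_KEY_TERM}: {narrative}"
    return sar
```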

In April 2025, Fed Vice Chair Michael Barr confirmed that 1 in 10 U.S. companies had already been targeted, emphasizing that banks are the frontline defenders. FINRA has also warned that generative AI is being used to impersonate licensed professionals and create synthetic brokerage accounts. Deloitte documented a 700% increase in deepfake-related fintech incidents during 2023, with audio cloning the fastest-growing threat vector. JPMorgan has applied large language models to detect anomalies in email compromise cases.
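
JPMorgan has not published its method, so the sketch below is a generic illustration of the pattern rather than the bank's approach: score inbound email on compromise signals (an executive display name on an unexpected domain, urgency language, payment-detail changes) of the kind a classifier or LLM could weigh. All names, patterns, and weights are assumptions.

```python
# Illustrative email-compromise scorer. NOT JPMorgan's method (which is
# not public); it shows the kind of signals a classifier or LLM could weigh.
import re

URGENCY = re.compile(r"\b(urgent|immediately|wire today|confidential)\b", re.I)
PAYMENT_CHANGE = re.compile(r"\b(new account|updated bank details|change of beneficiary)\b", re.I)

def bec_risk_score(sender_domain: str, claimed_name: str,
                   known_execs: dict[str, str], body: str) -> float:
    """Score 0-1 from simple business-email-compromise signals."""
    score = 0.0
    expected_domain = known_execs.get(claimed_name)
    if expected_domain and sender_domain != expected_domain:
        score += 0.5   # display name says "CEO", sending domain says otherwise
    if URGENCY.search(body):
        score += 0.2   # urgency and secrecy pressure
    if PAYMENT_CHANGE.search(body):
        score += 0.3   # request to redirect funds
    return min(score, 1.0)
```

For example, a message claiming to be from the CEO but sent from gmail.com, marked "Urgent: wire today to new account," scores 1.0 under these toy weights.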

The market for defensive tools is growing rapidly. Deloitte estimates the deepfake detection market will expand from $5.5B in 2023 to $15.7B by 2026. The American Bankers Association increasingly advises layering biometric liveness checks and behavioral analytics into onboarding and authentication. TechRadar describes "Repeater" attacks, in which varied synthetic identities are tested repeatedly until one bypasses controls. Academic researchers have proposed a "Deepfake Kill Chain" framework, recommending dynamic biometrics such as eye-movement tracking, paired with data governance and staff training.
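
The "Repeater" pattern is detectable precisely because it repeats. A hedged sketch of one countermeasure: count how many distinct applicant names share the same onboarding artifact (device fingerprint, selfie hash) and flag clusters above a threshold. Field names and the threshold are assumptions, not any vendor's schema.

```python
# Illustrative "Repeater" detector: flag onboarding artifacts (device
# fingerprints, selfie hashes) that recur under many different names.
# Field names and the threshold are assumptions for the sketch.
from collections import defaultdict

def find_repeaters(applications: list[dict], threshold: int = 3) -> set[str]:
    """Return artifact values seen across >= threshold distinct applicant names."""
    names_per_artifact: dict[str, set[str]] = defaultdict(set)
    for app in applications:
        for key in ("device_fingerprint", "selfie_hash"):
            names_per_artifact[app[key]].add(app["applicant_name"])
    return {artifact for artifact, names in names_per_artifact.items()
            if len(names) >= threshold}
```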
