🛟 Synthetic attacks on banks are scaling faster than governance

Banks are being outpaced not by adversaries but by automation. Open-source AI now impersonates leadership, mimics trust rituals, and executes deepfake CFO scams before risk teams can respond. A single bad actor can do what once took a rogue nation.

👋 Welcome, thank you for joining us here :)

The AI Check In is your weekly power play to navigate AI governance and strategy in banking and finance.

What to expect this edition:

  • 🛟 Need to Know: Boards have accountability for synthetic threats

  • 🥷 Deep Dive: Three Threats, One Attack Chain: designing AI defense for synthetic risk

  • ⚔️ Vendor Spotlight: Reality Defender and Pindrop

  • 📈 Trends to Watch: Synthetic exposure will get harder to contain

Rewrite the rules. Outmaneuver competitors. Stay two steps ahead of regulators.

Let the games begin.

🛡️ Enjoying AI Check In? Forward this to a colleague who needs to stay two steps ahead.

📬 Not subscribed yet? Sign up here to receive future editions directly.

🛟 Need to Know: Boards Have Accountability for Synthetic Threats

The New York State Department of Financial Services (DFS) now requires annual updates to AI-related cybersecurity risk assessments. FinCEN has issued alerts warning of deepfake-enabled fraud and AML evasion, pressing for upgraded verification and anomaly detection systems.

Regulators also note that the Federal Reserve and leading U.S. banks, including JPMorgan Chase, Bank of America, and Wells Fargo, are running red-team-style exercises focused on synthetic threat chains (phishing, deepfakes, and disinformation) and are pressing vendors to disclose AI model retraining schedules and override logic.

The 2025 federal budget proposal includes a 10-year moratorium on new state-level AI laws. If passed, this would pause emerging protections against biometric spoofing and algorithmic manipulation, leaving a governance vacuum.

The message to boards is explicit: oversight of AI agents, especially those with access to financial, regulatory, or communications workflows, is no longer optional.

Delegation without interrogation is naïve at best, and mismanagement at worst.

🥊 Your Move:

  • Require adversarial testing of AI agents used in high-value workflows.

  • Mandate visibility into third-party agent permissions and update cadence (a minimal sketch of what this could look like follows this list).

  • Establish oversight for any agent capable of generating, interpreting, or responding to synthetic content.
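
On the second point, here is what "visibility into third-party agent permissions and update cadence" could look like as something auditable rather than aspirational. This is a minimal sketch only: the `AgentRecord` fields, the approved-permission list, and the 90-day cadence are assumptions for illustration, not any vendor's actual schema.

```python
# Illustrative sketch only: a hypothetical register of third-party AI agents,
# checked against a simple oversight policy. Field names and thresholds are
# assumptions, not any vendor's real schema.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class AgentRecord:
    name: str
    vendor: str
    permissions: set[str]          # e.g. {"read:email", "send:payment"}
    last_model_update: date        # vendor-disclosed retraining/update date
    human_override: bool           # can staff override the agent's actions?

ALLOWED_PERMISSIONS = {"read:email", "read:documents", "draft:reply"}
MAX_UPDATE_AGE = timedelta(days=90)   # assumed cadence required by policy

def review(agent: AgentRecord, today: date) -> list[str]:
    """Return a list of governance findings for one agent."""
    findings = []
    excess = agent.permissions - ALLOWED_PERMISSIONS
    if excess:
        findings.append(f"{agent.name}: permissions beyond approved scope: {sorted(excess)}")
    if today - agent.last_model_update > MAX_UPDATE_AGE:
        findings.append(f"{agent.name}: model update older than policy allows")
    if not agent.human_override:
        findings.append(f"{agent.name}: no human override path documented")
    return findings

if __name__ == "__main__":
    agent = AgentRecord(
        name="treasury-assistant",
        vendor="ExampleVendor",
        permissions={"read:email", "send:payment"},
        last_model_update=date(2024, 11, 1),
        human_override=False,
    )
    for finding in review(agent, date(2025, 6, 1)):
        print(finding)
```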

🥷 Deep Dive: Three Threats, One Attack Chain—Designing AI Defense for Synthetic Risk

Think of synthetic attacks not as isolated risks but as a unified tactic stack, one where phishing, deepfakes, and disinformation reinforce each other with surgical precision. Each element on its own may be detectable, or easily dismissed. But chained together, they create a context so convincing that even well-trained staff and hardened systems are vulnerable.

Here’s how it typically plays out:

  1. A disinformation campaign circulates a fake press release claiming that a major merger has received accelerated regulatory approval, with funding and settlement instructions to follow within hours. The false release is distributed via cloned media domains and amplified through social channels.

  2. Minutes later, phishing emails referencing the "approval notice" are sent to treasury and legal teams, urging them to access updated escrow details via a fraudulent internal link.

  3. The escalation comes via a synthetic Zoom call. Deepfakes of the CFO and external counsel walk through an "accelerated funding schedule" and request immediate transfer of settlement funds to a new escrow account.

Each stage is coordinated to erode scrutiny: a public trigger, an internal command, a face you recognize. The urgency is emotional, the logic is procedural, and the format is familiar.
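
Defensively, the useful insight is that the chain itself is the signal: each event alone is weak, but their co-occurrence inside a short window is not. The sketch below is a toy illustration of that correlation idea, assuming hypothetical event types, weights, and a six-hour window; it is not a production detection rule.

```python
# Toy illustration: correlating weak, cross-channel signals that individually
# look benign but together resemble a synthetic attack chain.
# Event names, weights, and the 6-hour window are assumptions for the sketch.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Signal:
    kind: str          # e.g. "unverified_press_release", "phishing_report"
    timestamp: datetime

WEIGHTS = {
    "unverified_press_release": 1,
    "phishing_report": 2,
    "unusual_payment_meeting": 3,
}
WINDOW = timedelta(hours=6)
ESCALATION_THRESHOLD = 5

def chain_score(signals: list[Signal], now: datetime) -> int:
    """Sum weights of distinct signal kinds seen within the window."""
    recent = {s.kind for s in signals if now - s.timestamp <= WINDOW}
    return sum(WEIGHTS.get(kind, 0) for kind in recent)

def should_escalate(signals: list[Signal], now: datetime) -> bool:
    # A single signal stays below threshold; a chained sequence crosses it.
    return chain_score(signals, now) >= ESCALATION_THRESHOLD

if __name__ == "__main__":
    now = datetime(2025, 6, 1, 15, 0)
    events = [
        Signal("unverified_press_release", now - timedelta(hours=3)),
        Signal("phishing_report", now - timedelta(hours=2)),
        Signal("unusual_payment_meeting", now - timedelta(minutes=20)),
    ]
    print(should_escalate(events, now))  # True: the chain, not any one event, triggers review
```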

Case 1: Deepfake Voice in Wire Transfer Fraud

This is where synthetic threats often begin: audio alone. Multiple U.S. banks have reported business email compromise (BEC) attacks enhanced by AI-generated voice. Cloned executive audio has repeatedly been used to approve transfers of over $1 million. Voice authentication alone fails to block the threat.
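
Since a cloned voice can sound indistinguishable from the real approver, one common control is to make voice approval insufficient on its own above a threshold and force an out-of-band confirmation. A minimal sketch of that rule, assuming a hypothetical `send_callback_request` hook and an illustrative $100,000 threshold:

```python
# Minimal sketch: treat voice approval as insufficient on its own for large
# transfers and force an out-of-band confirmation step. The threshold and the
# send_callback_request() hook are hypothetical, for illustration only.
CALLBACK_THRESHOLD_USD = 100_000   # assumed policy threshold

def send_callback_request(approver_id: str, transfer_id: str) -> None:
    # Placeholder for an out-of-band channel (registered phone number,
    # hardware token prompt, or in-person sign-off).
    print(f"Callback verification sent to {approver_id} for {transfer_id}")

def approve_wire(transfer_id: str, amount_usd: float,
                 voice_verified: bool, callback_confirmed: bool,
                 approver_id: str) -> bool:
    """Voice alone never releases a large transfer."""
    if amount_usd < CALLBACK_THRESHOLD_USD:
        return voice_verified
    if not callback_confirmed:
        send_callback_request(approver_id, transfer_id)
        return False          # hold the transfer until the second channel confirms
    return voice_verified and callback_confirmed

if __name__ == "__main__":
    # A convincing cloned voice still fails without the callback.
    print(approve_wire("T-1001", 1_250_000, voice_verified=True,
                       callback_confirmed=False, approver_id="cfo-office"))
```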

Case 2: Synthetic Identity Fraud in KYC

After synthetic voice attacks comes full synthetic identity fraud, which reached $2.42 billion across U.S. institutions in 2023. AI-generated faces and forged credentials have passed KYC checks, sometimes with deepfake video used in live verification interviews. This pushes synthetic threats from impersonation into full identity fabrication.

Case 3: $25M Loss via Synthetic Video Call

Now, combine these tactics. In 2024, a finance employee at a multinational firm wired $25 million after joining a video call with deepfaked versions of the CFO and other executives. The attackers used ensemble deepfakes, contextual phishing, and scheduled meetings to simulate normalcy. It was not a single deception but a coordinated attack across formats.

What used to require a team of skilled attackers is now achievable with open-source deepfake libraries, synthetic voice models, and agentic task runners. This is the democratization of cyber-criminal behavior.

Case 4: $499K Transferred Before Doubt Kicked In

In Singapore, a finance employee transferred approximately $499,000 after joining a Microsoft Teams call with what appeared to be several senior executives. All were deepfakes. The fraud was uncovered only when the scammers requested an additional $1.4 million; at that point, internal checks halted further transfers. The case mirrored the structure of the $25M attack (video, context, timing) but ended differently because the pattern was recognized midstream.

Internal Agents: A New Attack Surface

Synthetic attacks don’t just exploit people. In red-team tests, internal AI agents processed manipulated prompts. Without scoped permissions or memory controls, these agents became unwitting facilitators. No intent required, just execution.
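
To make "scoped permissions" concrete, here is a minimal sketch of a tool dispatcher that only executes allowlisted actions and routes anything payment- or data-export-related to a human review queue. The tool names, the allowlist, and the queue are hypothetical; the point is that a manipulated prompt cannot turn the agent into a payment path on its own.

```python
# Sketch of scoped tool permissions for an internal agent. The tool names,
# the allowlist, and the human-review queue are hypothetical illustrations
# of the control, not a specific product's API.
ALLOWED_TOOLS = {"search_documents", "summarize_thread", "draft_reply"}
HIGH_RISK_TOOLS = {"initiate_payment", "change_beneficiary", "export_customer_data"}

human_review_queue: list[dict] = []

def dispatch(tool: str, args: dict, requested_by: str) -> str:
    """Execute only allowlisted tools; never let the agent self-authorize high-risk actions."""
    if tool in HIGH_RISK_TOOLS:
        human_review_queue.append({"tool": tool, "args": args, "agent": requested_by})
        return "queued_for_human_review"
    if tool not in ALLOWED_TOOLS:
        return "denied"
    return f"executed:{tool}"

if __name__ == "__main__":
    # A manipulated prompt that asks the agent to move money never executes directly.
    print(dispatch("initiate_payment", {"amount": 499_000}, requested_by="ops-agent"))
    print(dispatch("summarize_thread", {"thread_id": "t-42"}, requested_by="ops-agent"))
```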

What’s Really Under Attack?

Trust is being used as a trojan: a voice sounds right, a video looks familiar, a call comes at the expected time. MFA and biometrics can’t guard against synthetic coordination. These aren’t pure technical exploits. They are deceptions that target human-to-human relationships in a modern-day bank robbery.

We're going to need better proof-of-humanity tools if we want to preserve trust in our society, let alone in banking.

🥊 Your Move:
