
šŸ›Ÿ AI Cyber Threats Are Escalating: 3 Moves Every Bank Should Make

MTTR benchmarks and adversarial-risk playbooks, plus your strategic 'Your Move' checklist.

šŸ‘‹ Welcome, and thank you for joining us here :)

The AI Check In is your weekly power play to navigate AI governance and strategy in banking and finance.

What to expect this edition:

  • šŸ›Ÿ Need to Know: What Trump’s AI deregulation means for bank security

  • 🄷 Deep Dive: How U.S. banks are adapting to AI-driven cyber-threats

  • āš”ļø Instruments of Mastery: Darktrace - Autonomous AI that fights back

  • šŸ“ˆ Trends to Watch: AI-fueled cyber-attacks are faster, smarter, and cheaper

Rewrite the rules. Outmaneuver competitors. Stay two steps ahead of regulators.

Let the games begin.

 šŸ›Ÿ Need to Know: What Trump’s AI Deregulation Means for Bank Security

On January 23, 2025, President Donald J. Trump signed the Executive Order titled "Strengthening American Leadership in Digital Financial Technology," aiming to support the responsible growth and use of digital assets and blockchain technology across all sectors of the economy.

This Executive Order revoked President Joe Biden's Executive Order 14067, "Ensuring Responsible Development of Digital Assets," issued on March 9, 2022. Additionally, President Trump repealed a 2023 executive order by President Biden that mandated stricter oversight of artificial intelligence (AI) development, shifting the focus towards innovation-driven policies in AI.

As a result, financial institutions now operate with revised directives concerning digital assets and AI technologies. Banks must continue to adhere to examiner expectations under existing FFIEC guidelines. Many institutions are proactively adopting internal governance frameworks and security controls for AI systems, including:

  • Conducting AI model audits to ensure transparency in threat detection.

  • Aligning AI operations with established cybersecurity frameworks to protect training data.

  • Implementing dynamic, risk-based monitoring of AI system behavior.

🄊 Your Move:

  1. Review AI governance protocols to ensure alignment with FFIEC guidance in light of recent federal policy changes.

  2. Implement routine model audits for all machine learning systems utilized in fraud detection or threat response.

  3. Strengthen controls on AI training data to mitigate risks from adversarial attacks or data poisoning.

By updating internal policies and practices, financial institutions can navigate the evolving regulatory environment effectively.

🄷 Deep Dive: How U.S. Banks Are Adapting to AI-Driven Cyber-Threats

The Evolving Threat Landscape

The integration of AI into banking operations has transformed services such as fraud detection, risk management, and customer support. However, this advancement has introduced new vulnerabilities. Cybercriminals are increasingly exploiting AI systems, leading to sophisticated attacks. Notably, adversarial attacks—where malicious inputs are designed to deceive AI models—have become a significant concern. Gartner predicts that 30% of all AI cyberattacks will involve tactics like training-data poisoning, AI model theft, or adversarial samples targeting AI-powered systems.

Operational Challenges and Workforce Constraints

The rapid adoption of AI in banking cybersecurity has outpaced the availability of skilled professionals. According to Cybersecurity Ventures, there are up to 750,000 unfilled cybersecurity jobs in the U.S., contributing to gaps in AI model oversight, incident response, and system maintenance. As banks scale their AI systems, reliance on autonomous tools increases—not only to improve speed and accuracy but also to compensate for lean or overburdened security teams.​

Incident Response Metrics and Financial Implications

Effective incident response is critical in minimizing the impact of cyberattacks. Key metrics include Mean Time to Detect (MTTD) and Mean Time to Recovery (MTTR). MTTD measures the average time taken to identify a security incident, while MTTR assesses the time required to resolve it. Shorter MTTD and MTTR indicate a more efficient response, reducing potential damage. In the financial sector, the average cost of a data breach has risen to $6.08 million, a 22% increase over the global average. Efficient incident response can significantly mitigate these costs. ​
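Both metrics reduce to simple averages over an incident log. A minimal sketch with hypothetical timestamps (here MTTD runs from occurrence to detection, and MTTR from detection to resolution; some teams measure MTTR from occurrence instead):

```python
from datetime import datetime

# Hypothetical incident log: (occurred, detected, resolved) timestamps.
incidents = [
    (datetime(2025, 1, 5, 9, 0),  datetime(2025, 1, 5, 13, 0), datetime(2025, 1, 6, 1, 0)),
    (datetime(2025, 2, 2, 22, 0), datetime(2025, 2, 3, 4, 0),  datetime(2025, 2, 3, 10, 0)),
    (datetime(2025, 3, 9, 14, 0), datetime(2025, 3, 9, 16, 0), datetime(2025, 3, 10, 2, 0)),
]

def mean_hours(deltas):
    """Average a list of timedeltas, expressed in hours."""
    return sum(d.total_seconds() for d in deltas) / len(deltas) / 3600

# MTTD: average time from occurrence to detection.
mttd = mean_hours([det - occ for occ, det, _ in incidents])
# MTTR: average time from detection to resolution.
mttr = mean_hours([res - det for _, det, res in incidents])

print(f"MTTD: {mttd:.1f} h, MTTR: {mttr:.1f} h")  # MTTD: 4.0 h, MTTR: 9.3 h
```

Tracking these averages per quarter makes the trend, not just the snapshot, visible to the board.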

Model Lifecycle Governance

Maintaining the integrity and performance of AI models necessitates robust lifecycle governance. This includes regular monitoring and retraining to address issues such as model drift, where a model's predictive accuracy deteriorates over time due to changes in input data patterns. Implementing structured governance frameworks ensures AI models remain accurate, reliable, and compliant with regulatory standards throughout their operational life. ​
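One common drift signal is the Population Stability Index (PSI), which compares the distribution of a model's live inputs or scores against a baseline snapshot. A minimal, dependency-free sketch; the binning scheme and the conventional ~0.2 alert threshold are illustrative assumptions, not a prescribed standard:

```python
from math import log

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample and a live sample.
    Scores above ~0.2 are often read as significant drift."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def dist(values):
        counts = [0] * bins
        for v in values:
            counts[sum(v > e for e in edges)] += 1  # bin index
        # Small floor avoids log(0) for empty bins.
        return [max(c / len(values), 1e-4) for c in counts]

    e, a = dist(expected), dist(actual)
    return sum((ai - ei) * log(ai / ei) for ei, ai in zip(e, a))

# An unchanged distribution scores ~0; a shifted one scores well above 0.2.
baseline = [i / 100 for i in range(100)]
print(psi(baseline, baseline))
print(psi(baseline, [v + 0.5 for v in baseline]))
```

When PSI crosses the alert threshold, governance policy decides what happens next: investigation, recalibration, or retraining.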

Industry Response and Best Practices

Leading financial institutions are adopting several strategies to mitigate AI-related cybersecurity risks:

  • Comprehensive AI Model Audits: Beyond initial validation, continuous monitoring of AI models for performance degradation, biases, and vulnerabilities is essential. This includes stress-testing models against potential adversarial scenarios.​

  • Adversarial Training: Enhancing model resilience by exposing AI systems to adversarial examples during training, thereby improving their robustness against such attacks.​

  • Collaboration and Information Sharing: Participating in industry-wide collaborations to share threat intelligence and best practices for AI security. Sharing anonymized cybersecurity information with vendors can enhance anomaly detection AI models used for cyber defense functions.​

Emerging Technologies and Solutions

The cybersecurity industry is seeing accelerated investment in AI-powered detection and response platforms. ReliaQuest, which recently raised over $500 million in funding, offers GreyMatter—a solution that integrates with more than 200 cybersecurity and enterprise tools to deliver unified visibility and rapid threat response through agentic AI.

At UK-based Recognise Bank, the platform delivered immediate operational impact: 74% MITRE ATT&CK detection coverage on day one, an 80% reduction in alert fatigue, and a 50% drop in false positives. These gains enabled a lean security team to shift focus from reactive triage to strategic oversight, including real-time executive reporting on cyber-readiness.

The Road Ahead

As AI continues to permeate the banking sector, institutions must proactively address the associated cybersecurity challenges. This involves not only implementing advanced technological solutions but also fostering a culture of continuous learning and adaptation to stay ahead of emerging threats. Regularly updating incident response plans, investing in workforce development, and establishing robust model governance frameworks are critical steps in this journey.

🄊 Your Move:

  1. Enhance Incident Response Metrics: Regularly assess and aim to reduce MTTD and MTTR to improve response efficiency and minimize potential breach costs.​

  2. Implement Continuous Model Monitoring: Establish ongoing surveillance of AI models to detect performance issues, biases, or vulnerabilities promptly.​

  3. Develop Robust Model Governance Frameworks: Create structured policies for the entire AI model lifecycle, including regular retraining schedules and protocols to address model drift and ensure compliance.​

By adopting these strategies, financial institutions can strengthen their defenses against AI-related cyber threats and safeguard their assets and reputation.

āš”ļø Instruments of Mastery: Darktrace - Autonomous AI That Fights Back

Darktrace is an AI-powered cybersecurity solution used by financial institutions to detect, respond to, and mitigate cyber threats in real time. Its self-learning AI builds an understanding of normal behavior across users and devices, enabling anomaly detection without relying on static rules or signatures.

For instance, Aviso, a prominent Canadian wealth services provider with over $140 billion in assets under administration and management, implemented Darktrace's ActiveAI Security Platform to safeguard its extensive digital ecosystem.

This deployment led to a 137% increase in the actioning of malicious emails, a 100% rise in the detection of critical network incidents, and saved the security team over 1,100 hours of manual investigation time within just a few months. These improvements allowed Aviso's analysts to focus on proactive measures, such as vulnerability management and enhancing business practices across service, operations, and compliance. ​

🄊 Your Move:

  1. Deploy autonomous threat detection tools to reduce response time and reliance on manual triage.

  2. Enhance email security by integrating AI systems capable of behavioral anomaly detection.

  3. Ensure ecosystem-wide visibility across SaaS, cloud, and endpoint systems to minimize blind spots.

šŸ“ˆ Trends to Watch: AI-Fueled Cyber-Attacks Are Faster, Smarter, and Cheaper

The integration of artificial intelligence (AI) into cyberattacks is intensifying, presenting new challenges for financial institutions. Key emerging threats include:

  • Data Poisoning Attacks: Manipulating AI training data can compromise model integrity, leading to erroneous outputs exploitable for fraudulent activities. ​

  • Deepfake-Enabled Fraud: The financial sector faces significant risks from deepfake technologies, with incidents of AI-generated synthetic media being used to impersonate individuals and authorize fraudulent transactions. ​

  • AI-Generated Phishing: AI-powered phishing emails have a 60% success rate, matching the effectiveness of those crafted by human experts, highlighting the increasing challenge in detecting such attacks. ​

  • Cybercrime-as-a-Service: The accessibility of generative AI tools has lowered the barrier to entry for cybercriminals, enabling the automation of sophisticated attacks and expanding the cyber threat landscape.

Note: Once cryptographically relevant quantum computers arrive, the security guarantees of today's public-key cryptography, and of every system built on it, are off the table.

🄊 Your Move:

  1. Secure AI Model Training Pipelines: Implement robust data validation and monitoring processes to prevent and detect data poisoning attempts.​

  2. Enhance Identity Verification: Adopt multi-factor authentication methods incorporating biometric or behavioral analysis to identify and mitigate deepfake-related fraud.​

  3. Utilize AI-Enhanced Phishing Detection Tools: Deploy advanced detection systems capable of identifying AI-generated phishing attempts through behavioral analysis and anomaly detection.​
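As a minimal illustration of the first move, a data-validation screen can flag incoming training records that deviate sharply from an established baseline before they reach a retraining pipeline. The figures and z-score threshold below are hypothetical, and a real pipeline would layer many such checks:

```python
from statistics import mean, stdev

def flag_outliers(baseline, incoming, z_threshold=3.0):
    """Flag incoming numeric feature values that deviate sharply from the
    baseline distribution -- one simple screen against data poisoning.
    Illustrative only; production pipelines combine many validation checks."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [x for x in incoming if abs(x - mu) > z_threshold * sigma]

# Hypothetical transaction amounts: a clean baseline vs. a new batch
# containing an extreme value that could skew a retrained model.
baseline = [100, 102, 98, 101, 99, 103, 97, 100, 101, 99]
incoming = [100, 98, 5000, 102]
print(flag_outliers(baseline, incoming))  # [5000]
```

Flagged records go to human review rather than straight into the training set, keeping a poisoned batch from silently degrading the model.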

By proactively addressing these evolving threats, financial institutions can bolster their defenses against the rising tide of AI-powered cyberattacks.

šŸ”® Next Week

We examine how AI is reshaping the banking workforce—not just as a tool, but as a full-scale restructuring force. From displaced roles to emerging functions, we'll explore the operational and strategic implications of AI-powered workforce transformation.

Yours,

Disclaimer

This newsletter is for informational and educational purposes only and should not be considered financial, legal, or investment advice. Some content may include satire or strategic perspectives, which are not intended as actionable guidance. Readers should always consult a qualified professional before making decisions based on the material presented.