Last week, the FBI warned the public about the criminal use of AI to commit financial fraud. In mid-November, the Financial Crimes Enforcement Network (FinCEN) alerted financial institutions to the rise in GenAI-enabled attacks, offering insight into the types of attacks and the red flag indicators that help identify suspicious activity. The alert also reminded financial institutions of their reporting obligations under the Bank Secrecy Act.
FinCEN Director Andrea Gacki emphasized the dual nature of GenAI, highlighting its potential benefits and the risks posed by malicious actors. She urged financial institutions to remain vigilant against deepfake threats and to report any suspicious activities promptly. This proactive approach is crucial in safeguarding the U.S. financial system and protecting consumers from exploitation.
Deepfake attacks have plagued several organizations this year:
Perhaps the best-known incident involved a finance employee in Hong Kong who transferred $25 million after joining a video call with criminals who used AI to recreate the face and voice of the company's CFO.
Attackers used deepfake audio to impersonate the voice of a German parent company's CEO, instructing the head of its UK subsidiary to transfer $243,000 to a purported supplier.
Deepfake audio was also used in an attempt to trick an executive into transferring $35 million to a third-party account. Fortunately, the transfer was stopped before it was completed.
Trustmi recently conducted a survey that substantiated the significant increase in AI deepfake and impersonation attacks. Of the 516 participants we surveyed across diverse industries, 22% reported facing sophisticated AI attacks in the last 12 months: AI deepfake cyberattacks (10%) and executive impersonations (12%), all aimed at compromising business payment processes. Nine respondents reported losing more than $25 million to payment fraud in just the past 12 months, and nearly 6% of participants lost up to $1 million.
With the increasing sophistication of AI, fraudulent activity is becoming harder to identify, leaving organizations more vulnerable to attack. A study from the University of Oslo in Norway analyzed people's ability to distinguish AI-generated voices from human ones: participants correctly identified human voices only 56% of the time and AI-generated voices only 50% of the time, no better than chance. While educating employees on how to spot suspicious behavior is important, education alone will not protect organizations from AI and deepfake attacks.
The surge in AI-driven financial fraud, including increasingly realistic executive impersonations, calls for a proactive, comprehensive approach to securing email and business payments. Our research highlights the critical need for automated solutions that go beyond traditional email security. With the average cost of a successful executive impersonation reaching $1.5 million, organizations can significantly reduce risk by deploying AI-powered email and payment security alongside end-to-end visibility into the payment process. As the threat landscape evolves, defense strategies must advance with it, integrating cutting-edge technology with human oversight to protect financial assets effectively.
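To make the idea of automated payment screening concrete, here is a minimal, hypothetical sketch in Python of a rule-based red-flag scorer for outgoing payment requests, loosely inspired by the kinds of indicators FinCEN describes. The PaymentRequest fields, weights, and thresholds are illustrative assumptions, not Trustmi's product logic or FinCEN's actual criteria; real deployments would layer AI-based detection and human review on top of rules like these.

```python
# Hypothetical sketch of a rule-based red-flag scorer for payment requests.
# All fields, weights, and thresholds below are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class PaymentRequest:
    amount_usd: float
    beneficiary_is_new: bool   # first-time payee or recently changed bank details
    requested_via: str         # "email", "video_call", "phone", or "portal"
    message_text: str          # free-text justification from the requester


URGENCY_TERMS = ("urgent", "immediately", "confidential", "wire today")


def red_flag_score(req: PaymentRequest) -> int:
    """Return a simple additive risk score; higher means more scrutiny needed."""
    score = 0
    if req.beneficiary_is_new:
        score += 2  # new or changed bank details are a classic fraud marker
    if req.requested_via in ("video_call", "phone"):
        score += 2  # channels where deepfakes can impersonate executives
    if any(term in req.message_text.lower() for term in URGENCY_TERMS):
        score += 1  # pressure tactics discourage normal verification steps
    if req.amount_usd >= 100_000:
        score += 1  # large transfers warrant extra verification
    return score


def route(req: PaymentRequest) -> str:
    """Escalate risky requests to out-of-band verification instead of auto-pay."""
    score = red_flag_score(req)
    if score >= 4:
        return "hold: verify with the requester via a known, independent channel"
    if score >= 2:
        return "manual review"
    return "standard processing"


if __name__ == "__main__":
    suspicious = PaymentRequest(
        amount_usd=250_000,
        beneficiary_is_new=True,
        requested_via="video_call",
        message_text="Urgent and confidential: wire today per the CFO.",
    )
    print(route(suspicious))  # -> hold: verify ... independent channel
```

The design point is that verification is routed out of band: a request that arrives over a voice or video channel is never confirmed over that same channel, which blunts deepfake impersonation even when the fake is convincing.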
To learn more about deepfake attacks and ways to protect your organization, check out A CISO’s Guide to Fraud Risk Mitigation in the Gen AI Era.