In early February 2024, a multinational company based in Hong Kong lost $25 million to a complex phishing scam. Using publicly available videos on YouTube, scammers created elaborate deepfakes of the company's CFO and of the target employee's coworkers.
They started by compromising the CFO's business email account, then used it to send a finance employee a request to transfer $25 million to five different bank accounts, a transaction that supposedly had to be kept "secret". The employee was suspicious of the request, so the scammers staged a video call and used the deepfakes they had generated to allay his concerns. Reassured that the request was legitimate, he performed the transfer.
This attack could have been prevented if the company had followed several key best practices, and if the finance team had an AI-powered, end-to-end system in place to catch the scam at multiple points along the attack chain. Let's take a look at how a solution like Trustmi could have unraveled the scheme and stopped the fraudulent payment.
This was not just a deepfake attack. Many layers and components helped the attackers achieve their goal of getting $25 million transferred to their accounts. Below we've broken down the steps they took, and how Trustmi would have been able to detect the suspicious signals and flag them for additional review.
Business Email Compromise (BEC) is a fraud method in which attackers gain access to the email accounts of executives or decision makers within an organization in order to steal sensitive information, send fraudulent payment requests, and take other malicious actions. This tactic was used in the Hong Kong incident, which kicked off with an email that appeared to be from the CFO requesting that the employee perform a secret transaction. That email in turn led to the cutting-edge deepfake call that succeeded in duping the employee.
Trustmi is an end-to-end platform that can monitor, starting with the first email exchange, how communication around payment requests plays out across an organization. The platform examines the language used in emails, checks whether the sender's domain is spoofed, and catches other suspicious signs that an employee (or vendor) email account was hacked or that the original request was sent from a fraudulent account. In this case, the employee received an email purportedly from the UK-based CFO. If the employee had never corresponded with this CFO before, our system would have caught that anomaly; if the two had corresponded regularly, our platform would have checked whether this exchange followed their usual pattern of communication. The platform would also have flagged the language stating that this was meant to be a "secret" transaction, which is highly irregular for any fund transfer request from a senior executive. These are just some of the ways a solution like Trustmi would have started raising red flags at the email and initial communication stage.
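To make the email-stage signals above concrete, here is a minimal sketch of how such red flags might be scored. This is an illustration only, not Trustmi's actual implementation: the function name, phrase list, and inputs are all invented for this example.

```python
# Hypothetical illustration of email-level red flags on a payment request.
# All names and heuristics here are invented; a real system would use far
# richer models of language, domains, and correspondence history.

SUSPICIOUS_PHRASES = {"secret", "confidential transfer", "do not discuss"}

def score_email(sender_domain: str, trusted_domains: set[str],
                prior_thread_count: int, body: str) -> list[str]:
    """Return a list of red flags raised by a payment-request email."""
    flags = []
    # An unknown or look-alike domain suggests spoofing or a fraudulent account.
    if sender_domain not in trusted_domains:
        flags.append("unrecognized or spoofed sender domain")
    # No prior correspondence between these two employees is an anomaly.
    if prior_thread_count == 0:
        flags.append("first-ever email exchange with this sender")
    # Language demanding secrecy is highly irregular for a fund transfer.
    if any(p in body.lower() for p in SUSPICIOUS_PHRASES):
        flags.append("request framed as secret or confidential")
    return flags

# The Hong Kong scenario trips all three checks:
flags = score_email(
    sender_domain="cfo-mail.example.net",
    trusted_domains={"company.com"},
    prior_thread_count=0,
    body="Please keep this transfer secret until it completes.",
)
```

Even simple rules like these would have surfaced the "secret" framing and the anomalous sender before the video call ever happened.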
The deepfake in this case was extremely convincing, no doubt about it, and it would have been exceedingly difficult to discern the difference between the real and fake people on the video. We don't have many details about the case, but we can make certain inferences to show how a technology solution could have poked holes in the deepfake trap. First, there are some questions to consider. Who was on the call, and were their correct email addresses included on the meeting invite or in other emails leading up to it? Was a recap email sent to the entire group after the meeting to "close the loop" before the payment, and who was included on that communication? Were the people on the call the right people to be involved in this type of transaction?
Technology could have raised suspicion sooner in several areas. Our platform would monitor the events leading up to the deepfake, the coordination of the video call, and the follow-up communication, looking for anomalies in the order of operations, the overall process, and the roles of the individuals involved. Even if the deepfake itself was 100% convincing to the employee, our platform would have detected these operational deviations in communication and behavioral patterns and unmasked the broader social engineering at play.
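One way to picture the "order of operations" check described above is as a comparison against an expected sequence of payment-related events. The event names and expected order below are hypothetical, chosen only to illustrate the idea:

```python
# Illustrative check that payment-related events followed the expected
# order of operations. Event names and the expected sequence are
# hypothetical, not a real policy.

EXPECTED_ORDER = ["request_email", "meeting_invite", "video_call",
                  "recap_email", "approval", "payment_release"]

def sequence_deviations(observed: list[str]) -> list[str]:
    """Flag expected steps that were skipped or occurred out of order."""
    deviations, idx = [], 0
    for step in EXPECTED_ORDER:
        if step in observed[idx:]:
            idx = observed.index(step, idx) + 1
        else:
            deviations.append(f"missing or out-of-order step: {step}")
    return deviations

# In the Hong Kong case there was apparently no formal meeting invite,
# no recap email, and no independent approval before funds moved:
devs = sequence_deviations(["request_email", "video_call", "payment_release"])
```

Each missing step is exactly the kind of operational deviation that can expose a socially engineered transaction even when the deepfake itself is flawless.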
In this situation, the employee should not have been able to make a transfer of this size on their own, and there were multiple points where the transaction could have been stopped. First, a transfer like this should always require approval from someone above the employee, so there was a clear violation of segregation of duties. Second, even without knowing all the details, it is quite possible that the employee circumvented the usual approval process and protocols because this was a "secret" transfer requiring special treatment, a clear deviation from normal procedure. Third, the employee may have been granted permissions to multiple systems in the payment process that should have been limited; otherwise they would not have been able to execute the payment. If this employee could authorize, approve, and release these funds alone, it stands to reason that they had far more access to the systems involved in sending funds than any single person should. Again, it's hard to say without all the details, but these possibilities could have contributed to the unfortunate outcome.
When it comes to processes, protocols, and procedures, there are several areas where an advanced AI solution like Trustmi could have raised red flags. Because our platform is end-to-end, it enforces controls across all systems involved in the payment process so that individuals can't override rules or work around designated protocols and segregation of duties. With Trustmi, organizations can monitor and control who has access to which systems, and how much access, ensuring that systems are properly protected and rules are followed. An employee wouldn't be able to force through a payment in the first place because they wouldn't have the permissions to do so. Trustmi can also monitor if and when changes are made within the ERP, so that if someone logs in and takes an action that deviates from the normal process for a large payment, the system catches it and flags it as suspicious. Within the payment process itself, our platform monitors the approval flow to ensure the established, official process is followed. From the limited information available about the Hong Kong case, it seems clear that more people should have been involved in approving a payment of this size before funds were released.
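The segregation-of-duties principle above can be sketched in a few lines. The thresholds, role names, and function names below are invented for illustration and do not reflect Trustmi's actual rules:

```python
# A simplified sketch of segregation-of-duties enforcement in a payment
# approval flow. Thresholds and names are hypothetical examples only.

APPROVAL_THRESHOLDS = [
    (100_000, 1),      # up to $100k: one approver beyond the initiator
    (1_000_000, 2),    # up to $1M: two approvers
    (float("inf"), 3), # above that: three, e.g. including the CFO's office
]

def required_approvers(amount: float) -> int:
    """Number of distinct approvers a payment of this size requires."""
    for limit, count in APPROVAL_THRESHOLDS:
        if amount <= limit:
            return count

def can_release(amount: float, initiator: str, approvers: list[str]) -> bool:
    """Block release unless enough distinct approvers signed off,
    none of whom is the person who initiated the payment."""
    distinct = {a for a in approvers if a != initiator}
    return len(distinct) >= required_approvers(amount)

# A $25M transfer initiated and "approved" by a single employee is blocked:
blocked = can_release(25_000_000, "employee", ["employee"])
```

Under any control like this, the Hong Kong employee simply could not have released the funds alone, no matter how convincing the video call was.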
Who was the third party that the employee in Hong Kong thought they were paying? Without insight into the specific request the employee received, it's difficult to know what reason they were given for wiring money to five different bank accounts. We do know the fake CFO must have provided some information, albeit fabricated, about the parties receiving the funds. Yet at no point was there a check that the receiving bank accounts actually belonged to third parties the company was already working with, or that the account owners were even real to begin with. There was clearly a blind spot on the vendor management side, which should have raised suspicion about who exactly was being paid and why.
Trustmi's vendor onboarding and supply chain lifecycle modules protect the vendor side of the equation, ensuring that a third-party supplier is real and that all their details, including bank account information, are correct. If any bank account change requests are made, our system can verify that they are legitimate. Had the CFO's request been to pay an existing vendor via five new bank accounts, our system would have detected the anomaly and kicked off a bank account validation process to confirm that those accounts were actually owned by the real vendor. Had the request been to pay an entirely new fake vendor or group of fake vendors, the onboarding process would have quickly uncovered that they were fraudulent: onboarding a new vendor securely involves a series of steps to fully verify that their bank accounts are real and belong to them.
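Both failure modes described above, an existing vendor with unfamiliar accounts and an entirely unknown vendor, can be illustrated with a simple vetting sketch. The vendor records, account formats, and function names here are invented and stand in for a real validation pipeline:

```python
# Hypothetical sketch of vetting the destination accounts on a payment
# request against vendor records. All data and names are invented.

KNOWN_VENDORS = {
    # bank accounts verified at onboarding time
    "acme-supplies": {"HK11-0001"},
}

def vet_payment(vendor_id: str, target_accounts: list[str]) -> list[str]:
    """Return anomalies that should trigger re-validation before paying."""
    anomalies = []
    known = KNOWN_VENDORS.get(vendor_id)
    if known is None:
        # Entirely new vendor: full onboarding verification required first.
        anomalies.append(f"unknown vendor '{vendor_id}': onboard and verify first")
        return anomalies
    new_accounts = [a for a in target_accounts if a not in known]
    if new_accounts:
        # Existing vendor but unfamiliar accounts: validate ownership first.
        anomalies.append(f"{len(new_accounts)} unverified account(s) for known vendor")
    if len(target_accounts) > 1:
        # Splitting one payment across several accounts is itself unusual.
        anomalies.append("payment split across multiple bank accounts")
    return anomalies

# Five never-before-seen accounts would never pass silently:
anomalies = vet_payment("acme-supplies", ["HK22-9001", "HK22-9002"])
```

In the Hong Kong case, either branch of this check would have forced a validation step before a single dollar moved.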
Attacks like this one will become more common in the coming years. If current trends are any indication, deepfake attacks will grow exponentially worse as criminals realize just how effective and efficient these AI-driven attacks are. With deepfakes now numbering in the millions, we can expect companies large and small to experience these types of attacks. The most important defense is a comprehensive platform that can scan for, flag, and remediate scam attempts before they succeed. Trustmi provides all of that and more, with an AI engine that does the work for you and leaves manual processes in the dust. The malicious uses of AI out in the world aren't going away anytime soon, but leveraging AI-powered technology like Trustmi can help organizations fight back.
To see Trustmi in action and find out how we can help your organization avoid a massive $25 million setback, get in touch today.