From Hollywood to Hacking: Understanding Deepfake Scams

Author: Yuman Chau, CISA, CISM, CRISC, CISSP, ACP, CHFI, PBA, PMP
Date Published: 18 September 2024
Read Time: 4 minutes

Case Background

Recently, a fraud case involving HK$200 million occurred in Hong Kong.1 The scammer posed as the chief financial officer (CFO) of a multinational enterprise’s overseas headquarters and instructed finance staff at the local branch to join a confidential meeting. Using deepfake technology, the fraudster fabricated a video likeness of the CFO and issued false investment instructions during the video call. Following these instructions, the staff transferred a significant sum of money to various accounts without proper authorization. Only when they later contacted their colleagues at the UK headquarters did it become clear that they had been defrauded, and they promptly reported the incident to the police.

Advanced Technology Scam

Deepfake scams have become a pervasive threat in the digital age. They use artificial intelligence (AI) to create convincing yet entirely fabricated videos and audio recordings. Deepfake scams have the potential to damage reputations, spread false information, and even manipulate financial markets. In response to this growing concern, individuals and organizations must implement robust strategies to prevent and mitigate the impact of deepfake scams.

Deepfake technology has taken image manipulation far beyond the familiar practice of photo editing. In the past, photos could be altered using Photoshop, leading many to question their reliability and usability as evidence for purposes such as forensics or missing person identification. Today, deepfake technology has blurred the line between truth and fiction to a far greater degree. Even though most ordinary people do not have access to the technology needed to create convincing deepfakes, distinguishing between what is real and what is fake remains a primary concern.

With the rapid advancement of technology and network speed, improved computer performance, and the widespread acceptance of online meetings following COVID-19, hackers have greater flexibility in using fraudulent techniques, including deepfake technology. As a result, employees are at an increased risk of being deceived, making defense against such tactics more challenging.


It is becoming increasingly difficult to distinguish deepfakes from genuine footage. Using the deepfake case in Hong Kong as an example, the staff of the targeted enterprise were entirely fooled by the deepfake CFO. This situation demonstrates the far-reaching consequences of deepfake technology, even for well-secured enterprises. It is therefore essential to understand how deepfakes function in order to defend against them properly.

In the Hong Kong case, the scammer’s approach involved sending a confidential meeting request and a link to join, alongside other social engineering tactics. When considering how to prevent such attacks in the future, can any clues be gleaned from the emails received from the scammer or from the meeting attendee’s ID? Is there a non-technical way to prevent this scam from happening?
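One simple clue is whether the invitation actually came from the organization’s own email domain. The following is a minimal sketch, assuming a hypothetical corporate domain (example-corp.com) and invented addresses for illustration, of how a mail gateway or help desk script could flag invitations sent from lookalike domains so that staff verify them out of band before joining any “confidential” meeting:

# Minimal sketch: flag meeting invitations whose sender domain does not
# exactly match the organization's legitimate domain. The domain names and
# addresses below are illustrative assumptions, not details from the case.

LEGITIMATE_DOMAIN = "example-corp.com"  # assumed corporate domain

def sender_domain(from_address: str) -> str:
    """Extract the domain part of an email address."""
    return from_address.rsplit("@", 1)[-1].lower().strip()

def is_suspicious(from_address: str) -> bool:
    """Return True when the sender's domain is not the legitimate one,
    which also catches lookalike domains such as 'example-c0rp.com'."""
    return sender_domain(from_address) != LEGITIMATE_DOMAIN

# Example: an invitation to a "confidential meeting" from a lookalike domain
print(is_suspicious("cfo@example-c0rp.com"))   # True -> escalate for verification
print(is_suspicious("cfo@example-corp.com"))   # False -> still verify out of band

Even when the domain matches, the safest response remains contacting the purported executive through a known, independent channel before acting.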

Non-technical Actions and Steps for Organizations

An effective strategy for prevention is to raise awareness and provide education about the existence and potential impact of deepfake scams. By teaching individuals how to spot suspicious content and encouraging critical thinking when consuming media, the risk of falling victim to deepfake scams can be significantly reduced.

It has not been determined whether the victims in the Hong Kong case followed established procedures for transferring money. In some enterprises, the chief executive officer (CEO), chief operating officer (COO), or chief financial officer (CFO) must provide at least one signature before any payment above HK$1,000 can be made.2 This requirement may seem strict, but it helps reduce the risk faced by the enterprise. Normally, regardless of the amount, approvals are required for money transfers to ensure compliance with permissions and internal auditing controls. Smaller amounts can be approved by fewer people, while larger amounts require more signatures. It is also important to verify signatures to prevent unauthorized use (there have been cases in which executives’ email accounts were hacked and funds were fraudulently authorized). If organizational policy requires approval by overseas headquarters, organizations should consider using an electronic signature solution such as DocuSign to safeguard the process against interruption caused by individual error.
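To make the tiered-approval idea concrete, the following is a minimal sketch in Python, assuming illustrative thresholds and signature counts (only the HK$1,000 tier comes from the example above; the other tiers are assumptions), of how a payment system could refuse a transfer that lacks the required number of verified signatures:

# Minimal sketch of a tiered payment-approval policy. The thresholds and
# signature counts below are illustrative assumptions; real limits would
# come from the organization's own internal control policy.

APPROVAL_TIERS = [
    (1_000, 1),        # above HK$1,000: at least one executive signature
    (100_000, 2),      # above HK$100,000: two signatures (assumed tier)
    (1_000_000, 3),    # above HK$1,000,000: three signatures (assumed tier)
]

def required_signatures(amount_hkd: float) -> int:
    """Return how many executive signatures a transfer of this size needs."""
    required = 0
    for threshold, signatures in APPROVAL_TIERS:
        if amount_hkd > threshold:
            required = signatures
    return required

def transfer_allowed(amount_hkd: float, verified_signatures: int) -> bool:
    """Allow the transfer only when enough verified signatures are present."""
    return verified_signatures >= required_signatures(amount_hkd)

# Example: a HK$200 million transfer backed by a single (possibly spoofed) approval
print(transfer_allowed(200_000_000, verified_signatures=1))  # False -> blocked

The exact tiers would be set by each organization’s internal control policy; the point is that the check is mechanical and does not depend on any employee recognizing a deepfake.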

From an internal control or risk management perspective, it is surprising that a single individual could access such a large amount of money without proper oversight. Even setting the scam aside, an individual with that level of unchecked access, whether deceived, compromised, or acting maliciously, could have sent the funds to various accounts in an unregulated manner. To prevent such risk, organizations should implement a straightforward internal control process in which at least two to three senior executives act as gatekeepers, ensuring that large fund transfers undergo scrutiny and avoiding potentially disastrous scenarios.
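A minimal sketch of such a gatekeeper control, assuming hypothetical role names and a hypothetical HK$1,000,000 threshold for what counts as a large transfer, is the maker-checker rule below, in which the initiator can never approve their own transfer and large transfers need at least two distinct senior executives:

# Minimal sketch of a maker-checker ("four-eyes") control. The gatekeeper
# roles and the large-transfer threshold are assumptions for illustration.

SENIOR_GATEKEEPERS = {"ceo", "coo", "cfo"}
LARGE_TRANSFER_HKD = 1_000_000

def release_allowed(initiator: str, approvers: set[str], amount_hkd: float) -> bool:
    """Release funds only when the approvers are distinct senior executives
    and do not include the person who initiated the transfer."""
    valid = (approvers & SENIOR_GATEKEEPERS) - {initiator}
    if amount_hkd >= LARGE_TRANSFER_HKD:
        return len(valid) >= 2
    return len(valid) >= 1

# Example: a clerk initiating a HK$200 million transfer
print(release_allowed("clerk", {"cfo"}, 200_000_000))          # False -> held
print(release_allowed("clerk", {"cfo", "coo"}, 200_000_000))   # True  -> released

Under such a rule, a single deceived employee cannot release funds alone, regardless of how convincing the deepfake is.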

Although attention in this incident has focused on the power of deepfake technology, the fraudsters ultimately succeeded because of loopholes in the organization’s control mechanisms.

Beyond understanding the deepfake technology that gives fraudsters more sophisticated scamming methods, organizations need to focus on establishing robust internal controls. By formulating clear internal policies, organizations can effectively mitigate catastrophic risk. Despite its importance, however, internal control policy is often neglected within organizations.

Conclusion

Addressing deepfake scams requires a multi-faceted approach that involves end-user education, cybersecurity, and collaboration with relevant stakeholders. By proactively implementing these strategies, individuals and organizations can better protect themselves from the damaging effects of advanced technology scams.

Endnotes

1 Magramo, K.; “British Engineering Giant Arup Revealed as $25 Million Deepfake Scam Victim,” CNN, 17 May 2024
2 Derived from the author’s personal experience working as an in-house IT manager in Hong Kong.

Yuman Chau

Is an ISO 27001 senior lead auditor and a lecturer in Cybersecurity and Enterprise Security, and holds an MSc in Information Security and Computer Crime.
