Deepfake Business Email Compromise (BEC)
As technology has advanced over the last couple of decades, criminals have continued to develop new methods to commit fraud and gain access to sensitive personal information. Deepfake Business Email Compromise (BEC) has become one of the more prevalent cyberattacks in recent years. The expansion of deepfake BEC attacks can be credited to advances in AI and machine learning, which make it easier for attackers to create realistic but fake communications that deceive their targets. These attacks are often harder to detect than traditional email attacks because the deepfake content mimics familiar voices or faces, making the requests seem more legitimate.
Deepfake BEC Explained
Deepfake Business Email Compromise (BEC) is a growing cybercrime trend in which criminals use deepfake technology (artificially generated or altered media, including audio and video) to impersonate company executives or other trusted individuals at targeted businesses. The tactic is used in email phishing attacks to mislead employees into transferring funds or sharing sensitive information. Deepfake BEC attacks are carried out when an attacker compromises or otherwise obtains access to an account through social engineering or computer intrusion, then uses it to conduct unauthorized transfers of funds (1).
The most commonly used keywords in deepfake BEC attacks are “Request” (25%), “Payment” (15%), and “Urgent” (10%). The consequences of these attacks can be severe, including significant financial losses and reputational damage. Organizations are responding by improving cybersecurity measures, training employees to recognize suspicious activity, and using AI-based tools to detect deepfake content. However, the sophistication of these attacks is constantly evolving, presenting an ongoing challenge for businesses worldwide. The attacks have also become more intricate than a typical phishing email; they can take numerous forms, such as manipulated video or audio impersonating a trusted employee. Audio and video deepfakes are used primarily to reinforce the fraudulent email and falsely authorize payments.
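As a rough illustration of how those keyword frequencies might feed a first-pass email filter, the sketch below scores a message by the BEC keywords it contains. The keyword weights are assumptions drawn from the percentages above, and a real system would combine such a score with sender authentication and deepfake-detection signals rather than act on keywords alone.

```python
# Minimal sketch of a keyword-based first-pass filter for suspicious
# BEC-style emails. The keywords and weights below are illustrative
# assumptions based on the frequencies cited above, not a vetted model.

# Approximate share of deepfake BEC emails containing each keyword.
KEYWORD_WEIGHTS = {
    "request": 0.25,
    "payment": 0.15,
    "urgent": 0.10,
}

def bec_keyword_score(subject: str, body: str) -> float:
    """Return a crude suspicion score: the summed weight of BEC keywords present."""
    text = f"{subject} {body}".lower()
    return sum(weight for kw, weight in KEYWORD_WEIGHTS.items() if kw in text)

if __name__ == "__main__":
    score = bec_keyword_score(
        subject="Urgent: wire payment request",
        body="Please process this payment today and keep it confidential.",
    )
    # A higher score only justifies closer scrutiny, never an automatic block.
    print(f"Suspicion score: {score:.2f}")
```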
Deepfake video is the use of digital software and machine-learning programs to swap faces in images and videos, creating messages and statements the depicted person never made. Since the vast majority of people around the world form their opinions from the information they see online, deepfakes can be missed by the untrained eye. Anyone with the right technology can create and distribute false information online and influence a mass audience. While AI can help detect deepfakes and false information, AI is also the technology used to create them.
Deepfake audio mimics the voice of a real person on the phone or in a video, and it is one of the more advanced forms of AI-enabled attack. The technology takes voice samples the attacker has obtained from speeches, presentations, and interviews, then renders whatever text the attacker writes into a synthetic voice that is almost identical to the real one.
Voice phishing (vishing) is the criminal practice of using social engineering over the telephone to gain access to, or trick people into providing, private, personal, or financial information, usually with the promise of financial reward. The cybercriminal makes a phone call or leaves a voice message purporting to be from a reputable company in order to induce individuals to reveal personal information, such as bank details and credit card numbers. Vishing uses the same techniques as phishing emails but is conducted over the phone instead (2).
This technology allows attackers to create highly realistic impersonations and to pair them with tactics that draw victims deeper into the fraud. Attackers will use any publicly available personal information to script the attack around the individual victim.
How Deepfake BEC Is Being Used for Cryptocurrency
Over the last decade, cryptocurrency has grown to astronomical heights, and with that growth have come new opportunities for fraud. Chainalysis reported that nearly $2.2 billion worth of cryptocurrency was stolen through hacks and fraud in 2024 (3). The average loss from deepfake BEC frauds involving cryptocurrency in 2024 was estimated at around $400,000 per company affected by the impersonation. These attacks are designed around fraudulent videos or audio of prominent figures in a company, such as a CEO, to trick users into sending funds to addresses controlled by the attacker. Fake endorsements and social-engineering or phishing attempts are popular ways attackers gain control of victims’ funds. Cryptocurrency is more vulnerable to deepfakes than regular currency: it is harder to trace, and the volatility of the markets pressures users into faster decisions than expected, leading to more mistakes and more fraud. Because communication in cryptocurrency markets happens almost entirely online, a deepfake video or audio attack has a greater chance of succeeding. Deepfake frauds featuring sitting presidents and wealthy entrepreneurs have circulated on the internet, tricking users into sending cryptocurrency to a single address with the expectation of doubling their funds.
A real-life example of a deepfake BEC attack happened in June of 2024, when the CEO of Ferrari, Benedetto Vigna, was impersonated. The fraud began with several WhatsApp messages sent to a Ferrari executive by an attacker mimicking the CEO. The messages described a confidential acquisition for Ferrari requiring a hedge-fund transaction and insisted the executive sign a Non-Disclosure Agreement (NDA). A phone call then followed from the same number, impersonating Mr. Vigna’s voice. The executive grew suspicious and asked the supposed CEO a specific question about a book recommendation Mr. Vigna had recently made to him. This small verification step scared the attacker off, ending the BEC attempt and potentially saving the company millions of dollars (4). The executive’s intuition illustrates one of the many ways employees can help prevent BEC attacks.
Some helpful measures companies and employees can take to prevent BEC attacks:
- Adopting a “Zero Trust” security model, requiring verification at every access point and assuming potential threats from any source
- Implementing strong email authentication protocols such as DMARC, SPF, and DKIM (see the sketch after this list)
- Training employees to identify suspicious emails and to verify requests through alternative channels
- Utilizing AI-based deepfake detection tools
- Enabling multi-factor authentication
- Conducting periodic assessments of your security posture to identify vulnerabilities and address potential risks related to deepfake attacks
- Prioritizing training for employees in critical roles, such as finance or accounting, who are more likely to be targeted by BEC attacks
- Developing a comprehensive plan to respond effectively to a deepfake attack if and when one happens, including containment and remediation steps
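As a rough illustration of the email-authentication item above, the sketch below checks whether a sending domain publishes SPF and DMARC policies in DNS. It is a minimal sketch rather than a complete verifier: it assumes the third-party dnspython package, and it omits DKIM, which requires knowing the sender’s selector.

```python
# Minimal sketch: check whether a sending domain publishes SPF and DMARC
# records, two of the email-authentication protocols listed above.
# Assumes the third-party dnspython package (pip install dnspython).
import dns.exception
import dns.resolver

def get_txt_records(name: str) -> list[str]:
    """Return all TXT records for a DNS name, or [] on lookup failure."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except dns.exception.DNSException:
        return []
    return [b"".join(rdata.strings).decode("utf-8", "replace") for rdata in answers]

def check_email_auth(domain: str) -> dict[str, bool]:
    """Report whether the domain publishes SPF and DMARC policies."""
    spf = any(txt.startswith("v=spf1") for txt in get_txt_records(domain))
    dmarc = any(
        txt.startswith("v=DMARC1")
        for txt in get_txt_records(f"_dmarc.{domain}")
    )
    return {"spf": spf, "dmarc": dmarc}

if __name__ == "__main__":
    # example.com is a placeholder; substitute the sending domain in question.
    print(check_email_auth("example.com"))
```

Publishing these records does not by itself prove a message is genuine; it lets receiving servers reject mail that fails authentication, which raises the cost of simple spoofing.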
FinCEN identified the following red flag indicators to help financial institutions detect, prevent, and report potential suspicious activity related to the use of GenAI tools for illicit purposes (5):
- A customer’s photo is internally inconsistent or is inconsistent with their other identifying information.
- A customer presents multiple identity documents that are inconsistent with each other.
- A customer attempts to change communication methods during a live verification check due to excessive or suspicious technological glitches during remote verification of their identity.
- A customer declines to use multifactor authentication to verify their identity.
- A reverse-image lookup or open-source search of an identity photo matches an image in an online gallery of GenAI-produced faces.
- A customer’s photo or video is flagged by commercial or open-source deepfake detection software.
- GenAI-detection software flags the potential use of GenAI text in a customer’s profile or responses to prompts.
- A customer’s geographic or device data is inconsistent with the customer’s identity documents.
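One way an institution might operationalize these indicators is to record each flag during onboarding and escalate to a human analyst when any are present. The sketch below is a minimal illustration under that assumption; the field names, equal weighting, and threshold are hypothetical choices, not FinCEN guidance.

```python
# Minimal sketch of aggregating FinCEN's red-flag indicators into a
# review decision during customer onboarding. Fields, weighting (one
# point per flag), and the threshold are illustrative assumptions.
from dataclasses import dataclass, fields

@dataclass
class RedFlags:
    photo_inconsistent: bool = False      # photo conflicts with other ID info
    documents_inconsistent: bool = False  # multiple IDs conflict with each other
    verification_glitches: bool = False   # suspicious glitches during live checks
    declines_mfa: bool = False            # refuses multifactor authentication
    reverse_image_match: bool = False     # photo matches known GenAI face galleries
    deepfake_tool_flag: bool = False      # flagged by deepfake-detection software
    genai_text_flag: bool = False         # profile text flagged as GenAI-generated
    geo_device_mismatch: bool = False     # geography/device conflicts with ID docs

def needs_manual_review(flags: RedFlags, threshold: int = 1) -> bool:
    """Escalate to a human analyst when at least `threshold` indicators are present."""
    count = sum(getattr(flags, f.name) for f in fields(flags))
    return count >= threshold

if __name__ == "__main__":
    applicant = RedFlags(declines_mfa=True, geo_device_mismatch=True)
    print(needs_manual_review(applicant))  # True: two indicators present
```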
HSI’s mission is to protect the United States by investigating global crimes that impact our local communities. We have more than 10,000 employees stationed in over 235 U.S. cities and more than 50 countries worldwide, giving us an unparalleled ability to stop crime before it reaches our communities. HSI encourages the public to report suspicious activity through its Tip Line. You may remain anonymous.