It’s Surprisingly Easy to Create Fake Accounts with Deepfake Injection Attacks

Without countermeasures, it takes just a few simple tools to bypass security checks and register a synthetic identity through a mobile app


This public version of our article is partially redacted to avoid educating aspiring fraudsters on how deepfakes can be created and used to conduct attacks on mobile apps. The full version of the article and a detailed explainer video are available upon request via our contact web page.

Digital onboarding has revolutionized the way customers sign up for new services, whether it’s opening a bank account, joining a dating app, or creating a social media profile. But the process also introduces risks of fraud and crime when the digital identity of a new customer can’t be trusted.

A poll conducted during a recent ID R&D webinar found that while 94% of respondents say threats from generative AI are on their radar, only 37% consider themselves adequately prepared to address deepfake fraud. This is troubling because the proportion of identity fraud committed using deepfakes has doubled, according to recent statistics shared by Sumsub. Through “injection attacks”, bad actors can stream deepfake facial imagery to bypass liveness checks, then create and exploit fraudulent accounts without consequences. As today’s deepfakes grow more sophisticated, our digital identities become harder to trust where no countermeasures are in place.

Banks are at particular risk, as the technique can enable fraudsters to conduct scalable attacks and establish an unlimited number of fraudulent accounts without detection. Dating and social media apps face the danger of exposing their legitimate users to online predators.

The tools for conducting an attack are readily available

To conduct a deepfake injection attack, fraudsters need only a set of basic software tools that are surprisingly accessible, simple to use, and alarmingly effective. [This section is partially redacted. Contact ID R&D for more information.]

Demonstrations of presentation and injection attack detection 

With these tools in hand, the steps to register a synthetic identity are straightforward: 

  1. A fraudster prepares the data for one or more synthetic identities, complete with deepfake facial imagery and biographic identity data. 
  2. The deepfake files are streamed or presented from a virtual camera, which can make it appear as though the image is live and coming from a real camera.
  3. The mobile app targeted for fraud is run within a special software program that is shared widely among fraud networks. 
  4. The digital onboarding process is initiated. 
  5. For selfie capture, the deepfake facial imagery is streamed as if it were live.

Methodologies for video injection attack detection

Detecting video injection attacks involves a series of steps that are crucial to ensuring the security and integrity of biometric image and video data: creating a comprehensive attack tree, implementing detectors that cover all of its attack vectors, evaluating potential security loopholes, and setting up a continuous improvement process for the attack tree and its associated mitigation measures. Below are methodologies for detecting some widely used types of injection attacks. For more details about injection attacks, contact ID R&D.
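As a concrete illustration of the first two steps, the sketch below models an attack tree in code, mapping each leaf vector to the detectors that cover it and auditing for uncovered vectors. The node names and detector labels are illustrative assumptions, not ID R&D’s actual taxonomy.

```typescript
// Illustrative attack-tree model: each node is an attack vector, and leaves
// list the detectors (mitigations) that cover them. Names are hypothetical.
interface AttackNode {
  vector: string;
  detectors: string[];     // mitigations covering this vector
  children?: AttackNode[]; // sub-vectors refining this vector
}

const injectionTree: AttackNode = {
  vector: "video injection",
  detectors: [],
  children: [
    { vector: "Javascript tampering", detectors: ["code integrity check", "obfuscation"] },
    { vector: "virtual camera", detectors: ["device heuristics", "AI signal analysis"] },
    { vector: "hardware capturer", detectors: ["capability heuristics", "AI signal analysis"] },
  ],
};

// Audit step: list leaf vectors that no detector covers (security loopholes).
function uncovered(node: AttackNode): string[] {
  if (!node.children) return node.detectors.length === 0 ? [node.vector] : [];
  return node.children.flatMap(uncovered);
}
```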

Javascript injection detection. One potential attack vector for video injection is the Javascript used in the biometric capture process. Mitigation methodologies must be comprehensive, covering different browsers and plugins. Techniques include obfuscating the code to make tampering more difficult and using specific libraries to binary-encode the Javascript.
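Beyond hardening the code itself, a capture script can also check at runtime whether the browser’s native media APIs have been overridden, a common Javascript-injection tactic. Below is a minimal sketch, not ID R&D’s implementation: native browser functions stringify to “[native code]”, so a monkey-patched getUserMedia is easy to spot, though a determined attacker can spoof this check as well.

```typescript
// Minimal runtime integrity check: native browser functions stringify to
// "function name() { [native code] }". If getUserMedia or enumerateDevices
// has been replaced by injected Javascript, the pattern will not match.
// This is one weak, spoofable signal, not a complete defense on its own.
function looksNative(fn: unknown): boolean {
  return typeof fn === "function" &&
    /\[native code\]/.test(Function.prototype.toString.call(fn));
}

function captureApiTampered(): boolean {
  const md = navigator.mediaDevices;
  return !md || !looksNative(md.getUserMedia) || !looksNative(md.enumerateDevices);
}
```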

Virtual camera detection. Virtual cameras appear as additional cameras in the operating system. They can provide virtual backgrounds, video filters, and other manipulations, which can be exploited for video injection attacks. Software- and AI-based approaches can be employed to detect virtual cameras. These techniques are based on identifying discrepancies between virtual and physical cameras.
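One simple software-level heuristic, shown below as a browser-side sketch rather than a production detector, is to enumerate video inputs and flag device labels associated with popular virtual camera products. Labels are only populated after the user grants camera permission, and an attacker can rename a device, so real systems treat this as one weak signal among many.

```typescript
// Browser-side heuristic: flag video inputs whose labels match well-known
// virtual camera products. The hint list is illustrative, not exhaustive.
const VIRTUAL_CAMERA_HINTS = [/obs/i, /manycam/i, /virtual/i, /xsplit/i];

async function findSuspectCameras(): Promise<string[]> {
  // Labels are empty strings until the user has granted camera permission.
  const devices = await navigator.mediaDevices.enumerateDevices();
  return devices
    .filter((d) => d.kind === "videoinput" &&
                   VIRTUAL_CAMERA_HINTS.some((re) => re.test(d.label)))
    .map((d) => d.label);
}
```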

Hardware video capturer detection. Hardware video capturers can also be used for video injection attacks. They operate as secondary hardware cameras in the operating system, allowing video to be streamed from another device via USB as if it were coming from a real camera. Approaches similar to those used for virtual camera detection can be employed here.
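As one illustration of such an approach, the sketch below inspects a live track’s reported capabilities for traits typical of HDMI-to-USB capture dongles, which often advertise a single fixed broadcast resolution and no focus control, unlike phone and laptop cameras. The thresholds are assumptions for illustration; production detectors combine many such signals.

```typescript
// Heuristic sketch: capture dongles often report one fixed resolution and
// no focus control, whereas built-in cameras usually offer a range of
// resolutions and focus modes. Purely illustrative thresholds.
function looksLikeCaptureDongle(stream: MediaStream): boolean {
  const track = stream.getVideoTracks()[0];
  // getCapabilities() is not implemented in every browser.
  if (!track || typeof track.getCapabilities !== "function") return false;
  const caps = track.getCapabilities();
  const w = caps.width;
  const fixedWidth = !!w && w.min !== undefined && w.min === w.max;
  const noFocus = !("focusMode" in caps);
  return fixedWidth && noFocus;
}
```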

Injection attack detection is essential to stopping deepfake attacks

Although many mobile apps include presentation attack detection (PAD) during selfie registration to prevent users from presenting printed copies or screen replays, PAD alone cannot stop every deepfake attack. Injection attack detection goes beyond PAD to detect not only deepfake content used in a presentation attack, but also the use of virtual cameras and other advanced attack vectors. It helps businesses significantly mitigate the risk of fraudulent account creation, ensuring secure digital onboarding and a safe experience for their customers. As deepfake technology becomes more sophisticated, the urgency of implementing such countermeasures will only increase.

Deepfake attacks represent a new threat to digital onboarding, and understanding these risks is the first step toward mitigating them. In the AI-powered arms race between security and fraud, remaining vigilant and proactive in utilizing the latest countermeasures is not merely beneficial – it’s essential.

The increasing capabilities of artificial intelligence have significantly bolstered our ability to detect more complex injection attacks. AI-based algorithms can analyze artifacts in the video signal and identify subtle differences among the signal paths within the operating system, whether the signal comes from a virtual camera, an injected pre-saved video, or a real camera. High accuracy in detecting these differences can be achieved with a robust AI process and ample labeled data.
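To make the shape of such a pipeline concrete, here is a minimal sketch of frame scoring with a pre-trained binary classifier exported to ONNX and run in the browser via onnxruntime-web. The model file, the input and output names, and the 224x224 input shape are all hypothetical stand-ins for whatever trained detector a vendor actually deploys.

```typescript
// Sketch: score a preprocessed video frame with a hypothetical
// live-vs-injected classifier. "detector.onnx", the "image" input, the
// "score" output, and the 1x3x224x224 shape are illustrative assumptions.
import * as ort from "onnxruntime-web";

async function loadDetector(): Promise<ort.InferenceSession> {
  return ort.InferenceSession.create("detector.onnx");
}

async function scoreFrame(
  session: ort.InferenceSession,
  pixels: Float32Array // normalized RGB pixels, length 3 * 224 * 224
): Promise<number> {
  const input = new ort.Tensor("float32", pixels, [1, 3, 224, 224]);
  const outputs = await session.run({ image: input });
  // Higher score = more likely injected; the threshold is tuned on labeled data.
  return outputs.score.data[0] as number;
}
```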