IDLive® Face Plus: Frictionless Liveness Detection, Deepfake Prevention, and Injection Attack Protection

Comprehensive Liveness Detection

IDLive Face Plus complements IDLive Face presentation attack detection with injection attack detection, providing comprehensive protection from deepfakes and other types of fraudulent digital imagery.

  • Detect injection attacks that use virtual and external cameras
  • Prevent browser JavaScript code modifications on both desktops and mobile devices
  • Prevent man-in-the-middle replay attacks
  • Protect from emulators, cloning apps, and other software used for fraud
  • Prevent a variety of attack content
    – Images, recorded video, and live streaming
    – Deepfakes and digital renderings
    – Face swaps and morphs
  • Improve presentation attack detection performance

Learn more about IDLive Face Plus

How do deepfakes threaten biometric security?

Facial recognition security relies on presentation attack detection (PAD) to ensure that a biometric selfie is not actually a fraudster presenting a non-live facial image to the camera, such as a printed copy, screen replay, or 3D mask. Injection attacks pose a different vulnerability, where hardware and software hacks are used to bypass a proper capture process. Without countermeasures, fraudsters can emulate camera capture with non-live digital facial imagery in a way that can defeat certain liveness detection measures.  Deepfakes can be used to create synthetic identities that fraudsters use to open fraudulent accounts or gain unauthorized access to their victims’ accounts.

IDLive Face Plus: Stop Fraud from Deepfakes by Detecting Injection Attacks

As presentation attack detection has become more effective, fraudsters are turning to injection attacks to defeat biometric security mechanisms with hyper-realistic deepfakes. IDLive Face Plus combines award-winning presentation attack detection with a unique approach to injection attack detection to prevent deepfakes and other fraudulent digital content. Instead of focusing on the content of digital fakes, it helps shut down the channels used to deliver it, such as virtual cameras in desktop browsers and sophisticated hardware attacks.  Desktop and mobile browsers as well as mobile apps are protected in a way that is completely transparent to the user.

Complete OS Coverage
Addresses threats on desktops and mobile devices with coverage for iOS, Android, Windows, and macOS.
The techniques require no interaction whatsoever with the user and add no friction to the user experience.
Architecture Flexibility
Different configurations are available based on the desired footprint on the mobile device.
IDLive Face Plus has been demonstrated to be extremely effective in detecting a variety of attacks, with a low false-positive rate.
Broad Protection
Prevents the most critical attack types, including virtual cameras and browser JavaScript modifications.
AI-Powered Detection
IDLive Face Plus leverages deep learning to detect injection attacks that humans can’t.

Want to learn more about injection attack detection for comprehensive liveness?

In real-world operation, supporting millions of banking transactions a month, one of our customers migrated from an active face liveness technique to ID R&D passive liveness, and completion rates increased from 60% to more than 95%. This was in one of the more challenging production environments, with a very wide range of operating parameters.

On the back of these improved liveness success rates (faster, more accurate, much higher completion with far fewer attempts), the customer expanded its use of the system and its long-term business dependency on ID R&D liveness across a much broader set of digital engagements.

Large global partner

Passive Facial Liveness Enhanced with Injection Attack Detection

Most of today’s facial liveness technologies are “active,” requiring users to blink, turn their heads, or move their phone back and forth. This creates three issues: fraudsters can trick the system with a photo with cut-out eye holes, a mask, or a video; challenge-response techniques put attackers on alert that they are being checked; and active methods create friction that slows the authentication process, increases abandonment rates, and diminishes the overall user experience.

The team at ID R&D has worked relentlessly to ensure you don’t have to sacrifice usability for security. Customers and partners who have made the switch from active to passive facial liveness report significant reductions in abandonment, lower false rejections of real users, and highly accurate presentation attack detection.

The idea of liveness relative to biometrics dates back to the early 2000s. Prior to AI-driven liveness detection, companies relied on humans to perform liveness checks. For example, during remote onboarding, customers would be required to show their photo ID on a video call to validate the physical presence of the person in the ID. Of course, this process was slow, costly, and error prone.

Friction in Active Liveness

The technology behind liveness detection is based on the recognition of physiological information as a sign of life. Historically, liveness algorithms have been trained to identify head movements, dilation of a subject’s pupils, changes in expression, and other physical responses.

The first generation of facial liveness technology is referred to as “active.” Active liveness detection relies on the user’s movements in response to challenges such as nodding, blinking, smiling, or correctly positioning one’s face in a frame. While the technology can be effective at detecting a spoof, it introduces friction into a verification process valued largely for its ability to remove friction, and it has become less secure as fraudsters have learned how to fool these systems. The pursuit of an easier solution, facilitated by increased access to training data for better machine learning, led to a new generation of “passive” liveness detection.

Passive liveness is fundamentally different from active in that it requires no action by the user. Active liveness, by contrast, is impractical for use cases with frequent logins. The friction also has a negative impact on new customer acquisition, with companies reporting abandonment rates as high as 50% when using active liveness.

However, there are additional differences to be aware of when comparing the two approaches.

Passive Liveness vs. Active Liveness

User Experience
  • Passive: Requires no action by the user, resulting in less friction and lower user abandonment during processes such as remote customer onboarding.
  • Active: Requires users to respond to “challenges” that add time and effort to the process.

Software Requirements
  • Passive: Some approaches require installing a software component on the device; others do not.
  • Active: Usually requires installing software on the device.

Image Analysis
  • Passive: Varies by approach; analysis may be based on a single image with processing in near real time.
  • Active: Requires analysis of multiple images or frames of video to detect movement.

Bandwidth Requirements
  • Passive: May use the same selfie taken for facial recognition, resulting in no incremental traffic to the server.
  • Active: May require additional data to be exchanged between the user’s device and a server-based solution, which is a problem in geographies where bandwidth is scarce or expensive.

Speed
  • Passive: Depends on the method used; near real time is possible.
  • Active: Always increases user effort, resulting in a longer liveness check.

Robustness to Spoofing
  • Passive: Passive methods have the advantage of “security through obscurity.” They are generally more immune to spoofing attacks because the fraudster doesn’t have clues as to how to defeat the liveness check. In fact, they won’t even know it’s happening.
  • Active: Active systems provide fraudsters with instructions that can be “reverse engineered” to attack and defeat the liveness check. Known techniques to break them include using a simple 2D mask with cut-out eyes or animation software to mimic head movements, smiling, and blinking.

Proven Compliant with the ISO 30107-3 Standard for Robustness

A few passive liveness solutions have passed iBeta Level 1 and Level 2 testing and are compliant with ISO 30107-3. ID R&D has passed both Level 1 and Level 2 testing with a single-image approach to liveness detection. Note that there is no difference between iBeta “certified” and “compliant.”

A variety of techniques are used to perform a passive liveness check, ranging from analyzing a selfie image, to capturing a short video, to flashing lights on the subject. These approaches have different impacts on the user experience and processing, as outlined below.

Flashing lights on the user
  Advantage: The user does not need to respond to any challenges or move.
  Drawbacks:
  • The device must be held steady
  • The process takes time
  • It fails in bright sunlight
  • Users may not tolerate the strobing lights

Capturing a short video
  Advantage: The user does not need to respond to any challenges or move.
  Drawbacks:
  • The video takes time to capture
  • Software may need to be downloaded to the device
  • Passive observation relies on micro-mimics and small movements, which may be hard to capture

Examining a single selfie image
  Advantages: Uses the same selfie used for facial matching and requires no extra effort by the user. Because only one image is needed, bandwidth requirements are low and processing is fast. No additional software download is required on the user side for image capture.
  Drawback:
  • Requires a server-side component

Using a hardware-assisted approach (e.g. depth measurement)
  Advantages: Requires no extra effort by the user. Because only a few images are needed, the bandwidth requirements are acceptable in most cases.
  Drawbacks:
  • Requires expensive client-side hardware
  • Requires a server-side component
  • Uses more CPU power

How can facial liveness be determined based on a single image? It starts with the ability to use the same selfie image that is used by the biometric and facial recognition system. As such, no special hardware or additional software is needed for image collection. If the quality of the selfie is good enough for facial recognition, it can be used for the liveness check.

The other enabling factor is use of Deep Neural Networks that process the selfie image to detect artifacts that help distinguish between a photo of a live person and a presentation attack. ID R&D has built a unique DNN-based approach to liveness detection that underpins the single image technique.

The process takes less than one second and looks like this:

1. The user takes a selfie.
2. The facial recognition system uses the selfie image to determine a match.
3. The same selfie image is passed to the liveness check, where Deep Neural Networks and proprietary algorithms analyze it for liveness.

Each of the neural networks examines a different element of the image to detect artifacts that help distinguish between a photo of a live person and a presentation attack. Knowing what the neural networks should examine and how to combine the neural networks is proprietary information. This is the “magic”!

The software fuses the output of these neural networks to produce a liveness score.
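The fusion step above can be sketched as follows. This is a minimal illustration, not ID R&D's implementation: the analyzer functions, their weights, and the decision threshold are all hypothetical stand-ins, since the actual networks and fusion logic are proprietary.

```python
# Illustrative fusion of per-network liveness scores into one decision.
# The analyzer functions, weights, and threshold are hypothetical
# stand-ins; the real networks and fusion logic are proprietary.

def quality_score(image: bytes) -> float:
    """Stand-in for a DNN scoring image-quality artifacts (e.g. moire)."""
    return 0.9  # dummy score for illustration

def context_score(image: bytes) -> float:
    """Stand-in for a DNN scoring contextual cues (e.g. screen bezels)."""
    return 0.8  # dummy score for illustration

def fuse_liveness(image: bytes, weights=(0.6, 0.4), threshold=0.5):
    """Weighted fusion of the per-network scores into a liveness decision."""
    scores = (quality_score(image), context_score(image))
    fused = sum(w * s for w, s in zip(weights, scores))
    return fused, fused >= threshold

score, is_live = fuse_liveness(b"selfie-bytes")  # fused score and decision
```

A real system would replace the stand-in functions with trained model inference and calibrate the weights and threshold against labeled attack data.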

A presentation attack involves “presentation” of non-live facial imagery to the camera during biometric capture, instead of a live selfie. An injection attack is conducted by “injecting” digital data downstream from the camera by way of hardware and/or software hacks. A digital representation of a live capture can spoof liveness detection if the system does not know that the imagery did not come from the camera.

Don't deploy face recognition for identity verification or authentication without passive facial liveness that protects against presentation attacks and injection attacks. IDLive Face Plus can be integrated with any face recognition software to help prevent deepfakes and other types of fraudulent digital imagery.
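As a rough sketch of such an integration, a verification flow might gate face matching on the liveness score. The function names, scores, and thresholds below are hypothetical stand-ins, not an actual SDK API:

```python
# Hypothetical integration sketch: gate face matching on a passive
# liveness check so spoofed or injected imagery never reaches the
# recognizer. Function names, scores, and thresholds are assumptions,
# not an actual SDK API.

def check_liveness(selfie: bytes) -> float:
    """Stand-in for a vendor liveness call returning a score in [0, 1]."""
    return 0.97  # dummy score for illustration

def match_face(selfie: bytes, enrolled: bytes) -> float:
    """Stand-in for a face-matching call returning a similarity score."""
    return 0.99  # dummy score for illustration

def verify(selfie: bytes, enrolled: bytes,
           liveness_threshold: float = 0.5,
           match_threshold: float = 0.8) -> bool:
    """Accept only selfies that pass liveness before matching."""
    if check_liveness(selfie) < liveness_threshold:
        return False  # reject spoofs and injected imagery up front
    return match_face(selfie, enrolled) >= match_threshold
```

Running the liveness check first means a deepfake is rejected before the face recognizer ever scores it, regardless of how closely it resembles the enrolled user.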