The use of facial recognition for authentication is becoming increasingly prevalent — especially on mobile devices. But between easy access to images on social media and advances in digital and print image resolution, biometric systems have security gaps that fraudsters can exploit to successfully spoof a facial recognition system. These spoofs, called presentation attacks, include printed photos, cutout masks, digital and video replay attacks, and 3D masks.
IDLive® Face
Passive Facial Liveness Detection


Fight Face Recognition Spoofing
What is Facial Liveness Detection?
Facial liveness has emerged as a way to fight fraud and ensure the integrity of face biometrics as a means of authentication or identity verification. Whereas face recognition can accurately answer the question “Is this the right person?”, it doesn’t answer the question “Is this a real person?” That is the role of liveness detection.
Facial liveness detection works with facial recognition to determine if a biometric sample is being captured from a living subject who is present at the point of capture. As such, it stops fraudsters from using presentation attacks to spoof a facial recognition system.
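To make that division of labor concrete, here is a minimal sketch of how a liveness score can gate a face-match decision in an authentication flow; the function names and thresholds are illustrative assumptions, not part of any specific product API.

```python
# Illustrative decision logic only: a liveness score gating a face-match result.
# The function parameters and thresholds below are assumptions for this sketch,
# not part of any particular vendor's API.
def authenticate(selfie, enrolled_template,
                 match_fn, liveness_fn,
                 match_threshold: float = 0.8,
                 liveness_threshold: float = 0.5) -> bool:
    """Accept only if the selfie matches the enrolled user AND appears to be live."""
    is_right_person = match_fn(selfie, enrolled_template) >= match_threshold  # "Is this the right person?"
    is_real_person = liveness_fn(selfie) >= liveness_threshold                # "Is this a real person?"
    return is_right_person and is_real_person
```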
IDLive Face: A New Generation of Liveness Detection for Face Recognition
The benefit of liveness is improved security and fraud detection. The benefit of ID R&D’s passive face liveness is that it is both secure and easy to use. Whereas other products add extra steps and time to the liveness check, IDLive Face is imperceptible to customers who don’t even know it’s happening. And, the software gives no clues to fraudsters on how to beat it. Because passive liveness is less confusing for users, abandonment rates are greatly reduced and less human intervention is needed.
IDLive Face is the world’s first passive facial liveness product to identify spoofing attempts based on the same selfie used for facial matching, with NO user participation: no smiling, blinking, head-turning, lights flashing, or moving the camera. This unique single image liveness detection approach is fast, accurate, and doesn’t require any capture-side software. The product uses patent-pending methodologies and a Deep Neural Network (DNN) based approach that examines different elements of the image to detect artifacts that help distinguish between a photo of a live person and a presentation attack. Read the single image facial liveness tech brief.
Make the switch from Active to Passive Facial Liveness
Understanding passive vs. active facial liveness is crucial. Most of today’s facial liveness technologies are “active”, requiring users to blink, turn their heads, or move their phone back and forth. This results in three issues: First, fraudsters can present a photo with eye holes cut out, use a mask, or show a video to trick the system. Second, challenge-response techniques put attackers on alert that they are being checked. And lastly, active methods create friction that slows the authentication process, increases abandonment rates, and diminishes the overall user experience.
The team at ID R&D has worked relentlessly to ensure you don’t have to sacrifice usability for security. Customers and partners who have made the switch from active to passive facial liveness report a significant reduction in abandonment, lower false rejections of real users, and highly accurate presentation attack detection.
The idea of liveness relative to biometrics dates back to the early 2000s. Prior to AI-driven liveness detection, companies relied on humans to perform liveness checks. For example, during remote onboarding, customers would be required to show their photo ID on a video call to validate the physical presence of the person in the ID. Of course, this process was slow, costly, and error-prone.
The first generation of facial liveness technology is referred to as “active.” Active liveness detection relies on the user’s movements in response to challenges such as nodding, blinking, smiling, or correctly positioning one’s face in a frame. While the technology can be effective at detecting a spoof, it introduces friction into a verification process whose main appeal was removing friction, and it has become less secure as fraudsters have learned how to fool these systems. The pursuit of an easier solution, aided by growing access to training data for better machine learning, led to a new generation of “passive” liveness detection.
Passive liveness is fundamentally different from active in that it requires no action by the user. Because active liveness demands that effort on every attempt, it is impractical for use cases with frequent login. The friction also has a negative impact on new customer acquisition, with companies reporting abandonment rates as high as 50% when using active liveness.
However, there are additional differences to be aware of when comparing the two approaches.
User Experience
- Passive: Requires no action by the user, which results in less friction and lower user abandonment during processes such as remote customer onboarding.
- Active: Requires users to respond to “challenges” that add time and effort to the process.

Software Requirements
- Passive: Some approaches require installing a software component on the device; others do not.
- Active: Active solutions usually require installing software on the device.

Image Analysis
- Passive: Varies depending on the passive approach. Analysis may be based on a single image with processing in near real time.
- Active: Requires analysis of multiple images or frames of video to detect movement.

Bandwidth Requirements
- Passive: May use the same selfie taken for facial recognition, resulting in no incremental traffic to the server.
- Active: May require additional data to be exchanged between the user’s device and a server-based solution. This is a problem in geographies where bandwidth is scarce or expensive.

Speed
- Passive: The speed of a passive liveness check depends on the method used. Near real time is possible.
- Active: Active liveness always increases user effort, resulting in a longer liveness check.

Robustness to Spoofing
- Passive: Passive methods have the advantage of “security through obscurity.” They are generally more immune to spoofing attacks because the fraudster doesn’t have clues as to how to defeat the liveness check. In fact, they won’t even know it’s happening.
- Active: Active systems provide fraudsters with instructions that can be “reverse engineered” to attack and defeat the liveness check. Known techniques to break them include using a simple 2D mask with cutout eyes or animation software to mimic head movements, smiling, and blinking.

Proven Compliance with the ISO 30107-3 Standard for Robustness
- Passive: A few passive liveness solutions have passed iBeta Level 1 and Level 2 testing and are compliant with ISO 30107-3. Additionally, ID R&D has passed Level 1 and Level 2 testing with a single image approach to liveness detection.
- Active: Multiple solutions are iBeta Level 1 and Level 2 compliant or conformant. Note that there is no difference between iBeta “certified” and “compliant” (see https://www.ibeta.com/iso-30107-3-presentation-attack-detection-confirmation-letters/).
A variety of techniques are used to perform a passive liveness check, ranging from analyzing a selfie image, to capturing a video, to flashing lights on the subject. These passive liveness approaches have different impacts on the user experience and processing, as outlined below.
Flashing lights on the user
Pros:
- The user does not need to respond to any challenges or move.
Cons:
- The device must be held steady
- The process takes time
- It fails in bright sunlight
- Users may not tolerate the strobing lights

Capturing a short video
Pros:
- The user does not need to respond to any challenges or move.
Cons:
- The video takes time to capture
- Software may need to be downloaded to the device
- Passive observation relies on micro-mimics and small movements, which may be hard to capture

Examining a single selfie image
Pros:
- Uses the same selfie used for facial matching and requires no extra effort by the user.
- Because only one image is needed, bandwidth requirements are low and processing is fast.
- No additional software download is required on the user side for image capture.
Cons:
- Requires a server-side component

Using a hardware-assisted approach (e.g. depth measuring)
Pros:
- Requires no extra effort by the user.
- Because only a few images are needed, the requirements on bandwidth are acceptable in most cases.
Cons:
- Requires expensive client-side hardware
- Requires a server-side component
- Uses more CPU power
How can facial liveness be determined based on a single image? It starts with the ability to use the same selfie image that is used by the biometric and facial recognition system. As such, no special hardware or additional software is needed for image collection. If the quality of the selfie is good enough for facial recognition, it can be used for the liveness check.
The other enabling factor is use of Deep Neural Networks that process the selfie image to detect artifacts that help distinguish between a photo of a live person and a presentation attack. ID R&D has built a unique DNN-based approach to liveness detection that underpins the single image technique.
The process takes less than one second.
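As a rough illustration of the general technique (and not ID R&D’s proprietary, patent-pending model), a single-image liveness check can be framed as a binary image classifier: a convolutional network scores one selfie as “live” versus “presentation attack.” Below is a minimal PyTorch sketch, assuming a generic ResNet backbone.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

# Illustrative only: a generic CNN binary classifier for "live" vs. "presentation attack".
# The production model, training data, and detected artifacts are proprietary; this sketch
# only shows the overall shape of single-image liveness scoring.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

class LivenessNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Reuse a standard image backbone and replace the classifier head
        # with a single logit: P(live) after a sigmoid.
        self.backbone = models.resnet18(weights=None)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 1)

    def forward(self, x):
        return torch.sigmoid(self.backbone(x))

def liveness_score(model: LivenessNet, selfie_path: str) -> float:
    """Return a score in [0, 1]; higher means more likely a live subject."""
    image = Image.open(selfie_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)  # shape: (1, 3, 224, 224)
    with torch.no_grad():
        return model(batch).item()
```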
IDLive Face is the only single-frame passive liveness product to achieve iBeta Level 1 and Level 2 ISO/IEC 30107-3 presentation attack detection (PAD) compliance with a perfect score.
Passive Facial Liveness Use Cases
ID R&D passive liveness is the choice of leading identity and access management vendors, onboarding and KYC providers, access control companies, IoT developers, and enterprise customers worldwide. IDLive Face is deployed in 50 countries, supporting millions of liveness checks each month. The product is used in a range of use cases across financial services, government, travel, healthcare, building management, and more.
IDLive Face is packaged as an SDK and as a Docker container with a simple RESTful API. Additionally, a mobile SDK (beta) is now offered for FIDO-compliant, on-device deployment. IDLive Face Mobile SDK runs on Android and iOS.
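For illustration, the snippet below sketches how a server-side liveness check behind a simple REST interface might be called from an application; the endpoint path, request fields, and response format are hypothetical placeholders, not the documented IDLive Face API.

```python
import base64
import requests

# Hypothetical endpoint exposed by a liveness-detection container running locally.
# The path, payload schema, and response fields are illustrative assumptions only,
# not the actual IDLive Face REST contract.
LIVENESS_URL = "http://localhost:8080/check_liveness"

def check_liveness(selfie_path: str, threshold: float = 0.5) -> bool:
    """Send the same selfie used for face matching and return True if it looks live."""
    with open(selfie_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("ascii")

    response = requests.post(LIVENESS_URL, json={"image": image_b64}, timeout=5)
    response.raise_for_status()

    # Assume the service returns a probability-like score, e.g. {"liveness_score": 0.97}.
    score = response.json()["liveness_score"]
    return score >= threshold

if __name__ == "__main__":
    if check_liveness("selfie.jpg"):
        print("Selfie passed the liveness check")
    else:
        print("Possible presentation attack")
```

In a real deployment the same selfie would also be sent to the face-matching service, so the liveness call adds no extra capture step for the user.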