When using face biometrics for authentication, matching accuracy is rarely the weak point. The greater threat is spoofing attacks in the form of printed photos, replayed videos, deepfake images, and 3D masks.
Whereas facial recognition can accurately answer the question, “Is this the right person?”, it doesn’t answer the question, “Is this a live person?” That is the role of liveness detection. Detecting spoofs is essential for face biometric matching to be trusted, as well as for protecting the integrity of our biometric data. In other words, because of liveness detection, our biometrics do not have to be kept secret – a good thing, since many of us have numerous images and videos posted online!
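The two questions above can be combined into a single authentication decision: accept only when both the match score and the liveness score clear their thresholds. This is a minimal illustrative sketch; the function name and threshold values are assumptions, not any vendor's API.

```python
# Hypothetical sketch: face authentication accepts a user only when BOTH
# the recognition match score ("right person?") and the liveness score
# ("live person?") clear their thresholds. Names and values are illustrative.

def authenticate(match_score: float, liveness_score: float,
                 match_threshold: float = 0.8,
                 liveness_threshold: float = 0.5) -> bool:
    """Return True only for a matching AND live subject."""
    is_right_person = match_score >= match_threshold
    is_live_person = liveness_score >= liveness_threshold
    return is_right_person and is_live_person

# A high-quality printed photo of the right person may match well
# but fail the liveness check:
print(authenticate(match_score=0.95, liveness_score=0.1))  # False
print(authenticate(match_score=0.95, liveness_score=0.9))  # True
```

The point of the sketch is that recognition alone would have accepted the printed-photo attack; the liveness term is what rejects it.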
The Evolution of Facial Liveness Detection
The idea of liveness relative to biometrics dates back to the early 2000s. Prior to AI-driven liveness detection, companies relied on humans to perform liveness checks. For example, during remote onboarding, customers would be required to show their photo ID on a video call to validate the physical presence of the person in the ID. Of course, this process was slow, costly, and error-prone.
The technology behind liveness detection is based on the recognition of physiological information as a sign of life. Historically, liveness algorithms have been trained to identify head movements, dilation of a subject’s pupils, changes in expression, and other physical responses.
The first generation of facial liveness detection technology is referred to as “active.” Active liveness detection relies on the user’s movements in response to challenges such as nodding, blinking, smiling, or correctly positioning one’s face in a frame. While active checks can be effective at detecting a spoof, they introduce friction into a verification process whose main appeal was removing friction, and they have become less secure as fraudsters have learned how to fool them. The pursuit of an easier solution, facilitated by increased access to training data for better machine learning, led to a new generation of “passive” liveness detection.
Differences Between Passive and Active Liveness
Passive liveness detection is fundamentally different from active in that it requires no action by the user. This makes active liveness impractical for use cases with frequent logins. The friction also hurts new customer acquisition: companies report abandonment rates as high as 50% when using active liveness.
However, there are additional differences to be aware of when comparing the two approaches.
| | Passive Liveness | Active Liveness |
|---|---|---|
| User effort | Requires no action by the user, which results in less friction and lower user abandonment during processes such as remote customer onboarding. | Requires users to respond to “challenges” that add time and effort to the process. |
| Device software | Some approaches require installing a software component on a device; others do not. | Active solutions usually require installing software on the device. |
| Images required | Varies depending on the passive approach. Analysis may be based on a single image with processing in near real time. | Requires analysis of multiple images or frames of video to detect movement. |
| Bandwidth | May use the same selfie taken for facial recognition, resulting in no incremental traffic to the server. | May require additional data to be exchanged between the user’s device and a server-based solution, a problem in geographies where bandwidth is scarce or expensive. |
| Speed | Depends on the method used; near real time is possible. | Always increases user effort, resulting in a longer liveness check. |
| Robustness to spoofing | Has the advantage of “security through obscurity.” Passive methods are generally more immune to spoofing attacks because the fraudster has no clues as to how to defeat the liveness check – and won’t even know it’s happening. | Provides fraudsters with instructions that can be “reverse engineered” to attack and defeat the liveness check. Known techniques include using a simple 2D mask with cut-out eyes, or animation software to mimic head movements, smiling, and blinking. |
| Proven compliant with the ISO 30107-3 standard for robustness | A few passive liveness solutions have passed iBeta Level 1 and Level 2 testing and are compliant with ISO 30107-3. ID R&D has passed Level 1 and Level 2 testing with a single-image approach to liveness detection. | Multiple solutions are iBeta Level 1 and Level 2 compliant or conformant. Note that there is no difference between iBeta “certified” and “compliant.” (https://www.ibeta.com/iso-30107-3-presentation-attack-detection-confirmation-letters/) |
So, Which is Better – Passive or Active Liveness Detection?
A passive solution with proven robustness is preferable to an active liveness solution. Both types accurately detect a range of spoofs, but only passive liveness keeps the process fast and effortless. The fact that companies are increasingly prioritizing user experience as a way to attract and retain customers is driving the shift from yesterday’s active solutions to today’s modern, passive liveness detection.
Experian found one-third of consumers would conduct more transactions online if there were fewer security hurdles.
Does this mean that active liveness is a thing of the past? No, but more companies are moving away from these solutions. One argument for using an active solution is that users expect friction in a security solution: how can they trust it if they can’t see it?
This perception, however, is changing – thanks in part to familiarity with consumer-grade biometrics (e.g., iPhone’s Face ID) and to businesses communicating how new authentication practices work. Another factor in building trust is designing the authentication experience with an indication that the user has been positively verified and the interaction is secure — similar to how the “s” in “https” assures users that a transaction is secure when web-browsing.
Approaches to Passive Liveness
A variety of techniques are used to perform a passive liveness check, ranging from analyzing a selfie image, to capturing a video, to flashing lights on the subject. These passive approaches have different impacts on the user experience and on processing, as outlined below.
Flashing lights on the user
The user does not need to respond to any challenges or move. However:
- The device must be held steady
- The process takes time
- It fails in bright sunlight
- Users may not tolerate the strobing lights

Capturing a short video
The user does not need to respond to any challenges or move. However:
- The video takes time to capture
- Software may need to be downloaded to the device
- Passive observation relies on micro-mimics and small movements, which may be hard to capture

Examining a single selfie image
Uses the same selfie used for facial matching and requires no extra effort by the user. Because only one image is needed, bandwidth requirements are low and processing is fast, and no additional software download is required on the user side for image capture. However:
- Requires a server-side component

Using a hardware-assisted approach (e.g., depth measurement)
Requires no extra effort by the user, and because only a few images are needed, the bandwidth requirements are acceptable in most cases. However:
- Requires expensive client-side hardware
- Requires a server-side component
- Uses more CPU power
A single-image approach to passive liveness offers clear advantages. In addition to those outlined above, it is imperceptible to users and therefore offers no clues to fraudsters on how to break it. And because a single-frame solution is deployed as a separate, independent function, no changes to the user interface or communication interfaces are required, which simplifies integration.
Single Image Passive Facial Liveness Explained
How can facial liveness be determined based on a single image? It starts with the ability to use the same selfie image that is used by the biometric and facial recognition system. As such, no special hardware or additional software is needed for image collection. If the quality of the selfie is good enough for facial recognition, it can be used for the liveness check.
The other enabling factor is use of Deep Neural Networks that process the selfie image to detect artifacts that help distinguish between a photo of a live person and a presentation attack. ID R&D has built a unique DNN-based approach to liveness detection that underpins the single image technique.
The process takes less than one second.
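Under the assumptions stated above, the single-image flow can be sketched as three steps: reuse the selfie captured for face matching, score it with a presentation-attack classifier, and threshold the score. This is a generic illustration, not ID R&D's implementation; the function names and the stand-in scorer are hypothetical.

```python
# Illustrative sketch of a single-image passive liveness flow
# (names are hypothetical, not a real product API):
#   1. reuse the same selfie captured for face matching
#   2. score it with a classifier trained to detect spoof artifacts
#   3. compare the score against a tuned threshold

def liveness_check(image_pixels, score_fn, threshold: float = 0.5) -> bool:
    """Return True if the selfie is judged to come from a live person."""
    score = score_fn(image_pixels)  # a trained DNN would run here
    return score >= threshold

# Stand-in scorer for demonstration only; a real system uses a trained DNN.
def dummy_scorer(image_pixels):
    return 0.9 if image_pixels else 0.1

print(liveness_check([128, 64, 255], dummy_scorer))  # True
```

Because the whole check reduces to one classifier call on one image, there is no challenge-response round trip, which is why sub-second, near-real-time processing is feasible.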
Selecting the Right Liveness Solution for Your Application
When determining the right approach for your application, consider asking the following questions:
- Has the product been tested by iBeta for Level 1 and Level 2? Was it tested on both iOS and Android?
- Does the product work on a desktop?
- How much bandwidth is required?
- What has to be downloaded to the device?
- Can it run off-line?
- Where is data stored?
- Is there an on-premises option for performing the liveness check?
- Is there an on-device option?
- How many images are sent for the liveness check? Is video captured?
- How is the product calibrated to find the right balance between false acceptance and false rejection for a particular use case?
- How is the product integrated? What is the average time to deploy?
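On the calibration question above, the trade-off between false acceptance and false rejection can be explored by sweeping the decision threshold over scored genuine and spoof samples and looking for the point where the two error rates meet (an approximate equal-error-rate operating point). This is a generic sketch with made-up scores, not any vendor's calibration procedure.

```python
# Sweep a liveness threshold over scored samples to balance
# false acceptance rate (FAR, spoofs accepted) against
# false rejection rate (FRR, live users rejected). All scores are made up.

def far_frr(genuine_scores, spoof_scores, threshold):
    """Error rates at a given threshold: (FAR, FRR)."""
    far = sum(s >= threshold for s in spoof_scores) / len(spoof_scores)
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr

def calibrate(genuine_scores, spoof_scores, steps=100):
    """Pick the threshold where FAR and FRR are closest (approximate EER)."""
    best_t, best_gap = 0.0, float("inf")
    for i in range(steps + 1):
        t = i / steps
        far, frr = far_frr(genuine_scores, spoof_scores, t)
        if abs(far - frr) < best_gap:
            best_t, best_gap = t, abs(far - frr)
    return best_t

genuine = [0.9, 0.8, 0.85, 0.95, 0.7]  # illustrative live-subject scores
spoofs = [0.1, 0.2, 0.3, 0.15, 0.4]    # illustrative attack scores
t = calibrate(genuine, spoofs)
print("threshold:", t)  # cleanly separated scores give FAR = FRR = 0 here
```

In practice a deployment would shift the threshold away from the equal-error point depending on the use case: stricter (lower FAR) for high-risk transactions, more permissive (lower FRR) for low-friction logins.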
ID R&D’s ability to accurately detect liveness in a passive manner using a single frame is the result of extensive and ongoing research and development, and our commitment to enabling a secure, frictionless user experience. The product is ISO/IEC 30107-3 compliant, having passed iBeta Quality Assurance’s Presentation Attack Detection (PAD) testing with a perfect score. If you are interested in learning more, we’d love to talk.