IDLive® Face

Passive Facial Liveness Detection


ID R&D passive facial liveness is a top performer in the NIST PAD evaluation.

We’re extremely pleased to share the results of the recent NIST FATE evaluation of passive facial presentation attack detection (PAD), where ID R&D earned more top rankings than any other developer in the categories entered. Particularly notable are top rankings in both security and convenience for the detection of photo print and replay impersonation attacks.

Fight spoofing with passive liveness detection

IDLive Face is the world’s first and best-performing software-based passive facial liveness product. It detects presentation attacks using the same single selfie image used for facial matching, with NO user participation — no smiling, blinking, head-turning, flashing lights, or moving the camera. This unique single-image liveness detection approach is fast, accurate, and doesn’t require any capture-side software.

Where active liveness adds extra steps and time to the liveness check, IDLive Face is imperceptible to customers and gives fraudsters no clues on how to beat it. Abandonment rates are greatly reduced and less human intervention is needed.

Learn more about IDLive Face

    What is facial liveness detection?

    The use of biometric facial recognition for mobile authentication is becoming ubiquitous, but requires that users be proven to be live and present while capturing their biometric selfie. Otherwise, fraudsters can “spoof” biometric security with printed photos, cutout masks, digital and video replay attacks, and 3D masks.

    Liveness detection works with facial recognition to help prevent these “presentation attacks”. Where facial recognition can accurately answer the question “is this the right person?”, it doesn’t answer the question “is this a live person?” That is the role of liveness detection.

    IDLive Face: the benefits of a passive approach to liveness

    IDLive Face is the world’s first liveness product to identify spoofing attempts using the same selfie taken for facial matching, with NO user participation — no smiling, blinking, head-turning, flashing lights, or moving the camera.

    Fast
    Takes less than one second; the unique single-image approach uses the same selfie taken for facial recognition to determine liveness.

    Frictionless
    The single-image approach requires absolutely no user participation for a better user experience; improves automation rates and reduces customer abandonment.

    Flexible
    Works with all modern smart devices and web cameras. Available as an SDK or Docker container for deployment in your data center or private cloud. An on-device option is also available. Stores no data and offers superior scalability using a web services architecture.

    Accurate
    iBeta Level 1 and Level 2 ISO/IEC 30107-3 PAD compliant for iOS and Android with a perfect score. The software powers millions of liveness checks monthly.

    Works for Everyone, Everywhere
    ID R&D invests heavily in creating algorithms that deliver unbiased results across demographics including race, gender, and age.

    Cost Efficient
    Reduces infrastructure costs, unlike other solutions that require high-bandwidth internet connections and large-scale server deployments to process video.

    Want to learn more about liveness detection for facial recognition?

    IDLive Face - OnDevice

    Fast, accurate, passive facial liveness detection optimized for operation on a device

    IDLive Face – OnDevice enables frictionless facial liveness detection to be run from within a mobile app or device software, as opposed to calling an API running on a cloud or central server. Among the key benefits are low latency, offline operation, and privacy for users, features valued for a variety of usage models.
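
    To make the integration pattern concrete, here is a minimal sketch (in Python, for illustration only) of what an on-device check looks like structurally: the liveness model runs in-process against the captured selfie, so no image ever leaves the device. The LocalLivenessModel class, its score method, and the 0.5 threshold are hypothetical placeholders, not the actual IDLive Face OnDevice SDK.

        # Minimal on-device liveness sketch. LocalLivenessModel is a stand-in for a
        # bundled inference engine; it is NOT the IDLive Face OnDevice API.
        import random

        class LocalLivenessModel:
            """Placeholder for a liveness model that runs entirely on the device."""
            def score(self, image_bytes: bytes) -> float:
                # A real model would run neural-network inference on the selfie here.
                return random.random()

        def on_device_liveness_check(image_bytes: bytes, threshold: float = 0.5) -> bool:
            # No network call: the selfie never leaves the device, which is what
            # yields the low-latency, offline, and privacy properties listed below.
            return LocalLivenessModel().score(image_bytes) >= threshold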

    Features & Benefits

    • Fast and accurate
    • Frictionless
    • Low, consistent latency
    • Offline operation
    • Privacy preserving
    • No biometric data on server
    • Lightweight
    • Less bandwidth consumption
    • No server-side computation

    ID R&D leads the industry in addressing demographic bias in facial liveness detection.

    The first independent evaluation of a facial liveness detection product by an accredited biometrics lab finds that IDLive Face exhibits fairness in target demographics, including gender, age group, and race. Get more information here.

    In real-world operation, supporting millions of banking transactions a month, one of our customers migrated from an active face liveness technique to ID R&D passive liveness, and completion rates increased from 60% to 95%+. This was in one of the more challenging production environments, with a very wide range of operating parameters.

    On the back of these improved liveness success rates (faster, more accurate, much higher completion rates with far fewer attempts), the customer expanded their use of the system and deepened their long-term business reliance on ID R&D liveness across a much broader set of digital engagements.

    — Large global partner

    IDLive Face Features

    • Passive UX – no action required
    • No special capture software required
    • Single-image analysis (vs. multiple frames or video)
    • Implicit attack detection that runs in the background
    • Identification of natural face movements (when available)
    • Level 1 and 2 ISO 30107-3 compliance; tested with both iOS and Android
    • Cross-channel input: mobile, web, stand-alone devices

    IDLive Face Benefits

    • Strengthen the security of mobile and web authentication
    • Eliminate friction by working in the background with no active participation from the user
    • Prevent fraudsters from knowing liveness detection is occurring
    • Ease remote customer onboarding and improve identity proofing processes
    • Reduce fraudulent accounts and account takeovers
    • Zero extra data to transmit, reducing network traffic and associated costs
    • No need for high-bandwidth internet connections and large-scale server deployments

    Passive Facial Liveness Use Cases

    ID R&D passive liveness is the choice of leading identity and access management vendors, onboarding and KYC providers, access control companies, IoT developers, and enterprise customers worldwide. IDLive Face is deployed in 50 countries, supporting millions of liveness checks each month. The product is used in a range of use cases across financial services, government, travel, healthcare, building management, and more.

    IDLive Face is packaged as an SDK and as a Docker with a simple RESTful API. Additionally, a mobile SDK (beta) is now offered for FIDO-compliant, on-device deployment. IDLive Face Mobile SDK runs on Android and iOS.
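
    For the Docker packaging with a RESTful API, integration typically amounts to posting the captured selfie to the service and reading back a score. The sketch below assumes a containerized liveness service reachable on localhost; the endpoint path, form field, response field, and threshold are illustrative assumptions, not the documented IDLive Face API.

        # Illustrative REST integration sketch; the URL, endpoint, and response
        # fields below are assumptions, not the documented IDLive Face API.
        import requests

        LIVENESS_URL = "http://localhost:8080/check_liveness"  # hypothetical endpoint

        def check_liveness(selfie_path: str, threshold: float = 0.5) -> bool:
            with open(selfie_path, "rb") as f:
                resp = requests.post(LIVENESS_URL, files={"image": f}, timeout=10)
            resp.raise_for_status()
            score = resp.json().get("liveness_score", 0.0)  # assumed response field
            return score >= threshold

        if __name__ == "__main__":
            verdict = check_liveness("selfie.jpg")
            print("live" if verdict else "possible presentation attack")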

    Make the switch from active to passive facial liveness

    Understanding passive vs. active facial liveness is crucial. Most of today’s facial liveness technologies are “active”, requiring users to blink, turn their heads, or move their phone back and forth. This results in three issues: First, fraudsters can present a photo with cut-out eye holes, use a mask, or show a video to trick the system. Second, challenge-response techniques put attackers on alert that they are being checked. And lastly, active methods create friction that slows the authentication process, increases abandonment rates, and diminishes the overall user experience.

    The team at ID R&D has worked relentlessly to ensure you don’t have to sacrifice usability for security. Customers and partners who have made the switch from active to passive facial liveness report a significant reduction in abandonment, lower false rejections of real users, and highly accurate presentation attack detection.

    The idea of liveness relative to biometrics dates back to the early 2000s. Prior to AI-driven liveness detection, companies relied on humans to perform liveness checks. For example, during remote onboarding, customers would be required to show their photo ID on a video call to validate the physical presence of the person in the ID. Of course, this process was slow, costly, and error prone.

    The technology behind liveness detection is based on the recognition of physiological information as a sign of life. Historically, liveness algorithms have been trained to identify head movements, dilation of a subject’s pupils, changes in expression, and other physical responses.

    The first generation of facial liveness technology is referred to as “active.” Active liveness detection relies on the user’s movements in response to challenges such as nodding, blinking, smiling, or correctly positioning one’s face in a frame. While the technology can be effective at detecting a spoof, it introduces friction into a verification process that was adopted largely for its ability to remove friction, and it has become less secure as fraudsters have learned how to fool these systems. The pursuit of an easier solution, enabled by greater access to training data for better machine learning, led to a new generation of “passive” liveness detection.

    Passive liveness is fundamentally different from active in that it requires no action by the user. Active liveness, by contrast, is impractical for use cases with frequent logins. The friction also has a negative impact on new customer acquisition, with companies reporting abandonment rates as high as 50% when using active liveness.

    However, there are additional differences to be aware of when comparing the two approaches.

    Passive Liveness vs. Active Liveness

    User Experience
    • Passive: Requires no action by the user, which results in less friction and lower user abandonment during processes such as remote customer onboarding.
    • Active: Requires users to respond to “challenges” that add time and effort to the process.

    Software Requirements
    • Passive: Some approaches require installing a software component on a device; others do not.
    • Active: Solutions usually require installing software on the device.

    Image Analysis
    • Passive: Varies depending on the approach. Analysis may be based on a single image with processing in near real time.
    • Active: Requires analysis of multiple images or frames of video to detect movement.

    Bandwidth Requirements
    • Passive: May use the same selfie taken for facial recognition, resulting in no incremental traffic to the server.
    • Active: May require additional data to be exchanged between the user’s device and a server-based solution, a problem in geographies where bandwidth is scarce or expensive.

    Speed
    • Passive: Depends on the method used; near real time is possible.
    • Active: Always increases user effort, resulting in a longer liveness check.

    Robustness to Spoofing
    • Passive: Has the advantage of “security through obscurity.” Passive methods are generally more resistant to spoofing because the fraudster has no clues as to how to defeat the liveness check and won’t even know it is happening.
    • Active: Gives fraudsters instructions that can be “reverse engineered” to attack and defeat the liveness check. Known techniques to break active systems include using a simple 2D mask with cut-out eyes or animation software to mimic head movements, smiling, and blinking.

    Proven Compliant with the ISO 30107-3 Standard for Robustness

    A few passive liveness solutions have passed iBeta Level 1 and Level 2 testing and are compliant with ISO 30107-3. ID R&D has passed Level 1 and Level 2 testing with a single-image approach to liveness detection.

    Multiple solutions are iBeta Level 1 and Level 2 compliant or conformant; note that there is no difference between iBeta “certified” and “compliant” (see https://www.ibeta.com/iso-30107-3-presentation-attack-detection-confirmation-letters/).

    A variety of techniques are used to perform a passive liveness check, ranging from analyzing a selfie image to capturing a video, to flashing lights on the subject. These passive liveness approaches have different impacts on the user experience and processing as outlined in the following table.

    Approach: Flashing lights on the user
    Pros: The user does not need to respond to any challenges or move.
    Cons:
    • The device must be held steady
    • The process takes time
    • It fails in bright sunlight
    • Users may not tolerate the strobing lights

    Approach: Capturing a short video
    Pros: The user does not need to respond to any challenges or move.
    Cons:
    • The video takes time to capture
    • Software may need to be downloaded to the device
    • Passive observation relies on micro-mimics and small movements, which may be hard to capture

    Approach: Examining a single selfie image
    Pros: Uses the same selfie used for facial matching and requires no extra effort by the user. Because only one image is needed, bandwidth requirements are low and processing is fast. No additional software download is required on the user side for image capture.
    Cons:
    • Requires a server-side component

    Approach: Using a hardware-assisted approach (e.g., depth measurement)
    Pros: Requires no extra effort by the user. Because only a few images are needed, the requirements on bandwidth are acceptable in most cases.
    Cons:
    • Requires expensive client-side hardware
    • Requires a server-side component
    • Uses more CPU power

    How can facial liveness be determined based on a single image? It starts with the ability to use the same selfie image that is used by the facial recognition system. As such, no special hardware or additional software is needed for image collection. If the quality of the selfie is good enough for facial recognition, it can be used for the liveness check.

    The other enabling factor is the use of deep neural networks (DNNs) that process the selfie image to detect artifacts that distinguish a photo of a live person from a presentation attack. ID R&D has built a unique DNN-based approach to liveness detection that underpins the single-image technique.

    The process takes less than one second and looks like this:

    1. The user takes a selfie. The selfie image is used by the facial recognition system to determine a match, and the same image is used for the liveness check.

    2. Deep neural networks and proprietary algorithms analyze the image for liveness. Each neural network examines a different element of the image to detect artifacts that help distinguish between a photo of a live person and a presentation attack. Knowing what the neural networks should examine and how to combine them is proprietary information. This is the “magic”!

    3. The software fuses the output of these neural networks to produce a liveness score.
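
    As a toy illustration of that final fusion step (not ID R&D’s proprietary method), scores from several specialized networks can be combined with a simple weighted logistic fusion into a single liveness score. The detector names, weights, and bias below are illustrative assumptions.

        # Toy score-fusion sketch: combines per-network liveness estimates into one
        # score. Detector names, weights, and bias are illustrative assumptions,
        # not ID R&D's proprietary fusion.
        import math

        def fuse_liveness_scores(scores: dict) -> float:
            # Hypothetical weights reflecting how much each specialized network is trusted.
            weights = {"moire_detector": 1.2, "paper_texture_detector": 1.0, "glare_detector": 0.8}
            bias = -1.5
            z = bias + sum(weights[name] * s for name, s in scores.items())
            return 1.0 / (1.0 + math.exp(-z))  # squash to a [0, 1] liveness score

        if __name__ == "__main__":
            per_network = {"moire_detector": 0.9, "paper_texture_detector": 0.8, "glare_detector": 0.7}
            print(f"fused liveness score: {fuse_liveness_scores(per_network):.2f}")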

    IDLive Face is the only single-frame passive liveness product to achieve iBeta Level 1 and Level 2 ISO/IEC 30107-3 presentation attack detection (PAD) compliance with a perfect score. Read more about iBeta here.

    Don't deploy face recognition for authentication without passive facial liveness. IDLive Face can be integrated with any face recognition software to prevent presentation attacks.
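
    One way to picture that integration: run the liveness check on the selfie first, and only pass it to the face matcher when the score clears a threshold. The sketch below is a generic gating pattern; liveness_score and match_score are hypothetical stand-ins for the liveness product and the face recognition engine, and the thresholds are illustrative operating points.

        # Generic gating pattern: reject presentation attacks before face matching.
        # liveness_score() and match_score() are hypothetical stand-ins, not vendor APIs.

        LIVENESS_THRESHOLD = 0.5   # illustrative operating point
        MATCH_THRESHOLD = 0.8      # illustrative operating point

        def liveness_score(selfie: bytes) -> float:
            """Stand-in for the passive liveness check on the selfie."""
            return 0.9  # placeholder value so the sketch runs

        def match_score(selfie: bytes, enrolled_template: bytes) -> float:
            """Stand-in for the face recognition engine's comparison score."""
            return 0.95  # placeholder value so the sketch runs

        def authenticate(selfie: bytes, enrolled_template: bytes) -> bool:
            if liveness_score(selfie) < LIVENESS_THRESHOLD:
                return False  # likely a photo, replay, or mask: stop before matching
            return match_score(selfie, enrolled_template) >= MATCH_THRESHOLD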