The use of facial recognition for authentication is becoming increasingly prevalent, especially on mobile devices. But with easy access to images on social media and advances in digital and print image resolution, biometric systems have security gaps that fraudsters can exploit to spoof a facial recognition system.
For face biometrics to gain mainstream adoption as a better mode of authentication, it is essential to determine whether the presented face is genuine or an artificial representation intended to spoof the system. Thus, automated detection of presentation attacks, and specifically liveness detection, has become a necessary component of any authentication system that relies on face biometrics for verification.
Facial liveness has emerged as a way to stop fraud and ensure the integrity of face biometrics as a means of authentication. While face recognition for authentication can accurately answer the question “Is this the right person?”, it does not answer the question “Is this a real person?” That is the role of liveness detection.
Facial liveness detection works with a biometric system to measure and analyze physical characteristics and reactions to determine whether a biometric sample is being captured from a living subject who is present at the point of capture.
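As an illustration only, the decision described above can be reduced to comparing a liveness score against a threshold. The class, score scale, and threshold below are hypothetical, a minimal sketch rather than any particular product's implementation:

```python
from dataclasses import dataclass


@dataclass
class LivenessResult:
    """Outcome of analyzing one captured biometric sample.

    `score` is a hypothetical confidence that the sample comes from a
    living subject present at the point of capture (0.0 = certain spoof,
    1.0 = certain live); `threshold` is a tunable decision boundary.
    """
    score: float
    threshold: float = 0.8

    @property
    def is_live(self) -> bool:
        # Accept the sample as live only at or above the threshold.
        return self.score >= self.threshold
```

Raising the threshold trades convenience (more live users rejected) for security (more spoofs caught), a balance every deployment has to tune.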
Understanding passive vs. active facial liveness is crucial. Most of today’s facial liveness technologies are “active,” requiring users to blink, turn their heads, or move their phone back and forth. This creates three problems. First, fraudsters can present a photo with eye holes cut out, wear a mask, or play a video to trick the system. Second, challenge-response techniques alert attackers that they are being checked. Third, active methods create friction that slows the authentication process, increases abandonment rates, and diminishes the overall user experience.
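To make the “active” flow concrete, here is a minimal sketch of a challenge-response loop. Everything in it is an assumption for illustration: the challenge names and the `responded` callback (which stands in for real computer-vision analysis of captured video frames) are hypothetical, not any vendor’s API.

```python
import random

# Illustrative challenge set for an active liveness check (hypothetical names).
CHALLENGES = ["blink", "turn_head_left", "turn_head_right", "move_phone_closer"]


def run_active_liveness_check(responded, rng=None):
    """Issue one random challenge and report whether the subject complied.

    `responded(challenge)` stands in for the frame-analysis step that
    decides whether the requested action was actually performed;
    `rng` accepts a seeded random.Random for deterministic testing.
    """
    rng = rng or random.Random()
    challenge = rng.choice(CHALLENGES)
    return challenge, bool(responded(challenge))
```

Note that because the challenge must be announced to the user, this flow also announces itself to an attacker, which is exactly the alerting problem described above.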