The idea of liveness detection in biometrics dates back to the early 2000s. Before AI-driven liveness detection, companies relied on humans to perform liveness checks. For example, during remote onboarding, customers were required to show their photo ID on a video call to prove that the person pictured in the ID was physically present. Of course, this process was slow, costly, and error-prone.
Liveness detection works by recognizing physiological signals as signs of life. Historically, liveness algorithms have been trained to identify head movements, pupil dilation, changes in expression, and other physical responses.
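As a concrete illustration, the snippet below sketches one classic cue from this family: blink detection via the eye aspect ratio (EAR). This is a minimal sketch, not any vendor's actual method; the landmark ordering, threshold, and frame count are illustrative assumptions, and a separate face-landmark detector would have to supply the eye coordinates.

```python
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """Eye aspect ratio (EAR) from six (x, y) eye landmarks.

    `eye` is a (6, 2) array ordered as in the common 68-point
    facial-landmark convention: corners at indices 0 and 3, upper
    lid at 1 and 2, lower lid at 5 and 4. EAR drops sharply when
    the eye closes, so a dip below a threshold signals a blink.
    """
    vertical_1 = np.linalg.norm(eye[1] - eye[5])
    vertical_2 = np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return (vertical_1 + vertical_2) / (2.0 * horizontal)

def count_blinks(ear_series, threshold=0.21, min_frames=2):
    """Count blinks in a per-frame EAR series.

    A blink is registered when EAR stays below `threshold` for at
    least `min_frames` consecutive frames; both values are
    illustrative and would be tuned on real data.
    """
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:
        blinks += 1
    return blinks
```

A detected blink (or head turn, or pupil response) is then treated as evidence that the camera is pointed at a live person rather than a static photo.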
The first generation of facial liveness technology is referred to as “active.” Active liveness detection relies on the user’s movements in response to challenges such as nodding, blinking, smiling, or correctly positioning one’s face in a frame. While the technique can be effective at detecting a spoof, it has two drawbacks: it introduces friction into a verification process whose main appeal was removing friction, and it has become less secure as fraudsters have learned how to fool these challenge-response systems. The pursuit of a lower-friction solution, enabled by growing access to training data for better machine learning models, led to a new generation of “passive” liveness detection.
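The sketch below shows the basic challenge-response pattern that active liveness follows, under stated assumptions: the `get_frame` callback, the `detectors` mapping, and the timeout are hypothetical stand-ins for whatever camera pipeline and per-gesture classifiers (such as the EAR-based blink detector above) a real system would use.

```python
import random
import time

def run_active_liveness(get_frame, detectors, num_challenges=3, timeout_s=5.0):
    """Challenge-response loop for active liveness (illustrative only).

    get_frame -- callable returning the next camera frame.
    detectors -- dict mapping a challenge name ("blink", "smile",
                 "turn_left", ...) to a predicate that inspects a
                 frame and returns True once the action is observed.

    The user passes only if every randomly chosen challenge is
    completed within its timeout; randomizing the sequence is what
    makes replaying a pre-recorded video harder.
    """
    for challenge in random.sample(list(detectors), k=num_challenges):
        print(f"Please {challenge.replace('_', ' ')} now")
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            if detectors[challenge](get_frame()):
                break  # challenge passed, move on to the next one
        else:
            return False  # timed out: treat as a failed liveness check
    return True
```

The randomized challenge order illustrates why active schemes resisted simple photo attacks, and also why they add friction: each extra prompt costs the legitimate user time and attempts.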