FAQ – NIST Passive Facial Presentation Attack Detection Evaluation (NIST IR 8491)

NIST evaluated 82 passive, software-based facial presentation attack detection (PAD) algorithms and published the results in a report. ID R&D was the top performer in the evaluation for the impersonation use case: it ranked #1 in detecting common attack types for both convenience and security, earned the most #1 rankings of any algorithm, and achieved the #1 overall ranking across all attack types and categories. This FAQ helps explain the evaluation and the results.

What is the NIST facial presentation attack detection (PAD) evaluation? 

NIST is the National Institute of Standards and Technology of the United States; it regularly evaluates face technology algorithms and publishes reports on its findings1. New for 2023, Part 10 of the NIST Face Analysis Technology Evaluation (FATE) focused on assessing the performance of passive, software-based face presentation attack detection (PAD) algorithms. The results of the FATE PAD evaluation were published in NIST Internal Report 8491, released in September 2023. Results for top performers in the impersonation use case are summarized in the rankings table later in this FAQ.

How did ID R&D perform? 

The report presents a breakdown of algorithm performance across various categories, including impersonation and evasion attacks, data types (still images or videos), and presentation attack types. The results of the evaluation indicate the following:

  1. ID R&D algorithms ranked first in detecting several critical attack types within the impersonation use case, including photo prints and replays;
  2. ID R&D earned more top rankings than any other developer in the categories entered (see the results for "top performers" summarized in Table 1 of the report); and
  3. ID R&D achieved the overall top rank for the impersonation use case when ranks are averaged across all attack types.

What does the NIST report cover?

Here are some key points about what FATE PAD did and did not cover:

  • It evaluated the ability of algorithms to detect physical presentation attacks such as printed photos, video replays, and 3D masks. It did not cover digital attacks such as injected imagery.
  • It only looked at passive approaches that rely solely on analyzing the input facial imagery, not active PAD that challenges the user.
  • It only evaluated software-based PAD algorithms, not hardware-based approaches.
  • It tested PAD on still images and videos, but was an offline test without a live data acquisition component.
  • Two separate PAD tasks were evaluated: detecting impersonation attacks and evasion attacks. Algorithms specialized in one or the other.
  • A range of presentation attacks were tested; some known and others undisclosed to participants to prevent tuning.
  • Performance was quantified using the Attack Presentation Classification Error Rate (APCER) and Bona Fide Presentation Classification Error Rate (BPCER) metrics (a worked sketch follows this list).
  • Analysis looked at error tradeoffs, performance on stills vs video, fusion of multiple algorithms, and effects across demographic groups.
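
To make the two error metrics concrete, here is a minimal sketch in Python. The scoring convention (higher score means more likely bona fide), the threshold, and the toy data are assumptions for illustration only; they do not come from the NIST report.

```python
# Minimal sketch of the two FATE PAD error metrics. The "higher score means
# more likely bona fide" convention, the threshold, and the toy data are all
# assumptions for illustration; they are not taken from the NIST report.

def apcer(attack_scores, threshold):
    """Attack Presentation Classification Error Rate:
    fraction of attack presentations wrongly accepted as bona fide."""
    accepted_attacks = sum(1 for s in attack_scores if s >= threshold)
    return accepted_attacks / len(attack_scores)

def bpcer(bona_fide_scores, threshold):
    """Bona Fide Presentation Classification Error Rate:
    fraction of bona fide presentations wrongly rejected as attacks."""
    rejected_genuine = sum(1 for s in bona_fide_scores if s < threshold)
    return rejected_genuine / len(bona_fide_scores)

# Toy PAD scores in [0, 1], where 1 means "confidently live".
attack_scores = [0.10, 0.35, 0.62, 0.05, 0.48]
bona_fide_scores = [0.91, 0.88, 0.42, 0.97, 0.73]

threshold = 0.5
print(f"APCER = {apcer(attack_scores, threshold):.2f}")     # 0.20: one attack slips through
print(f"BPCER = {bpcer(bona_fide_scores, threshold):.2f}")  # 0.20: one genuine user rejected
```

Raising the threshold lowers APCER but raises BPCER; that tradeoff is exactly what the report's error-tradeoff analysis characterizes.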

In summary, this specific NIST evaluation focused on a particular slice of the PAD landscape: software-only passive detection of physical impersonation and evasion attacks based on analyzing pre-existing facial imagery. It did not address other PAD approaches or use cases. The results quantify algorithm accuracy on those defined tasks. 

What is the difference between impersonation attacks and evasion attacks?

The fundamental difference is the attacker's goal. Impersonation aims to gain illegitimate access by posing as someone else. Evasion aims to avoid recognition of one's own identity, for example as registered in a watch list or a surveillance system. Impersonation poses a security risk of false acceptance; evasion poses a risk of a false non-match.

Impersonation attacks are when someone tries to pose as another person in order to gain unauthorized access to their accounts, devices, or privileges. For example, an impersonation attack could involve using a printed photo, mask, or video replay of someone’s face to fool a facial recognition system and gain entry somewhere they shouldn’t have access. The goal is to impersonate someone else’s identity.

Evasion attacks are when someone tries to avoid being recognized by a facial recognition system or matching against their own enrolled template in a database or watchlist. For example, an evasion attack could involve wearing obscuring accessories, masks, or makeup to prevent their real identity from being detected. The goal is to evade recognition of their true identity.

What types of attacks are evaluated?

The goal of FATE PAD was to evaluate a diverse set of impersonation and evasion attacks, with a mix of presentation attack instruments (PAIs). Only physical presentation attacks were in scope, and not all PA types were disclosed to participants.

The following types of presentation attacks were evaluated:

Impersonation Attacks:

  • PA Type 1 – Undisclosed attack
  • PA Type 3 – Flexible silicone face masks
  • PA Type 4 – Undisclosed attack
  • PA Type 7 – Undisclosed attack
  • PA Type 8 – Photo print/replay attacks

Evasion Attacks:

  • PA Type 1 – Undisclosed attack
  • PA Type 2 – Undisclosed attack
  • PA Type 3 – Flexible silicone face masks
  • PA Type 4 – Undisclosed attack
  • PA Type 5 – Undisclosed attack
  • PA Type 6 – Protective face masks
  • PA Type 7 – Undisclosed attack
  • PA Type 8 – Photo print/replay attacks
  • PA Type 9 – Undisclosed attack

Some key observations about the tested attacks:

  • Both impersonation and evasion attacks were tested.
  • Known attacks like silicone masks, photo prints, and protective masks were disclosed.
  • Other undisclosed attacks were also included to prevent tuning to known PAIs.
  • Several of the same PAI types, such as silicone masks and photo print/replay attacks, were tested in both the impersonation and evasion tasks.
  • Print/replay attacks were tested both un-zoomed (showing frame) and zoomed in.

In which tests did ID R&D participate? 

ID R&D participated only in the impersonation tests, not the evasion tests. Within each use case, measurements are reported for several presentation attack types, including silicone masks, photo prints, and replay attacks. For each attack type, figures are provided for both a convenience-focused metric and a security-focused metric.
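
As a rough illustration of how a convenience-focused and a security-focused figure can be read off the same score distribution, the sketch below fixes one error rate at a target and reports the other. It reuses the apcer() and bpcer() helpers from the earlier sketch; the 1% targets and the function names are assumptions for illustration, and the exact metric definitions used by NIST are given in the report.

```python
# Hypothetical sketch of security- vs convenience-leaning operating points.
# Assumes apcer() and bpcer() from the earlier sketch are already defined;
# the 1% fixed-rate targets are illustrative, not NIST's exact definitions.

def bpcer_at_fixed_apcer(attack_scores, bona_fide_scores, apcer_target=0.01):
    """Security-leaning view (assumed): take the most permissive threshold at
    which APCER has fallen to the target, and report the BPCER paid there."""
    thresholds = sorted(set(attack_scores + bona_fide_scores)) + [1.1]
    for t in thresholds:  # APCER only decreases as the threshold rises
        if apcer(attack_scores, t) <= apcer_target:
            return bpcer(bona_fide_scores, t)

def apcer_at_fixed_bpcer(attack_scores, bona_fide_scores, bpcer_target=0.01):
    """Convenience-leaning view (assumed): take the strictest threshold that
    still keeps BPCER at or below the target, and report the APCER there."""
    thresholds = sorted(set(attack_scores + bona_fide_scores)) + [1.1]
    feasible = [t for t in thresholds if bpcer(bona_fide_scores, t) <= bpcer_target]
    return apcer(attack_scores, max(feasible)) if feasible else None

# With the earlier toy data (tiny samples, so a 1% target collapses to 0%):
print(bpcer_at_fixed_apcer(attack_scores, bona_fide_scores))  # 0.2
print(apcer_at_fixed_bpcer(attack_scores, bona_fide_scores))  # 0.4
```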

How did ID R&D perform in relation to other algorithms across all attack types?

The following table shows rankings for the top 10 of the 82 participating algorithms in detecting impersonation attacks in still images. ID R&D achieved a rank of 1 in four categories and the top overall ranking, by a wide margin, when ranks are averaged across all attack types.

PA Type key: 1 – undisclosed; 3 – flexible silicone mask; 4 – undisclosed; 7 – undisclosed; 8 – photo print/replay; 8z – photo print/replay (zoomed).

| Algorithm (best per participant) | PA1 Conv* | PA1 Sec** | PA3 Conv | PA3 Sec | PA4 Conv | PA4 Sec | PA7 Conv | PA7 Sec | PA8 Conv | PA8 Sec | PA8z Conv | PA8z Sec | Average | Total Ranking |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| idrnd-001 | 2 | 2 | 5 | 5 | 5 | 5 | 10 | 10 | 1 | 1 | 1 | 1 | 4.0 | 1 |
| kakao-001 | 1 | 3 | 24 | 33 | 16 | 17 | 3 | 3 | 5 | 5 | 5 | 4 | 9.9 | 2 |
| iproov-001 | 5 | 10 | 11 | 25 | 8 | 13 | 16 | 17 | 7 | 8 | 6 | 10 | 11.3 | 3 |
| stcon-000 | 8 | 13 | 2 | 3 | 4 | 26 | 21 | 35 | 9 | 10 | 3 | 3 | 11.4 | 4 |
| idemia-011 | 15 | 34 | 21 | 20 | 19 | 14 | 13 | 28 | 1 | 4 | 4 | 5 | 14.8 | 5 |
| cyberlink-002 | 6 | 5 | 3 | 4 | 1 | 2 | 1 | 1 | 17 | 20 | 46 | 74 | 15.0 | 6 |
| onfido-001 | 21 | 28 | 12 | 12 | 20 | 15 | 8 | 4 | 19 | 24 | 9 | 14 | 15.5 | 7 |
| intema-001 | 23 | 19 | 22 | 21 | 18 | 10 | 34 | 24 | 10 | 14 | 21 | 28 | 20.3 | 8 |
| aware-002 | 18 | 9 | 13 | 13 | 22 | 4 | 31 | 16 | 27 | 33 | 34 | 31 | 20.9 | 9 |
| alice-001 | 12 | 67 | 3 | 2 | 9 | 68 | 4 | 8 | 1 | 2 | 23 | 74 | 22.8 | 10 |

*Convenience rank **Security rank
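
For readers cross-checking the table, the "Average" column is simply the arithmetic mean of the twelve per-category ranks, and "Total Ranking" orders algorithms by that mean. A small sketch, with three rows of per-category ranks copied from the table above:

```python
# Per-category ranks (Conv/Sec pairs for each of the six attack columns),
# copied from the table above for three entries.
ranks = {
    "idrnd-001":  [2, 2, 5, 5, 5, 5, 10, 10, 1, 1, 1, 1],
    "kakao-001":  [1, 3, 24, 33, 16, 17, 3, 3, 5, 5, 5, 4],
    "iproov-001": [5, 10, 11, 25, 8, 13, 16, 17, 7, 8, 6, 10],
}

# Average rank per algorithm, then order algorithms by that average.
averages = {name: sum(r) / len(r) for name, r in ranks.items()}
for place, (name, avg) in enumerate(sorted(averages.items(), key=lambda kv: kv[1]), start=1):
    print(f"{place}. {name}: average rank {avg:.1f}")
# 1. idrnd-001: average rank 4.0
# 2. kakao-001: average rank 9.9
# 3. iproov-001: average rank 11.3
```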

Why is the evaluation meaningful? 

The results reinforce ID R&D’s position as one of the industry’s leading providers of passive facial liveness. Particularly notable in the test results are top rankings for ID R&D in both security and convenience for detection of photo print and replay impersonation attacks. Detection of these common attacks is vital to protecting digital onboarding and authentication from fraud, but it must be done without burdening legitimate users with false positives.


Results shown from NIST do not constitute an endorsement for any particular system, product, service, or company by NIST. More information can be found on the NIST FRVT webpage.


__________________
1 Since 1999, the NIST program had been known as the Face Recognition Vendor Test (FRVT). In 2023, this name was retired and replaced with two new brands: Face Recognition Technology Evaluation (FRTE) and Face Analysis Technology Evaluation (FATE).