Ethical Considerations Surrounding Biometric Data

The global biometric market was estimated to be worth $19.8 billion in 2020, signaling more mainstream adoption in the years to follow. Greater acceptance of biometrics could not have come at a more appropriate time, given the rapid digitalization of 2020. Once used primarily for criminal identification, biometrics now secure bank transactions, workplace clock-ins, and smartphone unlocking.

However, a 2019 Biometric Consumer Sentiment Survey highlights another important concern: while 70% of respondents say they would like to expand their use of biometrics, over 57% say they are not sure whether their biometric data is being stored ethically. So just what are the ethical considerations around biometrics that we should be discussing?

Non-inclusive Data Sets

Unfortunately, most biometric technology is developed on datasets that are not representative of a globally diverse population. A study by MIT researchers found that facial recognition software was trained on image datasets that were 77% male and 83% white. One high-profile example is retail giant Amazon, whose facial recognition algorithm repeatedly misidentified women and darker-skinned individuals.

To correct this, developers must address a root cause of these mistakes: the lack of balanced data. Data scientists play an important role in ensuring that the data used to train these models is inclusive, and in compensating within the algorithm for any gaps in quality or diversity. The need to understand and extract insights from big data of this kind, including the datasets used to train biometric algorithms, is also driving interest in careers in data analytics.
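As a loose illustration (not from the survey or studies cited here), the sketch below shows the kind of balance audit a data scientist might run before training. It assumes a hypothetical metadata file named faces_metadata.csv with self-reported "gender" and "skin_tone" columns; the file and column names are assumptions for the example, not a real benchmark.

```python
# Minimal sketch: audit the demographic balance of a face-image training set.
# Assumes a hypothetical metadata file "faces_metadata.csv" with columns such
# as "image_id", "gender", and "skin_tone" (names are illustrative only).
import csv
from collections import Counter


def group_shares(csv_path: str, column: str) -> dict:
    """Return each group's share of the dataset for one demographic column."""
    counts = Counter()
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            counts[row[column].strip().lower()] += 1
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}


if __name__ == "__main__":
    for col in ("gender", "skin_tone"):
        shares = group_shares("faces_metadata.csv", col)
        print(col, {group: f"{share:.1%}" for group, share in shares.items()})
```

A report like this does not remove bias by itself, but it makes skews like the 77% male / 83% white split above visible early enough to collect more data or reweight training examples.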

Google is similarly realigning its datasets after finding faults in its algorithms, including one that automatically labeled people in cooking photos as women, and has since moved toward gender-neutral image recognition.

Racial Biases

The National Institute of Standards and Technology analyzed 189 facial recognition algorithms and found that false positive rates for people of color were ten to a hundred times higher than for white faces. A 2018 MIT Media Lab study likewise found that facial recognition systems from companies such as IBM had error rates of no more than 0.8% for lighter-skinned men but up to 34.7% for darker-skinned women. Law enforcement agencies have also deployed facial recognition systems that have led to false arrests of people of color.

Although some argue that AI technology is impartial because it is devoid of emotion, it is important to recognize that the team behind it is not. Unconscious biases can influence code and, through training data, the machine learning models built on it. Because of this, activists such as Joy Buolamwini, founder of the Algorithmic Justice League (AJL), are raising public awareness of the harmful biases in AI systems. The AJL, with support from Georgetown Law, has introduced the Safe Face Pledge, which aims to curb the weaponization of biometric technology. Microsoft, meanwhile, has expanded its recognition software to recognize 20 more skin tones in an effort to reduce racial bias.

Hacking Vulnerabilities

The permanence of biometric traits is both their strength and their weakness. For instance, although many people consider fingerprints secure, researchers have shown that many smartphone fingerprint sensors can be spoofed. To be fair, liveness detection can prevent hackers and fraudsters from exploiting your biometrics for long. Nevertheless, it is important to note that biometrics are not hack-proof, and once compromised they can create a domino effect that puts other password-protected information at risk.

For instance, suppose your phone's fingerprint lock is successfully bypassed. That alone does not mean thieves can go on reusing your fingerprint to unlock other devices or to verify transactions on your behalf, but the unlocked smartphone itself most likely contains your home address, bank details, and more.

Biometrics can certainly bring added convenience and security to our lives. This is most evident in how biometrics are used in multi-factor authentication systems, which can actively prevent account takeovers and data breaches. But as with any new innovation, if we are to use it to bring positive change, we need to have open discussions about the ethical challenges this technology still faces.
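To make the multi-factor point concrete, here is a minimal sketch of login logic in which a biometric match alone is never sufficient. It is illustrative only and not any vendor's API: the match_score input, the 0.9 threshold, and the one-time-code check are all assumptions for the example.

```python
# Illustrative sketch of multi-factor authentication: a login succeeds only if
# BOTH an independent biometric check and a one-time code pass, so a single
# stolen factor (e.g. a spoofed fingerprint) is not enough on its own.
import hmac

BIOMETRIC_THRESHOLD = 0.9  # assumed match-score threshold for this sketch


def verify_login(match_score: float, submitted_otp: str, expected_otp: str) -> bool:
    biometric_ok = match_score >= BIOMETRIC_THRESHOLD
    otp_ok = hmac.compare_digest(submitted_otp, expected_otp)  # constant-time compare
    return biometric_ok and otp_ok


# Example: a convincing fingerprint spoof with no valid code is still rejected.
print(verify_login(match_score=0.95, submitted_otp="000000", expected_otp="481516"))  # False
```

Because both factors must pass independently, a spoofed fingerprint or a leaked code on its own is not enough to take over the account.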

Exclusively written for idrnd.ai by Yvette Cook