Biometric Security Jargon: CER, EER, FRR, FAR
Biometrics are at the cutting edge of cybersecurity. Get ahead of the game by learning all the jargon associated with the burgeoning field!
A few weeks ago I wrote about how there are many ways to summarize the operating characteristics of a test. The most basic terms are accuracy, precision, and recall, but there are many others. Nobody uses all of them. Each application area has its own jargon.
Biometric security has its own lingo, and it doesn't match any of the terms in the list I gave before.
False Acceptance Rate
Biometric security uses False Acceptance Rate (FAR) for the proportion of times a system grants access to an unauthorized person. In statistical terms, FAR is a Type II error. FAR is also known as the False Match Rate (FMR).
False Rejection Rate
False Rejection Rate (FRR) is the proportion of times a biometric system fails to grant access to an authorized person. In statistical terms, FRR is a Type I error. FRR is also known as the False Non-Match Rate (FNMR).
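As a minimal sketch of the two definitions, assuming a simple setup where each access attempt is labeled genuine or impostor and the system returns an accept/reject decision (the labels and data here are hypothetical):

```python
def far_frr(decisions, labels):
    """decisions[i] is True if the system granted access on attempt i;
    labels[i] is 'genuine' or 'impostor'."""
    impostor_attempts = sum(1 for l in labels if l == "impostor")
    genuine_attempts = sum(1 for l in labels if l == "genuine")

    false_accepts = sum(1 for d, l in zip(decisions, labels)
                        if d and l == "impostor")      # Type II errors
    false_rejects = sum(1 for d, l in zip(decisions, labels)
                        if not d and l == "genuine")   # Type I errors

    far = false_accepts / impostor_attempts  # False Acceptance Rate
    frr = false_rejects / genuine_attempts   # False Rejection Rate
    return far, frr
```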
Crossover Error Rate
One way to summarize the operating characteristics of a biometric security system is to look at the Crossover Error Rate (CER), also known as the Equal Error Rate (EER). The system has parameters that can be tuned to adjust the FAR and FRR. Adjust these to the point where the FAR and FRR are equal. When the two are equal, their common value is the CER or EER.
The CER gives a way to compare systems. The smaller the CER the better. A smaller CER value means it's possible to tune the system so that both the Type I and Type II error rates are smaller than they would be for another system.
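Here is a rough sketch of how the crossover point might be estimated empirically: sweep a decision threshold over match scores for genuine and impostor attempts, compute FAR and FRR at each threshold, and read off the point where the two curves cross. The score distributions below are made-up illustrative data, not any particular system.

```python
import numpy as np

# Hypothetical match scores; a higher score means a more confident match.
genuine_scores = np.random.normal(0.7, 0.1, 10_000)   # authorized users
impostor_scores = np.random.normal(0.4, 0.1, 10_000)  # unauthorized users

thresholds = np.linspace(0, 1, 1001)
# FRR: fraction of genuine attempts scoring below the threshold (rejected).
frr = np.array([(genuine_scores < t).mean() for t in thresholds])
# FAR: fraction of impostor attempts scoring at or above the threshold (accepted).
far = np.array([(impostor_scores >= t).mean() for t in thresholds])

# The CER/EER is the common value where the two curves cross.
i = np.argmin(np.abs(far - frr))
print(f"threshold ≈ {thresholds[i]:.3f}, EER ≈ {(far[i] + frr[i]) / 2:.3%}")
```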
Loss Function
CER is kind of a strange metric. Everyone agrees that you wouldn't want to calibrate a system so that FAR = FRR. In security applications, FAR (unauthorized access) is worse than FRR (authorized user locked out). The former could be a disaster while the latter is an inconvenience. Of course, there could be a context where the consequences of FAR and FRR are equal, or where FRR is worse, but that's not usually the case.
A better approach would be to specify a loss function (or its negative, a utility function). If unauthorized access is K times more costly than locking out an authorized user, then you might want to know at what point K * FAR = FRR, or where your expected loss [1] is minimized over the range of tuning parameters. The better system for you, in your application, is the one corresponding to your value of K.
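Continuing the sketch above (reusing far, frr, and thresholds from the previous snippet), picking an operating point for a given cost ratio takes only a few more lines. Both K and the fraction of hostile attempts are assumptions you would have to supply for your own application; the numbers here are placeholders.

```python
K = 20             # assumed: unauthorized access is 20x worse than a lockout
p_impostor = 0.01  # assumed: 1% of access attempts are hostile (see footnote [1])

# Operating point where K * FAR = FRR.
j = np.argmin(np.abs(K * far - frr))

# Or minimize expected loss, weighting each error by how often it can occur.
expected_loss = p_impostor * K * far + (1 - p_impostor) * frr
k = np.argmin(expected_loss)

print(f"K*FAR = FRR at threshold ≈ {thresholds[j]:.3f}")
print(f"minimum expected loss at threshold ≈ {thresholds[k]:.3f}")
```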
Since everyone has a different value of K, it is easier to just use K = 1, even though everyone's value of K is likely to be much larger than 1. Unfortunately, this often happens in decision theory. When people can't agree on a realistic loss function, they standardize on a mathematically convenient implicit loss function that nobody would consciously choose.
If everyone had different values of K near 1, the CER metric might be robust, i.e. it might often make the right comparison between two different systems even though the criterion is wrong. But since K is probably much larger than 1 for most people, it's questionable that CER would rank two systems the same way people would if they could use their own value of K.
***
[1] These are not the same thing. To compute expected loss you'd need to take into account the frequency of friendly and unfriendly access attempts. In a physically secure location, friends may attempt to log in much more often than foes. On a public web site, the opposite is more likely to be true.