Chances are, a camera is pointing at your face as you read these words. Facial recognition technology is becoming ubiquitous - you are being watched relentlessly the moment you leave the safety of your home. Should we hit the pause button and ban its use, as US cities including San Francisco and Oakland, CA, and Somerville, MA, have done? At present, government agencies and businesses are operating in a veritable Wild West, raising serious concerns about privacy, bias, and human rights infringements. Regulation is lagging behind the development and adoption of this critical technology, and the release last week of a major US report documenting serious racial and gender bias in existing algorithms is an eye-opener. Here's why.

The study, conducted by the National Institute of Standards and Technology (NIST), sought to determine demographic differences in how facial recognition algorithms process images. NIST looked at databases of US law enforcement mugshots, photos of applicants for immigration benefits and visas, and images of people crossing the US border. These totaled 18.27 million images of 8.49 million people, which were processed by 189 algorithms from 99 developers. NIST measured the rate of false positives (wrongly matching a person to an image of someone else) and false negatives (failing to match a person to their own image). False negatives typically occur when two images of the same person differ - because the person's appearance has changed or the images' properties vary. False positives, in contrast, happen when images of two different people look similar enough for the algorithm to declare a match.
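
For readers who want to see the mechanics, here is a minimal sketch of how a threshold on a similarity score turns an image comparison into a false positive or a false negative. It is not NIST's code; the scores and threshold are made-up assumptions purely for illustration.

```python
# Illustrative sketch only (not NIST's code): a face matcher declares a "match"
# when a similarity score between two images clears a threshold. The scores and
# threshold below are hypothetical.

def classify(similarity, same_person, threshold=0.8):
    """Label a single comparison as a true/false positive/negative."""
    matched = similarity >= threshold
    if matched and not same_person:
        return "false positive"   # wrongly matched to someone else's image
    if not matched and same_person:
        return "false negative"   # failed to recognise the person's own image
    return "true positive" if matched else "true negative"

# Two different people who happen to look alike:
print(classify(0.85, same_person=False))  # false positive
# The same person after a change in appearance or with a poor-quality image:
print(classify(0.55, same_person=True))   # false negative
```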

In the real world, both false positives and false negatives are problematic. For instance, if your image is wrongly matched with that of a drug trafficker, you are likely to be detained when you try to cross the border and potentially subjected to prosecution and deportation. On the other hand, when there is a false negative, you may be denied access to your bank account or entry into the country despite possessing a visa, because the algorithm determines that you are not who you claim to be. Even in relatively benign settings, false positives may allow strangers access to your phone or bank account, whereas false negatives lock the rightful owner out of services they are entitled to use.

NIST's research found high false positive differentials across many algorithms. More troubling, these differentials were highly racialised - they varied by factors of 10 to over 100 times. Even when the higher-quality application photos were used, West and East Africans and East Asians had the highest false positive rates. Of particular concern in the US law enforcement context, American Indians had the highest false positive rates, followed by African Americans and Asians. This is extremely significant because both American Indians and African Americans are disproportionately represented in the criminal justice system - experiencing vastly higher rates of incarceration and policing interactions than the white population. If facial recognition is adopted more widely, these disproportionate rates of incarceration and of police stops and searches are likely to rise - perpetuating an unfair criminal justice system and undermining public confidence.
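
To illustrate what a "false positive differential" means in practice, here is a hedged sketch of how per-group false match rates and the ratio between them could be computed. The group labels and records are invented placeholders, not NIST data.

```python
from collections import defaultdict

# Hypothetical sketch: compute false positive rates per demographic group and
# the ratio between the worst and best groups (the "differential").
# Each record is (group, algorithm_said_match, images_were_same_person).

def false_positive_rates(records):
    impostor_trials = defaultdict(int)   # comparisons of two different people
    false_matches = defaultdict(int)
    for group, matched, same_person in records:
        if not same_person:
            impostor_trials[group] += 1
            if matched:
                false_matches[group] += 1
    return {g: false_matches[g] / impostor_trials[g] for g in impostor_trials}

rates = false_positive_rates([
    ("group_a", True, False), ("group_a", True, False),
    ("group_a", False, False), ("group_a", False, False),
    ("group_b", True, False), ("group_b", False, False),
    ("group_b", False, False), ("group_b", False, False),
])
print(rates)                                      # {'group_a': 0.5, 'group_b': 0.25}
print(max(rates.values()) / min(rates.values()))  # a differential of 2x in this toy data
```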

The study underlines that, as with any algorithmic system, facial recognition algorithms suffer from the garbage-in, garbage-out problem. In other words, their quality depends on training data that is free of bias and sufficiently diverse. NIST's study supports this, noting that Chinese-developed algorithms produced low false positive rates on East Asian faces. This suggests that government agencies and businesses employing facial recognition systems should be required to ensure that their training data is diverse, and perhaps to source algorithms from developers in different locations rather than from a single one when they serve a diverse population. Extrapolating from the Chinese experience, one solution might be to commission more algorithms developed by African American and Asian teams, on the implication that these are less likely to be biased against those groups.
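
As a rough illustration of what a "diverse training data" requirement could involve, an agency might at minimum audit the demographic composition of a training set before accepting an algorithm. The labels and threshold below are hypothetical placeholders, not a standard anyone has adopted.

```python
from collections import Counter

# Hypothetical audit sketch: flag any demographic group that falls below a
# minimum share of the training set before a system is accepted for deployment.

def underrepresented_groups(training_labels, minimum_share=0.10):
    counts = Counter(training_labels)
    total = sum(counts.values())
    return [group for group, n in counts.items() if n / total < minimum_share]

labels = ["group_a"] * 800 + ["group_b"] * 150 + ["group_c"] * 50
print(underrepresented_groups(labels))  # ['group_c'] - only 5% of the training set
```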

Aside from racial effects, NIST's study also observed higher false positives for women, the elderly, and children. This is also concerning in light of previous research showing that some facial recognition algorithms falsely identified African American women's faces as men.

The racial and gender effects identified by this study should prompt developers to bring greater diversity into algorithm development. Needless to say, this may not offer an immediate solution - a sufficiently diverse talent pipeline is necessary to make a difference. The talent problem can only be solved if industry and government agencies collaborate to offer incentives that attract more women and minorities into machine learning and related fields. Absent that, algorithm development will continue to be skewed, with adverse consequences for certain groups. At the governmental level, regulators must consider adopting safeguards to prevent harsh consequences from false positives, particularly in the law enforcement context. For instance, before an adverse action is taken, the law enforcement agency could be required to match the person against additional data such as an iris scan. In other words, the facial recognition match must be corroborated by iris recognition before a person's civil liberties are infringed. The difficulty, of course, is that iris imaging may not be possible in many circumstances where facial recognition is used - for example, in street or crowd surveillance.
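
To picture the proposed safeguard, here is a minimal sketch in which an adverse action is permitted only when a second, independent biometric (an iris match) corroborates the facial match. The function and thresholds are hypothetical, not any agency's actual system.

```python
from typing import Optional

# Hypothetical safeguard sketch: facial recognition alone never triggers an
# adverse action; an independent iris match must corroborate it.

FACE_THRESHOLD = 0.90
IRIS_THRESHOLD = 0.95

def adverse_action_permitted(face_score: float, iris_score: Optional[float]) -> bool:
    if iris_score is None:
        # No iris image available (e.g. street or crowd surveillance):
        # the facial match alone is treated as insufficient.
        return False
    return face_score >= FACE_THRESHOLD and iris_score >= IRIS_THRESHOLD

print(adverse_action_permitted(0.93, 0.97))  # True: both biometrics agree
print(adverse_action_permitted(0.93, None))  # False: no corroborating iris scan
```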

We live in a surveillance society - from India's Aadhaar to China's widespread use of facial recognition to Baltimore's aerial surveillance, Big Brother is constantly watching. As the technology improves, existing biases may become entrenched and enforced across an array of daily contexts far more efficiently than humans could manage unassisted by AI. Whereas previous prejudices against African Americans or other minorities may have been limited to individual humans, they may now become systemic - for instance, through a facial recognition-based job application screening system used by companies that denies interviews to minorities at a higher rate.

We have a choice: accept the trade-off of purported security for privacy, human rights, and fairness, or put the genie back in the bottle before it is too late. My vote is for severely limiting the use of facial recognition within narrow contexts after legally vetted checks and balances have been guaranteed. Absent such protections, we are condemning some groups to a genetic lottery based on facial appearance - the very antithesis of Lady Iustitia, who is blindfolded to ensure impartiality.

Sandeep Gopalan is the Vice Chancellor of Piedmont International University, North Carolina, USA


