African Elements Daily
Facial Recognition Technology and Racial Bias in Arrests
Facial recognition technology has led to wrongful arrests, highlighting racial bias in law enforcement practices. (AI-generated Image)

Facial Recognition’s Racial Bias

By Darius Spearman (africanelements)

Support African Elements at patreon.com/africanelements and hear recent news in a single playlist. Additionally, you can gain early access to ad-free video content.

Facial Recognition and Wrongful Arrests

Facial recognition technology (FRT) has emerged as a powerful tool in law enforcement, yet its deployment has been marred by a disturbing pattern of wrongful arrests, particularly impacting Black individuals. This technology, which identifies or verifies a person’s identity by analyzing unique facial features from an image or video, compares these features to a database of known faces to find a match (digitalcommons.law.uw.edu; mit-serc.pubpub.org). While proponents argue it serves as a mere “lead” in investigations, the reality for many, especially Black men, has been far more severe.

Understanding Facial Recognition Technology (FRT)

What it is, how it works, and where police may use it.

Identify & Verify by Face

Facial Recognition Technology (FRT) uses AI to extract facial features from images or video and compares them to known faces to identify or verify a person’s identity.

How it works
  1. Capture: Collect a face from a photo, video frame, or live camera.
  2. Encode: The algorithm maps measurements (feature vectors) describing unique facial traits.
  3. Compare: The computed faceprint is matched against a database to return possible identities.
  4. Decision: An analyst reviews candidates; results may inform investigative leads.

Policing use (examples)

FRT may be used to generate leads from surveillance stills or other imagery. Agencies set policies for usage and verification.

Surveillance footage • Booking photos • Missing persons • Event security
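To make the "Encode" and "Compare" steps above concrete, here is a minimal sketch of how a faceprint search might work, assuming face images have already been reduced to numeric feature vectors by some encoding model. Every name, the vector size, and the similarity threshold are illustrative assumptions, not any vendor's actual software.

```python
# Minimal sketch of the "Encode" and "Compare" steps, assuming face images have
# already been converted to fixed-length feature vectors ("faceprints").
# All names, sizes, and the 0.6 threshold are illustrative assumptions.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two faceprints; 1.0 means identical direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search_gallery(probe, gallery, threshold=0.6, top_k=5):
    """Return up to top_k enrolled identities whose similarity exceeds the threshold.

    Anything returned here is only a candidate list for human review,
    not a positive identification.
    """
    scores = [(name, cosine_similarity(probe, vec)) for name, vec in gallery.items()]
    candidates = [(name, s) for name, s in scores if s >= threshold]
    return sorted(candidates, key=lambda pair: pair[1], reverse=True)[:top_k]

# Toy example: 1,000 random 128-dimensional "faceprints" standing in for a photo database.
rng = np.random.default_rng(0)
gallery = {f"license_photo_{i}": rng.normal(size=128) for i in range(1000)}
probe = gallery["license_photo_42"] + rng.normal(scale=0.2, size=128)  # a noisy re-capture
print(search_gallery(probe, gallery))  # license_photo_42 should top the candidate list
```

The key point the sketch illustrates is that the system only returns candidates whose scores clear a similarity threshold; it never "knows" who is in the probe image, which is why analysts are supposed to treat the output as a lead rather than an identification.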

Consider the harrowing experience of Robert Williams, who was wrongfully arrested and held for nearly 30 hours in Michigan after FRT incorrectly identified him as a shoplifting suspect (abcnews.go.com; time.com). The system returned his expired driver’s license photo as a potential match, highlighting a critical flaw: outdated images can significantly affect accuracy (pubmed.ncbi.nlm.nih.gov; surface.syr.edu). This incident and others like it directly refute the assertion that police only use FRT as a preliminary step before conducting thorough investigations (time.com). At least eight people in the United States have been wrongfully arrested due to FRT misidentification (washingtonpost.com). In nearly every publicized case of false arrest based on FRT, the victim has been Black, including two arrests in Detroit, one involving a pregnant woman accused of robbery and carjacking (criminallegalnews.org).

Police Protocols and AI

A significant concern surrounding the use of FRT by law enforcement is the frequent disregard for established protocols and traditional policing standards. Many police departments fail to report their use of FRT and keep minimal records of its application (washingtonpost.com). This lack of transparency makes it difficult to accurately assess the technology’s actual impact and hold departments accountable for its misuse.

A review of documents from 23 police departments revealed that 15 of them arrested suspects identified through AI matches without any independent evidence (washingtonpost.com). This practice directly contradicts those departments’ own internal policies, which typically require corroboration of AI leads. Some officers appear to abandon traditional policing standards entirely, with one police report referring to an uncorroborated AI result as a “100% match” and another stating police used the software to “immediately and unquestionably” identify a suspect (washingtonpost.com). This overreliance on AI suggestions as definitive facts, rather than probabilistic outputs, highlights a dangerous trend. Despite the very low error rates reported for FRT systems, wrongful arrests caused by inaccurate results from AI-assisted searches of facial databases are increasing (pubmed.ncbi.nlm.nih.gov). Even a low error rate, when applied to massive databases, can generate a substantial number of false positives, especially when algorithmic biases are at play (pubmed.ncbi.nlm.nih.gov).
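To see why even a very low error rate matters at scale, consider a rough back-of-the-envelope calculation. Every number below (the per-comparison error rate, the database size, and the search volume) is an assumption chosen for illustration, not a figure from any agency.

```python
# Back-of-the-envelope illustration; every number is an assumption, not a measured rate.
per_comparison_false_positive_rate = 0.00001  # 1 in 100,000 comparisons (assumed)
database_size = 8_000_000                     # e.g., a statewide license-photo gallery (assumed)
searches_per_year = 1_000                     # assumed search volume for one agency

false_hits_per_search = per_comparison_false_positive_rate * database_size
print(false_hits_per_search)                      # 80 innocent candidate faces per search, on average
print(false_hits_per_search * searches_per_year)  # 80,000 innocent candidates surfaced per year
```

Even under these deliberately modest assumptions, each search surfaces dozens of innocent people as candidates.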

Racial Bias in Facial Recognition

The use of facial recognition technology by law enforcement raises significant concerns about racial bias and its disproportionate impact on Black individuals. FRT systems have access to massive databases, including millions of images such as driver’s license photos, which can be scanned for similar-looking faces (time.com). However, the algorithms are often trained on datasets that are not diverse enough, leading to less accurate results for people of color (mit-serc.pubpub.org). This algorithmic bias means the technology is more prone to false positives when identifying non-white faces (mit-serc.pubpub.org).

Why FRT is Biased Against Black Individuals

Key drivers of racial disparities in facial recognition outcomes.
  • Algorithmic Bias: FRT algorithms are trained on limited, non-diverse datasets, leading to higher error rates for people of color.
  • Disproportionate Data: Black individuals are overrepresented in mugshot databases, increasing the chance of false matches by FRT systems.
  • Systemic Disparities: FRT operates within institutions shaped by racial disparities, compounding harms to Black communities.

Since Black people are substantially overrepresented in mugshot databases, FRT systems are more likely to return matches to Black individuals, leading to disproportionate targeting (criminallegalnews.org). This exacerbates existing racial disparities within the criminal justice system (pubmed.ncbi.nlm.nih.gov). Wrongful arrest and conviction compound the trauma of justice system involvement (pubmed.ncbi.nlm.nih.gov). Wrongful arrests inflict significant psychological distress, including anxiety, depression, and PTSD, as well as social repercussions such as damage to reputation and relationships (pubmed.ncbi.nlm.nih.gov). Economically, individuals may face legal fees, loss of income, and difficulty securing future employment (pubmed.ncbi.nlm.nih.gov). The experience of being wrongfully accused and processed by the criminal justice system can be profoundly destabilizing, with long-lasting negative effects on a person’s life and well-being (pubmed.ncbi.nlm.nih.gov).
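A short sketch makes the compounding effect concrete: overrepresentation in the gallery and a higher per-comparison error rate multiply together. All of the shares, rates, and sizes below are assumptions used only to illustrate the mechanism.

```python
# Illustrative only: all shares, rates, and sizes below are assumptions.
gallery_size = 3_000_000                                              # assumed mugshot database size
gallery_composition = {"Black": 0.45, "white": 0.40, "other": 0.15}   # assumed shares (overrepresentation)
per_comparison_fpr = {"Black": 2e-5, "white": 1e-5, "other": 1e-5}    # assumed 2x higher error rate

for group, share in gallery_composition.items():
    expected_false_candidates = share * gallery_size * per_comparison_fpr[group]
    print(f"{group}: ~{expected_false_candidates:.0f} expected false candidates per search")
```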

The Trevis Williams Case

The case of Trevis Williams further illustrates the dangers of relying on flawed facial recognition technology. Williams, a Black man, was arrested and jailed for two days after FRT allegedly misidentified him as a suspect in a public indecency case. The actual criminal was described as being 5 feet 6 inches tall and weighing 160 pounds, while Williams stands at 6 feet 2 inches and weighs 230 pounds. The only similarities were that both men were Black, had thick beards and mustaches, and wore their hair in braids. This stark physical discrepancy highlights the limitations of FRT when racial bias is present.

Despite Williams having an alibi, with phone location data placing him 12 miles away at the time of the crime, and his willingness to have his employment records checked, police proceeded with the arrest. The victim’s “positive identification” was cited as probable cause, even though it was based on a photo lineup likely influenced by the initial FRT match. This case underscores how police departments can disregard traditional investigative methods and rely heavily on AI suggestions, even when evidence contradicts them. The NYPD claimed that arrests are never made “solely using facial recognition technology,” yet in Williams’ case, it appears that little else was needed beyond the initial FRT match and a subsequent, potentially biased, victim identification. The negative impact on Williams extended beyond his time in jail; his application to become a correctional officer was frozen, demonstrating the long-term consequences of wrongful arrests.

Addressing the FRT Problem

The issues surrounding facial recognition technology necessitate robust legal and policy responses. Some jurisdictions have already proposed or enacted legislation to limit its application by law enforcement (digitalcommons.law.uw.edu). These responses often include outright bans, moratoriums, or strict regulations requiring transparency, accountability, and independent oversight. The aim is to mitigate the risks of wrongful arrests and racial bias associated with FRT, ensuring that its use aligns with civil liberties and due process.

Wrongful Arrests by Facial Recognition

Known U.S. cases tied to facial recognition technology (FRT) and who is affected.
  • Total wrongful arrests (publicly known): 8+ (figure reflects documented cases; the actual number may be higher)
  • Black individuals affected: ~100%

To mitigate the risks of FRT and reduce wrongful arrests, several alternatives and safeguards are being explored. These include developing more accurate and less biased algorithms through diverse training data, implementing stricter protocols for corroborating FRT leads with independent evidence, and prioritizing traditional investigative methods that rely on human intelligence and established forensic techniques (pubmed.ncbi.nlm.nih.gov). Additionally, experts suggest the need for independent audits of FRT systems, clear legal frameworks for their use, and robust accountability mechanisms for law enforcement (pubmed.ncbi.nlm.nih.gov). The use of FRT in policing is deeply intertwined with broader social issues of systemic racism, pervasive surveillance, and civil rights (mit-serc.pubpub.org). Its disproportionate impact on communities of color highlights how new technologies can exacerbate existing racial disparities within the criminal justice system (pubmed.ncbi.nlm.nih.gov). The expansion of FRT also raises significant concerns about mass surveillance, privacy erosion, and the potential for chilling effects on free speech and assembly, thereby impinging on fundamental civil liberties (mit-serc.pubpub.org).
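One of the safeguards mentioned above, independent audits, can in principle be as simple as measuring the false match rate separately for each demographic group on a labeled evaluation set. The sketch below assumes a hypothetical record format and made-up audit data; it is not an existing audit tool.

```python
# Minimal audit sketch: false match rate per group over impostor (different-person) pairs.
# The record format and the sample data are hypothetical.
from collections import defaultdict

def false_match_rate_by_group(trials):
    impostor_pairs = defaultdict(int)   # different-person comparisons seen per group
    false_matches = defaultdict(int)    # of those, how many the system called a match
    for t in trials:
        if not t["same_person"]:
            impostor_pairs[t["group"]] += 1
            if t["system_said_match"]:
                false_matches[t["group"]] += 1
    return {g: false_matches[g] / impostor_pairs[g] for g in impostor_pairs}

# Made-up evaluation records:
trials = [
    {"group": "Black", "same_person": False, "system_said_match": True},
    {"group": "Black", "same_person": False, "system_said_match": False},
    {"group": "white", "same_person": False, "system_said_match": False},
    {"group": "white", "same_person": False, "system_said_match": False},
]
print(false_match_rate_by_group(trials))  # {'Black': 0.5, 'white': 0.0}
```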

Understanding False Positives

A false positive in the context of facial recognition technology refers to an incorrect match, where the technology identifies someone as a person they are not. Error rates indicate how often the system makes mistakes, including false positives. For FRT, a high false positive rate, or even a low rate applied to a large database, can lead to wrongful arrests, as individuals are incorrectly identified as suspects, initiating a chain of events that can result in significant trauma and injustice (pubmed.ncbi.nlm.nih.gov). Variables like facial database size, the race of the culprit, and the quality of the probe photo can increase the likelihood that FRT systems will return false positive matches (pubmed.ncbi.nlm.nih.gov).

The reliability of evidence provided by an FRT match is a significant concern, and problems can arise from its use in criminal investigations, especially regarding false positives (pubmed.ncbi.nlm.nih.gov). Even with “very low error rates,” FRT can still lead to significant harm and wrongful arrests, especially when applied at scale or within biased systems. A low error rate applied to a massive database of millions of faces can still generate a substantial number of false positives. When these false positives disproportionately affect certain demographic groups due to algorithmic bias, or when police misuse the technology by failing to corroborate FRT leads, the seemingly low error rate translates into real-world injustices and wrongful arrests for innocent individuals (pubmed.ncbi.nlm.nih.gov).
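A rough way to quantify this is to ask how often a returned candidate is actually the person being sought. The probabilities below are assumptions for illustration only.

```python
# Back-of-the-envelope estimate; all inputs are assumptions.
prob_culprit_in_database = 0.5      # the person in the probe image may not be enrolled at all
prob_system_ranks_culprit = 0.99    # chance the system surfaces the culprit when enrolled
expected_false_candidates = 2.0     # a low per-comparison error rate times a huge gallery

expected_true_hits = prob_culprit_in_database * prob_system_ranks_culprit
share_who_are_the_culprit = expected_true_hits / (expected_true_hits + expected_false_candidates)
print(round(share_who_are_the_culprit, 2))  # ~0.2: most returned candidates are innocent
```

Under these assumptions, roughly four out of five returned candidates are innocent, which is why uncorroborated FRT leads are so dangerous.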

The Impact of Outdated Photos

The use of an expired driver’s license photo in an FRT match, as seen in Robert Williams’ case, is problematic because facial features change over time due to aging, weight fluctuations, changes in hairstyle, and other factors. An outdated image may not accurately represent a person’s current appearance, leading to a higher likelihood of a false negative (failing to match a person who is actually present) or, more critically in the context of wrongful arrests, a false positive (incorrectly matching someone to an outdated image that vaguely resembles them). This reduces the accuracy and reliability of the FRT system, increasing the risk of misidentification (pubmed.ncbi.nlm.nih.gov).
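The mechanism can be illustrated with invented similarity scores: an outdated enrollment photo drags the true person’s score down, so a superficially similar stranger can end up as the only candidate above the match threshold. Every score below is made up.

```python
# Toy illustration only; every score below is invented.
threshold = 0.60
scores_against_probe = {
    "perpetrator_recent_photo":  0.91,  # what an up-to-date enrollment photo might score (hypothetical)
    "perpetrator_expired_photo": 0.55,  # same person, years-old photo: drops below the threshold (false negative)
    "innocent_lookalike":        0.63,  # different person who superficially resembles the probe (false positive)
}
candidates = {name: score for name, score in scores_against_probe.items() if score >= threshold}
print(candidates)  # only the innocent lookalike is returned as a candidate
```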

Poor probe photo quality can significantly increase the likelihood that FRT systems will return false positive matches (pubmed.ncbi.nlm.nih.gov). An expired driver’s license photo would likely be considered a low-quality or outdated probe photo. In one case, police placed a driver’s license photo in front of a suspect alongside surveillance photos and asked, “Is this you?”, highlighting the reliance on such images (surface.syr.edu). This practice, combined with the inherent limitations of outdated images, contributes to the risk of wrongful arrests and underscores the need for more stringent protocols regarding the quality and recency of images used in FRT searches.

Surveillance and Privacy

The existence of large facial recognition databases, often compiled from sources like mugshots, driver’s licenses, and social media, raises significant privacy concerns and contributes to fears of mass surveillance. These databases allow for widespread tracking and identification of individuals without their consent, eroding personal privacy and creating a chilling effect on civil liberties (mit-serc.pubpub.org). Public concerns revolve around the potential for government overreach, the misuse of personal data, and the creation of a society where individuals are constantly monitored, leading to a broader debate about the balance between security and individual rights (mit-serc.pubpub.org).

The use of FRT is present in people’s everyday lives, indicating a widespread collection of facial data that contributes to these large databases (digitalcommons.law.uw.edu). Citizens’ rights and social justice groups have identified undesirable societal consequences arising from the uncritical use of FRT algorithms, including false arrest and excessive government surveillance (mit-serc.pubpub.org). This broader context of surveillance and privacy concerns is crucial for understanding the full implications of FRT’s deployment and the urgent need for comprehensive regulation.

ABOUT THE AUTHOR

Darius Spearman has been a professor of Black Studies at San Diego City College since 2007. He is the author of several books, including Between The Color Lines: A History of African Americans on the California Frontier Through 1890. You can visit Darius online at africanelements.org.