
Live Facial Recognition: A Threat to Black Communities
By Darius Spearman (africanelements)
Support African Elements at patreon.com/africanelements and hear recent news in a single playlist. Additionally, you can gain early access to ad-free video content.
The Expanding Reach of Surveillance
Live facial recognition (LFR) technology is quickly spreading across the United Kingdom, sparking serious concerns about civil liberties and democratic freedoms. The Metropolitan Police have already begun permanently installing LFR cameras in South London. Furthermore, the government has launched a £20 million tender to expand the technology nationwide (ipsnews.net). This push is part of a broader expansion of state surveillance that alarms civil liberties advocates.
Civil society organizations are sounding the alarm, warning that this technology poses significant risks. These risks include privacy invasions, incorrect identifications, and a gradual expansion of its use beyond original intentions, known as function creep (ipsnews.net). As authorities increasingly deploy these systems at public gatherings and demonstrations, there are growing fears about their potential to limit our freedoms. This expansion is particularly troubling for Black communities, who often bear the brunt of over-policing and biased technologies.
Understanding Facial Recognition
Facial recognition technology works by analyzing an image of a person's face to create a unique biometric map. This map records the distances between facial features, producing a pattern as distinct as a fingerprint. That biometric data is then encoded as a numerical template, which can be compared against other facial images.
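To make the idea of a biometric map concrete, here is a minimal sketch in Python. It is purely illustrative: modern systems derive templates with deep neural networks rather than raw landmark distances, and every landmark value below is invented for the example.

```python
from itertools import combinations
import math

# Hypothetical (x, y) landmark positions, as a face-detection step
# might produce them. The values are invented for illustration.
landmarks = {
    "left_eye": (30.0, 40.0),
    "right_eye": (70.0, 40.0),
    "nose_tip": (50.0, 60.0),
    "mouth_center": (50.0, 80.0),
}

def biometric_map(points):
    """Encode a face as the pairwise distances between its landmarks."""
    return {
        f"{a_name}-{b_name}": math.dist(a, b)
        for (a_name, a), (b_name, b) in combinations(points.items(), 2)
    }

# The resulting numbers act as a crude "template" standing in for the face.
print(biometric_map(landmarks))
```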
There are different ways this technology is used. One-to-one matching compares a face to a single image, such as an ID photo, to confirm identity. However, one-to-many matching is far more concerning. This method scans facial data against larger databases and is commonly used by law enforcement and intelligence agencies for surveillance (ipsnews.net). Live facial recognition (LFR) is the most controversial type. It involves using CCTV cameras with special software to scan everyone passing by, mapping faces and comparing them to watchlists in real time (ipsnews.net). This real-time, mass surveillance capability is what makes LFR so dangerous to our freedoms.
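The structural difference between one-to-one and one-to-many matching can be sketched in a few lines. This is a simplified illustration, not any vendor's actual pipeline: it assumes faces have already been reduced to numeric templates, and the similarity threshold and template values are made up.

```python
import math

def similarity(a, b):
    """Cosine similarity between two face templates (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

THRESHOLD = 0.95  # invented value; real systems tune this, trading off error types

def verify(probe, id_photo):
    """One-to-one: does this face match this one document photo?"""
    return similarity(probe, id_photo) >= THRESHOLD

def scan_watchlist(probe, watchlist):
    """One-to-many (the LFR case): compare one passer-by against every entry."""
    return [name for name, template in watchlist.items()
            if similarity(probe, template) >= THRESHOLD]

# Hypothetical templates: every pedestrian's face is checked against the list.
watchlist = {"person_A": [0.1, 0.9, 0.3], "person_B": [0.7, 0.2, 0.6]}
passerby = [0.12, 0.88, 0.31]
print(scan_watchlist(passerby, watchlist))  # flags person_A
```

The point the sketch makes is structural: verification compares a face against one stored image, while the LFR-style scan compares every passer-by against the entire watchlist, which is why critics describe it as a mass identity check.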
Other forms of facial recognition include retrospective facial recognition, which uses still images from crime scenes or social media against police databases. There is also operator-initiated recognition, where officers use phone apps for real-time checks against custody images (ipsnews.net). While these methods also raise concerns, LFR’s ability to conduct mass surveillance on an unsuspecting public is uniquely alarming.
Key Terms in Facial Recognition
Live Facial Recognition (LFR): A technology that uses cameras and software to scan faces in public spaces in real time, comparing them to watchlists. It treats everyone as a potential suspect, undermining privacy and eroding presumed innocence.
Function Creep: The gradual expansion of a technology’s use beyond its original purpose. For LFR, this means starting with one stated goal and slowly expanding its application to other areas, often without public consent or proper oversight.
Watchlist: A list of individuals that law enforcement or other agencies are actively looking for. In the context of LFR, these are the people whose faces the system is programmed to identify and flag in real time. The criteria for inclusion on these lists are often unclear.
Eroding Freedoms and Presumed Innocence
Live facial recognition fundamentally violates the principles of a free society. It conducts mass identity checks on everyone in real time, regardless of whether there is any suspicion (ipsnews.net). This is like police stopping every person on the street to check their DNA or fingerprints. It reverses the core idea that suspicion should come before surveillance, giving police immense power to identify and track people without their knowledge or permission. Instead of investigating after a crime, LFR treats everyone as a potential suspect, which chips away at our privacy and the idea that we are innocent until proven guilty.
The threat this technology poses to civic freedoms is severe. Anonymity in crowds is vital for protest, allowing people to stand together as a collective without fear of individual targeting. Live facial recognition destroys this anonymity, creating a chilling effect. People become less likely to protest if they know they will be biometrically identified and tracked (ipsnews.net). Despite warnings from the United Nations against using biometric surveillance at protests, UK police have deployed LFR at demonstrations, including arms fairs, environmental protests, and even the King’s coronation. These tactics mirror those used in authoritarian regimes, which is deeply concerning for a country that claims to uphold rights.
The Stain of Discrimination
Facial recognition technology is inherently discriminatory. Independent studies consistently show that its accuracy is significantly lower for women and people of color (ipsnews.net), largely because the algorithms have been trained on data sets dominated by white male faces. Even with recent improvements, the technology still performs worst for women of color (ipsnews.net).
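How do researchers establish such disparities? The standard method is disaggregated evaluation: run the matcher over a labeled test set and compute error rates separately for each demographic group. The sketch below shows only the bookkeeping; the records and the resulting rates are invented, not real measurements.

```python
from collections import defaultdict

# Hypothetical evaluation records: (group, pair_is_same_person, system_flagged_match).
# A real audit would use thousands of labeled trials per group.
trials = [
    ("group_a", False, True),   # a false match
    ("group_a", False, False),
    ("group_b", False, False),
    ("group_b", False, False),
    ("group_b", True, True),    # a correct match; ignored by this metric
]

def false_match_rate_by_group(records):
    """False matches divided by genuinely non-matching pairs, per group."""
    false_matches = defaultdict(int)
    non_matching_pairs = defaultdict(int)
    for group, same_person, flagged in records:
        if not same_person:
            non_matching_pairs[group] += 1
            false_matches[group] += int(flagged)
    return {g: false_matches[g] / non_matching_pairs[g] for g in non_matching_pairs}

# Unequal rates across groups are exactly the disparity the studies report.
print(false_match_rate_by_group(trials))
```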
This technological bias compounds existing problems within UK policing. Independent reports have already found systemic racist, misogynistic, and homophobic biases in UK police forces (ipsnews.net). Black communities, in particular, face disproportionate criminalization. When these existing biases are combined with flawed technology, inequalities deepen: if police watchlists contain a disproportionate number of people of color, the system will repeatedly flag them, reinforcing patterns of over-policing and validating existing biases (ipsnews.net).
The locations where LFR is deployed also reveal targeting patterns. Mobile units are often used in poorer areas with larger populations of people of color, and one of the earliest deployments was at Notting Hill Carnival, London's biggest celebration of Afro-Caribbean culture. These choices raise serious questions about who is being targeted by this surveillance.
A Legal Void
There is a significant legal gap concerning the use of facial recognition technology in the UK. Unlike many other countries, the UK does not have a written constitution, and police powers have developed through common law. Police forces argue that vague common law powers to prevent crime allow them to use facial recognition, falsely claiming it improves public safety (ipsnews.net). This lack of a clear legal framework is alarming.
Parliamentary committees have expressed serious concerns about this legal vacuum. Currently, each police force creates its own rules for deployment locations, watchlist criteria, and safeguards (ipsnews.net). They even use different algorithms, which have varying levels of accuracy and bias. For such an intrusive technology, this inconsistent approach is simply unacceptable. Despite trials beginning in 2015, successive governments have failed to introduce proper regulation. It remains unclear whether the new Labour government will introduce comprehensive legislation or merely codes of practice. This lack of oversight stands in stark contrast to the European Union’s AI Act, which introduces strong safeguards on facial recognition and remote biometric identification (ipsnews.net).
Privacy at Stake
The use of live facial recognition technology raises significant privacy concerns for individuals. When law enforcement agencies deploy LFR in public spaces, they are processing biometric data, which falls under data protection laws (ico.org.uk). The technology collects sensitive personal data, but the report does not fully explain how this data is collected, stored, or shared, or what rights individuals have to access or delete their own data. This lack of transparency leaves people vulnerable.
The Biometrics and Forensics Ethics Group (BFEG) has also raised concerns about the potential for racial bias in facial recognition, which could lead to privacy risks for certain groups (assets.publishing.service.gov.uk). Police forces are expected to show that their use of facial recognition complies with human rights, equality, and data protection standards (mctd.ac.uk). However, without clear regulations and transparent practices, the privacy of individuals, especially those from marginalized communities, remains at risk.
The Watchlist Enigma
A critical, yet often overlooked, aspect of LFR deployment is the creation and management of police watchlists. The report mentions these watchlists but does not explain the criteria for including individuals on them, nor how these lists are reviewed or updated (repository.essex.ac.uk). This lack of transparency is deeply troubling. For a lay audience, understanding what a police watchlist is and how it is compiled is essential to grasping the full implications of LFR.
While the Metropolitan Police Service (MPS) has data practices related to watchlist criteria, the specific details of these practices are not made clear to the public (repository.essex.ac.uk). This opacity means that we do not know if these lists are fair, accurate, or free from bias. Given the documented racial biases in the technology itself and in policing, there is a significant risk that watchlists could disproportionately target Black individuals, leading to further over-policing and harassment. Without clear guidelines and public oversight, the potential for abuse and discrimination is immense.
Economic and Social Ripples
The discussion around facial recognition technology often focuses on privacy and civil liberties, but it is important to consider its broader economic and social impacts. The facial recognition market in the UK is projected to reach US$174.10 million in 2025 (mctd.ac.uk). This significant market size indicates strong economic drivers behind its expansion. However, the report does not delve into the deeper economic consequences, such as its effects on employment or innovation, beyond the immediate surveillance industry.
Beyond economics, the social impacts are profound. The concerns about racial bias in facial recognition technology, for example, highlight how it can exacerbate social inequalities within specific communities (assets.publishing.service.gov.uk). When a technology disproportionately misidentifies or targets people of color, it creates a two-tiered system of surveillance and justice. This can lead to increased distrust in law enforcement, further marginalization, and a chilling effect on public participation, especially for those who are already vulnerable to systemic biases. The social fabric of communities can be strained when residents feel constantly watched and unfairly targeted by technology.
[Chart: Projected UK facial recognition market size in 2025: US$174.10 million (mctd.ac.uk)]
The Path Forward: Regulation and Engagement
The current lack of regulation for facial recognition technology in the UK is a pressing issue. While the report highlights this gap, it does not clearly outline potential policy solutions or processes for public consultation. Clear next steps for governance and public involvement are desperately needed to ensure accountability and fairness. For example, South Wales Police did not consult the public or civil society before their trials, which shows a significant gap in public engagement (mctd.ac.uk).
To move forward, there must be a commitment to greater transparency. Audits of police deployments of facial recognition are intended to improve accountability to the public, suggesting an ongoing need for scrutiny (mctd.ac.uk). The Information Commissioner’s Office (ICO) has already provided opinions on the use of live facial recognition technology by law enforcement, indicating that there are existing regulatory efforts within the UK (ico.org.uk).
ABOUT THE AUTHOR
Darius Spearman has been a professor of Black Studies at San Diego City College since 2007. He is the author of several books, including Between The Color Lines: A History of African Americans on the California Frontier Through 1890. You can visit Darius online at africanelements.org.