African Elements Daily
Health-equity groups are demanding “equity-first” standards for medical AI to combat deep-seated racial bias, ensuring that AI in healthcare works for everyone, particularly Black patients.

Equity-First AI: Fixing Medical Bias

By Darius Spearman (africanelements)

Support African Elements at patreon.com/africanelements and hear recent news in a single playlist. Additionally, you can gain early access to ad-free video content.

Artificial intelligence promises a new era in healthcare, one filled with revolutionary diagnostics and personalized treatments. Yet, a shadow looms over this bright future. Mounting evidence shows that many medical AI tools, instead of being objective, carry forward and even amplify deep-seated health disparities affecting Black patients. Consequently, a coalition of U.S. health-equity groups is now demanding that regulators establish “equity-first” standards for medical AI. This push arrives at a critical moment, as attacks on Diversity, Equity, and Inclusion (DEI) initiatives are on the rise, leaving only a narrow window to secure these vital protections. This is not a new problem but a modern manifestation of a long history of medical racism, now encoded in digital logic.

The Historical Roots of Medical Bias

The story of biased AI in medicine did not begin with the first line of code. It began centuries ago, rooted in a medical system built alongside racial hierarchies. Long before computers, medical practices themselves embedded racial biases that later seeped into the data used to train algorithms. The gruesome history of medical experimentation on Black people laid a foundation of distrust and inequity. Furthermore, this history was justified by flawed ideas about biological differences between races, a concept modern science has thoroughly debunked (nih.gov).

For example, doctors have long used “race correction” in clinical tools. One such tool is spirometry, a test that measures lung function to diagnose conditions like asthma (nih.gov). For decades, these tests included adjustments based on the false assumption that Black people naturally have lower lung capacity (nih.gov). Similarly, the estimated glomerular filtration rate (eGFR), a crucial test for kidney function, historically used a multiplier for Black patients. This adjustment made their kidney function appear healthier than it actually was, often delaying diagnoses and life-saving transplants (kidneyfund.org). These “corrections” were not based on sound science but on pseudoscientific beliefs used to justify slavery and inequality (nih.gov).
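
To see how a race “correction” works mechanically, here is a minimal Python sketch of the now-retired 2009 CKD-EPI creatinine equation beside its race-free 2021 replacement. The coefficients are the published ones as best we can reconstruct them; this is an illustration of the multiplier’s effect, not clinical software.

```python
# Illustrative sketch of the 2009 CKD-EPI eGFR equation, which applied a
# 1.159 multiplier for patients recorded as Black, versus the race-free
# 2021 revision. NOT for clinical use.

def egfr_2009(scr_mg_dl: float, age: int, female: bool, black: bool) -> float:
    """2009 CKD-EPI creatinine equation (includes the race 'correction')."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159  # the race multiplier: inflates eGFR, masking disease
    return egfr

def egfr_2021(scr_mg_dl: float, age: int, female: bool) -> float:
    """2021 CKD-EPI refit: same inputs, no race term."""
    kappa = 0.7 if female else 0.9
    alpha = -0.241 if female else -0.302
    egfr = (142
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.200
            * 0.9938 ** age)
    if female:
        egfr *= 1.012
    return egfr

# Same patient, same lab value: the 2009 equation reports roughly 16% higher
# kidney function solely because the patient is recorded as Black.
print(egfr_2009(1.4, 55, female=False, black=True))   # inflated estimate
print(egfr_2009(1.4, 55, female=False, black=False))  # same labs, no multiplier
print(egfr_2021(1.4, 55, female=False))               # race-free estimate
```

Because transplant referral commonly hinges on an eGFR threshold, an estimate inflated by roughly 16% could keep a Black patient above the cutoff, and off the waitlist, for years.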

Defining Algorithmic Bias in AI

To understand the current crisis, it is important to define the core problem. Algorithmic bias occurs when a computer system produces results that unfairly favor one group over another (thedecisionlab.com). AI systems are not born biased; they learn bias from the data they are given. If the data reflects historical injustice, the AI will learn and automate that injustice. This happens through machine learning, a type of AI that allows computers to learn from data without being explicitly programmed for every task (datacamp.com). These systems analyze huge datasets to find patterns and make predictions.

A pivotal moment in understanding this danger came in 2019 with a “bombshell study” published in the journal Science (nih.gov). Researchers found that a widely used commercial algorithm was systematically underestimating the health needs of the sickest Black patients. The algorithm used a person’s past healthcare spending as a substitute, or proxy, for their health needs. However, because of systemic inequality, less money has historically been spent on Black patients, even when they were sicker than their White counterparts (nih.gov). As a result, the algorithm falsely concluded that Black patients were healthier, locking them out of programs designed to provide extra care. Correcting this single bias could have more than doubled the number of Black patients flagged for additional support (nih.gov).
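
The mechanism is easy to reproduce in a few lines. The sketch below uses entirely synthetic numbers (not the study’s data): illness is the true need, spending systematically understates it for Black patients, and a program that ranks by cost therefore passes over equally sick Black patients.

```python
# Toy simulation of proxy bias. Synthetic numbers, not the 2019 study's data.
import random

random.seed(0)

patients = []
for _ in range(10_000):
    black = random.random() < 0.5
    illness = random.gauss(50, 15)              # true health need
    spend_rate = 0.6 if black else 1.0          # systemic under-spending
    cost = illness * spend_rate + random.gauss(0, 5)
    patients.append({"black": black, "illness": illness, "cost": cost})

# The commercial algorithm predicted cost; ranking by observed cost is the
# best-case stand-in for a perfectly accurate cost model.
by_cost = sorted(patients, key=lambda p: p["cost"], reverse=True)
flagged = by_cost[:1000]                        # top 10% get extra care
print(f"Black share, ranked by cost: {sum(p['black'] for p in flagged) / 1000:.1%}")

# Ranking on true need instead of cost removes the distortion.
by_need = sorted(patients, key=lambda p: p["illness"], reverse=True)
flagged = by_need[:1000]
print(f"Black share, ranked by need: {sum(p['black'] for p in flagged) / 1000:.1%}")
```

Race never enters the ranking; the disparity rides entirely on the cost label, which is exactly the pattern the Science study documented.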

AI Models Mirror Human Bias in Pain Assessment

A 2024 study revealed that AI models exhibit false beliefs about racial biology, similar to human trainees, leading to underestimation of pain in Black patients.

Rate of false beliefs about racial biology, by evaluator:

Gemini Pro: 24%
Human trainees: 12%
GPT-4: 9%

Source: 2024 study on AI and racial bias in pain assessment.

Modern AI Bias in Healthcare

Despite growing awareness, algorithmic bias continues to plague modern medicine across many fields. AI-powered chatbots have been shown to underestimate the pain of Black patients, mirroring the biases of human doctors and perpetuating false beliefs about biological differences (painmedicinenews.com). A 2024 study found that the AI model Gemini Pro exhibited the highest rate of these false beliefs at 24%, followed by human trainees at 12% and GPT-4 at 9%. Large Language Models (LLMs), advanced AI systems trained on massive text databases to generate human-like language, also show bias in psychiatric care (ibm.com; cedars-sinai.org). A 2025 Cedars-Sinai study found that when a patient was identified as Black, leading LLMs often proposed different or inappropriate treatments (cedars-sinai.org).

In one alarming instance, an LLM suggested “guardianship” for a Black patient with depression. Guardianship is a legal process that strips an individual of their autonomy, giving another person control over their medical and financial decisions (aclu.org). For a Black person, such a recommendation is deeply problematic, echoing a history of coercive mental health interventions and reinforcing stereotypes about incapacity (aclu.org). In addition, diagnostic tools show significant inaccuracies for people with darker skin. AI trained to detect skin cancer often fails because its training data is overwhelmingly composed of images of light skin (harvard.edu). Pulse oximeters are nearly three times more likely to miss low oxygen levels in Black patients, and forehead thermometers are 26% less likely to detect fevers (harvard.edu). These are not minor glitches; they are systemic failures with life-or-death consequences.
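
Catching failures like these requires disaggregated evaluation, that is, computing error rates separately for each group rather than reporting one overall accuracy figure. Here is a minimal sketch of such an audit; the records and group labels are hypothetical stand-ins for a labeled test set.

```python
# Minimal subgroup audit: compare false-negative rates across groups.
# Records are hypothetical; in practice they come from a labeled test set.
from collections import defaultdict

# (group, condition actually present, model flagged it)
records = [
    ("light_skin", True, True), ("light_skin", True, True),
    ("light_skin", True, False), ("light_skin", False, False),
    ("dark_skin", True, False), ("dark_skin", True, False),
    ("dark_skin", True, True), ("dark_skin", False, False),
]

misses = defaultdict(int)      # condition present but model said no
positives = defaultdict(int)   # condition present

for group, present, flagged in records:
    if present:
        positives[group] += 1
        if not flagged:
            misses[group] += 1

for group in positives:
    fnr = misses[group] / positives[group]
    print(f"{group}: false-negative rate = {fnr:.0%}")
# A large gap between groups (33% vs. 67% here) is the red flag that a
# single overall accuracy number would hide.
```

Applied to pulse oximetry, the same audit would stratify missed-hypoxemia rates by recorded race; the nearly threefold gap cited above is precisely what one aggregate accuracy figure conceals.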

The Call for Equity-First AI Standards

In response to this crisis, health-equity advocates are championing an “equity-first” approach. This framework insists that fairness and justice must be intentionally designed into AI systems from the very beginning (greenlining.org). It is a human-centered model that requires developers to include marginalized communities in the design process, ensuring the data used reflects their real-world experiences (accessh.org). In December 2025, the NAACP and the healthcare company Sanofi released an authoritative report, known as a white paper, titled “Building a Healthier Future: Equity-First AI in Healthcare” (naacp.org). This document calls for every medical AI system to be tested for racial bias, for the results to be made public, and for Black community groups to have a seat at the development table (naacp.org).

This movement is especially urgent because of the political climate. A wave of attacks on DEI initiatives is sweeping through medical schools and hospitals (healthcare-brew.com). These attacks, which often come in the form of legislation or policy changes, seek to dismantle programs aimed at diversifying the healthcare workforce and addressing health disparities (upenn.edu). This pushback against equity threatens to widen racial health gaps and represents a form of anti-Black politics that could halt progress (upenn.edu). Advocates see a narrow window to establish strong, lasting protections before these anti-DEI efforts can further erode the foundation for equitable healthcare.

Impact of Removing Bias from Healthcare Algorithm

The 2019 study in *Science* showed that correcting racial bias in a risk-prediction algorithm dramatically increased the number of Black patients identified for extra care.

Black patients as a share of those flagged for extra care:

Before fix: 17.7%
After fix: 46.5%

Source: Obermeyer et al., *Science* (2019).

Regulatory Response and Legal Frameworks

Federal regulators are beginning to take action. In May 2024, the U.S. Department of Health and Human Services (HHS) Office for Civil Rights (OCR) issued a new rule under Section 1557 of the Affordable Care Act (ACA) (mintz.com). The OCR is the agency tasked with enforcing federal civil rights laws in healthcare (hhs.gov). Section 1557 is a broad non-discrimination provision that prohibits discrimination based on race, color, national origin, sex, age, or disability in federally funded health programs (peoplekeep.com). The new rule explicitly extends these protections to cover “patient care decision support tools,” a category that includes medical AI (mintz.com).

This rule requires healthcare providers using these tools to make “reasonable efforts” to identify and reduce their discriminatory impact. However, some critics argue the rule does not go far enough. They point out that bias can arise even when the input data seems neutral, such as through proxy variables (medium.com). The 2019 study’s algorithm did not use race as a direct input. Instead, it used healthcare spending, which acted as a proxy for race due to systemic inequities (nih.gov). Therefore, advocates believe the duty to mitigate bias should apply to all AI tools, not just those with obviously discriminatory inputs, to ensure comprehensive protection.

How Neutral Data Creates Biased AI

Bias can enter AI systems indirectly, even without using race as an input. This flowchart shows how a seemingly neutral factor like “cost” can lead to a discriminatory outcome.

Systemic Inequality → Less Spending on Black Patients → AI Proxy for “Health Need” → Biased Outcome: sicker Black patients are denied care.
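
The flowchart also suggests the audit: before trusting a “neutral” feature, test whether it behaves as a proxy by measuring how well it separates the protected groups. A rough sketch on synthetic data follows; real audits would use held-out data and stronger tests, such as mutual information or a probe classifier.

```python
# Sketch of a proxy check: does the "neutral" feature separate the groups?
# Synthetic data only.
import random

random.seed(1)

rows = []
for _ in range(5_000):
    black = random.random() < 0.5
    illness = random.gauss(50, 15)
    cost = illness * (0.6 if black else 1.0) + random.gauss(0, 5)
    rows.append((cost, black))

def mean(xs):
    return sum(xs) / len(xs)

def stdev(xs):
    m = mean(xs)
    return (sum((x - m) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5

cost_black = [c for c, b in rows if b]
cost_other = [c for c, b in rows if not b]

# Standardized mean gap: how many standard deviations separate the groups
# on this feature. Near 0 means 'cost' carries little group signal.
gap = (mean(cost_other) - mean(cost_black)) / stdev([c for c, _ in rows])
print(f"standardized cost gap between groups: {gap:.2f}")
```

Under the critics’ reading of Section 1557, a gap this large would trigger the duty to mitigate even though race never appears as a model input.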

A New Front in an Old Struggle

The push for equity-first medical AI is a critical continuation of the long fight against health disparities. It acknowledges that technology is not inherently neutral; it is shaped by the society that creates it. The biases embedded in medicine for centuries have provided the raw material for algorithms to learn and perpetuate those same injustices on a massive scale. From flawed “race corrections” to biased diagnostic tools, the evidence is undeniable. Equity cannot be an add-on or an afterthought in the development of medical AI.

Ultimately, the current efforts by health-equity groups and regulators represent a pivotal moment. The goal is to reshape the future of digital medicine to serve all people fairly. This requires constant monitoring for bias, public transparency about AI performance, and meaningful engagement with the communities most affected. Only through this deliberate and sustained action can the promise of AI be realized for everyone. This ensures technology helps close, rather than widen, the gaps in health equity, continuing the long, difficult journey toward true freedom from systemic harm.

About the Author

Darius Spearman is a professor of Black Studies at San Diego City College, where he has been teaching for over 20 years. He is the founder of African Elements, a media platform dedicated to providing educational resources on the history and culture of the African diaspora. Through his work, Spearman aims to empower and educate by bringing historical context to contemporary issues affecting the Black community.