
Black Women Lead the Fight for Digital Justice in AI
By Darius Spearman (africanelements)
Support African Elements at patreon.com/africanelements and hear recent news in a single playlist. Additionally, you can gain early access to ad-free video content.
The digital world is moving fast. Today, on March 27, 2026, Howard University is taking a stand. The Center for Women, Gender, and Global Leadership is hosting a major summit. This event is called “Toward Just Digital Futures.” It is a gathering of some of the brightest minds in tech and social justice. They are looking at how Artificial Intelligence, or AI, is changing the way people work. More importantly, they are looking at how these changes hurt Black women more than anyone else. This is a story about power. It is about who builds the future and who gets left behind.
President Donald Trump currently leads the nation during this time of rapid tech growth. His administration focuses on American dominance in AI. However, groups at Howard University argue that dominance should not come at the cost of equity. The history of Black women in technology is long. It did not start with a smartphone or a chatbot. It started with pencils, paper, and the stars. To understand what is happening today at Howard, one must look back at the women who were the original computers.
The Original Computers and the FORTRAN Revolution
In the 1940s and 1950s, the word “computer” meant a person. It was a job title. Many of these workers were Black women at NASA. Katherine Johnson, Dorothy Vaughan, and Mary Jackson are now famous names. They did the math for the space race by hand. These women calculated the paths for rockets using only their minds and simple tools. They were essential to the mission. Yet, they worked in segregated offices. They did not get the same pay or title as white men doing similar work.
When electronic machines arrived, these women had to adapt. This was the era of FORTRAN, short for Formula Translating System. It was the first widely used high-level programming language. It allowed humans to talk to machines using mathematical notation. Dorothy Vaughan saw the future. She taught herself and her team how to use FORTRAN (purpose.jobs). This shift was vital. It turned manual calculators into software engineers. Even then, Black women drove major advances in science while facing deep bias. They often did the hardest work but received the least credit.
The history of computing is full of these “human computers” who paved the way. They mastered the complex logic of FORTRAN to program IBM mainframes. They helped the United States reach the moon. However, as the field of programming became more professional, it also became more exclusive. Many Black women were pushed into administrative roles. Their technical expertise was often hidden. This set a pattern that still exists today. The Howard summit aims to break this pattern by reclaiming that technical history.
The New Jim Code and Algorithmic Bias
Today, bias does not always look like a “whites only” sign. It looks like a computer program. Scholars call this “The New Jim Code.” This term was created by Ruha Benjamin. It describes how new technology can repeat old racism. People think machines are fair because they use math. But machines learn from data. If the data is biased, the machine will be biased too. This is why facial recognition often fails to see Black faces correctly (medium.com).
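The mechanism is simple enough to sketch in a few lines. The example below uses entirely hypothetical data and a deliberately crude "model" (a lookup of historical acceptance rates) to show how past biased decisions become future biased predictions. The zip codes and outcomes are invented for illustration; real systems are far more complex, but the core failure mode is the same.

```python
# A minimal sketch (hypothetical data) of how a model trained on biased
# records reproduces that bias. The "model" here is just a lookup of
# historical acceptance rates per feature value -- the simplest learner.

from collections import defaultdict

# Invented historical hiring records: (neighborhood_zip, hired).
# Zip code can act as a proxy for race; the past decisions were biased.
history = [
    ("zip_A", True), ("zip_A", True), ("zip_A", True), ("zip_A", False),
    ("zip_B", False), ("zip_B", False), ("zip_B", True), ("zip_B", False),
]

def train(records):
    """Learn the historical hire rate for each zip code."""
    totals, hires = defaultdict(int), defaultdict(int)
    for zip_code, hired in records:
        totals[zip_code] += 1
        hires[zip_code] += hired
    return {z: hires[z] / totals[z] for z in totals}

model = train(history)
print(model["zip_A"])  # 0.75 -- the model "prefers" zip_A
print(model["zip_B"])  # 0.25 -- past bias, now laundered into math
```

No one told this system to discriminate. It simply learned the pattern that was already in the data, which is exactly what Benjamin's "New Jim Code" describes.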
In the workforce, this bias shows up in hiring. Companies use AI to scan resumes. These systems are meant to find the best workers. Instead, they often find workers who look like the people already at the company. If a company has mostly white male managers, the AI might learn to prefer those traits. This creates a cycle where economic justice is harder to reach. Black women often find their resumes filtered out before a human even sees them.
The Howard summit is looking at how these algorithms “launder” bias. This means they take human prejudice and hide it inside a “black box” of code. When a person is denied a job by an AI, they often do not know why. They cannot see the logic used by the software. This makes it very hard to fight back. The summit speakers argue that we must open these black boxes. We need to see how they work. This is the only way to ensure they treat everyone fairly.
The Design Deficit
Only 5% of the AI workforce is Black.
When we don’t build it, it doesn’t serve us.
The Second Wave of Harm: Employment Gaps
There is a new problem called the “second wave of harm.” This happens when AI punishes people for things they cannot control. For example, many Black women work in service and office support roles. These jobs are at high risk for AI automation (iwpr.org, mckinsey.com). When these jobs disappear, workers have gaps in their resumes. They might spend months looking for a new path. They might take time off to care for family members. This is common for Black women during economic shifts.
AI hiring filters see these gaps as a bad sign. They do not see a mother caring for her children or a worker learning new skills. They see a “risk.” The computer lowers the score for that person. This creates a trap. A woman loses her job to AI, and then AI prevents her from getting a new job because she was unemployed. This cycle is very dangerous. It can push thousands of qualified people out of the workforce entirely (hiredaiapp.com).
Researchers at the Howard summit are highlighting this issue. They explain that AI filters often use "proxy variables." A proxy is something that stands in for something else. To a computer, a gap in a resume becomes a proxy for "lack of commitment." In the real world, however, it is often just a sign of caregiving, of retraining, or of weathering a tough economy. By penalizing these gaps, AI systems are widening the racial wealth gap. They are making it harder for Black families to build stability.
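The proxy problem can be made concrete with a toy scoring function. Everything below is hypothetical, invented to illustrate the logic the researchers describe, not drawn from any real screening product. The point is that the filter never asks why a gap exists, only that it exists.

```python
# Hypothetical sketch of a "proxy variable" penalty in a resume filter.
# The gap penalty stands in for "commitment" without ever asking what
# the gap actually was -- caregiving and layoffs look identical here.

def screen(years_experience: int, gap_months: int) -> float:
    """Toy resume score: rewards experience, penalizes any gap."""
    score = years_experience * 10.0
    score -= gap_months * 2.0  # the proxy: gap length, reason unknown
    return score

# Two equally experienced candidates; one took 12 months for family care.
continuous = screen(years_experience=8, gap_months=0)
caregiver = screen(years_experience=8, gap_months=12)
print(continuous)  # 80.0
print(caregiver)   # 56.0 -- same skills, lower score, no explanation
```

The caregiver's score drops by thirty percent for a reason the system cannot see and the applicant is never told. That opacity is the "black box" the summit speakers want opened.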
New Laws for a New Era: 2026 Regulations
The year 2026 has brought some big changes in the law. Several states have passed rules to stop AI bias. Illinois has updated its Human Rights Act. Now, it is illegal for companies to use AI that discriminates, even if they do not mean to (jdsupra.com). This is a big step. It means companies must check their tools before they use them. They are responsible for the outcomes of their software.
Texas has passed the Responsible AI Governance Act, or TRAIGA. This law targets "high-impact" settings like hiring and housing. It demands transparency. Companies must tell people when an AI is making a decision about them. The law also bars systems built or used with the intent to discriminate (seyfarth.com). In California, the California Privacy Protection Agency (CPPA) has new rules. People can now "opt out" of automated decisions. If a person does not want a computer to judge them, they can ask for a human to review their application (berkshireassociates.com).
These laws are a start, but they are not perfect. In Texas, the Attorney General is the only one who can enforce the law. This means individuals cannot sue on their own. Enforcement power sits with the state, not with the people the law is meant to protect. Advocacy groups must watch these agencies closely. They must make sure the laws are actually working to protect Black workers.
The Wealth Gap and the $43 Billion Chasm
The economic impact of AI is massive. Estimates show that generative AI could widen the racial wealth gap by $43 billion every year (mckinsey.com). This is because of how wealth is created today. People with money can invest in AI companies. They get richer as the technology grows. Meanwhile, workers in roles like customer service or data entry lose their income. Black workers are often at the bottom of the wealth ladder to begin with.
Automation vulnerability is a real threat. About 24% of Black workers are in jobs that could be done by machines. Compare this to 20% of white workers. This small difference adds up over time. It means Black families have a higher chance of losing their primary source of income. Without a plan to help people move into new roles, the gains of AI will go only to a few. This is why the Howard summit is so critical.
Wealth is not just about a paycheck. It is about ownership. Currently, Black professionals make up less than 5% of the AI workforce (purpose.jobs, seattlemedium.com). When you do not own the technology, you do not get the profit. You also do not get to decide how the technology is used. This lack of “design-side” representation means AI is built without Black perspectives. The summit is calling for more Black women to lead in the creation of AI, not just use it.
Howard University: A Command Center for Justice
Howard University has always been a leader in Black excellence. Today, it is a command center for digital advocacy. Dr. Talitha Washington is moderating the “Toward Just Digital Futures” summit. She is a top mathematician. She leads Howard’s Center for Applied Data Science and Analytics. Her leadership shows that Black women are more than capable of mastering the hardest technical fields. They have been doing it since the NASA days.
The speakers at the summit are not just talking about code. They are talking about “trauma-informed” AI. This means building technology that understands the history of the people it serves. Renée Cummings and Lucretia Williams are leading this charge. They want to reconfigure power. This means moving away from the idea that tech companies should have all the control. Instead, the communities most impacted by AI should have a say in its rules.
This summit is unique because it combines “Black feminist consciousness” with “AI workforce shifts.” It argues that you cannot fix the tech without fixing the social problems. Digital justice is not an extra goal. It is a core requirement for leadership in the 21st century. By hosting this event, Howard is training a new generation of leaders. These leaders will build a future where technology helps everyone, not just those at the top.
Looking Ahead: Reconfiguring the Future
The road ahead is not easy. AI is here to stay. However, the future is not set in stone. The work being done today at Howard University shows that we can choose a different path. We can choose to build systems that are fair. We can pass laws that protect workers. We can ensure that Black women are in the rooms where decisions are made. This is what digital justice looks like in practice.
The transition from “human computers” to AI ethicists is a powerful journey. It shows the resilience of Black women. They have survived every technological shift so far. They have mastered FORTRAN and they will master AI. The goal now is to make sure they are not just surviving, but thriving. When Black women lead in tech, the whole world benefits from their vision and their expertise. The “Toward Just Digital Futures” summit is a bold step toward that reality.
As this article has shown, the history behind the headlines is a story of struggle and triumph. From the segregated labs of NASA to the modern halls of Howard, the fight continues. The data shows the risks are real. The $43 billion gap and the disproportionate automation risk facing Black workers are warnings. But the response from leaders at Howard is a reason for hope. They are not waiting for the future to happen to them. They are building it themselves.
About the Author
Darius Spearman is a professor of Black Studies at San Diego City College, where he has been teaching for over 20 years. He is the founder of African Elements, a media platform dedicated to providing educational resources on the history and culture of the African diaspora. Through his work, Spearman aims to empower and educate by bringing historical context to contemporary issues affecting the Black community.