
AI Weaponized: Spreading Hate on TikTok
By Darius Spearman (africanelements)
Support African Elements at patreon.com/africanelements and hear recent news in a single playlist. Additionally, you can gain early access to ad-free video content.
The rise of artificial intelligence has brought both wonder and concern. This powerful technology is now being weaponized to spread hate content online, particularly targeting Black communities. AI-generated content refers to media, such as videos, images, or text, that is created using artificial intelligence tools. These tools leverage complex algorithms and vast datasets to produce new content that can be highly realistic (DW.com).
In the context of video creation, AI models can generate entire video clips, including visuals, characters, and even dialogue, based on text prompts or other inputs. This technology allows for the rapid production of content that might otherwise require significant human effort and resources (DW.com). Unfortunately, this capability is being exploited to create viral posts that depict Black women as primates and perpetuate other racist tropes (Wired.com). The discovery of these racist AI-generated videos highlights the immense challenge social media companies face in preventing the weaponization of powerful AI video-generating tools to spread hate (BandT.com.au).
AI’s Racist Weaponization on TikTok
Racist AI-generated videos, likely created with Google’s Veo 3, are circulating widely on TikTok. These disturbing clips depict Black people through racist stereotypes, including portrayals as monkeys, “baby mommas,” and “cop bait” lured by watermelon (NewsOne.com). The content also deploys stereotypical tropes against Asian, Muslim, and Jewish people (NewsOne.com). All of these videos are eight seconds long, the current limit on clips generated with Veo 3, strongly suggesting its use (PCMag.com).
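That eight-second runtime can itself serve as a crude detection signal. The Python sketch below illustrates the heuristic; the function name and tolerance are our own illustration, not a forensic tool used by any platform.

```python
# Hypothetical heuristic based on the reporting above: Veo 3 currently
# caps generated clips at eight seconds, so a runtime sitting exactly
# at that cap is one weak signal that a clip may be Veo 3 output.
VEO3_MAX_SECONDS = 8.0

def possibly_veo3(duration_seconds: float, tolerance: float = 0.1) -> bool:
    """Flag clips whose runtime sits at Veo 3's generation cap."""
    return abs(duration_seconds - VEO3_MAX_SECONDS) <= tolerance

print(possibly_veo3(8.0))   # True: matches the cap
print(possibly_veo3(14.5))  # False: longer than Veo 3 can generate
```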
One particularly egregious video, which garnered 4 million views, shows a monkey wearing a pink wig, long pink acrylic nails, and pink eyelashes. In the video, the monkey says, “So my probation officer called. Good news – I ain’t gotta do no more community service. Bad news – that’s cause there’s a new warrant out for my arrest” (NewsOne.com). Racist stereotypes are oversimplified and often negative generalizations about entire groups of people, typically based on race or ethnicity. These stereotypes are harmful because they dehumanize individuals, perpetuate prejudice, and can lead to discrimination and violence (BandT.com.au). Historically, they have been used to justify oppression, slavery, and systemic inequality. Depictions of Black people as monkeys or criminals, or associations with specific foods like watermelon and fried chicken, are deeply rooted in racist caricatures from the Jim Crow era. These caricatures were designed to portray Black individuals as less intelligent, uncivilized, or subservient (BandT.com.au).
Google Veo 3: A Powerful Tool Misused
Google Veo 3 is an advanced AI video generation tool developed by Google. It gained significant attention upon its release at Google’s developer conference in May and is known for its ability to create surreal yet realistic video content from text prompts (Wired.com). Its capabilities span diverse scenarios, from biblical characters to cryptids like Bigfoot engaging in influencer-style vlogging (Wired.com). Google even used AI-generated Bigfoot vlogs as a selling point for the tool (Wired.com).
However, the powerful generation capabilities of Veo 3 have also been exploited to create and disseminate harmful and racist content on social media platforms (PCMag.com). Evidence points to Google’s new Veo 3 as the likely tool used to produce the racist AI videos flooding TikTok (PCMag.com). Google has plans to integrate Veo 3 into YouTube Shorts, which could further facilitate the spread of similar content if not properly managed (ArsTechnica.com).
Beyond Stereotypes: Propaganda and Trauma Re-enactment
The AI-generated content extends beyond racial stereotypes to include misleading propaganda and the re-enactment of historical traumas. Users are posting misleading AI-generated videos of immigrants and protesters, including clips in which protesters are run over by cars (NewsOne.com). This type of content aims to incite fear and hatred against marginalized groups.
Furthermore, AI-generated videos are re-enacting marginalized groups’ historical traumas, depicting concentration camps and Ku Klux Klan attacks on Black Americans (NewsOne.com). Historical traumas refer to deeply distressing events that have had widespread and lasting negative impacts on a group of people, often across generations. The Holocaust, for instance, was the systematic, state-sponsored persecution and murder of six million Jews by the Nazi regime and its collaborators during World War II (PCMag.com). Concentration camps were central to this genocide, serving as sites of forced labor, torture, and mass extermination (BandT.com.au). The Ku Klux Klan (KKK) is a white supremacist hate group in the United States that has historically used terrorism, violence, and intimidation to oppress Black Americans and other minority groups, particularly during the Reconstruction era and the Civil Rights Movement (PCMag.com). Re-enacting or mocking these events, especially through AI-generated content, trivializes immense suffering, disrespects victims and survivors, and can normalize hateful ideologies (BandT.com.au).
These clips are designed to outrage users, encourage reactions, and therefore reach more people through TikTok’s algorithm (PCMag.com). The deliberate attempt to exploit sensitive historical events for engagement is deeply troubling (PCMag.com).
TikTok’s Algorithmic Amplification
TikTok’s algorithm is designed to maximize user engagement by rapidly identifying and promoting content that is likely to keep users on the platform. It achieves this by analyzing user interactions, such as likes, shares, comments, and watch time, to understand individual preferences (PCMag.com).
Content that generates strong reactions, including outrage or controversy, often leads to higher engagement metrics. These metrics include comments, shares, and re-watches, which the algorithm interprets as signals of popularity (PCMag.com). This can inadvertently amplify sensational or provocative content, including racist AI-generated videos, as they are “designed to outrage users, encourage reactions, and therefore reach more people through TikTok’s algorithm” (PCMag.com). The spread of such content highlights the battle social media companies face in monitoring and preventing the weaponization of AI video tools (BandT.com.au).
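To make that dynamic concrete, the following Python sketch models an engagement-weighted ranker in the simplest possible terms. The weights, names, and scoring formula are invented for illustration and do not reflect TikTok’s proprietary system; what the sketch shows is that a ranker optimizing raw interaction volume cannot distinguish outrage-driven engagement from approval.

```python
# Illustrative sketch only: these weights and this formula are invented
# for explanation and do not reflect TikTok's actual ranking system.
from dataclasses import dataclass

@dataclass
class VideoStats:
    likes: int
    comments: int
    shares: int
    avg_watch_fraction: float  # 0.0-1.0: portion of the clip viewers watch

def engagement_score(v: VideoStats) -> float:
    """Score a video purely on interaction volume.

    An angry comment or an outrage-driven share counts exactly the same
    as an approving one: the signal is interaction, not sentiment.
    """
    return (1.0 * v.likes
            + 3.0 * v.comments             # comments imply stronger engagement
            + 4.0 * v.shares               # shares push the clip into new feeds
            + 50.0 * v.avg_watch_fraction) # rewatching is a strong signal

# A provocative clip that viewers rewatch and argue about in the comments
# outranks a benign clip that collects passive likes.
outrage_clip = VideoStats(likes=500, comments=2000, shares=800, avg_watch_fraction=0.95)
benign_clip = VideoStats(likes=3000, comments=100, shares=50, avg_watch_fraction=0.40)

print(engagement_score(outrage_clip) > engagement_score(benign_clip))  # True
```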
TikTok’s Enforcement Struggle
TikTok has policies against hate speech and AI-generated content, but it struggles to enforce them effectively against this influx of material. TikTok’s community guidelines prohibit videos dehumanizing racial and ethnic groups (NewsOne.com). They also prohibit “threatening or expressing a desire to cause physical injury to a person or a group” (NewsOne.com). TikTok encourages creators to label content that has been either completely generated or significantly edited by AI to support authentic and transparent experiences (TikTok Support). TikTok’s safety guidelines explicitly address countering hate speech and behavior, including abusive attacks, hateful images, slurs, stereotypes, and conspiracy theories founded in hate (TikTok.com).
Despite these clear prohibitions, TikTok’s enforcement challenges stem from several factors. These include the sheer volume of content uploaded daily, the sophisticated nature of AI-generated hate speech, and the difficulty in scaling human moderation (ArsTechnica.com). Millions of videos are uploaded to TikTok constantly, making it nearly impossible for human moderators to review every piece of content (ArsTechnica.com). AI-generated videos, especially those created with advanced tools like Google Veo 3, can be highly realistic and subtle in their hateful messaging. They sometimes use coded language or visual metaphors that bypass automated detection systems (ArsTechnica.com). This leads to a situation where “TikTok is seemingly unable to keep up with the flood of video uploads, and Google’s guardrails appear insufficient to block the creation of this content” (ArsTechnica.com).
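As a rough illustration of why automated detection falls short, consider the deliberately naive keyword filter sketched below in Python. The blocklist entries and function name are placeholders, and production moderation systems use far more sophisticated machine-learning classifiers, but the structural weakness is the same: when the hate lives in imagery and coded framing rather than explicit language, a text filter has nothing to catch.

```python
# Hypothetical sketch of a naive text-based moderation filter.
# The blocklist below is a placeholder; real systems use large ML
# classifiers, but the structural weakness shown here still applies.
BLOCKED_TERMS = {"explicit slur 1", "explicit slur 2"}  # placeholder entries

def passes_text_filter(caption: str) -> bool:
    """Return True if no blocked term appears in the caption."""
    lowered = caption.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

# A video whose racism lives entirely in its imagery and coded framing
# presents an innocuous caption, so a text filter waves it through.
coded_caption = "when the probation officer calls with good news"
print(passes_text_filter(coded_caption))  # True: nothing here to flag
```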
Statistical data indicates TikTok’s ongoing efforts to remove hate speech, though challenges remain. Since the start of 2020, TikTok has removed more than 380,000 videos in the US for violating its hate speech policy (TikTok Newsroom). TikTok also banned more than 1,300 accounts for hateful content or behavior and removed over 64,000 hateful comments (TikTok Newsroom). These numbers do not reflect a 100% success rate in catching every piece of hateful content or behavior, but they do indicate a commitment to action (TikTok Newsroom).
TikTok’s Content Moderation Efforts (Since 2020)
- Videos removed in the US for hate speech violations: 380,000+
- Accounts banned for hateful content or behavior: 1,300+
- Hateful comments removed: 64,000+
(Source: TikTok Newsroom)
User Engagement and Endorsement
The spread of this content highlights a concerning trend of users endorsing and engaging with racist propaganda. Media Matters noted that “it’s evident that viewers understand and endorse this racist propaganda,” citing comment sections under the videos (NewsOne.com). One comment with over 2,000 likes reads, “Bro even AI has black fatigue” (NewsOne.com). This indicates a disturbing level of acceptance and even approval for such hateful messages.
While the source reporting does not delve deeply into the specific psychological or social factors driving user engagement with racist AI content, it does offer some insights. The content is “designed to outrage users, encourage reactions,” suggesting that a significant driver of engagement is the emotional response it provokes (PCMag.com). This could include users sharing the content to express disgust or condemnation, or, conversely, users who align with the hateful messages sharing it to reinforce their own biases. The viral nature of these videos, with some racking up “over a million views,” indicates that the content successfully taps into existing social dynamics, whether that is a desire for shock value, a platform for expressing prejudice, or simply the algorithmic amplification of controversial material (Wired.com). The content is described as “racist AI-generated videos” and “the newest slop garnering millions of views on TikTok,” indicating significant, albeit problematic, user engagement (BandT.com.au).
The Path Forward: Regulation and Accountability
The source reporting highlights the problem of AI misuse but does not detail specific regulatory measures being considered or implemented to prevent the creation of hate content by tools like Google Veo 3. However, it does indicate that social media companies like TikTok and Google have “clear prohibitions on this content,” suggesting internal policies are in place, even if enforcement is lacking (ArsTechnica.com). The challenge lies in the “battle that social media companies face in monitoring and preventing highly powerful AI video generating tools from being weaponized” (BandT.com.au).
While direct government regulation of AI content generation is not explicitly mentioned, the ongoing struggle points to a need for more robust technical safeguards within AI models themselves. Google’s guardrails appear insufficient to block the creation of this content (ArsTechnica.com). There is also a need for increased accountability for platforms that host such content. The articles imply that current efforts are primarily reactive, focusing on removing identified videos, rather than proactive regulatory frameworks. A more comprehensive approach is needed to combat the weaponization of AI against marginalized communities.
The weaponization of AI to spread hate content, particularly against Black communities, is a grave concern that demands immediate and sustained attention. The proliferation of racist AI-generated videos on platforms like TikTok, fueled by powerful tools such as Google Veo 3, highlights a dangerous intersection of technology and prejudice. These videos not only perpetuate harmful stereotypes but also trivialize historical traumas, further eroding social cohesion and respect. While platforms like TikTok have policies in place and demonstrate efforts to combat hate speech, the sheer volume of content and the sophisticated nature of AI-generated material present significant enforcement challenges. The disturbing trend of user engagement and endorsement of such content underscores the urgent need for more effective solutions. Moving forward, it is imperative to develop robust technical safeguards within AI models, implement stronger accountability measures for social media platforms, and foster a collective commitment to preventing the misuse of AI for hateful purposes. The fight against digital racism requires vigilance, innovation, and a unified front to protect the dignity and safety of all communities.
ABOUT THE AUTHOR
Darius Spearman has been a professor of Black Studies at San Diego City College since 2007. He is the author of several books, including Between The Color Lines: A History of African Americans on the California Frontier Through 1890. You can visit Darius online at africanelements.org.