AI colonialism highlights the scale-at-all-costs approach in tech and its impact on society. (AI Generated Image)

AI’s Empire: A Critical Look at Tech’s New Colonialism

By Darius Spearman (africanelements)

Support African Elements at patreon.com/africanelements and hear recent news in a single playlist. Additionally, you can gain early access to ad-free video content.

Unpacking AI’s “Scale-at-all-Costs” Approach

Karen Hao, a seasoned technology reporter, has unveiled a powerful critique of artificial intelligence development in her new book, “Empire of AI.” She argues that the current path of AI, especially from Silicon Valley, mirrors historical colonialism. This “scale-at-all-costs” approach means prioritizing rapid growth and acquiring resources above all else, often ignoring ethical or environmental concerns (karendhao.com).

Hao explains that AI companies, like OpenAI, seize and extract resources without proper consent or payment. These resources include the creative work of artists and writers, personal data from countless individuals, and even land, energy, and water for massive data centers (cambridgeday.com). This process is similar to how colonial powers operated in the past, taking what they wanted from colonized lands. The book reveals the hidden human and environmental costs behind AI products, including the exploitation of data workers in Kenya and the global race for land, water, and cheap labor (karendhao.com).

The True Cost of AI’s Resource Extraction

Resource extraction by AI companies primarily involves the intense use of computational power, energy, and data. These resources are often obtained in ways that raise serious ethical and environmental questions. For instance, data centers demand huge amounts of energy and water (muse.jhu.edu). Modern AI models, especially large language models, are trained on vast datasets, including the entire English-language internet, books, and scientific articles. This training requires supercomputers with tens to hundreds of thousands of computer chips (nybooks.com).

In addition to these material resources, AI development relies heavily on human labor. This labor often comes from the Global South, where workers are hired to clean and label massive datasets. This work, known as data annotation, is critical for AI models but is often performed under exploitative conditions (muse.jhu.edu). The “bigger-is-better” mindset in AI development drives an explosion in investment and a corresponding spike in the size of these models, which then requires even more computational resources (arxiv.org).

Key AI Resource Extraction Areas

Computational Power: Massive supercomputers with tens to hundreds of thousands of chips are needed to train large AI models.

Energy Consumption: Data centers consume as much energy as entire cities, often extending coal plant lifespans and relying on fossil fuels.

Water Usage: Freshwater is needed to cool data centers, often drawn from public drinking water supplies, especially in water-scarce areas.

Data & Intellectual Property: Vast datasets, including the entire internet, books, and creative works, are scraped for model training.

Human Labor (Data Annotation): Workers in the Global South clean and label data under exploitative conditions and for low pay.

This visualization illustrates the primary resources extracted by AI companies. Source: (nybooks.com), (muse.jhu.edu), (arxiv.org)

Environmental and Human Tolls

The environmental footprint of AI is staggering. Data centers, the backbone of AI development, consume as much energy as entire cities (nybooks.com). A McKinsey report projects that within five years, the expansion of AI computing infrastructure will add demand to the global grid equal to two to six times California’s annual energy consumption (nybooks.com). That demand is largely met by fossil fuels, extending the lifespans of coal plants and relying on unlicensed methane gas turbines (nybooks.com).

In addition to energy, data centers require fresh water for cooling, often tapping into public drinking water supplies (nybooks.com). A Bloomberg analysis indicates that two-thirds of new data centers are being built in water-scarce areas, worsening water access for local communities (nybooks.com). In Chile, for example, Google sought to build a data center in a community that, through a historical anomaly, still had access to a public freshwater supply; the facility would have used roughly a thousand times more freshwater per year than the community itself (nybooks.com). Residents resisted, arguing they would receive no direct benefit or tax revenue from the data center (nybooks.com).

The human cost is also significant. AI companies exploit labor in the Global South through data annotation firms, which hire contract workers to clean and annotate massive datasets for AI models and to perform content moderation (nybooks.com). Kenyan workers, for example, were contracted by OpenAI to review and categorize extremely harmful and disturbing text, including AI-generated content, in order to train content filters (nybooks.com). These workers are paid very little, often a few dollars an hour, and suffer significant psychological trauma. Meanwhile, AI researchers in Silicon Valley receive multi-million-dollar compensation packages (nybooks.com). This stark contrast highlights the deep inequalities embedded in the AI industry, reinforcing a new form of “transnational colonialism” or “AI colonialism” (muse.jhu.edu).

Data Centers in Water-Scarce Regions

This chart shows that two-thirds (66%) of new data centers are located in water-scarce regions, with the remainder in other areas. Source: (nybooks.com)

OpenAI’s Capitalistic Drive

OpenAI’s trajectory, especially under Sam Altman, exemplifies the “scale-at-all-costs” approach and the push for capitalistic expansion within the AI industry (karendhao.com). Altman, a product of Silicon Valley’s startup culture, strategically positioned OpenAI for rapid growth, even though it started as a nonprofit (nybooks.com). Within a short time, OpenAI’s executives decided that to lead in the AI space they “had to” adopt this aggressive growth strategy, which required immense capital (nybooks.com).

Altman, known for his fundraising skills, created a unique structure: a for-profit arm nested within the nonprofit (nybooks.com). This structure allowed the company to raise tens, and eventually hundreds, of billions of dollars. Hao argues that the concept of artificial general intelligence (AGI), a driving force behind some AI development, is not scientifically proven but is instead a “quasi-religious movement” within Silicon Valley (democracynow.org). OpenAI has faced criticism for its secrecy and for being driven by questionable ideologies, with some suggesting that the company’s focus on empire-building has overshadowed its initial utopian ideals (karendhao.com). OpenAI was “deeply unhappy” with Hao’s initial profile of the company and refused to communicate with her for three years, but current and former employees later sought her out, believing she would accurately portray what was happening inside the company (democracynow.org).

AI Safety and Ethical Concerns

The “scale-at-all-costs” approach in AI development can harm AI safety and ethics. It prioritizes rapid advancement and computational power over careful thought about societal impacts, transparency, and responsible governance (ainowinstitute.org). This can lead to a lack of understanding about how large models work, limited oversight, and the continuation of existing inequalities (ainowinstitute.org).

Companies developing large-scale AI models, such as OpenAI with GPT-4, often do not share details about a model’s architecture, size, hardware, training compute, data construction, or training methods (ainowinstitute.org), typically citing competitive and safety concerns to justify the secrecy. This lack of transparency makes it difficult for outsiders to examine and understand potential risks. The rapid growth of AI technologies within the global capitalist system is seen as a new form of “AI colonialism” that worsens global socioeconomic inequalities (muse.jhu.edu). In other words, the “scale-at-all-costs” approach can deepen existing disparities while creating new ethical challenges (muse.jhu.edu).

The focus on increasingly larger language models also raises questions about the associated costs, including ethical and societal implications, and whether such scale is truly necessary (faculty.washington.edu).

AI’s Military Ambitions

The enormous financial investments in AI are drawing Silicon Valley and the defense industry closer together (nybooks.com). Having spent hundreds of billions of dollars developing their technologies, AI companies are now turning to the defense industry and its substantial contracts to recoup their costs (nybooks.com). This trend is concerning because AI technologies that were never designed for sensitive military contexts are being aggressively pushed into military infrastructure (nybooks.com).

The relationship between Silicon Valley and the U.S. government is viewed as a mutual effort in “empire-building ambitions” (nybooks.com). While Silicon Valley seeks to recoup its massive investments, the government sees an opportunity to advance its military capabilities. This convergence raises serious questions about the future of warfare and the ethical implications of using AI in military operations. President Trump’s administration has also shown interest in this convergence, with figures like Sam Altman and Elon Musk engaging with the defense sector.

Massive Investments in AI Development

Venture Funding (Last Year): more than $100 billion
OpenAI Stargate Project: roughly $500 billion

This visualization highlights the significant financial investments flowing into AI development. Source: (nybooks.com)

Charting a More Ethical AI Future

The current “scale-at-all-costs” AI development model has many problems, but alternatives do exist. These alternatives focus on sustainability, ethical considerations, and a more complete approach to AI design and deployment (arxiv.org). One key step is fostering interdisciplinary collaboration, bringing together experts from various fields to discuss and shape AI’s future responsibly (arxiv.org).

Developing clear standards for responsible AI is also crucial. This means moving away from growth for its own sake and exploring approaches that do not rely solely on ever-increasing computational power (arxiv.org). The “bigger-is-better” narrative surrounding AI is being questioned, pointing to the need for new ways of thinking that go beyond simply scaling up models (arxiv.org). Addressing the ethical and political problems of AI, especially how it worsens global socioeconomic inequalities, requires alternatives that do not reinforce “AI colonialism” (muse.jhu.edu). Such a shift would prioritize human well-being and environmental health over unchecked corporate expansion.

ABOUT THE AUTHOR

Darius Spearman has been a professor of Black Studies at San Diego City College since 2007. He is the author of several books, including Between The Color Lines: A History of African Americans on the California Frontier Through 1890. You can visit Darius online at africanelements.org.