by Karen Hao
If you are only thinking of A.I. in terms of what it can do for you, you must read Empire of AI.
Artificial Intelligence, and beyond that . . . Artificial General Intelligence.
Scientific wonder?
Climate change breakthrough?
Cancer cure?
Or . . . existential threat?
Karen Hao traces the growth of AI, particularly as it relates to Sam Altman and OpenAI, a company that slipped on the banana peel of profit while running toward the common good. In this thorough exposé, Karen Hao helps us see the brilliant minds, big dreams, and potential nightmares that accompany the quest for artificial general intelligence.
About the author:
Karen Hao is an award-winning journalist covering the impacts of artificial intelligence on society. She writes for publications including The Atlantic and leads the Pulitzer Center’s AI Spotlight Series, a program training thousands of journalists around the world on how to cover AI. She was formerly a reporter for the Wall Street Journal, covering American and Chinese tech companies, and a senior editor for AI at MIT Technology Review. (from her website)
The book in a sentence (or two):
Karen Hao has one metaphor for the burgeoning AI power players: empires. As with European colonialism – or any empire-building state – the benefits “mostly accrue upward.” What may have begun as “open” AI, a promised blessing to solve the world’s ills, has for some quickly devolved into a race to commercialize and monetize. Hao writes:
The empires of AI are not engaged in the same overt violence and brutality that marked [empire building of the past]. But they, too, seize and extract precious resources to feed their vision of artificial intelligence: the work of artists and writers; the data of countless individuals posting about their experiences and observations online; the land, energy, and water required to house and run massive data centers and supercomputers. So too do the new empires exploit the labor of people globally to clean, tabulate, and prepare that data for spinning into lucrative AI technologies. 17
My quick take on Empire of AI
If Reid Hoffman and Greg Beato’s Superagency gives us the glowing picture of what could go right with our AI future, Karen Hao lays bare its societal drawbacks. The author, not averse to artificial intelligence but wary of those who exalt applications apart from safety, will help you think differently the next time you query the latest edition of ChatGPT.
Overview and Analysis:
Empire of AI is an analysis and exposé of the work of Sam Altman and OpenAI. In the author’s words, the book “is a profile of a scientific ambition turned into an aggressive, ideological, money-fueled quest; an examination of its multifaceted and expansive footprint; a meditation on power.” 12
Examine OpenAI’s “About” page and you will read:
OpenAI is an AI research and deployment company. Our mission is to ensure that artificial general intelligence benefits all of humanity.
At the outset, that was the mission . . . until it wasn’t.
As Musk, Altman, and Brockman discussed how best to position OpenAI at launch, all were keenly aware of the importance of its public perception. They agreed with Altman’s proposal to make it a nonprofit and to play up the openness for which it was named. OpenAI, the anti-Google, would conduct its research for everyone, open source science, and be the paragon of transparency. 49
All well and good – on the surface. OpenAI was pursuing its quest for the public good, driven by democratic participation in the technology. However, Hao devotes more than 400 pages to showing how what was imagined as a nonprofit to benefit humanity quickly became a Cyber Empire with all the ills and irks that accompany a land-grabbing colonial power. Hao writes,
Under the hood, generative AI models are monstrosities, built from consuming previously unfathomable amounts of data, labor, computing power, and natural resources. 16-17
As her research demonstrates, the initial careful tension between innovation (advancing science, furthering education, and solving societal ills; e.g., The Thinking Game) and safety (protecting AI from being co-opted for evil (rogue AI) and from fulfilling doomsday predictions; see Her, Minority Report, MI: The Final Reckoning) gave way. Profit won this tug-of-war. Innovation fanned the fires of commerce while “profit and safety” gave way to “progress and profit.” While it is not as simple as Doomers and Boomers, the two words summarize perspectives, one that “[warns] of fire and brimstone, the other tantalized with visions of heaven.” 234
Hao outlines the creep from safety to profitable innovation:
2015 - OpenAI’s mission meant being a nonprofit “unconstrained by a need to generate financial return” and open-sourcing research, as OpenAI wrote in its launch announcement.
2016 - The mission meant “Everyone should benefit from the fruits of AI after its [sic] built, but it’s totally OK to not share the science.”
2018-2019 - The mission meant the creation of a capped profit structure “to marshal substantial resources” while avoiding “a competitive race without time for adequate safety precautions,” as OpenAI wrote in its charter.
2020 - The mission meant walling off the model and building an “API as strategy for openness and benefit sharing,” while avoiding “a competitive race without time for adequate safety precautions” as written in its charter.
2022 - The mission meant “iterative deployment” and working as fast as possible to deploy ChatGPT.
2024 - The mission meant “A key part of [the] mission is to put very capable AI tools in the hands of people for free (or at a great price).” 412
Dreams and Nightmares
Hao’s subtitle, Dreams and Nightmares in Sam Altman’s OpenAI, is fitting. The dreams are endless: address and overcome scientific problems, solve global warming, beat cancer, create jobs, provide psychological care at an affordable rate. The challenge, of course, is that many confident predictions about solving societal ills are just that: predictions. Certainly, there have been great gains, as Demis Hassabis and his team at DeepMind demonstrated when they solved the seemingly unsolvable problem of protein folding, for which Hassabis won the Nobel Prize in Chemistry in 2024 (see The Thinking Game). Yet how long will it take for the hundreds of billions being spent on data centers to yield profit, and at what cost to the environment, labor, artists, and employment?
Dreams . . .
The positive impact of AI and artificial general intelligence is real. Just watch The Thinking Game. Here’s another example. In 2022, Bill Gates was unimpressed with GPT-4. The model couldn’t tackle complex scientific problems. Gates told the team he wouldn’t start paying attention until “GPT-4 scored a 5 on an AP biology test--AP Bio because he felt it tested critical scientific thinking rather than a memorization of facts.” The team got to work. By late August, Altman and Brockman (leaders of OpenAI) were back. Over dinner at Gates’ home, the team showed him what the program could do.
“The crowning moment was the model acing AP Bio: It nailed fifty-nine out of sixty multiple-choice questions and generated impressive answers to six open-ended ones. An outside expert would score the test: 5 out of 5. Gates couldn't believe it.... “This showcase,” Gates said, was “one of the two most stunning demos he'd ever seen in his life.” 246
Nightmares:
With every future promise, however, comes a present nightmare, which Hao chronicles in Part III.
Garbage In - Garbage Out: Large Language Models (LLMs) must be fed massive amounts of data, including pornographic images. That content must be processed, assessed, and tagged by content monitors — at what cost to the souls of those doing this low-paid work? Imagine this: Contractors were having to distinguish (among hundreds of images a day) “sexual content involving seventeen-year-old minors versus eighteen-year-old legal adults.” 242
CSAM (Child Sexual Abuse Material): OpenAI worked to keep it out, but left in other types of sexual images (“It’s part of the human experience after all”). Not only was the database polluted (some CSAM made its way in), but now “the model would still be able to produce synthetic CSAM.” 217
Acceleration Risk: Racing to beat others to AGI can create carelessness with respect to safety issues, i.e., will AI and AGI be controlled by those who mean good or ill?
It’s a machine, not a person! How do you help people to stop imagining a mind behind the machine when the machine sounds real, responds in real time, and caters to an individual’s need for attention? 254
I called Lowe’s recently to check on an order and heard, “Would you like our AI Assistant to help you?” “AI Assistant” is a euphemism for “machine with a pretty voice.”
Plundered Earth - Mega Data Centers: Google, Microsoft, Amazon, and Meta now spend more money building data centers each year than almost all the others … combined. 274
Let’s focus on Chile. In 2024 the government announced twenty-eight new data centers — on top of the existing twenty-two. That is twenty-eight more contributors to AI, and also to environmental damage, water consumption, noise pollution, and electrical demand.
Electrical Consumption:
Each ChatGPT query is estimated to need, on average, about ten times more electricity than a typical Google search.
Data centers that started out needing 150 megawatts (equivalent to the energy used by 122,000 homes) are now looking for 1,000 to 2,000 megawatts. “A single [data center] could use as much energy per year as around one and a half to three and a half San Francisco’s.” 275 “AI computing globally could use more energy than all of India, the world’s third-largest electricity consumer.” 275-76
How much charging power are we talking about?
1000 pieces of AI text:
How much energy does one thousand pieces of AI-generated text consume? On average, as much as it takes to charge a standard smartphone nearly four times. Generating 1,000 images used, on average, as much energy as 242 full smartphone charges. 277
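The charge-count figures above can be put into kilowatt-hours with one hedged assumption of my own: a typical smartphone battery holds roughly 19 Wh per full charge (the book speaks in charges, not watt-hours, so the battery size is mine):

```python
# Back-of-envelope conversion of the smartphone-charge figures above.
# Assumption (mine, not the book's): one full smartphone charge ~ 0.019 kWh.
PHONE_CHARGE_KWH = 0.019

text_kwh = 4 * PHONE_CHARGE_KWH      # 1,000 pieces of text: ~0.08 kWh
image_kwh = 242 * PHONE_CHARGE_KWH   # 1,000 images: ~4.6 kWh

# On these numbers, image generation is roughly 60x more
# energy-intensive than text generation, per item.
ratio = image_kwh / text_kwh
print(f"text: {text_kwh:.3f} kWh, images: {image_kwh:.2f} kWh, ratio: {ratio:.1f}x")
```

Whatever battery size one assumes, the ratio between images and text (242 charges vs. 4) stays fixed at about 60 to 1.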
OpenAI/Oracle Initiative:
Consider the OpenAI/Oracle Stargate initiative. It will have 5 gigawatts of total capacity, which refers to the amount of electrical power these data centers will consume when fully operational—enough to power roughly 4.4 million American homes. It turns out that telling users their every idea is brilliant requires a lot of energy (Source).
Five gigawatts (GW) of power can run a vast number of devices and systems. For comparison, if a fast-charging station operates at a power level of 1 GW, it could charge approximately 1,000 electric vehicles simultaneously at a rate of 1,000 kWh per hour (Source).
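The homes-per-data-center figures are easy to sanity-check with one assumption of my own: an average U.S. home draws about 1.2 kW of continuous power (roughly 10,500 kWh per year), a common rule of thumb that is not from the book:

```python
# Sanity check of the data-center power comparisons cited above.
# Assumption (mine, not the book's): an average U.S. home draws ~1.2 kW
# continuously, i.e. about 10,500 kWh per year.
AVG_HOME_KW = 1.2

def homes_powered(megawatts: float) -> float:
    """Number of average homes a continuous load of `megawatts` could supply."""
    return megawatts * 1_000 / AVG_HOME_KW

print(f"{homes_powered(150):,.0f}")    # 150 MW center: ~125,000 homes (book: 122,000)
print(f"{homes_powered(5_000):,.0f}")  # 5 GW Stargate: ~4.2 million homes (cited: 4.4 million)
```

Both cited figures land within a few percent of this rough estimate, which suggests the sources are using a similar per-home assumption.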
How much water consumption are we talking about?
A Google data center in Chile planned to use an estimated 169 liters of fresh drinking water PER SECOND to cool its servers. A Google data center in Uruguay planned to use two million gallons of water A DAY – directly from the drinking supply; equivalent to the daily water consumption of 55,000 people. 288, 294
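The two water figures can be put on a common footing with a quick conversion; the 3.785 liters-per-gallon factor and the per-person division are my arithmetic, not the book's:

```python
# Back-of-envelope check of the water-consumption figures above.
LITERS_PER_GALLON = 3.785
SECONDS_PER_DAY = 86_400

# Chile: 169 liters of fresh drinking water per second, expressed per day.
chile_liters_per_day = 169 * SECONDS_PER_DAY  # ~14.6 million liters/day

# Uruguay: 2 million gallons/day, said to equal 55,000 people's daily use.
uruguay_liters_per_day = 2_000_000 * LITERS_PER_GALLON
per_person = uruguay_liters_per_day / 55_000  # ~138 liters per person per day

print(f"Chile: {chile_liters_per_day:,.0f} L/day")
print(f"Uruguay: {per_person:.0f} L/person/day implied")
```

The implied figure of roughly 138 liters per person per day is a plausible household-consumption rate, so the book's two Uruguay numbers are internally consistent.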
Worldview & Philosophical Issues
Effective altruism (EA):
This philosophy advocates dedicating oneself to “doing maximal good in the world by extreme rationality and counterintuitive logic” 56, embracing capitalism and libertarianism in the name of morality. In short, this is the philosophy of “earn to give.” It is “more altruistic in the long run to take a more morally ambiguous job to get rich and donate that money through optimized philanthropy than to commit to a life of working for a morally good charity.” 229
At the same time, Effective Altruism assumes something it cannot guarantee, namely, that adherents will be around tomorrow:
Why, you do not even know what will happen tomorrow. What is your life? You are a mist that appears for a little while and then vanishes. Instead, you ought to say, “If it is the Lord’s will, we will live and do this or that.” As it is, you boast in your arrogant schemes. All such boasting is evil. James 4:14-16 NIV
Prioritizing Application over Safety
ChatGPT firmly codified OpenAI’s turn away from nonprofit and toward commercialization. Altman and other executives pushed to build on the momentum of the chatbot’s success by launching a slew of paid products. . . . “The burst of new products overwhelmed the trust and safety anew.” That was February 2023; by early 2025, many if not most of the safety team had resigned. 267
Distinguishing AI from AGI:
The Holy Grail of artificial intelligence is artificial general intelligence. To clarify the distinction, I asked OpenAI’s free ChatGPT: What is artificial general intelligence? Its response (in a couple of seconds):
Narrow AI: Excels at specific tasks like image recognition, translation, or playing chess.
Artificial General Intelligence (AGI): Refers to a type of artificial intelligence that possesses general cognitive abilities comparable to those of a human being.
AGI would be able to:
Understand, learn, and apply knowledge across a wide range of domains.
Reason and plan in unfamiliar situations.
Adapt to new tasks without needing extensive retraining.
Communicate and perceive context in a flexible, human-like way.
Exhibit creativity, common sense, and self-reflection.
In essence, AGI would not just mimic intelligent behavior — it would possess a form of generalized intelligence, allowing it to solve novel problems much like a human could.
There’s currently no true AGI in existence; all existing AI systems (even advanced ones like GPT models) are considered narrow or specialized AI. Researchers are still debating how to achieve AGI, how to measure it, and how to ensure it’s aligned with human values and safety once it’s developed.
Key individuals highlighted in this book:
Sam Altman: Co-founder of OpenAI with Elon Musk
Greg Brockman: Co-founder of OpenAI
Ilya Sutskever: Co-founder of OpenAI and its chief scientist
My Takeaways:
Why AI might be deemed an Empire: Karen Hao argues that AI is the equivalent of a “data land grab.” Keoni Mahelona says, “Data is the last frontier of colonization. Where the empires of old seized land from Indigenous communities and sold it back to them, AI is just a land grab all over again. Big Tech likes to collect your data more or less for free—to build whatever they want to, whatever their endgame is—and then turn it around and sell it back to you as a service.” 412
AI offers a colonialist pay scale for data workers, who are often working in poorer countries such as Kenya and Venezuela.
Boomers and Doomers: Both groups have legitimate rallying points. The Boomers have the deeper pockets. Do the Doomers have access to a sufficient megaphone?
Applications and Safety: I was one who saw only the “applications” side of AI, not recognizing the impact to the environment, artists whose work was essentially pirated to train AI models, as well as labor that must monitor and tag this data. Karen Hao has opened my eyes to the downsides of this significant upside industry. At this point, government is allowing AI unencumbered advancement. Should we be concerned?
Toppling Empire AI: Karen Hao notes three axes of power:
Knowledge: Through centralizing talent, eroding open science, and sealing their models from public scrutiny, they control knowledge production.
Resources: Through hoarding funding, data, labor, computing, energy, and land, they control and diminish other people’s resources.
Influence: Through creating and reinforcing ideologies and producing wildly popular demonstrations that captivate global imagination, they command far-reaching influence. 418-19
These three feed on and reinforce each other: “controlling knowledge production fuels influence; growing influence accumulates resources; amassing resources secures knowledge production. The formula for dissolving empire thus requires the redistribution of power along each axis.” 419
The author piqued my curiosity about these books:
Superintelligence: Paths, Dangers, Strategies by Nick Bostrom
Profiles of the Future: An Inquiry into the Limits of the Possible by Arthur C. Clarke
Words to ponder:
Hao opens her book with this reflection from Sam Altman, leaving little doubt as to her premise that there is more to AI, at least OpenAI, than the common good:
Successful people create companies. More successful people create countries. The most successful people create religions. I heard this from Qi Lu; I'm not sure what the source is. He got me thinking, though--the most successful founders do not set out to create companies. They are on a mission to create something closer to a religion, and at some point it turns out that forming a company is the easiest way to do so. Sam Altman 2013, p. i
Recommendation:
I learned about Empire of AI through Fareed Zakaria’s GPS on CNN. Zakaria interviewed Hao and pointed his viewers to her book. As I noted at the outset, Hao’s Empire of AI is a fine point-counterpoint to Reid Hoffman’s Superagency (click here for my review). I am sure some will write her off as an alarmist, but with 420 noted sources, Hao’s work is not an argument to ignore, but a conversation to join. Much so-called reporting today feels like an ideological spitting contest; propaganda from the left and right. Journalism, on the other hand, is “the collection, preparation, and distribution of news and related commentary.” Hao’s work is journalism. For that I am grateful. I highly recommend Empire of AI.
