If Anyone Builds It, Everyone Dies
Why AI Is on Track to Kill Us All—and How We Can Avert Extinction
Contributors
By Eliezer Yudkowsky and Nate Soares
Formats and Prices
- On Sale: Sep 30, 2025
- Page Count: 256 pages
- Publisher: Little, Brown and Company
- ISBN-13: 9780316595643
Format:
- Hardcover: $30.00 ($40.00 CAD)
- ebook: $14.99 ($19.99 CAD)
- Audiobook Download (Unabridged): $24.99
In 2023, hundreds of machine-learning scientists signed an open letter warning about our risk of extinction from smarter-than-human AI. Yet today, the race to develop superhuman AI is only accelerating, as many tech CEOs throw caution to the wind, aggressively scaling up systems they don’t understand—and won’t be able to restrain. There is a good chance that they will succeed in building an artificial superintelligence on a timescale of years or decades. And no one is prepared for what will happen next.
For over 20 years, two signatories of that letter—Eliezer Yudkowsky and Nate Soares—have been studying the potential of AI and warning about its consequences. As Yudkowsky and Soares argue, sufficiently intelligent AIs will develop persistent goals of their own: bleak goals that are only tangentially related to what the AI was trained for; lifeless goals that are at odds with our own survival. Worse yet, in the case of a near-inevitable conflict between humans and AI, superintelligences will be able to trivially crush us, as easily as modern algorithms crush the world’s best humans at chess, without allowing the conflict to be close or even especially interesting.
How could an AI kill every human alive when it’s just a disembodied intelligence trapped in a computer? Yudkowsky and Soares walk through both the argument and vivid extinction scenarios and, in so doing, leave no doubt that humanity is not ready to face this challenge—ultimately showing that, on our current path, If Anyone Builds It, Everyone Dies.
- “The most important book of the decade. This captivating page-turner, from two of today’s clearest thinkers, reveals that the competition to build smarter-than-human machines isn’t an arms race but a suicide race, fueled by wishful thinking.” (Max Tegmark, author of Life 3.0: Being Human in the Age of AI)
- “If Anyone Builds It, Everyone Dies may prove to be the most important book of our time. Yudkowsky and Soares believe we are nowhere near ready to make the transition to superintelligence safely, leaving us on the fast track to extinction. Through the use of parables and crystal-clear explainers, they convey their reasoning, in an urgent plea for us to save ourselves while we still can.” (Tim Urban, cofounder, Wait But Why)
- “The most important book I’ve read for years: I want to bring it to every political and corporate leader in the world and stand over them until they’ve read it. Yudkowsky and Soares, who have studied AI and its possible trajectories for decades, sound a loud trumpet call to humanity to awaken us as we sleepwalk into disaster.” (Stephen Fry)
- “The best no-nonsense, simple explanation of the AI risk problem I’ve ever read.” (Yishan Wong, former CEO of Reddit)
- “Soares and Yudkowsky lay out, in plain and easy-to-follow terms, why our current path toward ever-more-powerful AIs is extremely dangerous.” (Emmett Shear, former interim CEO of OpenAI)
- “Everyone should read this book. There’s a 70% chance that you—yes, you reading this right now—will one day grudgingly admit that we all should have listened to Yudkowsky and Soares when we still had the chance.” (Daniel Kokotajlo, AI Futures Project)
- “A compelling introduction to the world’s most important topic. Artificial general intelligence could be just a few years away. This is one of the few books that takes the implications seriously, published right as the danger level begins to spike.” (Scott Alexander, founder, Astral Codex Ten)
- “Claims about the risks of AI are often dismissed as advertising, but this book disproves it. Yudkowsky and Soares are not from the AI industry, and have been writing about these risks since before it existed in its present form. Read their disturbing book and tell us what they get wrong.” (Huw Price, Bertrand Russell Professor Emeritus, Trinity College, Cambridge)
- “This book offers brilliant insights into history’s most consequential standoff between technological utopia and dystopia, and shows how we can and should prevent superhuman AI from killing us all. Yudkowsky and Soares’s memorable storytelling about past disaster precedents (e.g., the inventor of two environmental nightmares: tetra-ethyl-lead gasoline and Freon) highlights why top thinkers so often don’t see the catastrophes they create.” (George Church, Founding Core Faculty & Lead, Synthetic Biology, Wyss Institute at Harvard University)
- “Silicon Valley calls it inevitable. Your survival instinct knows better. Humanity is funding its own delete key—an unblinking intelligence that never sleeps, never stops, perfectly indifferent. Wonder-time is over; this is our warning. Read today. Circulate tomorrow. Demand the guardrails. I’ll keep betting on humanity, but first we must wake up.” (R.P. Eddy, former director, White House National Security Council)
- “You will feel actual emotions when you read this book. We are currently living in the last period of history where we are the dominant species. Humans are lucky to have Soares and Yudkowsky in our corner, reminding us not to waste the brief window of time that we have to make decisions about our future in light of this fact.” (Grimes)
- “A timely and terrifying education on the galloping havoc AI could unleash—unless we grasp the reins and take control.” (Kirkus)
- “A clearly written and compelling account of the existential risks that highly advanced AI could pose to humanity. Recommended.” (Ben Bernanke, Nobel laureate and former chairman of the Federal Reserve)
- “A sober but highly readable book on the very real risks of AI. Both skeptics and believers need to understand the authors’ arguments, and work to ensure that our AI future is more beneficial than harmful.” (Bruce Schneier, Lecturer, Harvard Kennedy School and author of A Hacker’s Mind)
- “You’re likely to close this book fully convinced that governments need to shift immediately to a more cautious approach to AI, an approach more respectful of the civilization-changing enormity of what’s being created. I’d like everyone on earth who cares about the future to read this book and debate its ideas.” (Scott Aaronson, Schlumberger Centennial Chair of Computer Science, The University of Texas at Austin)
- “While I’m skeptical that the current trajectory of AI development will lead to human extinction, given AI’s exponential pace of change there’s no better time to take prudent steps to guard against worst-case outcomes. The authors offer important proposals for global guardrails and risk mitigation that deserve serious consideration.” (Lieutenant General John (Jack) N.T. Shanahan (USAF, Ret.), Inaugural Director, Department of Defense Joint AI Center)
- “[An] urgent clarion call to prevent the creation of artificial superintelligence… A frightening warning that deserves to be reckoned with.” (Publishers Weekly)