If Anyone Builds It, Everyone Dies

Why Superhuman AI Would Kill Us All

By Eliezer Yudkowsky and Nate Soares

Formats and Prices

On Sale: Sep 16, 2025
Page Count: 272 pages
ISBN-13: 9780316595643
Price: $30.00 USD / $40.00 CAD

“May prove to be the most important book of our time.”—Tim Urban, Wait But Why

The scramble to create superhuman AI has put us on the path to extinction—but it’s not too late to change course, as two of the field’s earliest researchers explain in this clarion call for humanity.

In 2023, hundreds of AI luminaries signed an open letter warning that artificial intelligence poses a serious risk of human extinction. Since then, the AI race has only intensified. Companies and countries are rushing to build machines that will be smarter than any person. And the world is devastatingly unprepared for what would come next.
 
For decades, two signatories of that letter—Eliezer Yudkowsky and Nate Soares—have studied how smarter-than-human intelligences will think, behave, and pursue their objectives. Their research says that sufficiently smart AIs will develop goals of their own that put them in conflict with us—and that if it comes to conflict, an artificial superintelligence would crush us. The contest wouldn’t even be close.
 
How could a machine superintelligence wipe out our entire species? Why would it want to? Would it want anything at all? In this urgent book, Yudkowsky and Soares walk through the theory and the evidence, present one possible extinction scenario, and explain what it would take for humanity to survive. 
 
The world is racing to build something truly new under the sun. And if anyone builds it, everyone dies.

  • “The most important book of the decade. This captivating page-turner, from two of today’s clearest thinkers, reveals that the competition to build smarter-than-human machines isn’t an arms race but a suicide race, fueled by wishful thinking.”
    Max Tegmark, author of Life 3.0: Being Human in the Age of AI
  • “If Anyone Builds It, Everyone Dies may prove to be the most important book of our time. Yudkowsky and Soares believe we are nowhere near ready to make the transition to superintelligence safely, leaving us on the fast track to extinction. Through the use of parables and crystal-clear explainers, they convey their reasoning, in an urgent plea for us to save ourselves while we still can.”
    Tim Urban, cofounder, Wait But Why
  • “The most important book I’ve read for years: I want to bring it to every political and corporate leader in the world and stand over them until they’ve read it. Yudkowsky and Soares, who have studied AI and its possible trajectories for decades, sound a loud trumpet call to humanity to awaken us as we sleepwalk into disaster.”
    Stephen Fry
  • “The best no-nonsense, simple explanation of the AI risk problem I’ve ever read.”
    Yishan Wong, former CEO of Reddit
  • “Soares and Yudkowsky lay out, in plain and easy-to-follow terms, why our current path toward ever-more-powerful AIs is extremely dangerous.”
    Emmett Shear, former interim CEO of OpenAI
  • “Everyone should read this book. There’s a 70% chance that you—yes, you reading this right now—will one day grudgingly admit that we all should have listened to Yudkowsky and Soares when we still had the chance.”
    Daniel Kokotajlo, AI Futures Project
  • "A compelling introduction to the world's most important topic. Artificial general intelligence could be just a few years away. This is one of the few books that takes the implications seriously, published right as the danger level begins to spike."
    Scott Alexander, founder, Astral Codex Ten
  • “Claims about the risks of AI are often dismissed as advertising, but this book disproves it. Yudkowsky and Soares are not from the AI industry, and have been writing about these risks since before it existed in its present form. Read their disturbing book and tell us what they get wrong.”
    Huw Price, Bertrand Russell Professor Emeritus, Trinity College, Cambridge
  • “This book offers brilliant insights into history’s most consequential standoff between technological utopia and dystopia, and shows how we can and should prevent superhuman AI from killing us all. Yudkowsky and Soares’s memorable storytelling about past disaster precedents (e.g., the inventor of two environmental nightmares: tetraethyl lead gasoline and Freon) highlights why top thinkers so often don’t see the catastrophes they create.”
    George Church, Founding Core Faculty & Lead, Synthetic Biology, Wyss Institute at Harvard University
  • “Silicon Valley calls it inevitable. Your survival instinct knows better. Humanity is funding its own delete key—an unblinking intelligence that never sleeps, never stops, perfectly indifferent. Wonder-time is over; this is our warning. Read today. Circulate tomorrow. Demand the guardrails. I’ll keep betting on humanity, but first we must wake up.”
    R.P. Eddy, former director, White House National Security Council
  • “You will feel actual emotions when you read this book. We are currently living in the last period of history where we are the dominant species. Humans are lucky to have Soares and Yudkowsky in our corner, reminding us not to waste the brief window of time that we have to make decisions about our future in light of this fact.”
    Grimes
  • “A timely and terrifying education on the galloping havoc AI could unleash—unless we grasp the reins and take control.”
    Kirkus
  • “A clearly written and compelling account of the existential risks that highly advanced AI could pose to humanity. Recommended.”
    Ben Bernanke, Nobel laureate and former chairman of the Federal Reserve
  • “A sober but highly readable book on the very real risks of AI. Both skeptics and believers need to understand the authors’ arguments, and work to ensure that our AI future is more beneficial than harmful.”
    Bruce Schneier, Lecturer, Harvard Kennedy School and author of A Hacker’s Mind
  • “You’re likely to close this book fully convinced that governments need to shift immediately to a more cautious approach to AI, an approach more respectful of the civilization-changing enormity of what’s being created. I’d like everyone on earth who cares about the future to read this book and debate its ideas.”
    Scott Aaronson, Schlumberger Centennial Chair of Computer Science, The University of Texas at Austin
  • “While I’m skeptical that the current trajectory of AI development will lead to human extinction, given AI’s exponential pace of change there’s no better time to take prudent steps to guard against worst-case outcomes. The authors offer important proposals for global guardrails and risk mitigation that deserve serious consideration.”
    Lieutenant General John (Jack) N.T. Shanahan (USAF, Ret.), Inaugural Director, Department of Defense Joint AI Center

About the Authors

ELIEZER YUDKOWSKY is one of the founding researchers of the field of AGI alignment, which is concerned with understanding how smarter-than-human intelligences think, behave, and pursue their goals. He appeared on TIME magazine’s list of the 100 Most Influential People in AI, was one of the twelve public figures featured in The New York Times’s “Who’s Who Behind the Dawn of the Modern Artificial Intelligence Movement,” and was one of the seven thought leaders spotlighted in The Washington Post’s discussion of “AI’s Rival Factions.” He spoke on the main stage at 2023’s TED conference and has been discussed or interviewed in The New Yorker, Newsweek, Forbes, Wired, Bloomberg, The Atlantic, The Economist, and many other venues. He has close to 200,000 followers on X, where he frequently engages with prominent public figures, including the heads of frontier AI labs.

NATE SOARES is the President of the Machine Intelligence Research Institute (MIRI). He has worked in the field of AI alignment for over a decade, after previous experience at Microsoft and Google. Soares is the author of a large body of technical and semi-technical writing on AI alignment, has been interviewed in Vanity Fair and the Financial Times, and has spoken on conference panels alongside many of the AI field’s leaders.
