If Anyone Builds It, Everyone Dies: The Case Against Superintelligent AI

Author: Eliezer Yudkowsky, Nate Soares
Publisher: Vintage Publishing
ISBN: 9781847928924
Pages: 272
Publication Date: 18 September 2025
Format: Hardback
Availability: To order
Stock availability from the supplier is unknown. We will order it for you and ship this item to you once it is received by us.

Our Price: $55.00



Overview

The founder of the field of AI risk explains why superintelligent AI is a global suicide bomb, and why we must halt development immediately.

AI is the greatest threat to our existence that we have ever faced. The scramble to create superhuman AI has put us on the path to extinction - but it's not too late to change course. Two pioneering researchers in the field, Eliezer Yudkowsky and Nate Soares, explain why artificial superintelligence would be a global suicide bomb and call for an immediate halt to its development. The technology may be complex, but the facts are simple: companies and countries are in a race to build machines that will be smarter than any person, and the world is devastatingly unprepared for what will come next.

How could a machine superintelligence wipe out our entire species? Will it want to? Will it want anything at all? In this urgent book, Yudkowsky and Soares explore the theory and the evidence, present one possible extinction scenario, and explain what it would take for humanity to survive. The world is racing to build something truly new - and if anyone builds it, everyone dies.

Full Product Details

Author: Eliezer Yudkowsky, Nate Soares
Publisher: Vintage Publishing
Imprint: The Bodley Head Ltd
Dimensions: Width: 16.20cm, Height: 2.70cm, Length: 24.30cm
Weight: 0.468kg
ISBN: 9781847928924
ISBN-10: 1847928927
Pages: 272
Publication Date: 18 September 2025
Audience: College/higher education, Professional and scholarly, General/trade, Tertiary & Higher Education, Professional & Vocational
Format: Hardback
Publisher's Status: Active
Availability: To order
Stock availability from the supplier is unknown. We will order it for you and ship this item to you once it is received by us.

Reviews

The most important book I’ve read for years: I want to bring it to every political and corporate leader in the world and stand over them until they’ve read it. Yudkowsky and Soares, who have studied AI and its possible trajectories for decades, sound a loud trumpet call to humanity to awaken us as we sleepwalk into disaster. Their brilliant gift for analogy, metaphor and parable clarifies for the general reader the tangled complexities of AI engineering, cognition and neuroscience better than any book on the subject I’ve ever read, and I’ve waded through scores of them. We really must rub our eyes and wake the fuck up! -- Stephen Fry

Should you worry about superintelligent AI? The answer from one of the tech world’s most influential doomsayers, Eliezer Yudkowsky, is emphatically yes. The good news? We aren’t there yet, and there are still steps we can take to avert disaster * Guardian, Biggest Books of the Autumn *

The most important book of the decade ... This captivating page-turner, from two of today's clearest thinkers, reveals that the competition to build smarter-than-human machines isn't an arms race but a suicide race, fuelled by wishful thinking -- Max Tegmark, author of Life 3.0

If Anyone Builds It, Everyone Dies may prove to be the most important book of our time. Yudkowsky and Soares believe we are nowhere near ready to make the transition to superintelligence safely, leaving us on the fast track to extinction. Through the use of parables and crystal-clear explainers, they convey their reasoning, in an urgent plea for us to save ourselves while we still can -- Tim Urban, co-founder of Wait But Why

Given the gravity of the case [Yudkowsky and Soares] make, it feels an odd thing to say that this book is good. It is readable. It tells stories well. At points it is like a thriller – albeit one where the thrills come from the obliteration of literally everything of value … This is the apocalypse du jour … The achievement of this book is, given the astonishing claims they make, that they make a credible case for not being mad. But I really hope they are: because I can’t see a way we get off that ladder. * The Times *

The authors tell their story with clarity, verve and a kind of barely suppressed glee. For a book about human extinction, If Anyone Builds It, Everyone Dies is a lot of fun. -- Ian Leslie, Observer

Despite the complexity of its subject, If Anyone Builds It, Everyone Dies is as clear as its conclusions are hard to swallow ... everyone with an interest in the future has a duty to read what Yudkowsky and Soares have to say. -- David Shariatmadari, Guardian (Book of the Day)

The best no-nonsense, simple explanation of the AI risk problem I've ever read -- Yishan Wong, former CEO of Reddit

An apocalyptic plea for the world to get off the AI escalation ladder before humanity is wiped off the map * Irish Times *

A provocative warning that one hopes is not too late to heed -- Ajay Chowdhury, Daily Express

Soares and Yudkowsky lay out, in plain and easy-to-follow terms, why our current path toward ever-more-powerful AIs is extremely dangerous -- Emmett Shear, former interim CEO of OpenAI

An eloquent and urgent plea for us to step back from the brink of self-annihilation -- Fiona Hill, Defence Advisor to the UK government

Everyone should read this book. I’m 70% confident that you – yes, you reading this right now – will one day grudgingly admit that we all should have listened to Yudkowsky and Soares when we still had the chance -- Daniel Kokotajlo, OpenAI whistleblower and lead author, AI 2027

A fire alarm ringing with clarity and urgency. Yudkowsky and Soares pull no punches -- Mark Ruffalo

A compelling introduction to the world's most important topic. Artificial general intelligence could be just a few years away. This is one of the few books that takes the implications seriously, published right as the danger level begins to spike -- Scott Alexander, founder of Astral Codex Ten

Claims about the risks of AI are often dismissed as advertising, but this book disproves it. Yudkowsky and Soares are not from the AI industry, and have been writing about these risks since before it existed in its present form. Read their disturbing book and tell us what they get wrong -- Huw Price, Professor of Philosophy, University of Cambridge

You will feel actual emotions when you read this book. We are currently living in the last period of history where we are the dominant species. Humans are lucky to have Soares and Yudkowsky in our corner, reminding us not to waste the brief window of time that we have to make decisions about our future in light of this fact -- Grimes

This book offers brilliant insights into history’s most consequential standoff between technological utopia and dystopia, and shows how we can and should prevent superhuman AI from killing us all. Yudkowsky and Soares’s memorable storytelling about past disaster precedents ... highlights why top thinkers so often don't see the catastrophes they create -- George Church, Professor of Genetics, Harvard University

Silicon Valley calls it inevitable. Your survival instinct knows better. Humanity is funding its own delete key - an unblinking intelligence that never sleeps, never stops, perfectly indifferent. Wonder-time is over; this is our warning. Read today. Circulate tomorrow. Demand the guardrails. I’ll keep betting on humanity, but first we must wake up -- R.P. Eddy, former director, White House National Security Council

A timely and terrifying education on the galloping havoc AI could unleash - unless we grasp the reins and take control * Kirkus *

A clearly written and compelling account of the existential risks that highly advanced AI could pose to humanity -- Ben Bernanke, Nobel Prize winner in economics

A sober but highly readable book on the very real risks of AI. Both sceptics and believers need to understand the authors’ arguments, and work to ensure that our AI future is more beneficial than harmful -- Bruce Schneier, author of A Hacker's Mind

You’re likely to close this book fully convinced that governments need to shift immediately to a more cautious approach to AI, an approach more respectful of the civilization-changing enormity of what's being created. I’d like everyone on earth who cares about the future to read this book and debate its ideas -- Scott Aaronson, Professor and Chair of Computer Science, University of Texas at Austin

[An] urgent clarion call to prevent the creation of artificial superintelligence … A frightening warning that deserves to be reckoned with * Publishers Weekly *



Author Information

Eliezer Yudkowsky is a founding researcher of the field of AI alignment, with influential work spanning more than twenty years. As co-founder of the non-profit Machine Intelligence Research Institute (MIRI), Yudkowsky sparked early scientific research on the problem and has played a major role in shaping the public conversation about smarter-than-human AI. He appeared on Time magazine's 2023 list of the 100 Most Influential People in AI, and has been discussed or interviewed in the New York Times, New Yorker, Newsweek, Forbes, Wired, Bloomberg, The Atlantic, The Economist, Washington Post, and elsewhere.

Nate Soares is the president of the non-profit Machine Intelligence Research Institute (MIRI). He has been working in the field for over a decade, after previous experience at Microsoft and Google. Soares is the author of a large body of technical and semi-technical writing on AI alignment, including foundational work on value learning, decision theory, and power-seeking incentives in smarter-than-human AIs.


Countries Available: All regions