Overview

This is the first book to investigate the nature and extent of artificial intelligence (AI) suffering risks. It argues that AI suffering risk is a serious near-term concern and analyzes approaches for addressing it. AI systems are currently treated as mere objects, not as bearers of moral standing whose wellbeing may matter in its own right. However, we may soon create AI systems which are capable of suffering and thus have moral standing. This book examines the philosophy and science of AI suffering risks. Its investigation is deeply grounded in philosophy of mind, comparative psychology, the science of consciousness, AI research, and applied AI ethics.

The book has three primary goals:
- It argues that there is a significant probability that we will soon create AI systems capable of suffering
- It presents the first systematic assessment of approaches for reducing AI suffering risks
- It provides a rigorous overview and discussion of the most important research and ideas on AI sentience, AI agency, and the grounds of moral status

Saving Artificial Minds is essential reading for researchers and graduate students working on the philosophy or ethics of AI.

Full Product Details

Author: Leonard Dung (Ruhr-University Bochum, Germany)
Publisher: Taylor & Francis Ltd
Imprint: Routledge
Weight: 0.650 kg
ISBN: 9781041144670
ISBN 10: 1041144679
Pages: 204
Publication Date: 23 October 2025
Audience: College/higher education; Tertiary & Higher Education
Format: Hardback
Publisher's Status: Active
Availability: Not yet available. This item is yet to be released; you can pre-order it and we will dispatch it to you upon its release.
Countries Available: All regions

Reviews

"Sober and synoptic, evenhanded and ecumenical, this book brings artificial suffering into clear view as a pressing, high-stakes problem to which our civilization has not yet awakened." Bradford Saad, Global Priorities Institute, University of Oxford, UK

"This timely and tightly-argued monograph makes the case that the potential for grave suffering in future AI systems poses a pressing ethical risk. Synthesizing work in value theory, the metaphysics of mind, and cognitive science, this book is essential reading for anyone interested in the issue of AI moral status." Adam Bradley, Lingnan University, Hong Kong

Author Information

Leonard Dung is a philosopher at the Ruhr-University Bochum. His research focuses especially on AI sentience, AI moral status, and risks, including existential risks, from advanced AI systems. He also investigates topics related to animal consciousness and welfare, which were the focus of his PhD.