
Superintelligence

Paths, Dangers, Strategies

Nick Bostrom

$30.95

Paperback

English
Oxford University Press
29 March 2016
The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. Other animals have stronger muscles or sharper claws, but we have cleverer brains.

If machine brains one day come to surpass human brains in general intelligence, then this new superintelligence could become very powerful. As the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate of our species then would come to depend on the actions of the machine superintelligence.

But we have one advantage: we get to make the first move. Will it be possible to construct a seed AI or otherwise to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation?

To get closer to an answer to this question, we must make our way through a fascinating landscape of topics and considerations. Read the book and learn about oracles, genies, singletons; about boxing methods, tripwires, and mind crime; about humanity's cosmic endowment and differential technological development; indirect normativity, instrumental convergence, whole brain emulation and technology couplings; Malthusian economics and dystopian evolution; artificial intelligence, biological cognitive enhancement, and collective intelligence.

This profoundly ambitious and original book picks its way carefully through a vast tract of forbiddingly difficult intellectual terrain. Yet the writing is so lucid that it somehow makes it all seem easy. After an utterly engrossing journey that takes us to the frontiers of thinking about the human condition and the future of intelligent life, we find in Nick Bostrom's work nothing less than a reconceptualization of the essential task of our time.
By:   Nick Bostrom
Imprint:   Oxford University Press
Country of Publication:   United Kingdom
Dimensions:   Height: 195mm,  Width: 130mm,  Spine: 23mm
Weight:   448g
ISBN:   9780198739838
ISBN 10:   0198739834
Pages:   390
Publication Date:   29 March 2016
Audience:   Professional and scholarly, Undergraduate
Format:   Paperback
Publisher's Status:   Active
Preface
1: Past Developments and Present Capabilities
2: Roads to Superintelligence
3: Forms of Superintelligence
4: Singularity Dynamics
5: Decisive Strategic Advantage
6: Intellectual Superpowers
7: The Superintelligent Will
8: Is the Default Outcome Doom?
9: The Control Problem
10: Oracles, Genies, Sovereigns, Tools
11: Multipolar Scenarios
12: Acquiring Values
13: Design Choices
14: The Strategic Picture
15: Nut-Cutting Time
Afterword

Nick Bostrom is Professor in the Faculty of Philosophy at Oxford University and founding Director of the Future of Humanity Institute and of the Programme on the Impacts of Future Technology within the Oxford Martin School. He is the author of some 200 publications, including Anthropic Bias (Routledge, 2002), Global Catastrophic Risks (ed., OUP, 2008), and Human Enhancement (ed., OUP, 2009). He previously taught at Yale, and he was a Postdoctoral Fellow of the British Academy. Bostrom has a background in physics, computational neuroscience, and mathematical logic as well as philosophy.

Reviews for Superintelligence: Paths, Dangers, Strategies

I highly recommend this book - Bill Gates

Nick Bostrom makes a persuasive case that the future impact of AI is perhaps the most important issue the human race has ever faced. Instead of passively drifting, we need to steer a course. Superintelligence charts the submerged rocks of the future with unprecedented detail. It marks the beginning of a new era - Stuart Russell, Professor of Computer Science, University of California, Berkeley

Those disposed to dismiss an 'AI takeover' as science fiction may think again after reading this original and well-argued book - Martin Rees, Past President, Royal Society

This superb analysis by one of the world's clearest thinkers tackles one of humanity's greatest challenges: if future superhuman artificial intelligence becomes the biggest event in human history, then how can we ensure that it doesn't become the last? - Max Tegmark, Professor of Physics, MIT

Terribly important ... groundbreaking ... extraordinary sagacity and clarity, enabling him to combine his wide-ranging knowledge over an impressively broad spectrum of disciplines - engineering, natural sciences, medicine, social sciences and philosophy - into a comprehensible whole ... If this book gets the reception that it deserves, it may turn out the most important alarm bell since Rachel Carson's Silent Spring from 1962, or ever - Olle Häggström, Professor of Mathematical Statistics

Valuable. The implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking - The Economist

There is no doubting the force of [Bostrom's] arguments ... the problem is a research challenge worthy of the next generation's best mathematical talent. Human civilisation is at stake - Financial Times

Worth reading ... We need to be super careful with AI. Potentially more dangerous than nukes - Elon Musk, Founder of SpaceX and Tesla
