Existential risk from artificial general intelligence

Existential risk from artificial general intelligence is the hypothesis that substantial progress in artificial general intelligence (AGI) could result in human extinction or some other unrecoverable global catastrophe. It is argued that the human species currently dominates other species because the human brain has some distinctive capabilities that other animals lack. If AI surpasses humanity in general intelligence and becomes "superintelligent", then it could become difficult or impossible for humans to control. Just as the fate of the mountain gorilla depends on human goodwill, so might the fate of humanity depend on the actions of a future machine superintelligence.

Depiction
Bill Gates June 2015.jpg
Has abstract
Existential risk from artificial general intelligence is the hypothesis that substantial progress in artificial general intelligence (AGI) could result in human extinction or some other unrecoverable global catastrophe. It is argued that the human species currently dominates other species because the human brain has some distinctive capabilities that other animals lack. If AI surpasses humanity in general intelligence and becomes "superintelligent", then it could become difficult or impossible for humans to control. Just as the fate of the mountain gorilla depends on human goodwill, so might the fate of humanity depend on the actions of a future machine superintelligence.

The chance of this type of scenario is widely debated, and hinges in part on differing scenarios for future progress in computer science. Once the exclusive domain of science fiction, concerns about superintelligence started to become mainstream in the 2010s and were popularized by public figures such as Stephen Hawking, Bill Gates, and Elon Musk.

One source of concern is that controlling a superintelligent machine, or instilling it with human-compatible values, may be a harder problem than naïvely supposed. Many researchers believe that a superintelligence would naturally resist attempts to shut it off or change its goals (a principle called instrumental convergence), and that preprogramming a superintelligence with a full set of human values will prove to be an extremely difficult technical task. In contrast, skeptics such as computer scientist Yann LeCun argue that superintelligent machines will have no desire for self-preservation.

A second source of concern is that a sudden and unexpected "intelligence explosion" might take an unprepared human race by surprise. To illustrate: if the first generation of a computer program able to broadly match the effectiveness of an AI researcher can rewrite its own algorithms and double its speed or capabilities in six months, then the second-generation program should take about three calendar months to make a comparable improvement. The time per generation keeps halving, so the total time for unboundedly many generations converges toward twelve months, and the system undergoes an unprecedentedly large number of generations of improvement in a short interval, jumping from subhuman performance in many areas to superhuman performance in all relevant areas. Empirically, examples like AlphaZero in the domain of Go show that AI systems can sometimes progress from narrow human-level ability to narrow superhuman ability extremely rapidly.
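
A minimal sketch of the arithmetic behind this illustration, in Python. The six-month first generation and exact doubling per generation are the abstract's hypothetical assumptions, not measurements; the function name and parameters are invented here for illustration only.

# Illustrative arithmetic for the hypothetical intelligence-explosion
# timeline sketched above: each generation doubles speed/capability,
# so generation times halve (6, 3, 1.5, ... months) and the cumulative
# time converges toward 12 months.

def explosion_timeline(first_gen_months: float = 6.0, generations: int = 10):
    """Yield (generation, months_this_gen, cumulative_months, capability)."""
    elapsed = 0.0
    capability = 1.0  # relative to the first human-level AI-researcher program
    gen_time = first_gen_months  # hypothetical six-month first generation
    for gen in range(1, generations + 1):
        elapsed += gen_time
        capability *= 2          # assumption: each generation doubles capability
        yield gen, gen_time, elapsed, capability
        gen_time /= 2            # ...so the next rewrite takes half as long

for gen, t, total, cap in explosion_timeline():
    print(f"gen {gen:2d}: {t:7.3f} months this gen, "
          f"{total:6.3f} months cumulative, {cap:5.0f}x capability")

Under these assumptions, ten generations and a roughly thousandfold capability gain fit inside the first year, which is the sense in which the scenario could take an unprepared human race by surprise.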
Homepage
Bloomberg.com
Hypernym
Risk
Is primary topic of
Existential risk from artificial general intelligence
Label
Existential risk from artificial general intelligence
Link from a Wikipage to an external page
web.archive.org/web/20151030202356/http://www.bloomberg.com/news/articles/2015-07-01/musk-backed-group-probes-risks-behind-artificial-intelligence
www.bloomberg.com/news/articles/2015-07-01/musk-backed-group-probes-risks-behind-artificial-intelligence
Link from a Wikipage to another Wikipage
2001: A Space Odyssey
AI box
AI control problem
AI takeover
AI takeovers in popular culture
Alan Turing
AlphaZero
Amazon Mechanical Turk
Andrew Ng
Anthropomorphism
Artificial general intelligence
Artificial intelligence
Artificial Intelligence: A Modern Approach
Artificial intelligence arms race
Artificial philosophy
Association for the Advancement of Artificial Intelligence
Astroturfing
Autonomy
Baidu
Barack Obama
Bart Selman
Bill Gates
Bill Joy
BRAIN Initiative
Brian Christian
Brian Krzanich
British Science Association
Category:Doomsday scenarios
Category:Existential risk from artificial general intelligence
Category:Future problems
Category:Human extinction
Category:Technology hazards
Center for Human-Compatible AI
Centre for the Study of Existential Risk
CERN
Charles T. Rubin
China Brain Project
Cockroach
Collaboration
Common good
Communications of the ACM
Competition
Computational complexity
Computer scientist
Conference on Neural Information Processing Systems
Convergent evolution
Cybercrime
Dario Floreano
DARPA
Darwin among the Machines
DeepMind
Dick Cheney
Edward Feigenbaum
Effective altruism
Eliezer Yudkowsky
Elon Musk
Erewhon
Eric Horvitz
File:Bill Gates June 2015.jpg
Francesca Rossi
Frank Wilczek
Friendly artificial intelligence
Future of Humanity Institute
Future of Life Institute
Geoffrey Hinton
Global catastrophic risk
Go (game)
Google DeepMind
Gordon Bell
Gordon Moore
Gray goo
HAL 9000
Hanson Robotics
Herbert A. Simon
Hillary Clinton
Human brain
Human Brain Project
Human Compatible
Human enhancement
Human extinction
Human Genome Project
Human species
I. J. Good
IBM
Information Technology and Innovation Foundation
Instrumental convergence
Instrumentalism
Intelligence explosion
Intelligent agent
International Conference on Machine Learning
International Space Station
Isaac Asimov
Is-ought distinction
Jaron Lanier
John Rawls
Joi Ito
Lethal autonomous weapon
Life 3.0
Limits of computation
Loss function
Machine Intelligence Research Institute
Mark Zuckerberg
Martha Nussbaum
Martin Ford (author)
Marvin Minsky
Max More
Max Tegmark
Michael Chorost
Michio Kaku
Military-civil fusion
Moore's Law
Mountain gorilla
Murray Shanahan
Nanotechnology
National Public Radio
Nature (journal)
Nick Bostrom
Nuclear warfare
OpenAI
Open Letter on Artificial Intelligence
Open Philanthropy Project
Optimization problem
Our Final Invention
Paperclip maximizer
Peter Norvig
Peter Thiel
Physics of the Future
Politicization of science
Pre-emptive nuclear strike
Psychopathy
Regulation of algorithms
Regulation of artificial intelligence
Richard Posner
Robert D. Atkinson
Robin Hanson
Robot ethics
Rodney Brooks
Roman Yampolskiy
Samuel Butler (novelist)
Scenario planning
Slate (magazine)
Smithsonian (magazine)
Social engineering (security)
Stanley Kubrick
Steganography
Stephen Hawking
Steven Pinker
Steve Omohundro
Stuart J. Russell
Suffering risks
Sun Microsystems
Superintelligence
Superintelligence: Paths, Dangers, Strategies
SurveyMonkey
System accident
Tay (bot)
Technological determinism
Technological singularity
Technological supremacy
Technological utopianism
Terminator (franchise)
Tesla, Inc.
The Alignment Problem
The Atlantic (magazine)
The Economist
The New York Times
The Precipice: Existential Risk and the Future of Humanity
The Wall Street Journal
The Washington Post
Thomas G. Dietterich
Three Laws of Robotics
Uncertainty
Unintended consequences
USA Today
Utility
Vicarious (company)
Weaponization of artificial intelligence
What Happened (Clinton book)
Why The Future Doesn't Need Us
Wikipedia:WEIGHT
Wired (magazine)
Yann LeCun
YouGov
SameAs
23whd
Existenční rizika vývoje umělé inteligence
Existential risk from artificial general intelligence
Existentiell risk orsakad av artificiell generell intelligens
Krisis eksistensial dari kecerdasan buatan
m.0134 90x
Q21715237
Risc existențial cauzat de inteligența artificială puternică
الخطر الوجودي من الذكاء الاصطناعي العام
SeeAlso
AI alignment
Regulation of algorithms
Subject
Category:Doomsday scenarios
Category:Existential risk from artificial general intelligence
Category:Future problems
Category:Human extinction
Category:Technology hazards
Thumbnail
Bill Gates June 2015.jpg?width=300
WasDerivedFrom
Existential risk from artificial general intelligence?oldid=1123781842&ns=0
WikiPageLength
118681
Wikipage page ID
46583121
Wikipage revision ID
1123781842
WikiPageUsesTemplate
Template:Artificial intelligence
Template:Blockquote
Template:Citation needed
Template:Cite news
Template:Cquote
Template:Div col
Template:Div col end
Template:Doomsday
Template:Effective altruism
Template:Efn
Template:Endash
Template:Excerpt
Template:Existential risk from artificial intelligence
Template:Further
Template:Main
Template:Meaning?
Template:Nbsp
Template:Notelist
Template:Reflist
Template:See also
Template:Sfn
Template:Short description
Template:Use dmy dates