Our Final Invention - Superintelligence

Transcription

Superintelligence: Our Final Invention
Kaspar Etter, kaspar.etter@gbs-schweiz.org
Adrian Hutter, adrian.hutter@gbs-schweiz.org
Bern, Switzerland
24 March 2015

Outline
– Introduction
– Singularity
– Superintelligence
– State and Trends
– Strategy
– Summary

Introduction
What are we talking about?

Intelligence
«Intelligence measures an agent's ability to achieve its goals in a wide range of unknown environments.»
(adapted from Legg and Hutter)
[Diagram: universal intelligence as optimization power relative to the resources used]
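The quoted definition has a formal counterpart. A sketch of Legg and Hutter's universal intelligence measure from their paper «Universal Intelligence: A Definition of Machine Intelligence», where π is the agent, E the set of computable environments, K(μ) the Kolmogorov complexity of environment μ, and V the expected total reward:

```latex
% Legg and Hutter's universal intelligence measure (sketch):
% environments are weighted by their simplicity, so an agent scores
% highly only if it performs well across many simple environments.
\Upsilon(\pi) := \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi
```

The simplicity weighting 2^(−K(μ)) is what makes the measure favor agents that succeed in a *wide range* of environments rather than one specialized niche.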

Ingredients
– Epistemology: learn a model of the world
– Utility Function: rate states of the world
– Decision Theory: plan the optimal action
(There are still some open problems, e.g. classical decision theory breaks down when the algorithm itself becomes part of the game.)
Luke Muehlhauser: Decision Theory FAQ
lesswrong.com/lw/gu1/decision_theory_faq/
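A minimal sketch of how the three ingredients fit together, using a hypothetical two-action toy world. All names, probabilities and utilities below are illustrative assumptions, not from the slides:

```python
# Epistemology: a (toy) model of the world, P(outcome | action).
WORLD_MODEL = {
    "save": {"rich_later": 0.9, "poor_later": 0.1},
    "spend": {"rich_later": 0.2, "poor_later": 0.8},
}

# Utility function: rate the possible states of the world.
UTILITY = {"rich_later": 10.0, "poor_later": 1.0}

def expected_utility(action):
    """Sum the utility of each outcome, weighted by its modeled probability."""
    return sum(p * UTILITY[outcome]
               for outcome, p in WORLD_MODEL[action].items())

def decide():
    """Decision theory: choose the action with the highest expected utility."""
    return max(WORLD_MODEL, key=expected_utility)

print(decide())  # -> "save" (expected utility 9.1 vs. 2.8)
```

Note how the breakdown mirrors the slide: the model, the utilities and the decision rule are three separate, swappable components. The open problems mentioned above arise precisely when this separation fails, e.g. when the environment contains a copy of the agent's own decision procedure.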

Consciousness
– is a completely separate question!
– Not required for an agent to reshape the world according to its preferences
Consciousness is
– reducible, or
– fundamental
– and universal
How Do You Explain Consciousness? David Chalmers: go.ted.com/DQJ

Machine Sentience
Open questions of immense importance:
– Can simulated entities be conscious?
– Can machines be moral patients?
If yes:
– Machines deserve moral consideration
– We might live in a computer simulation
Are You Living in a Computer Simulation?

Crucial Consideration
– An idea or argument that entails a major change of direction or priority.
– Overlooking just one consideration, our best efforts might be for naught.
– When headed the wrong way, the last thing we need is progress.
Edge: What will change everything?

Attractor States
[Timeline: Big Bang (0), Solar System (9 billion years), Today (13.8 billion years), End of our Sun (15–20 billion years); life passes through instability (Great Filter?) toward one of two attractor states: Extinction or Technological Maturity (Singleton?)]
Nick Bostrom: The Future of Human Evolution

Singleton: the ultimate fate?
– A world order with a single decision-making agency at the highest level
– Ability to prevent existential threats
Advantages: it would avoid
– arms races
– Darwinism
Disadvantages: it might result in a
– dystopian world
– durable lock-in
Nick Bostrom: What is a Singleton?

Singularity
What is the basic argument?

Accelerating Change
Progress feeds on itself: knowledge produces technology, and technology produces knowledge.
[Chart: rate of progress (in multiples of the rate in the year 2'000) vs. time in years AD]
The Law of Accelerating Returns

Intelligence Explosion
Proportionality Thesis: An increase in intelligence leads to similar increases in the capacity to design intelligent systems.
[Diagram: intelligence → capacity to design intelligent systems → recursive self-improvement → intelligence]
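The proportionality thesis can be turned into a toy differential equation: if the growth rate of intelligence is proportional to the current level, dI/dt = k·I, the level grows exponentially. The constants below are arbitrary illustrative choices, not claims about real systems:

```python
def simulate(i0=1.0, k=0.5, dt=0.01, steps=1000):
    """Euler-integrate dI/dt = k * I and return the trajectory."""
    trajectory = [i0]
    intelligence = i0
    for _ in range(steps):
        intelligence += k * intelligence * dt  # growth proportional to current level
        trajectory.append(intelligence)
    return trajectory

levels = simulate()  # 10 simulated time units
print(levels[-1] / levels[0])  # roughly e^5, i.e. about a 147-fold increase
```

If the growth law were instead superlinear (dI/dt ∝ I^α with α > 1), the trajectory would diverge in finite time, which is the mathematical sense in which some authors speak of a "singularity".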

Technological Singularity
A theoretical phenomenon: there are arguments why it should exist, but it has not yet been confirmed experimentally.
Three major singularity schools:
– Accelerating Change (Ray Kurzweil)
– Intelligence Explosion (I. J. Good)
– Event Horizon (Vernor Vinge)
David Chalmers: The Singularity: A Philosophical Analysis

Superintelligence
What are potential outcomes?

Definition of Superintelligence
An agent is called superintelligent if it exceeds the level of current human intelligence in all areas of interest.
[Scale: Rock – Mouse – Chimp – Fool – Genius – Superintelligence]
Nick Bostrom: How Long Before Superintelligence?

Pathways to Superintelligence
Weak Superintelligence:
– whole brain emulation
– biological cognition
– brain-computer interfaces
– networks and organizations
Strong Superintelligence:
– artificial intelligence (neuromorphic or synthetic)
Embryo Selection for Cognitive Enhancement

Advantages of AIs over Brains
Hardware: size, speed, memory
Software: editability, copyability, expandability
Effectiveness: rationality, coordination, communication
Human Brain: 86 billion neurons, firing rate of 200 Hz, signal speed of 120 m/s
Modern Microprocessor: 1.4 billion transistors, 400'000'000 Hz, signal speed of 300'000'000 m/s
Advantages of AIs, Uploads and Digital Minds
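Taking the slide's own numbers at face value, the raw speed ratios are easy to compute. (This ignores that a neuron and a transistor do very different amounts of work per operation, so these are upper bounds on the hardware gap, not a claim about overall capability.)

```python
brain_rate_hz = 200                  # neuron firing rate (from the slide)
cpu_rate_hz = 400_000_000            # microprocessor clock rate (from the slide)
brain_signal_mps = 120               # axon conduction speed in m/s
chip_signal_mps = 300_000_000        # electronic signal speed in m/s

print(cpu_rate_hz // brain_rate_hz)         # 2'000'000x switching speed
print(chip_signal_mps // brain_signal_mps)  # 2'500'000x signal speed
```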

Cognitive Superpowers
– Intelligence amplification: bootstrapping
– Strategizing: overcome smart opposition
– Hacking: hijack computing infrastructure
– Social manipulation: persuading people
– Economic productivity: acquiring wealth
– Technology research: inventing new aids
Hollywood Movie …

Orthogonality Thesis
Intelligence and final goals are orthogonal: almost any level of intelligence could in principle be combined with any final goal.
[Chart: intelligence vs. final goals, with Paperclip Maximizer, Adolf Hitler and Mahatma Gandhi as examples; all goals are equally possible – don't anthropomorphize! Some are likelier because they are easier.]
Nick Bostrom: The Superintelligent Will

Convergent Instrumental Goals
– Self-Preservation
– Goal-Preservation
(necessary to achieve the goal at all)
– Resource Accumulation
– Intelligence Accumulation
(to achieve the goal better)
Default Outcome: Doom (Infrastructure Profusion)
Stephen M. Omohundro: The Basic AI Drives
selfawaresystems.[ ].com/2008/01/ai_drives_final.pdf

Single-Shot Situation
Our first superhuman AI must be a safe one, for we may not get a second chance!
– We are good at iterating with testing and feedback
– We are terrible at getting things right the first time
– Humanity only learns after a catastrophe has occurred
List of Cognitive Biases
en.wikipedia.org/wiki/List_of_cognitive_biases

Takeoff Scenarios
[Chart: intelligence over time, with human level and physical limit marked and feedback accelerating growth. Separate questions: the time until takeoff and the takeoff duration!]
The Hanson-Yudkowsky AI-Foom Debate

Potential Outcomes
Fast Takeoff (hours, days, weeks) → Unipolar Outcome: Singleton (Slide 10)
Slow Takeoff (several months, years) → Multipolar Outcome: Second Transition or Unification by Treaty
Thoughts on Robots, AI, and Intelligence Explosion

State and Trends
Where are we heading?

Brain vs. Computer
Brain: consciousness is sequential, mindware is parallel; pattern recognition is easy, logic and thinking are hard.
Computer: software is parallel, hardware is sequential; pattern recognition is hard*, logic and thinking are easy.
(* but with parallel GPUs there is massive progress: we have had superhuman image recognition since February 2015.)
Dennett: Consciousness Explained

State of the Art
– Checkers: superhuman
– Backgammon: superhuman
– Chess: superhuman (Deep Blue, 1997)
– Jeopardy!: superhuman (IBM Watson, 2011)
– Autonomous driving: Stanley, 2005
– Scrabble: superhuman
– Bridge: equal to an …
– Go: strong amateur
Schmidhuber: How bio-inspired deep learning keeps winning competitions

Machine Learning by Google
Vicarious AI passes first Turing Test: CAPTCHA
news.vicarious.com/[ ]

Predicting AI Timelines
Great uncertainties:
– Hardware or software the bottleneck?
– Small team or a Manhattan Project?
– More speed bumps or accelerators?
Probability for AGI:    10%   50%   90%
AI scientists (median): 2024  2050  2070
Luke Muehlhauser, MIRI: 2030  2070  2140
How We're Predicting AI – or Failing To

Speed Bumps
– Depletion of low-hanging fruit
– An end to Moore's law
– Societal collapse
– Disinclination
Evolutionary Arguments and Selection Effects

Accelerators
– Faster hardware
– Better algorithms
– Massive datasets
– Enormous economic, military and egoistic incentives!
Machine Intelligence Research Institute: When Will AI Be Created?

Strategy
What is to be done?

Prioritization
– Scope: How big/important is the issue?
– Tractability: What can be done about it?
– Crowdedness: Who else is working on it?
Work on the matters that matter the most!
– AI is the key lever on the long-term future
– The issue is urgent, tractable and uncrowded
– The stakes are astronomical: our light cone
Luke Muehlhauser: Why …

Flow-Through Effects
Going meta: solve the problem-solving problem!
[Diagram: Artificial Intelligence could solve the other issues: Extreme Poverty, Factory Farming, Climate Change]
Holden Karnofsky: Flow-Through Effects

Controlled Detonation
[Diagram: the difficulty of building a Friendly AI exceeds that of building a general AI]
AI as a Positive and Negative Factor in Global Risk

Control Problem
Will AI outsmart us?
Capability Control:
– Boxing
– Stunting
– Tripwires
Motivation Selection:
– Direct Specification
– Indirect Normativity
– Incentive Methods
Roman V. Yampolskiy: Leakproofing the Singularity

Stable Self-Improvement
[Diagram: a Friendly AI must remain friendly through recursive self-improvement]
MIRI Research …

Differential Intellectual Progress
Prioritize risk-reducing intellectual progress over risk-increasing intellectual progress:
AI safety research should outpace AI capability research.
[Chart: roughly 12 FAI researchers vs. 12'000 GAI researchers, on an axis from 0 to 15'000]
Differential Intellectual Progress as a Positive-Sum Project
foundational-research.org/[ ]/differential-progress-[ ]/

International Cooperation
– We are the ones who will create superintelligent AI
– Not primarily a technical problem, but rather a social one
– International regulation?
In the face of uncertainty, cooperation is robust!
Lower Bound on the Importance of Promoting Cooperation
foundational-research.org/[ ]/[ ]-promoting-cooperation/

Summary
What have we learned?

Crucial Crossroad
Instead of passively drifting, we need to steer a course!
– Philosophy
– Mathematics
– Cooperation
… with a deadline.
Luke Muehlhauser: Steering the Future of AI
intelligence.org/[ ]Steering-the-Future-of-AI.pdf

«Before the prospect of an intelligence explosion, we humans are like children playing with a bomb. Such is the mismatch between the power of our plaything and the immaturity of our conduct. Superintelligence is a challenge for which we are not ready now and will not be ready for a long time. We have little idea when the detonation will occur, though if we hold the device to our ear we can hear a faint ticking sound.»
— Prof. Nick Bostrom in his book Superintelligence

Discussion
www.superintelligence.ch
Kaspar Etter, kaspar.etter@gbs-schweiz.org
Adrian Hutter, adrian.hutter@gbs-schweiz.org
Bern, Switzerland
24 March 2015
