LW - AI takeoff and nuclear war by owencb

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI takeoff and nuclear war, published by owencb on June 11, 2024 on LessWrong.
Summary
As we approach and pass through an AI takeoff period, the risk of nuclear war (or other all-out global conflict) will increase.
An AI takeoff would involve the automation of scientific and technological research. This would lead to much faster technological progress, including military technologies. In such a rapidly changing world, some of the circumstances which underpin the current peaceful equilibrium will dissolve or change. There are then two risks[1]:
1. Fundamental instability. New circumstances could give a situation where there is no peaceful equilibrium it is in everyone's interests to maintain.
e.g.
If nuclear calculus changes to make second strike capabilities infeasible (see the payoff sketch after this list)
If one party is racing ahead with technological progress and will soon trivially outmatch the rest of the world, with no way for it to credibly commit not to completely disempower everyone else once it has done so
2. Failure to navigate. Despite the existence of new peaceful equilibria, decision-makers might fail to reach one.
e.g.
If decision-makers misunderstand the strategic position, they may hold out for a more favourable outcome they (incorrectly) believe is fair
If the only peaceful equilibria are convoluted and unprecedented, leaders may not be able to identify or build trust in them in a timely fashion
Individual leaders might choose a path of war that would be good for them personally as they solidify power with AI; or nations might hold strongly to values like sovereignty that could make cooperation much harder
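To make the second-strike example concrete, here is a minimal game-theoretic sketch with assumed payoffs (all numbers are illustrative, not from the original article): with a secure second strike, mutual restraint is a stable equilibrium, whereas if a first strike can disarm the other side, striking first becomes each side's best response and no peaceful equilibrium remains.

```python
# Toy 2x2 "strike vs. wait" games with assumed payoffs (illustrative only),
# showing how losing a secure second strike can eliminate the peaceful equilibrium.
from itertools import product

def pure_nash_equilibria(payoffs):
    """payoffs[(a, b)] = (payoff to A, payoff to B) for strategies in {'wait', 'strike'}."""
    strategies = ['wait', 'strike']
    equilibria = []
    for a, b in product(strategies, strategies):
        a_best = all(payoffs[(a, b)][0] >= payoffs[(alt, b)][0] for alt in strategies)
        b_best = all(payoffs[(a, b)][1] >= payoffs[(a, alt)][1] for alt in strategies)
        if a_best and b_best:
            equilibria.append((a, b))
    return equilibria

# With a secure second strike, striking first still triggers retaliation.
secure_second_strike = {
    ('wait', 'wait'): (0, 0),
    ('strike', 'wait'): (-100, -100),
    ('wait', 'strike'): (-100, -100),
    ('strike', 'strike'): (-100, -100),
}

# Without one, a first strike disarms the other side.
no_second_strike = {
    ('wait', 'wait'): (0, 0),
    ('strike', 'wait'): (10, -100),
    ('wait', 'strike'): (-100, 10),
    ('strike', 'strike'): (-50, -50),
}

print(pure_nash_equilibria(secure_second_strike))  # includes ('wait', 'wait')
print(pure_nash_equilibria(no_second_strike))      # only ('strike', 'strike')
```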
Of these two risks, it is likely simpler to work to reduce the risk of failure to navigate. The three straightforward strategies here are: research and dissemination, to ensure that the basic strategic situation is common knowledge among decision-makers; spreading positive-sum frames; and crafting, and getting buy-in to, meaningful commitments about sharing the power from AI, to reduce incentives for anyone to initiate war.
Additionally, powerful AI tools could change the landscape in ways that reduce either or both of these risks. A fourth strategy, therefore, is to differentially accelerate risk-reducing applications of AI. These could include:
Tools to help decision-makers make sense of the changing world and make wise choices;
Tools to facilitate otherwise impossible agreements via mutually trusted artificial judges;
Tools for better democratic accountability.
Why do(n't) people go to war?
To date, the world has been pretty good at avoiding thermonuclear war. The doctrine of mutually assured destruction means that it's in nobody's interest to start a war (although the short timescales involved mean that accidentally starting one is a concern).
The rapid development of powerful AI could disrupt the current equilibrium. From a very outside-view perspective, we might think that this is equally likely to result in, say, a 10x decrease in risk as a 10x increase. Even this would be alarming: the annual probability seems fairly low right now, so a big decrease in risk would be merely nice-to-have, while a big increase could be catastrophic.
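To see why a symmetric outside-view forecast is still bad news in expectation, here is a minimal arithmetic sketch with an assumed baseline probability (the number is purely illustrative, not from the article): because the change is multiplicative, the possible increase dwarfs the possible decrease.

```python
# Illustrative only: assumed baseline annual probability of nuclear war.
baseline = 0.001

# Outside view: a 10x increase and a 10x decrease are taken as equally likely.
expected_risk = 0.5 * (10 * baseline) + 0.5 * (baseline / 10)

print(f"baseline annual risk:       {baseline:.3%}")       # 0.100%
print(f"expected risk post-takeoff: {expected_risk:.3%}")  # 0.505%, ~5x higher
```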
To get more clarity than that, we'll look at the theoretical reasons people might go to war, and then look at how an AI takeoff period might impact each of these.
Rational reasons to go to war
War is inefficient: for any war, there should be some possible world without that war in which everyone is better off. So why do we have war? Fearon's classic paper on Rationalist Explanations for War explains that there are essentially three mechanisms that can lead to war between states that are all acting rationally:
1. Commitment problems
If you're about to build a superweapon, I might want to attack now. We might both be better off if I didn't attack, and I paid y...
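As a minimal numeric sketch of the inefficiency point above, in Fearon's bargaining framing (all payoffs are assumed, purely for illustration): whenever fighting is costly, there is a range of peaceful settlements that both sides prefer to war.

```python
# Toy bargaining model with assumed (illustrative) numbers.
# Two states dispute a prize worth V; if they fight, state A wins with
# probability p and each side pays a cost of fighting.
V = 100.0       # value of the disputed prize (assumed)
p = 0.6         # A's probability of winning a war (assumed)
cost_A = 10.0   # A's cost of fighting (assumed)
cost_B = 10.0   # B's cost of fighting (assumed)

war_A = p * V - cost_A         # A's expected payoff from war: 50.0
war_B = (1 - p) * V - cost_B   # B's expected payoff from war: 30.0

# Any split giving A a share x with war_A <= x <= V - war_B leaves both
# sides at least as well off as fighting; the range is non-empty whenever
# cost_A + cost_B > 0, which is why war is inefficient here.
print(f"bargaining range for A's share: [{war_A}, {V - war_B}]")  # [50.0, 70.0]
```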