“Principles for the AGI Race” by William_S

31:17
 
Content provided by LessWrong. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by LessWrong or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://ro.player.fm/legal.
Crossposted from https://williamrsaunders.substack.com/p/principles-for-the-agi-race
Why form principles for the AGI Race?
I worked at OpenAI for 3 years, on the Alignment and Superalignment teams. Our goal was to prepare for the possibility that OpenAI succeeded in its stated mission of building AGI (Artificial General Intelligence, roughly able to do most things a human can do) and then proceeded to make systems smarter than most humans. This will predictably raise novel problems in controlling and shaping systems smarter than their supervisors and creators, problems we don't currently know how to solve. It's not clear when this will happen, but a number of people would throw around estimates of it happening within a few years.
While there, I would sometimes dream about what would have happened if I’d been a nuclear physicist in the 1940s. I do think that many of the kind of people who get involved in the effective [...]
---
Outline:
(00:06) Why form principles for the AGI Race?
(03:32) Bad High Risk Decisions
(04:46) Unnecessary Races to Develop Risky Technology
(05:17) High Risk Decision Principles
(05:21) Principle 1: Seek as broad and legitimate authority for your decisions as is possible under the circumstances
(07:20) Principle 2: Don’t take actions which impose significant risks to others without overwhelming evidence of net benefit
(10:52) Race Principles
(10:56) What is a Race?
(12:18) Principle 3: When racing, have an exit strategy
(13:03) Principle 4: Maintain accurate race intelligence at all times
(14:23) Principle 5: Evaluate how bad it is for your opponent to win instead of you, and balance this against the risks of racing
(15:07) Principle 6: Seriously attempt alternatives to racing
(16:58) Meta Principles
(17:01) Principle 7: Don’t give power to people or structures that can’t be held accountable
(18:36) Principle 8: Notice when you can’t uphold your own principles
(19:17) Application of my Principles
(19:21) Working at OpenAI
(24:19) SB 1047
(28:32) Call to Action
---
First published: August 30th, 2024
Source: https://www.lesswrong.com/posts/aRciQsjgErCf5Y7D9/principles-for-the-agi-race
---
Narrated by TYPE III AUDIO.