Content provided by LessWrong. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by LessWrong or its podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://de.player.fm/legal.
“Principles for the AGI Race” by William_S

31:17
 

Crossposted from https://williamrsaunders.substack.com/p/principles-for-the-agi-race
Why form principles for the AGI Race?
I worked at OpenAI for 3 years, on the Alignment and Superalignment teams. Our goal was to prepare for the possibility that OpenAI succeeded in its stated mission of building AGI (Artificial General Intelligence, roughly systems able to do most things a human can do), and then went on to build systems smarter than most humans. This path will predictably raise novel problems in controlling and shaping systems smarter than their supervisors and creators, problems we don't currently know how to solve. It's not clear when this will happen, but a number of people would throw around estimates of it happening within a few years.
While there, I would sometimes dream about what would have happened if I’d been a nuclear physicist in the 1940s. I do think that many of the kind of people who get involved in the effective [...]
---
Outline:
(00:06) Why form principles for the AGI Race?
(03:32) Bad High Risk Decisions
(04:46) Unnecessary Races to Develop Risky Technology
(05:17) High Risk Decision Principles
(05:21) Principle 1: Seek as broad and legitimate authority for your decisions as is possible under the circumstances
(07:20) Principle 2: Don’t take actions which impose significant risks to others without overwhelming evidence of net benefit
(10:52) Race Principles
(10:56) What is a Race?
(12:18) Principle 3: When racing, have an exit strategy
(13:03) Principle 4: Maintain accurate race intelligence at all times
(14:23) Principle 5: Evaluate how bad it is for your opponent to win instead of you, and balance this against the risks of racing
(15:07) Principle 6: Seriously attempt alternatives to racing
(16:58) Meta Principles
(17:01) Principle 7: Don’t give power to people or structures that can’t be held accountable
(18:36) Principle 8: Notice when you can’t uphold your own principles
(19:17) Application of my Principles
(19:21) Working at OpenAI
(24:19) SB 1047
(28:32) Call to Action
---
First published: August 30th, 2024
Source: https://www.lesswrong.com/posts/aRciQsjgErCf5Y7D9/principles-for-the-agi-race
---
Narrated by TYPE III AUDIO.

365 episodes

