Content provided by LessWrong. All podcast content, including episodes, artwork, and podcast descriptions, is uploaded and provided directly by LessWrong or its podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described at https://de.player.fm/legal.
“The Problem” by Rob Bensinger, tanagrabeast, yams, So8res, Eliezer Yudkowsky, Gretta Duleba

49:32
This is a new introduction to AI as an extinction threat, previously posted to the MIRI website in February alongside a summary. It was written independently of Eliezer and Nate's forthcoming book, If Anyone Builds It, Everyone Dies, and isn't a sneak peek of the book. Since the book is long and costs money, we expect this to be a valuable resource in its own right even after the book comes out next month.[1]
The stated goal of the world's leading AI companies is to build AI that is general enough to do anything a human can do, from solving hard problems in theoretical physics to deftly navigating social environments. Recent machine learning progress seems to have brought this goal within reach. At this point, we would be uncomfortable ruling out the possibility that AI more capable than any human is achieved in the next year or two, and [...]
---
Outline:
(02:27) 1. There isn't a ceiling at human-level capabilities.
(08:56) 2. ASI is very likely to exhibit goal-oriented behavior.
(15:12) 3. ASI is very likely to pursue the wrong goals.
(32:40) 4. It would be lethally dangerous to build ASIs that have the wrong goals.
(46:03) 5. Catastrophe can be averted via a sufficiently aggressive policy response.
The original text contained 1 footnote which was omitted from this narration.
---
First published:
August 5th, 2025
Source:
https://www.lesswrong.com/posts/kgb58RL88YChkkBNf/the-problem
---
Narrated by TYPE III AUDIO.

601 episodes