Content provided by LessWrong. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by LessWrong or its podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://de.player.fm/legal.
“Nice-ish, smooth takeoff (with imperfect safeguards) probably kills most ‘classic humans’ in a few decades.” by Raemon
I wrote my recent Accelerando post to mostly stand on its own as a takeoff scenario. But the reason it's on my mind is that, if I imagine being very optimistic about how a smooth AI takeoff goes, but where an early step wasn't "fully solve the unbounded alignment problem, and then end up with extremely robust safeguards[1]"...
...then my current guess is that Reasonably Nice Smooth Takeoff still results in all or at least most biological humans dying (or, "dying out", or at best, ambiguously-consensually-uploaded), like, 10-80 years later.
Slightly more specific about the assumptions I'm trying to inhabit here:
- It's politically intractable to get a global halt or globally controlled takeoff.
- Superintelligence is moderately likely to be somewhat nice.
- We'll get to run lots of experiments on near-human AI that will be reasonably informative about how things will generalize to the somewhat-superhuman level.
- We get to ramp up [...]
Outline:
(03:50) There is no safe muddling through without perfect safeguards
(06:24) i. Factorio
(06:27) (or: It's really hard to not just take people's stuff, when they move as slowly as plants)
(10:15) Fictional vs Real Evidence
(11:35) Decades. Or: thousands of years of subjective time, evolution, and civilizational change.
(12:23) This is the Dream Time
(14:33) Is the resulting posthuman population morally valuable?
(16:51) The Hanson Counterpoint: So you're against ever changing?
(19:04) Can't superintelligent AIs/uploads coordinate to avoid this?
(21:18) How Confident Am I?
The original text contained 4 footnotes which were omitted from this narration.
---
First published:
October 2nd, 2025
Source:
https://www.lesswrong.com/posts/v4rsqTxHqXp5tTwZh/nice-ish-smooth-takeoff-with-imperfect-safeguards-probably
---
Narrated by TYPE III AUDIO.