Content provided by Matthew van der Merwe and Pablo Stafforini. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Matthew van der Merwe and Pablo Stafforini or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described at https://de.player.fm/legal.

#5: supervolcanoes, AI takeover, and What We Owe the Future

31:26

Future Matters is a newsletter about longtermism brought to you by Matthew van der Merwe and Pablo Stafforini. Each month we collect and summarize longtermism-relevant research, share news from the longtermism community, and feature a conversation with a prominent researcher. You can also subscribe on Substack, read on the EA Forum, and follow on Twitter.

00:00 Welcome to Future Matters.
01:08 MacAskill — What We Owe the Future.
01:34 Lifland — Samotsvety's AI risk forecasts.
02:11 Halstead — Climate Change and Longtermism.
02:43 Good Judgment — Long-term risks and climate change.
02:54 Thorstad — Existential risk pessimism and the time of perils.
03:32 Hamilton — Space and existential risk.
04:07 Cassidy & Mani — Huge volcanic eruptions.
04:45 Boyd & Wilson — Island refuges for surviving nuclear winter and other abrupt sun-reducing catastrophes.
05:28 Hilton — Preventing an AI-related catastrophe.
06:13 Lewis — Most small probabilities aren't Pascalian.
07:04 Yglesias — What's long-term about “longtermism”?
07:33 Lifland — Prioritizing x-risks may require caring about future people.
08:40 Karnofsky — AI strategy nearcasting.
09:11 Karnofsky — How might we align transformative AI if it's developed very soon?
09:51 Matthews — How effective altruism went from a niche movement to a billion-dollar force.
10:28 News.
14:28 Conversation with Ajeya Cotra.
15:02 What do you mean by human feedback on diverse tasks (HFDT) and what made you focus on it?
18:08 Could you walk us through the three assumptions you make about how this scenario plays out?
20:49 What are the key properties of the model you call Alex?
22:55 What do you mean by “playing the training game”, and why would Alex behave in that way?
24:34 Can you describe how deploying Alex would result in a loss of human control?
29:40 Can you talk about the sorts of specific countermeasures to prevent takeover?

9 episodes
