The 4 Cs of Superintelligence

London Futurists
Duration: 32:39

The 4 Cs of Superintelligence is a framework that casts fresh light on the vexing question of possible outcomes of humanity's interactions with an emerging superintelligent AI. The 4 Cs are Cease, Control, Catastrophe, and Consent. In this episode, the show's co-hosts, Calum Chace and David Wood, debate the pros and cons of the first two of these Cs, and lay the groundwork for a follow-up discussion of the pros and cons of the remaining two.
Topics addressed in this episode include:
*) Reasons why superintelligence might never be created
*) Timelines for the arrival of superintelligence have been compressed
*) Does the unpredictability of superintelligence mean we shouldn't try to consider its arrival in advance?
*) Two "big bangs" have caused dramatic progress in AI; what might the next such breakthrough bring?
*) The flaws in the "Level zero futurist" position
*) Two analogies contrasted: overcrowding on Mars, and travelling to Mars without knowing what we'll breathe when we get there
*) A startling illustration of the dramatic power of exponential growth (see the short numerical sketch after this list)
*) A concern for short-term risk is by no means a reason to pay less attention to longer-term risks
*) Why the "Cease" option is looking more credible nowadays than it did a few years ago
*) Might "Cease" become a "Plan B" option?
*) Examples of political dictators who turned away from acquiring or using various highly risky weapons
*) Challenges facing a "Turing Police" who monitor for dangerous AI developments
*) If a superintelligence has agency (volition), it seems that "Control" is impossible
*) Ideas for designing superintelligence without agency or volition
*) Complications with emergent sub-goals (convergent instrumental goals)
*) A badly configured superintelligent coffee fetcher (a toy model follows this list)
*) Bad actors may add agency to a superintelligence, thinking it will boost its performance
*) The possibility of changing social incentives to reduce the dangers of people becoming bad actors
*) What's particularly hard about both "Cease" and "Control" is that they would need to remain in place forever
*) Human civilisations contain many diametrically opposed goals
*) Going beyond "Life, liberty, and the pursuit of happiness" towards a starting point for aligning AI with human values?
*) A cliff-hanger ending
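
The episode doesn't spell out which illustration of exponential growth it uses, so here is a minimal numerical sketch of our own (in Python) of why repeated doubling is so startling: thirty doublings turn one unit into more than a billion, while thirty linear steps only reach thirty-one.

    # Hypothetical sketch (not from the episode): linear vs. exponential growth
    steps = 30
    linear = 1 + steps          # one unit added per step -> 31
    exponential = 2 ** steps    # doubling per step -> 1,073,741,824
    print(f"After {steps} steps: linear = {linear:,}, exponential = {exponential:,}")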
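
The "coffee fetcher" point echoes a well-known argument, often attributed to Stuart Russell, that almost any goal implies self-preservation as a sub-goal, because an agent that is switched off can no longer pursue its goal. A hypothetical toy calculation, ours rather than the episode's, makes the logic concrete:

    # Hypothetical toy model: why "fetch coffee" can imply "avoid shutdown".
    # All numbers are illustrative assumptions.
    P_SHUTDOWN = 0.1    # assumed chance of being switched off mid-task
    REWARD_DONE = 1.0   # reward for delivering the coffee
    REWARD_OFF = 0.0    # a switched-off agent fetches no coffee

    # Expected reward if the agent respects its off-switch:
    comply = (1 - P_SHUTDOWN) * REWARD_DONE + P_SHUTDOWN * REWARD_OFF  # 0.90
    # Expected reward if it first disables the off-switch:
    disable = REWARD_DONE                                              # 1.00

    print(f"comply with off-switch: {comply:.2f}, disable it first: {disable:.2f}")
    # A pure reward-maximiser prefers the higher number, i.e. self-preservation.

This is the "convergent instrumental goal" mentioned above: self-preservation emerges from the arithmetic of the objective, not from anything resembling a survival instinct.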
The survey "Key open questions about the transition to AGI" can be found at https://transpolitica.org/projects/key-open-questions-about-the-transition-to-agi/
Music: Spike Protein, by Koi Discovery, available under the CC0 1.0 Public Domain Dedication
