8 - Assistance Games with Dylan Hadfield-Menell

2:23:17
 

How should we think about the technical problem of building smarter-than-human AI that does what we want? When and how should AI systems defer to us? Should they have their own goals, and how should those goals be managed? In this episode, Dylan Hadfield-Menell talks about his work on assistance games, a framework that formalizes these questions. The first couple of years of my PhD program included many long conversations with Dylan that helped shape how I view AI x-risk research, so it was great to have another one in the form of a recorded interview.
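
To make the deference question concrete, here is a minimal sketch (not from the episode) of the expected-value argument in "The Off-Switch Game", linked below. It assumes the paper's model of a rational human overseer; the Gaussian belief and the specific mu/sigma values are illustrative choices, not from the paper. The robot can act unilaterally (worth E[U] = mu), switch itself off (worth 0), or defer to a human who allows the action only if its true utility U is positive (worth E[max(U, 0)]):

```python
import math

def pdf(z):  # standard normal density
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def cdf(z):  # standard normal CDF
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def value_wait(mu, sigma):
    # Deferring: a rational human lets the action run iff U > 0, so the
    # robot earns E[max(U, 0)]; this is the closed form for a Gaussian belief.
    z = mu / sigma
    return mu * cdf(z) + sigma * pdf(z)

# Compare the three options under beliefs of varying uncertainty.
for mu, sigma in [(0.5, 0.1), (0.5, 2.0), (-0.5, 2.0)]:
    act, off, wait = mu, 0.0, value_wait(mu, sigma)
    print(f"mu={mu:+.1f}, sigma={sigma:.1f}: "
          f"act={act:+.2f}, off={off:+.2f}, wait={wait:+.2f}")
```

Deferring weakly dominates both unilateral options, and the margin grows with sigma: the more uncertain the robot is about what the human wants, the stronger its incentive to leave the off switch in play.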

Link to the transcript: axrp.net/episode/2021/06/08/episode-8-assistance-games-dylan-hadfield-menell.html

Link to the paper "Cooperative Inverse Reinforcement Learning": arxiv.org/abs/1606.03137

Link to the paper "The Off-Switch Game": arxiv.org/abs/1611.08219

Link to the paper "Inverse Reward Design": arxiv.org/abs/1711.02827

Dylan's Twitter account: twitter.com/dhadfieldmenell

Link to apply to the MIT EECS graduate program: gradapply.mit.edu/eecs/apply/login/?next=/eecs/

Other work mentioned in the discussion:

- The original paper on inverse optimal control: asmedigitalcollection.asme.org/fluidsengineering/article-abstract/86/1/51/392203/When-Is-a-Linear-Control-System-Optimal

- Justin Fu's research on, among other things, adversarial IRL: scholar.google.com/citations?user=T9To2C0AAAAJ&hl=en&oi=ao

- Preferences implicit in the state of the world: arxiv.org/abs/1902.04198

- What are you optimizing for? Aligning recommender systems with human values: participatoryml.github.io/papers/2020/42.pdf

- The Assistive Multi-Armed Bandit: arxiv.org/abs/1901.08654

- Soares et al. on Corrigibility: openreview.net/forum?id=H1bIT1buWH

- Should Robots be Obedient?: arxiv.org/abs/1705.09990

- Rodney Brooks on the Seven Deadly Sins of Predicting the Future of AI: rodneybrooks.com/the-seven-deadly-sins-of-predicting-the-future-of-ai/

- Products in category theory: en.wikipedia.org/wiki/Product_(category_theory)

- AXRP Episode 7 - Side Effects with Victoria Krakovna: axrp.net/episode/2021/05/14/episode-7-side-effects-victoria-krakovna.html

- Attainable Utility Preservation: arxiv.org/abs/1902.09725

- Penalizing side effects using stepwise relative reachability: arxiv.org/abs/1806.01186

- Simplifying Reward Design through Divide-and-Conquer: arxiv.org/abs/1806.02501

- Active Inverse Reward Design: arxiv.org/abs/1809.03060

- An Efficient, Generalized Bellman Update For Cooperative Inverse Reinforcement Learning: proceedings.mlr.press/v80/malik18a.html

- Incomplete Contracting and AI Alignment: arxiv.org/abs/1804.04268

- Multi-Principal Assistance Games: arxiv.org/abs/2007.09540

- Consequences of Misaligned AI: arxiv.org/abs/2102.03896
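
As promised above, here is a minimal sketch of the idea in "Inverse Reward Design". IRD treats the designer's proxy reward as evidence about the true reward: a proxy is a likely choice in proportion to how well the behavior it induces in the training environment scores under the true reward. The toy environment, the candidate weight vectors, and beta below are all made up for illustration; only the likelihood model follows the paper.

```python
import numpy as np

# Toy training environment: each trajectory is summarized by feature
# counts [grass, dirt, lava]. Lava never appears during training.
trajectories = {
    "grass_path": np.array([1.0, 0.0, 0.0]),
    "dirt_path":  np.array([0.0, 1.0, 0.0]),
}

def induced_features(w):
    """Features of the trajectory an agent optimizing weights w picks."""
    return max(trajectories.values(), key=lambda f: float(f @ w))

# Proxies the designer could plausibly have written down (hypothetical).
proxy_options = [np.array([0.0, 1.0, 0.0]),   # reward dirt
                 np.array([1.0, 0.0, 0.0])]   # reward grass
observed_proxy = proxy_options[0]             # the designer rewarded dirt

# Candidate true rewards, as [grass, dirt, lava] weights (hypothetical grid).
candidates = [np.array([0.0, 1.0, -1.0]),     # dirt good, lava bad
              np.array([0.0, 1.0, +1.0]),     # dirt good, lava good
              np.array([1.0, 0.0, -1.0])]     # grass good, lava bad

beta = 5.0  # designer near-optimality parameter (assumed value)

def proxy_likelihood(w_true):
    # P(observed proxy | w_true): proxies are chosen with probability
    # proportional to exp(beta * true reward of proxy-optimal behavior).
    scores = np.array([induced_features(p) @ w_true for p in proxy_options])
    probs = np.exp(beta * scores)
    probs /= probs.sum()
    return probs[0]  # probability that the designer wrote the dirt proxy

posterior = np.array([proxy_likelihood(w) for w in candidates])
posterior /= posterior.sum()
for w, p in zip(candidates, posterior):
    print(f"true weights {w}: posterior {p:.2f}")
```

Because lava never appears in training, the two dirt-loving hypotheses are indistinguishable: the posterior stays split on the lava weight, which is exactly the paper's point. The robot should remain uncertain about, and hence cautious around, features the designer never had to think about.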
