Content provided by Samuel Salzer and Aline Holzwarth. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Samuel Salzer and Aline Holzwarth or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described at https://de.player.fm/legal.

Misinformation Machines with Gordon Pennycook – Part 2

Duration: 1:03:02
Episode 448786542 · Series 2821307

Debunkbot and Other Tools Against Misinformation

In this follow-up episode of the Behavioral Design Podcast, hosts Aline Holzwarth and Samuel Salzer welcome back Gordon Pennycook, psychology professor at Cornell University, to continue their deep dive into the battle against misinformation.

Building on their previous conversation around misinformation’s impact on democratic participation and the role of AI in spreading and combating falsehoods, this episode focuses on actionable strategies and interventions to combat misinformation effectively.

Gordon discusses evidence-based approaches, including nudges, accuracy prompts, and psychological inoculation (or prebunking) techniques, that empower individuals to better evaluate the information they encounter.

The conversation highlights recent advancements in using AI to debunk conspiracy theories and examines how AI-generated evidence can influence belief systems. They also tackle the role of social media platforms in moderating content, the ethical balance between free speech and misinformation, and practical steps that can make platforms safer without stifling expression.

This episode provides valuable insights for anyone interested in understanding how to counter misinformation through behavioral science and AI.

LINKS:

Gordon Pennycook:

Further Reading on Misinformation:

TIMESTAMPS:

01:27 Intro and Early Voting
06:45 Welcome back, Gordon!
07:52 Strategies to Combat Misinformation
11:10 Nudges and Behavioral Interventions
14:21 Comparing Intervention Strategies
19:08 Psychological Inoculation and Prebunking
32:21 Echo Chambers and Online Misinformation
34:13 Individual vs. Policy Interventions
36:21 If You Owned a Social Media Company
37:49 Algorithm Changes and Platform Quality
38:42 Community Notes and Fact-Checking
39:30 Reddit’s Moderation System
42:07 Generative AI and Fact-Checking
43:16 AI Debunking Conspiracy Theories
45:26 Effectiveness of AI in Changing Beliefs
51:32 Potential Misuse of AI
55:13 Final Thoughts and Reflections

--

Interested in collaborating with Nuance? If you’d like to become one of our special projects, email us at hello@nuancebehavior.com or book a call directly on our website: nuancebehavior.com.

Support the podcast by joining Habit Weekly Pro 🚀. Members get access to extensive content databases, calls with field leaders, exclusive offers and discounts, and so much more.

Every Monday our Habit Weekly newsletter shares the best articles, videos, podcasts, and exclusive premium content from the world of behavioral science and business.

Get in touch via podcast@habitweekly.com

The song used is Murgatroyd by David Pizarro


59 episodes
