
Content provided by Carter Phipps. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Carter Phipps or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://de.player.fm/legal.

Avi Tuschman: Can Wikipedia Save Social Media?

1:26:45
 

Manage episode 295079054 series 2933485

Misinformation. Disinformation. Fake news. Conspiracy theories. These viruses of the information age proliferate with frightening speed on social media channels like Facebook, Twitter, and YouTube, sometimes with serious consequences. Over the past few years, as the scope of the problem has become unavoidable, there has been much debate over how to deal with it, and increasing pressure to do so. Should government regulate these platforms? Should the tech companies regulate themselves? Or is there another way? Avi Tuschman, a Silicon Valley entrepreneur and pioneer in the field of psychometric AI, believes there is. Last year, he published a paper outlining a bold and creative proposal for a third-party reviewing system based on a website everyone knows and loves: Wikipedia. Wikipedia, as he points out, is a remarkable success. It’s accurate to an extraordinary degree. Researchers all over the world rely on it. And its success is due to a unique formula: a distributed group of non-employee volunteers who write and edit the information on the site and, in conjunction with AI processes, make sure it conforms to the site’s high standards. In his paper, entitled Rosenbaum’s Magical Entity: How to Reduce Misinformation on Social Media, he suggests that we should use “the same open-source, software mechanisms and safeguards that have successfully evolved on Wikipedia to enable the collaborative adjudication of verifiability.”

It’s a proposal that potentially avoids many of the politically tricky consequences of getting government involved in regulating public platforms run by private companies. But how exactly would it work? Where does free speech come in? How much fact-checking do we want on our social media sites? And where do we draw the line between discourse that is merely unconventional and that which is outright conspiratorial? To unpack these questions and more, I invited Avi Tuschman to join me on Thinking Ahead for what turned out to be a thought-provoking conversation.


44 episodes

