
AI: Competitor or Collaborator with Lama Nachman

38:48
 
Content provided by SAS Podcast Admins, Kimberly Nevala, and Strategic Advisor - SAS. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by SAS Podcast Admins, Kimberly Nevala, and Strategic Advisor - SAS or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described at https://de.player.fm/legal.

Lama Nachman is an Intel Fellow and the director of Intel’s Human & AI Systems Research Lab. She also led Intel’s Responsible AI program. Lama’s team researches how AI can be applied to deliver contextually appropriate experiences that increase accessibility and amplify human potential.

In this inspirational discussion, Lama exposes the need for equity in AI, demonstrates the difficulty of empowering authentic human interaction, and explains why ‘Wizard of Oz’ approaches, as well as a willingness to go back to the drawing board, are critical.

Through the lens of her work, from early childhood education to manufacturing and assistive technologies, Lama deftly illustrates the ethical dilemmas that arise with any AI application - no matter how well-meaning. Kimberly and Lama discuss why perfectionism is the enemy of progress and why designing for uncertainty in AI is essential. Speaking to her quest to give people suffering from ALS back their voice, Lama stresses how designing for authenticity over expediency is critical to unlocking the human experience.

While pondering the many ethical conundrums that keep her up at night, Lama shows how an expansive, multi-disciplinary approach is critical to mitigating harm, and why cooperation between humans and AI maximizes the potential of both.

A full transcript of this episode can be found here.

Our final episode this season features Dr. Ansgar Koene. Ansgar is the Global AI Ethics and Regulatory Leader at EY and a Sr. Research Fellow who specializes in social media, data ethics and AI regulation. Subscribe now to Pondering AI so you don’t miss him.

