Content provided by Annual Reviews. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Annual Reviews or its podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://de.player.fm/legal.

ATGthePodcast 220 - Evaluating AI Tools For Researchers: A Guide For Libraries

33:15
 

Manage episode 381321849 series 1327300

Audio from the 2022 Charleston Conference, from a session titled "Evaluating AI Tools For Researchers: A Guide For Libraries."

This session was presented by Michael Upshall, consultant and former publisher, and Igor Brbre, information specialist and research librarian at NHS Healthcare Improvement, Scotland.

In this session, Michael and Igor outline an assessment framework for evaluating new AI-based tools, from evaluating the underlying corpus to measuring accuracy and effectiveness, as well as identifying biases and inequalities.

The evaluation framework is exemplified in a case study that includes a measured trial of some of these tools, comparing human and machine-facilitated approaches to the research process in terms of time taken and quality of results, using data from a health library. Until now, much of the help provided by libraries in the form of fact sheets and how-to guides has been evaluated only subjectively; this case study aims to show a better way to evaluate these tools, using statistically valid samples plus feedback from users to identify how widely used and how successful they are, in the hope of being better equipped to evaluate new AI utilities for research and submission.

Video of the presentation available at: https://youtu.be/ifZ-xQKw-oY?si=CQ3Yflo-zhJV_VMO

Social Media:

https://www.linkedin.com/in/mupshall/

Twitter:

Keywords: #AI, #AITools, #AIServices, #research, #knowledge, #scholcomm, #collaboration, #engagement, #problemsolvers, #publishing, #libraries, #librarians, #information, #ChsConf, #LibrariesAndVendors, #LibrariesAndPublishers


252 episodes
