#02 A clinical introduction to Large Language Models (LLMs), AI chatbots, and Med-PaLM

Duration: 1:41:03
 

In this episode, we introduce large language models in healthcare, their potential and pitfalls. We put AI chatbots like ChatGPT to the test, discuss our thoughts on Google's Med-PaLM, and dabble in a bit of philosophy of artificial general intelligence.

Like what you're hearing? Support us by subscribing and reaching out to us. We want to encourage open discussion between clinicians and developers.

Dev&Doc is a podcast where doctors and developers deep dive into the potential of AI in healthcare.
👨🏻‍⚕️Doc - Dr. Joshua Au Yeung
🤖Dev - Zeljko Kraljevic
LinkedIn Newsletter
YouTube
Spotify
Apple
Substack
For enquiries - 📧 [email protected]

Timestamps:
00:00 Start
00:16 Intro
02:04 ChatGPT, a giant leap for mankind?
04:02 Spending two weeks with ChatGPT as a doctor
07:36 History of Large Language Models (LLMs)
10:15 A top-down approach to what an LLM is
17:31 Medical language is a language in itself
18:42 A lot of data that is just wrong
21:05 Self-supervised training of LLMs
22:05 Instruction-based fine-tuning of an LLM
23:52 Doc summarizing LLM training
25:48 The clinical shortcomings of instruction-based tuning
27:33 Reinforcement learning from human (clinician) feedback
32:22 Doc summarizing LLM, RLHF - A strict vs a progressive parent
34:10 There are still many problems with LLMs, aligning with clinical training data
36:26 Training an LLM on discharge summaries is a bad idea
39:18 Garbage in garbage out - data
40:13 Context windows
40:43 Data cleaning clinical notes
44:31 Bias in scientific-domain LLMs: PubMedGPT, Galactica
46:31 Data drift in medicine and continual learning
50:01 Med-PaLM - instruction tuning for the medical domain
50:23 Model benchmarks do not reflect the real world
59:11 LLM emulating human language, but not the brain. Only one piece of the mind
1:01:10 LLMs on headaches and general knowledge
1:03:25 Where does an LLM fit into the clinical workflow
1:05:50 Are regulations working against safety?
1:08:26 Cooling down LLMs to pass regulations
1:09:50 Why call it a hallucination? It's a false positive
1:13:57 Examples of bias of ChatGPT - A bad Santa Claus
1:17:00 Do LLMs encode true "understanding"? Can language lead to AGI?
1:20:05 Pregnancy - an acid test for Large Language Models
1:21:55 Training an LLM for the NHS (NHS-LLM)
1:25:25 Tell the model to "think deeply"
1:27:55 Asking ChatGPT to draw a picture of a human O.O
1:31:00 Language is not enough to achieve AGI
1:32:10 What can clinicians do about LLMs? Assisting vs Autonomous
1:39:07 What's next - Forecasting diagnoses with AI

🎞️ Editor - Dragan Kraljević
🎨 Brand design and art direction - Ana Grigorovici
