
Content provided by Dev and Doc. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Dev and Doc or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://ro.player.fm/legal.

#22 Explaining Explainable AI (for healthcare) with Dr Annabelle Painter (RSM digital health section Podcast)

58:40
 

Manage episode 434385253 series 3585389

Dev and Doc are joined by Dr Annabelle Painter: doctor, CMO, and host of the Royal Society of Medicine Digital Health Podcast. We take a deep dive into explainability and interpretability, with concrete healthcare examples.

Check out Dr Painter's podcast here; she has some amazing guests and great insights into AI in healthcare! - https://spotify.link/pzSgxmpD5yb

👋 Hey! If you are enjoying our conversations, reach out and share your thoughts and journey with us. Don't forget to subscribe while you're here :)

👨🏻‍⚕️ Doc - Dr. Joshua Au Yeung - https://www.linkedin.com/in/dr-joshua-auyeung/

🤖 Dev - Zeljko Kraljevic - https://twitter.com/zeljkokr

LinkedIn Newsletter

YouTube Channel

Spotify

Apple Podcasts

Substack

For enquiries - 📧 Devanddoc@gmail.com

🎞️ Editor - Dragan Kraljević - https://www.instagram.com/dragan_kraljevic/

🎨 Brand design and art direction - Ana Grigorovici - https://www.behance.net/anagrigorovici027d

Timestamps:

  • 00:00 - Start + highlights
  • 03:47 - Intro
  • 08:16 - Does all AI in healthcare need to be explainable?
  • 15:56 - History and explanation of Explainable/Interpretable AI
  • 20:43 - Gradient-based saliency and heat maps
  • 24:14 - LIME - Local Interpretable Model-agnostic Explanations
  • 30:09 - Nonsensical correlations - When explainability goes wrong
  • 33:57 - Modern explainability - Anthropic
  • 37:15 - Comparing LLMs with the human brain
  • 40:02 - Clinician-AI interaction
  • 47:11 - Where is this all going? Aligning models to ground truth and teaching them to say "I don't know"

