#147 Fast Approximate Inference without Convergence Worries, with Martin Ingram
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!
- Intro to Bayes Course (first 2 lessons free)
- Advanced Regression Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)
Takeaways:
- DADVI (Deterministic ADVI) is a new approach to variational inference that aims to improve both speed and accuracy; the sketch after this list shows the core idea.
- DADVI allows for faster Bayesian inference without sacrificing model flexibility.
- Linear response corrections can recover trustworthy posterior covariances from the mean-field solution, whose naive covariance estimates are typically too small.
- DADVI performs well in mixed models and hierarchical structures.
- Normalizing flows present an interesting avenue for enhancing variational inference.
- DADVI can handle large datasets effectively, improving predictive performance.
- Future enhancements for DADVI may include GPU support and linear response integration.
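To make the takeaways above concrete, here is a minimal sketch of the core DADVI idea: fix the Monte Carlo base draws once, so the usual ADVI objective becomes a deterministic function that a standard optimizer can drive to convergence. Everything model-specific here (the toy logistic regression, M = 30 draws, plain NumPy/SciPy) is an illustrative assumption, not the pymc-extras implementation:

```python
# Minimal sketch of the DADVI idea on a toy logistic regression.
# This is NOT the pymc-extras implementation, just the trick the
# episode describes: fix the Monte Carlo draws once, so the ADVI
# objective becomes deterministic, then let a standard optimizer
# run to convergence.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy data: logistic regression with D = 2 coefficients.
D, N = 2, 100
X = rng.normal(size=(N, D))
true_beta = np.array([1.0, -2.0])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ true_beta)))

def log_joint(beta):
    """log p(y, beta): Bernoulli likelihood plus standard normal prior."""
    logits = X @ beta
    log_lik = np.sum(y * logits - np.logaddexp(0.0, logits))
    log_prior = -0.5 * np.sum(beta ** 2)
    return log_lik + log_prior

# DADVI's key move: draw the base samples ONCE and keep them fixed.
M = 30
Z = rng.normal(size=(M, D))

def neg_elbo(params):
    """Negative ELBO of a mean-field Gaussian q, estimated with the
    fixed draws Z, hence a deterministic function of (mu, omega)."""
    mu, omega = params[:D], params[D:]   # omega = log standard deviations
    thetas = mu + np.exp(omega) * Z      # reparameterized draws from q
    expected_log_joint = np.mean([log_joint(t) for t in thetas])
    entropy = np.sum(omega)              # Gaussian entropy, up to a constant
    return -(expected_log_joint + entropy)

# A deterministic objective means no step-size tuning and a trustworthy
# stopping rule: L-BFGS decides convergence, not an ELBO-noise heuristic.
result = minimize(neg_elbo, np.zeros(2 * D), method="L-BFGS-B")
mu_hat, sd_hat = result.x[:D], np.exp(result.x[D:])
print("converged:", result.success)
print("variational means:", mu_hat)
print("variational sds:  ", sd_hat)
```

As discussed in the episode, the price of fixing the draws is a small bias from the particular draws chosen; the argument for DADVI is that this bias tends to be minor compared with the reliability gained by being able to trust the optimizer's convergence check.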
Chapters:
13:17 Understanding DADVI: A New Approach
21:54 Mean Field Variational Inference Explained
26:38 Linear Response and Covariance Estimation
31:21 Deterministic vs Stochastic Optimization in DADVI
35:00 Understanding DADVI and Its Optimization Landscape
37:59 Theoretical Insights and Practical Applications of DADVI
42:12 Comparative Performance of DADVI in Real Applications
45:03 Challenges and Effectiveness of DADVI in Various Models
48:51 Exploring Future Directions for Variational Inference
53:04 Final Thoughts and Advice for Practitioners
Thank you to my Patrons for making this episode possible!
Yusuke Saito, Avi Bryant, Giuliano Cruz, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Aubrey Clayton, Omri Har Shemesh, Scott Anthony Robson, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Andreas Kröpelin, Raphaël R, Nicolas Rode, Gabriel Stechschulte, Arkady, Kurt TeKolste, Marcus Nölke, Maggi Mackintosh, Grant Pezzolesi, Joshua Meehl, Javier Sabio, Kristian Higgins, Matt Rosinski, Luis Fonseca, Dante Gates, Matt Niccolls, Maksim Kuznecov, Michael Thomas, Luke Gorrie, Cory Kiser, Julio, Edvin Saveljev, Frederick Ayala, Jeffrey Powell, Gal Kampel, Adan Romero, Will Geary, Blake Walters, Jonathan Morgan, Francesco Madrisotti, Ivy Huang, Gary Clarke, Robert Flannery, Rasmus Hindström, Stefan, Corey Abshire, Mike Loncaric, David McCormick, Ronald Legere, Sergio Dolia, Michael Cao, Yiğit Aşık, Suyog Chandramouli and Guillaume Berthon.
Links from the show:
- Martin's website: https://martiningram.github.io/
- Martin on LinkedIn: https://www.linkedin.com/in/martin-ingram-48302782/
- Martin on GitHub: https://github.com/martiningram
- Martin on Google Scholar: https://scholar.google.com/citations?user=AZ-A7AEAAAAJ&hl=en
- Fast approximate inference without convergence worries in PyMC: https://martiningram.github.io/deterministic-advi-in-pymc/
- DADVI linear regression example (a usage sketch follows this list): https://github.com/pymc-devs/pymc-extras/blob/main/notebooks/deterministic_advi_example.ipynb
- LBS #142 Bayesian Trees & Deep Learning for Optimization & Big Data, with Gabriel Stechschulte: https://learnbayesstats.com/episode/142-bayesian-trees-deep-learning-optimization-big-data-gabriel-stechschulte
- Alex Andorra & Chris Fonnesbeck – A Beginner's Guide to Variational Inference | PyData Virginia 2025: https://www.youtube.com/watch?v=XECLmgnS6Ng
- NUTS Adaptation with Normalizing Flows: https://pymc-devs.github.io/nutpie/nf-adapt.html
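For orientation, here is a hedged sketch of what fitting a PyMC model with DADVI can look like through pymc-extras. The `pmx.fit(method="dadvi")` entry point and the returned InferenceData are assumptions based on the example notebook linked above; defer to that notebook and the pymc-extras docs for the current API:

```python
# Hypothetical usage sketch: DADVI on a simple PyMC regression model.
# The fit(method="dadvi") call is an assumed API, taken from the
# pymc-extras example notebook linked above; check it before copying.
import numpy as np
import pymc as pm
import pymc_extras as pmx

rng = np.random.default_rng(42)
x = rng.normal(size=200)
y_obs = 2.0 * x + rng.normal(scale=0.5, size=200)

with pm.Model():
    beta = pm.Normal("beta", mu=0, sigma=1)
    sigma = pm.HalfNormal("sigma", sigma=1)
    pm.Normal("y", mu=beta * x, sigma=sigma, observed=y_obs)
    idata = pmx.fit(method="dadvi")  # assumed entry point, see notebook

print(idata.posterior["beta"].mean())
```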
Transcript
This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.