Half precision

PyTorch Developer Podcast

18:00
Content provided by PyTorch, Edward Yang, and Team PyTorch. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by PyTorch, Edward Yang, and Team PyTorch or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://ro.player.fm/legal.

In this episode I talk about the reduced-precision floating point formats float16 (aka half precision) and bfloat16. I'll discuss what floating point numbers are, how these two formats differ, and some of the practical considerations that arise when you are working with numeric code in PyTorch that also needs to work in reduced precision. Did you know that we do all CUDA computations in float32, even if the source tensors are stored as float16? Now you know!
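
To make the difference concrete, here is a minimal sketch (not from the episode) that uses torch.finfo to compare the range and precision of float16, bfloat16, and float32, and then runs a half-precision matmul, whose CUDA kernel typically accumulates in float32 internally even though the tensors are stored as float16:

import torch

# Compare the numeric properties of the two reduced-precision formats.
# bfloat16 keeps float32's 8-bit exponent (similar range, max ~3e38) but has
# only 8 bits of effective mantissa precision, while float16 has a 5-bit
# exponent (max ~65504) and 11 bits of precision.
for dtype in (torch.float16, torch.bfloat16, torch.float32):
    info = torch.finfo(dtype)
    print(f"{str(dtype):>15}  max={info.max:.3e}  eps={info.eps:.3e}")

# Half-precision storage, higher-precision arithmetic: on CUDA, a float16
# matmul typically accumulates partial products in float32 internally, even
# though the inputs and the output tensor are float16.
if torch.cuda.is_available():
    x = torch.randn(128, 128, dtype=torch.float16, device="cuda")
    y = torch.randn(128, 128, dtype=torch.float16, device="cuda")
    z = x @ y
    print(z.dtype)  # torch.float16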

Further reading.
