LLM Inference Speed (Tech Deep Dive)

39:36

Content provided by Daniel Reid Cahn. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Daniel Reid Cahn or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://ro.player.fm/legal.

In this tech talk, we dive into the technical specifics of LLM inference.

The big questions are: Why are LLMs slow? How can they be made faster? And how might slow inference affect UX in the next generation of AI-powered software?
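
A rough illustration of the first question (our sketch, not material from the episode): a decoder-only LLM generates one token per forward pass, and token t+1 depends on token t, so the passes cannot run in parallel. The Python below uses a dummy stub in place of a real transformer:

    from typing import List

    def forward(weights: List[float], context: List[int]) -> int:
        """Stand-in for a transformer forward pass. A real pass streams
        every model weight through memory, so batch-1 decoding tends to be
        bound by memory bandwidth rather than compute."""
        return (sum(context) + len(weights)) % 50_000  # dummy next-token id

    def generate(weights: List[float], prompt: List[int], n_new: int) -> List[int]:
        tokens = list(prompt)
        for _ in range(n_new):  # n_new strictly sequential passes
            tokens.append(forward(weights, tokens))
        return tokens

    print(generate([0.1] * 8, prompt=[1, 2, 3], n_new=5))

Production stacks soften the per-step cost with a KV cache, so each step processes only the newest token, but the sequential dependency itself is why generation latency grows linearly with output length.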

We jump into:

  • Is fast model inference the real moat for LLM companies?
  • What are the implications of slow model inference for the future of decentralized and edge model inference?
  • As demand rises, what will the latency/throughput tradeoff look like? (See the back-of-envelope sketch after this list.)
  • What innovations on the horizon might massively speed up model inference?
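
To make the latency/throughput question concrete, here is a back-of-envelope sketch. The numbers are our illustrative assumptions, not figures from the episode: roughly 14 GB of fp16 weights for a 7B-parameter model and ~1 TB/s of accelerator memory bandwidth.

    weights_gb = 14        # assumed: 7B parameters x 2 bytes (fp16)
    bandwidth_gb_s = 1000  # assumed: ~1 TB/s of accelerator memory bandwidth

    # At batch size 1, each decode step reads all the weights once, so
    # memory bandwidth caps the per-request token rate.
    step_s = weights_gb / bandwidth_gb_s
    print(f"batch 1: ~{1 / step_s:.0f} tokens/s per request")

    # Batching amortizes the same weight reads across concurrent requests:
    # aggregate throughput grows with batch size while each request keeps
    # roughly the same token rate, until compute becomes the new bottleneck.
    for batch in (1, 8, 32):
        print(f"batch {batch:>2}: ~{batch / step_s:.0f} tokens/s aggregate")

The catch is queueing: a server that waits to fill larger batches buys aggregate throughput at the cost of time-to-first-token for each request, which is the tension behind the question above.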