38.3 - Erik Jenner on Learned Look-Ahead

Lots of people in the AI safety space worry about models being able to make deliberate, multi-step plans. But can we already see this in existing neural nets? In this episode, I talk with Erik Jenner about his work looking at internal look-ahead within chess-playing neural networks.

Patreon: https://www.patreon.com/axrpodcast

Ko-fi: https://ko-fi.com/axrpodcast

The transcript: https://axrp.net/episode/2024/12/12/episode-38_3-erik-jenner-learned-look-ahead.html

FAR.AI: https://far.ai/

FAR.AI on X (aka Twitter): https://x.com/farairesearch

FAR.AI on YouTube: https://www.youtube.com/@FARAIResearch

The Alignment Workshop: https://www.alignment-workshop.com/

Topics we discuss, and timestamps:

00:57 - How chess neural nets look into the future

04:29 - The dataset and basic methodology

05:23 - Testing for branching futures?

07:57 - Which experiments demonstrate what

10:43 - How the ablation experiments work

12:38 - Effect sizes

15:23 - X-risk relevance

18:08 - Follow-up work

21:29 - How much planning does the network do?

Research we mention:

Evidence of Learned Look-Ahead in a Chess-Playing Neural Network: https://arxiv.org/abs/2406.00877

Understanding the learned look-ahead behavior of chess neural networks (a development of the follow-up research Erik mentioned): https://openreview.net/forum?id=Tl8EzmgsEp

Linear Latent World Models in Simple Transformers: A Case Study on Othello-GPT: https://arxiv.org/abs/2310.07582

Episode art by Hamish Doodles: hamishdoodles.com
