Subbarao Kambhampati: Planning, Reasoning, and Interpretability in the Age of LLMs

1:59:03
Content provided by The Gradient. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by The Gradient or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://ro.player.fm/legal.

In episode 110 of The Gradient Podcast, Daniel Bashir speaks to Professor Subbarao Kambhampati.

Professor Kambhampati is a professor of computer science at Arizona State University. He studies fundamental problems in planning and decision making, motivated by the challenges of human-aware AI systems. He is a fellow of the Association for the Advancement of Artificial Intelligence, the American Association for the Advancement of Science, and the Association for Computing Machinery, and was an NSF Young Investigator. He served as president of the Association for the Advancement of Artificial Intelligence, a trustee of the International Joint Conference on Artificial Intelligence, and a founding board member of the Partnership on AI.

Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at editor@thegradient.pub

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter

Outline:

* (00:00) Intro

* (02:11) Professor Kambhampati’s background

* (06:07) Explanation in AI

* (18:08) What people want from explanations—vocabulary and symbolic explanations

* (21:23) The realization of new concepts in explanation—analogy and grounding

* (30:36) Thinking and language

* (31:48) Conscious and subconscious mental activity

* (36:58) Tacit and explicit knowledge

* (42:09) The development of planning as a research area

* (46:12) RL and planning

* (47:47) What makes a planning problem hard?

* (51:23) Scalability in planning

* (54:48) LLMs do not perform reasoning

* (56:51) How to show LLMs aren’t reasoning

* (59:38) External verifiers and backprompting LLMs

* (1:07:51) LLMs as cognitive orthotics, language and representations

* (1:16:45) Finding out what kinds of representations an AI system uses

* (1:31:08) “Compiling” system 2 knowledge into system 1 knowledge in LLMs

* (1:39:53) The Generative AI Paradox, reasoning and retrieval

* (1:43:48) AI as an ersatz natural science

* (1:44:03) Why AI is straying away from its engineering roots, and what constitutes engineering

* (1:58:33) Outro

Links:

* Professor Kambhampati’s Twitter and homepage

* Research and Writing — Planning and Human-Aware AI Systems

* A Validation-structure-based theory of plan modification and reuse (1990)

* Challenges of Human-Aware AI Systems (2020)

* Polanyi vs. Planning (2021)

* LLMs and Planning

* Can LLMs Really Reason and Plan? (2023)

* On the Planning Abilities of LLMs (2023)

* Other

* Changing the nature of AI research


Get full access to The Gradient at thegradientpub.substack.com/subscribe