Jacob Andreas: Language, Grounding, and World Models

Duration: 1:52:43

Episode 140

I spoke with Professor Jacob Andreas about:

* Language and the world

* World models

* How he’s developed as a scientist

Enjoy!

Jacob is an associate professor at MIT in the Department of Electrical Engineering and Computer Science as well as the Computer Science and Artificial Intelligence Laboratory. His research aims to understand the computational foundations of language learning and to build intelligent systems that can learn from human guidance. Jacob earned his Ph.D. from UC Berkeley, his M.Phil. from Cambridge (where he studied as a Churchill Scholar), and his B.S. from Columbia. He has received a Sloan Fellowship, an NSF CAREER award, MIT's Junior Bose and Kolokotrones teaching awards, and paper awards at ACL, ICML, and NAACL.

Find me on Twitter for updates on new episodes, and reach me at editor@thegradient.pub for feedback, ideas, and guest suggestions.

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS

Follow The Gradient on Twitter

Outline:

* (00:00) Intro

* (00:40) Jacob’s relationship with grounding fundamentalism

* (05:21) Jacob’s reaction to LLMs

* (11:24) Grounding language — is there a philosophical problem?

* (15:54) Grounding and language modeling

* (24:00) Analogies between humans and LMs

* (30:46) Grounding language with points and paths in continuous spaces

* (32:00) Neo-Davidsonian formal semantics

* (36:27) Evolving assumptions about structure prediction

* (40:14) Segmentation and event structure

* (42:33) How much do word embeddings encode about syntax?

* (43:10) Jacob’s process for studying scientific questions

* (45:38) Experiments and hypotheses

* (53:01) Calibrating assumptions as a researcher

* (54:08) Flexibility in research

* (56:09) Measuring Compositionality in Representation Learning

* (56:50) Developing an independent research agenda and a lab culture

* (1:03:25) Language Models as Agent Models

* (1:04:30) Background

* (1:08:33) Toy experiments and interpretability research

* (1:13:30) Developing effective toy experiments

* (1:15:25) Language Models, World Models, and Human Model-Building

* (1:15:56) OthelloGPT’s bag of heuristics and multiple “world models”

* (1:21:32) What is a world model?

* (1:23:45) The Big Question — from meaning to world models

* (1:28:21) From “meaning” to precise questions about LMs

* (1:32:01) Mechanistic interpretability and reading tea leaves

* (1:35:38) Language and the world

* (1:38:07) Towards better language models

* (1:43:45) Model editing

* (1:45:50) On academia’s role in NLP research

* (1:49:13) On good science

* (1:52:36) Outro

Links:

* Jacob’s homepage and Twitter

* Language Models, World Models, and Human Model-Building

* Papers

* Semantic Parsing as Machine Translation (2013)

* Grounding language with points and paths in continuous spaces (2014)

* How much do word embeddings encode about syntax? (2014)

* Translating neuralese (2017)

* Analogs of linguistic structure in deep representations (2017)

* Learning with latent language (2018)

* Learning from Language (2018)

* Measuring Compositionality in Representation Learning (2019)

* Experience grounds language (2020)

* Language Models as Agent Models (2022)


Get full access to The Gradient at thegradientpub.substack.com/subscribe