Why can humans learn a language with so few words, while models need so many?
Ted Gibson is pretty impressed with how well #LLMs can produce English, but he doesn’t think they will ever be able to produce the languages of the indigenous tribes he worked with in the Amazon. The reason? Data. Language models have been trained on trillions of words to achieve today's English results, but there simply aren't enough words being written or spoken to build good models in many other languages. This challenge leads Ted and Gabriel to explore the differences between how humans and language models learn. They ask:
Why can humans learn a language with so few words, while models need so many?
Ted Gibson runs a language lab at MIT and works on all aspects of human language. He has worked with two indigenous populations in the Amazon and holds a PhD in Computational Linguistics. Ted and Gabriel’s conversation touches on some of the most prominent theories of linguistics and dives deep into human #language cognition!
They discuss:
- Whether LLMs have proved Chomsky wrong
- Low resource and high resource languages
- The lack of consensus on how language operates
- How machine learning differs from human learning
- How language is used for connection, not just passing information
- The appeal of “automatic” #translation products
- Language cognition across cultures
- The challenge of universal language rules
And more!
Click play to join our Merging Minds host Gabriel Fairman and his guest, Ted Gibson, for a deep conversation about language, the human mind, and how it differs from the machine! You can learn more about Ted Gibson here: https://tedlab.mit.edu/
P.S. Don't forget to subscribe to the Merging Minds Podcast, powered by Bureau Works, for more thought-provoking episodes.