AI under the hood (image: Domhnall Malone/Google DeepMind via Unsplash)

AI and the Barrier of Meaning 2, a workshop held at the Santa Fe Institute on April 24–26, 2023, brought together experts in AI, cognitive science, philosophy, anthropology, linguistics, neuroscience, and law. Videos of the workshop talks are now available on YouTube. Like the first AI and the Barrier of Meaning workshop, held in 2018, the event focused on questions of “understanding” and what it means to “extract meaning” in a humanlike way.

“Breakthroughs in Large Language Models (LLMs) like ChatGPT have a lot of people talking about whether or not AI has learned the meaning of human language,” said SFI Professor Melanie Mitchell, an AI researcher and one of the workshop’s organizers. “This workshop provided a rich set of new perspectives on LLMs as well as the many dimensions of ‘understanding’ in humans and machines.” 

Presentations at the workshop ranged from analyses of existing AI architectures to philosophical accounts of understanding. They also covered practical questions: what could “understanding” mean in scenarios involving both AI and humans, and what might the legal consequences be? Discussions kept returning to what, if anything, is fundamentally missing from current LLMs. Should we build machines that understand the world the way we do, and if so, how?

“For humans, thought results in language production, which results in comprehension. Understanding goes beyond the language system,” said Anna Ivanova from MIT.

LLMs can create an illusion of understanding because they are fluent in human language, but fluency, the participants agreed, is not the same as human thinking or understanding. Humans build mental models of the world grounded in the experience of inhabiting a body.

“You have to account for the fact that the world is not entirely predictable,” said Yann LeCun from Meta. “Common sense is a collection of world models. Understanding is predicting using world models. Predicting language is easy because it is discrete. Making predictions under uncertainty is easy for text — that’s why LLMs work. For example, in video, predictions don’t work.”
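LeCun’s contrast between discrete and continuous prediction can be made concrete with a toy sketch. The code below is a hypothetical illustration, not material from the talk: a language model only has to place probabilities on a finite vocabulary, while a video model must predict thousands of continuous pixel values, where exact prediction is not well posed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Discrete case: a toy "language model" scores a finite vocabulary.
vocab = ["the", "cat", "sat", "on", "mat"]
logits = rng.normal(size=len(vocab))            # hypothetical model scores
probs = np.exp(logits) / np.exp(logits).sum()   # softmax: a full distribution over all possible outcomes
print(dict(zip(vocab, probs.round(3))))         # uncertainty over next tokens is captured exactly

# Continuous case: a toy "video model" must output an entire frame.
frame = rng.random((64, 64, 3))                                # next frame (ground truth)
prediction = frame + rng.normal(scale=0.05, size=frame.shape)  # even a near-perfect guess...
print("pixel MSE:", float(((prediction - frame) ** 2).mean())) # ...still misses on ~12,000 continuous values
# There is no finite set of outcomes to place probabilities on,
# which is one reading of why exact prediction "doesn't work" for video.
```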

David Chalmers, a philosopher at New York University, said, “Our own meaning of understanding may have been too ambiguous, too vague, too unarticulated.” He proposed “distinguishing the various strands in our existing notion of understanding” using a framework from Henk W. de Regt’s book Understanding Scientific Understanding.

How humans understand, and what understanding could mean for LLMs, are questions that may shape how AI systems develop and how our relationships with them evolve.

Workshops such as this one serve as milestones in an ongoing, and increasingly important, conversation. In the coming months, the organizers will release a report via arXiv. Through events like these, they hope to continue the dialogue, creating environments that foster collaboration and inform the broader public about what experts in AI and related fields are thinking.