Meeting Description: In 1986, the mathematician and philosopher Gian-Carlo Rota wrote, “I wonder whether or when artificial intelligence will ever crash the barrier of meaning.” Here, the phrase “barrier of meaning” refers to a belief about humans versus machines: humans are able to “actually understand” the situations they encounter, whereas AI systems (at least current ones) do not possess such understanding. The internal representations learned by (or programmed into) AI systems do not capture the rich “meanings” that humans bring to bear in perception, language, and reasoning.
This workshop brings AI scientists together with psychologists, biologists, social scientists, information theorists, and others—researchers who grapple with how complex systems extract meaning from the information they encounter. Together we will explore questions about the function and mechanisms of "understanding"—or "extracting meaning"—in complex systems across many disciplines, and focus specifically on the relevance of human-like understanding for artificial intelligence systems.
This meeting is supported in part by the Artificial Intelligence Journal and by the National Science Foundation under Grant No. IIS-1832717.