Abstract: While AI has made dramatic progress over the last decade in areas such as vision, natural language processing, and game-playing, current AI systems still lack the ability to form humanlike conceptual abstractions and analogies. It can be argued that this lack of humanlike concepts is the cause of AI systems' brittleness—their inability to reliably transfer knowledge to new situations—as well as their vulnerability to adversarial attacks.
Much AI research on conceptual abstraction and analogy has used visual-IQ-like tests or other idealized domains as arenas for developing and evaluating AI systems, and on several of these tasks AI systems have performed surprisingly well, in some cases outperforming humans. In this talk I will review some very recent (and some much older) work along these lines, and discuss the following questions: Do these domains actually require abilities that will transfer and scale to real-world tasks? And what are the systems that succeed on these idealized domains actually learning?