Abstract: In this talk I will describe an AI architecture for recognizing visual situations in images. Visual situations are concepts such as “a boxing match”, “a handshake”, “a crowd waiting for a bus”, or “a game of Ping-Pong”, whose instantiations in images are often linked more by their common spatial and semantic structure than by low-level visual similarity.
Given a query situation description, our architecture—called Situate—learns models capturing the visual features of the expected objects as well as the expected spatial relationships among them. Given a new image, Situate uses these models to attempt to instantiate (i.e., to locate in the image) each expected component of the situation via an active search procedure. Situate then uses the resulting instantiation to compute a score indicating the degree to which the new image is judged to contain an instance of the situation. Such scores can be used to rank the images in a collection as part of a retrieval system. I will demonstrate the promise of this approach by comparing Situate's performance with that of two baseline methods, as well as with a related semantic image interpretation and retrieval system based on “scene graphs”.
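To make the retrieval step concrete, the sketch below shows how situation scores could drive image ranking. It is a minimal illustration, not the actual Situate implementation: the `Detection` type, the toy `situation_score` (a product of per-component confidences standing in for Situate's learned joint appearance-and-configuration score), and all other names are assumptions introduced for this example.

```python
# Hypothetical sketch of a Situate-style retrieval loop. All names here
# (Detection, situation_score, rank_images) are illustrative assumptions,
# not the actual Situate API.

from dataclasses import dataclass
from typing import Dict, List, Tuple


@dataclass
class Detection:
    """One instantiated situation component found in an image."""
    label: str
    box: Tuple[int, int, int, int]  # (x, y, width, height)
    confidence: float               # how well it matches the learned model


def situation_score(detections: List[Detection]) -> float:
    """Toy score: product of per-component confidences, standing in for
    the learned joint score over appearance and spatial configuration."""
    score = 1.0
    for d in detections:
        score *= d.confidence
    return score


def rank_images(
    instantiations: Dict[str, List[Detection]]
) -> List[Tuple[str, float]]:
    """Rank images by how strongly each instantiates the query situation."""
    scored = [(name, situation_score(dets))
              for name, dets in instantiations.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)


# Example: two images with mock, pre-computed component instantiations
# for a "person walking a dog" style query.
instantiations = {
    "img_a.jpg": [Detection("person", (10, 10, 50, 120), 0.9),
                  Detection("dog", (70, 60, 40, 30), 0.8)],
    "img_b.jpg": [Detection("person", (5, 5, 60, 110), 0.4),
                  Detection("dog", (80, 50, 35, 25), 0.3)],
}
ranking = rank_images(instantiations)
print(ranking[0][0])  # the image judged most likely to contain the situation
```

In the real system the per-image instantiations would come from the active search procedure described above, and the score would reflect both visual features and spatial structure rather than a simple confidence product.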