Abstract: The accurate generalization of prior experience is perhaps humankind’s greatest cognitive feat. Despite receiving relatively sparse data, we are able to make consistent and valuable inferences in an ever-changing environment, in a manner that continues to prove difficult for machines. In my research, I attempt to account for such abilities at scale by developing computational models of cognition that predict large sets of psychological data derived from online crowdsourcing platforms. First, I will present a set of studies that investigate how people generalize latent object properties—specifically, their category—from their observed features. Using stimulus features derived from modern computer vision, we show that previous cognitive theories developed using simple artificial stimuli are inadequate to explain human categorization of more naturalistic images. However, when these two approaches are combined, they begin to offer good predictions of human behaviour, as well as techniques for inducing machine learners to acquire representations that are more ecologically valid, and therefore more robust to out-of-sample data and adversarial attacks. Second, I will discuss more recent work on the generalization of relations, and how this relates to analogical learning and inference. Developing a computational account of how humans make good relational inferences across related computational environments is key to understanding efficient learning over a lifetime, and offers a route to autonomous and explainable strategies for problem solving and data forecasting.