Abstract. The ability of relatively simple neural network frameworks to outperform decades of expert feature-engineered systems is a phenomenon now exhibited across a rapidly growing set of machine learning tasks, including speech recognition, image classification, facial recognition, game (Chess/Go) playing, and natural language processing. In this talk, we present a transformative deep learning approach to image reconstruction, the process of producing an image from raw sensor data (for MRI, PET, CT, radio astronomy, etc.). The traditional approach involves multiple expert-designed stages in a signal processing chain whose composition depends on the details of each sensing strategy. We recast the image reconstruction problem as a unified, data-driven, supervised learning task in which a mapping between the sensor and image domains emerges automatically from an appropriate corpus of training data, yielding a system that generalizes remarkably well across arbitrary image acquisition schemes and is extremely robust to noise. Beyond demonstrating superhuman results from AI systems such as these, it is increasingly important to be able to explain or interpret the activity of the underlying neural networks. We will survey recent explainable AI advances and highlight limitations that may be uniquely addressable by the field of complexity science.
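As a toy illustration of the recasting described above (not the system presented in the talk), a sensor-to-image mapping can be fit directly from paired (sensor data, image) examples. Everything below is an illustrative assumption: a random linear operator stands in for the scanner encoding, a low-dimensional family stands in for the image corpus, and a least-squares fit stands in for the neural network. The learned mapping is then compared against the classical model-based pseudoinverse reconstruction.

```python
import numpy as np

rng = np.random.default_rng(0)
# All sizes are illustrative: 64-pixel "images", 48 sensor measurements.
n_pix, n_meas, n_train, n_test = 64, 48, 2000, 200

# Hypothetical linear sensing operator standing in for an MRI/CT encoding
# (an assumption; real scanners use Fourier/Radon-type encodings).
A = rng.standard_normal((n_meas, n_pix)) / np.sqrt(n_meas)

# Training corpus: "images" drawn from a low-dimensional structured family,
# a stand-in for the statistical structure of real images.
basis = rng.standard_normal((16, n_pix)) / 4.0
X_train = rng.standard_normal((n_train, 16)) @ basis
X_test = rng.standard_normal((n_test, 16)) @ basis

# Raw sensor data: noisy measurements of each image.
noise = 0.01
Y_train = X_train @ A.T + noise * rng.standard_normal((n_train, n_meas))
Y_test = X_test @ A.T + noise * rng.standard_normal((n_test, n_meas))

# Supervised learning step: fit the sensor-to-image mapping from data alone,
# with no knowledge of A (here a linear least-squares fit, not a deep network).
W_T, *_ = np.linalg.lstsq(Y_train, X_train, rcond=None)

# Reconstruct held-out test data with the learned mapping and with the
# classical model-based baseline (pseudoinverse of the sensing operator).
X_hat_learned = Y_test @ W_T
X_hat_pinv = Y_test @ np.linalg.pinv(A).T
rel_err_learned = np.linalg.norm(X_hat_learned - X_test) / np.linalg.norm(X_test)
rel_err_pinv = np.linalg.norm(X_hat_pinv - X_test) / np.linalg.norm(X_test)
```

Because the learned mapping absorbs the statistical structure of the training corpus, it recovers the underdetermined images far more accurately than the pseudoinverse, which knows only the sensing operator; swapping in a different operator `A` requires no redesign, only retraining.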