Blessings of dimensionality for Gaussian latent factor models
Abstract. Learning the structure of graphical models from data is a fundamental problem that suffers a heavy curse of dimensionality. Special structure is needed to make the problem tractable, but even standard approaches based on sparsity or latent trees require the number of samples to grow with the dimension, making them infeasible for many real-world problems. However, we present theoretical results suggesting that when overlaps among latent factors are restricted, the curse becomes a blessing: sample complexity can decrease as dimensionality grows. Building on these results, we devise a method that exhibits this blessing of dimensionality, recovering high-dimensional structure from few samples. We demonstrate the advantages of this approach on under-sampled data from brain fMRI and financial markets.
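
As a hedged illustration only (not the paper's method), the following sketch simulates data from a Gaussian latent factor model in which each observed variable loads on exactly one latent factor, i.e., overlaps among factors are fully restricted. The dimensions, loadings, and noise level are hypothetical choices; the point is that under this structure the correlation matrix is approximately block-structured by factor, even when the number of samples is far below the dimension.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: observed dimension p, k latent factors, n samples (n << p).
p, k, n = 50, 5, 20

# Restricted overlap: each observed variable loads on exactly one latent factor.
assignment = rng.integers(0, k, size=p)
W = np.zeros((p, k))
W[np.arange(p), assignment] = rng.uniform(0.5, 1.0, size=p)  # single nonzero loading per row

# Generate observations: latent Gaussian factors plus small independent noise.
Z = rng.standard_normal((n, k))
X = Z @ W.T + 0.1 * rng.standard_normal((n, p))

# Variables sharing a factor are strongly correlated; variables in different
# factor groups are nearly independent, so the correlation matrix is
# approximately block-diagonal once variables are grouped by factor.
C = np.corrcoef(X, rowvar=False)
```

Under this generative assumption, within-group correlations dominate cross-group ones, which is the kind of structure that few-sample recovery can exploit.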