We are all embedded in multiple, ever-changing networks — the network of colliding particles in the universe, the interacting flows in Earth’s atmosphere, the highways and city streets we traverse, and the family, friends, coworkers, and neighbors who make up our social networks. But the most complex networks we engage with every day are the ones inside our own brains. When humans learn, we don’t just acquire disconnected bits of information; we acquire interconnected networks of relational knowledge. Our capacity for such learning naturally depends on the architecture of the knowledge network itself, and also on the architecture of the computational unit — the brain — that encodes and processes the information.
In this SFI Community Lecture, neuroscientist Danielle S. Bassett discusses emerging work assessing network constraints on the learnability of relational knowledge, and physical constraints on the development of interconnected patterns in neural systems. What do the correspondences between these domains tell us about the nature of modeling and computation in our brains, and mechanisms for knowledge acquisition? Can we, as networks, use network science to think about ourselves?
Danielle S. Bassett is a physicist and systems neuroscientist at the University of Pennsylvania. A 2014 MacArthur and Sloan fellow, she applies concepts from physics and mathematics to understand the brain’s network structure and function.