Our group seeks to understand why the brain contains particular codes, how the architecture and dynamics of neural circuits shape such codes, and how coding states evolve to perform computations that unfold over time. We are specifically interested in questions of learning, memory, integration, inference, and cognitive representations in the brain. Our tools are numerical and theoretical, and our approach includes working closely with collaborators on specific experimental systems.
Coding: In principle, the brain could encode information about a variable in any of a myriad of ways. The choice of coding scheme sheds light on the brain's computational priorities in representing that variable. For instance, codes can differ in capacity, in ease of readout by downstream areas, or in noise tolerance. Understanding a neural code means learning not only what or how much is encoded, but also the tradeoffs of the coding scheme, to see “why” it was selected.
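As a toy illustration of such tradeoffs, consider two hypothetical schemes for encoding an integer with N binary neurons: a localized (one-hot) code and a compact combinatorial (binary) code. The sketch below is our own illustrative construction, not a model of any specific circuit; it contrasts the capacity of the two codes and the damage a single-neuron error can do under each:

```python
import numpy as np

N = 10  # number of binary neurons

def encode_onehot(x):
    """Localized code: one active neuron; capacity is N values."""
    v = np.zeros(N)
    v[x] = 1.0
    return v

def decode_onehot(v):
    return int(np.argmax(v))

def encode_binary(x):
    """Combinatorial code: capacity is 2**N values from the same neurons."""
    return np.array([(x >> i) & 1 for i in range(N)], dtype=float)

def decode_binary(v):
    return int(sum(int(b) << i for i, b in enumerate(v > 0.5)))

def error_after_flip(encode, decode, x, i):
    """Corrupt one neuron's activity and measure the resulting decoding error."""
    v = encode(x)
    v[i] = 1.0 - v[i]
    return abs(decode(v) - x)

# A single flipped neuron shifts the one-hot estimate by at most N - 1,
# but can shift the binary estimate by 2**(N - 1): capacity traded for robustness.
worst_onehot = max(error_after_flip(encode_onehot, decode_onehot, x, i)
                   for x in range(N) for i in range(N))
worst_binary = max(error_after_flip(encode_binary, decode_binary, x, i)
                   for x in range(2 ** N) for i in range(N))
```

The point is not the particular decoder but the tradeoff it exposes: packing more values into the same neurons makes each neuron's failure more costly.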
Plasticity, learning, and memory: What kinds of network connectivity support robust integration, memory, and representation of the world? How do such structures emerge through development and plasticity? What are effective mechanisms for unsupervised and supervised learning in the brain? We study these questions through theory and simulation of neural circuits.
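As one concrete example of the kind of unsupervised mechanism we have in mind, a local Hebbian rule with built-in synaptic normalization (Oja's rule) lets a single model neuron learn the principal direction of variation in its inputs. The sketch below uses synthetic two-dimensional inputs of our own choosing, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic inputs: most variance lies along a known direction.
true_dir = np.array([3.0, 1.0]) / np.sqrt(10.0)
amps = rng.normal(size=5000)
X = np.outer(amps, true_dir) + 0.3 * rng.normal(size=(5000, 2))

# Oja's rule: Hebbian growth (eta * y * x) with an implicit normalization
# term (-eta * y**2 * w) that keeps the weight vector bounded.
w = rng.normal(size=2)
eta = 0.01
for x in X:
    y = w @ x                      # postsynaptic activity
    w += eta * y * (x - y * w)     # local update: uses only x, y, w

# Alignment with the input's principal direction; should approach 1.0.
alignment = abs(w @ true_dir) / np.linalg.norm(w)
```

The appeal of such rules is that they are local, each synapse updates using only information available at that synapse, yet the network as a whole extracts global structure from its inputs.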
Error correction: Because of apparently stochastic dynamics in neurons and synapses, representations in the brain are necessarily noisy and variable. Avoiding the problems that arise from these processes, especially in memory systems where noise accumulates, requires aggressive error reduction and correction; yet our understanding of how the brain accomplishes this is at best primitive. We investigate representations and mechanisms for error control, bringing together coding and dynamical considerations.
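A classic illustration of dynamical error correction is a Hopfield-style attractor network: patterns stored with a Hebbian rule become fixed points, and the network dynamics pull noisy states back toward the nearest stored pattern. This is a minimal sketch with randomly chosen patterns, not a claim about any particular brain circuit:

```python
import numpy as np

rng = np.random.default_rng(2)
N, P = 100, 3  # neurons, stored patterns (well below capacity, ~0.14 * N)
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian storage: each pattern adds an outer product to the weight matrix.
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0)

def recall(s, sweeps=5):
    """Asynchronous updates descend an energy function toward an attractor."""
    s = s.copy()
    for _ in range(sweeps):
        for i in rng.permutation(N):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Corrupt 10% of one stored pattern, then let the dynamics correct it.
noisy = patterns[0].copy()
noisy[rng.choice(N, size=10, replace=False)] *= -1
recovered = recall(noisy)
overlap = (recovered @ patterns[0]) / N  # 1.0 means every error was corrected
```

Here error correction is not a separate module but a property of the dynamics themselves, which is one reason we study coding and dynamics together.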
Theoretically motivated analysis of neural data: We analyze neural data with a view toward discovering mechanism, specifically by testing the predictions of theoretical models. We close the theory-experiment loop through detailed comparisons of theory and data, and through collaborative design of experiments that effectively discriminate between models.
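The flavor of model comparison we mean can be sketched with synthetic data: simulate spike counts from a tuned Poisson neuron, then ask whether held-out likelihood favors a tuned model over an untuned one. Everything here, the conditions, rates, and train/test split, is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

stimuli = np.repeat(np.arange(5), 40)          # 5 stimulus conditions, 40 trials each
true_rates = np.array([2.0, 4.0, 8.0, 4.0, 2.0])
counts = rng.poisson(true_rates[stimuli])      # simulated spike counts

idx = rng.permutation(len(counts))
train, test = idx[:100], idx[100:]

def poisson_ll(lam, k):
    """Poisson log-likelihood, dropping the k!-term common to both models."""
    lam = np.maximum(lam, 1e-9)
    return float(np.sum(k * np.log(lam) - lam))

# Model A: a single untuned firing rate, fit on training trials.
lam_A = counts[train].mean()
ll_A = poisson_ll(np.full(len(test), lam_A), counts[test])

# Model B: one firing rate per stimulus condition.
lam_B = np.array([counts[train][stimuli[train] == s].mean() for s in range(5)])
ll_B = poisson_ll(lam_B[stimuli[test]], counts[test])

# Held-out log-likelihood rewards the tuned model only if the tuning is real.
```

Real analyses involve richer models and data, but the logic is the same: commit each theory to quantitative predictions, then let held-out data adjudicate.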