Circuit inference biased in strongly recurrent networks and a possible solution

Abhranil’s new paper is out! Congrats Abhranil.

We show that connectivity inference from activity in strongly recurrent networks will be systematically biased, regardless of data volume and even given access to every neuron in the network. But measuring activity far out of equilibrium, after a simple low-dimensional suppressive input, could ameliorate the bias.

Memories from patterns: a review

“Memory from patterns: Attractor and integrator networks in the brain”, with Mikail, is submitted. Comments and suggestions are welcome!

The theory of how complex patterns emerge from simple constituents and interactions is one of the big ideas in biology, explaining animal coat patterns and morphogenesis.

The same principles can produce dynamical states for computation in the brain, in the form of attractor networks. We review how attractor networks generate states for robust representation, integration, and memory.
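
As a toy illustration of such a dynamical state (a minimal sketch with made-up parameters, not a model from the review), the ring attractor below holds a bump of activity at a cued location long after the cue is removed, a caricature of working memory for an angular variable:

```python
import numpy as np

# Minimal ring attractor (made-up parameters): local excitation plus
# global inhibition on a ring of rate neurons sustains a bump of
# activity, a dynamical state usable as a memory.
N = 200
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
d = np.abs(theta[:, None] - theta[None, :])
d = np.minimum(d, 2 * np.pi - d)                 # distance on the ring
kernel = np.exp(-d**2 / (2 * 0.3**2))
kernel /= kernel.sum(axis=1, keepdims=True)      # local excitation
W = 4.0 * kernel - 10.0 / N                      # minus global inhibition

f = lambda u: np.clip(u, 0.0, 1.0)               # saturating rate function
r = np.zeros(N)
dt = 0.1
for t in range(3000):
    cue = 1.5 * np.exp(-d[60]**2 / (2 * 0.2**2)) if t < 200 else 0.0
    r += dt * (-r + f(W @ r + cue))              # rate dynamics

# The bump persists at the cued location long after the cue is gone:
active = np.flatnonzero(r > 0.5)
print("bump persists around neuron", int(active.mean()), "(cue at 60)")
```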

Our review covers the conceptual ideas, the theory, and the potential utility of continuous and discrete attractor networks, then focuses on the empirical evidence that the brain computes using these structures. Finally, we discuss modern developments in combining the concepts of modularity and attractors, and list future challenges.

We hope the review provides a vista of a field of systems neuroscience driven by theoretical ideas, where theory and experiment have come together fruitfully and harmoniously.

What are the main sources of homing error in young and aging humans?

Matthias and Ingmar’s paper is out in Nature Communications!

Homing, or determining the straight path back to “home” after a winding outbound journey, is a critical but error-prone computation.

What are the main causes of human homing error, and how do they change with age?

We put humans into immersive VR, measured homing errors along winding paths, and modeled the time-resolved process of error accumulation with a Langevin-type diffusion equation.

We found that forgetful integration, biases in velocity estimation or integration, and reporting or readout errors do not limit homing ability; rather, the bottleneck is an accumulation of unbiased random error.

The random error accumulates with movement but not time, suggesting it is related to velocity sensing rather than integration.
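
Schematically, the flavor of model involved (our toy sketch with made-up numbers, not the paper’s fitted model) is an accumulator whose noise is injected per unit of movement rather than per unit of time, so error spread depends on distance traveled, not duration:

```python
import numpy as np

# Toy Langevin-style accumulator: noise enters per unit of MOVEMENT
# (as in velocity sensing), not per unit of time (as in leaky or
# time-driven integration). Parameters are illustrative.
rng = np.random.default_rng(0)

def homing_error_spread(speed, duration, n_trials=5000, dt=0.01):
    n_steps = int(duration / dt)
    # Per-step noise variance scales with distance moved (speed * dt).
    noise = rng.normal(size=(n_trials, n_steps)) * np.sqrt(speed * dt)
    return np.std(noise.sum(axis=1))  # spread of accumulated homing error

print(homing_error_spread(speed=1.0, duration=2.0))  # 2 m in 2 s  -> ~1.41
print(homing_error_spread(speed=0.5, duration=4.0))  # 2 m in 4 s  -> ~1.41
print(homing_error_spread(speed=2.0, duration=2.0))  # 4 m in 2 s  -> ~2.00
```

Covering the same distance over different durations gives the same error spread; covering more distance in the same duration gives more.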

Aging humans do worse; their diminished performance stems not from new sources of error but from an increase in the unbiased random error that already limits young subjects.

Representing high-dimensional cognitive variables with grid cells

Mirko & Marcus’ paper is out in PLoS Computational Biology!

If grid cells encode non-spatial cognitive variables, they should be able to represent spaces of dimension greater than two.

Can grid cells construct unambiguous representations of higher-dimensional inputs without rewiring their recurrent circuitry to form higher-dimensional grid responses, a cell-inefficient and inflexible mechanism?

We show how they could do so by combining low-dimensional random projections with “classical” two-dimensional hexagonal grid cell responses.

Without reconfiguration of the recurrent circuit, the same network can flexibly encode multiple variables of different dimensions, automatically trading off dimension against coding range while keeping the range per dimension exponentially large.

This model achieves high efficiency and flexibility by combining two powerful concepts, modularity and mixed selectivity, in what we call “mixed modular coding”.
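
A minimal sketch of the encoding idea (our toy version: square-lattice phases stand in for the hexagonal ones, and all sizes are illustrative): each module applies its own fixed random projection of the high-dimensional variable onto a 2D plane and retains only the resulting 2D phase, and together the phases across modules pin down the variable:

```python
import numpy as np

# Toy mixed modular code: each "module" projects a d-dimensional
# variable to 2D with a fixed random matrix and keeps only the phase
# (square lattice stand-in for the hexagonal grid lattice).
rng = np.random.default_rng(1)
d, n_modules = 4, 6
projections = [rng.normal(size=(2, d)) for _ in range(n_modules)]

def encode(x):
    return np.concatenate([(A @ x) % 1.0 for A in projections])

# Distinct 4-dimensional inputs across a wide range map to
# well-separated codes, with no rewiring of the 2D modules:
xs = rng.uniform(0, 50, size=(500, d))
codes = np.array([encode(x) for x in xs])
diff = np.abs(codes[:, None, :] - codes[None, :, :])
dist = np.minimum(diff, 1 - diff).sum(axis=-1)   # circular L1 distance
np.fill_diagonal(dist, np.inf)
print("smallest pairwise code distance:", dist.min())  # comfortably > 0
```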

The causal inference trap and digging out with far-out-of-equilibrium sampling

Abhranil’s work on the intrinsic difficulty of inferring connectivity from activity in strongly recurrent networks (read: highly correlated variables) has just been accepted for publication (Nat. Neurosci., March 2020; bioRxiv, Jan. 2019)!

We show that attempting causal inference in systems with strongly correlated variables will typically result in an overestimation of connectivity, drawing causal connections where there are none.

Even in fully observed systems, this failure to explain away correlations occurs when there is a mismatch between the dynamical model generating the data and the statistical model doing the inference, an inevitable situation in the real world.

Thus, causal inference from activity is especially problematic in systems with strong correlations.

In short, not only are correlations not causation, correlations hurt causation.

The good news: we find that sampling data shortly after kicking the system far out of equilibrium with a low-dimensional suppressive drive could mitigate the inference problem, without the need to induce and track detailed high-dimensional (holographic) perturbations.
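
A toy illustration of the mismatch-driven bias (our sketch, not the paper’s analysis): data from a sparse, strongly recurrent linear network, observed at a coarser time step than the true dynamics, makes a one-step linear fit converge to W @ W rather than W, so two-hop paths masquerade as direct connections:

```python
import numpy as np

# Toy mismatch-driven bias (illustrative, not the paper's analysis).
# True dynamics: sparse linear recurrence with spectral radius 0.95.
# We observe every OTHER time step; a one-step linear fit to the
# subsampled data then estimates W @ W, whose indirect two-hop paths
# show up as spurious direct connections.
rng = np.random.default_rng(2)
N, T = 50, 100_000
W = (rng.random((N, N)) < 0.2) * rng.normal(size=(N, N))  # ~20% connectivity
W *= 0.95 / np.max(np.abs(np.linalg.eigvals(W)))          # strongly recurrent

x = np.zeros((T, N))
for t in range(T - 1):
    x[t + 1] = W @ x[t] + rng.normal(size=N)

y = x[::2]                                                # subsampled data
W_hat = np.linalg.lstsq(y[:-1], y[1:], rcond=None)[0].T   # one-step fit

print("true weight scale (real connections):  ",
      np.median(np.abs(W[W != 0])))
print("inferred weights on ABSENT connections:",
      np.median(np.abs(W_hat[W == 0])))   # spurious, same order of magnitude
```

The sketch shows only the bias; the proposed mitigation, sampling the transient after a strong low-dimensional suppressive kick, is not implemented here.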

Place cells as feedforward readouts: huge capacity but little discretion

Manyi’s new paper on the capacity and combinatorics of place cell representations is on bioRxiv.

Beautiful theoretical work from Samsonovich, McNaughton, Battista, Monasson, and others shows that recurrent models of place field generation are severely capacity-limited. What if place cells are instead largely feedforward-driven, computing coincidences between grid cell and landmark inputs?

We show through analytical calculation and simulation that feedforward place cells can have a huge capacity, but at the same time little flexibility or discretion in where to place their multiple fields, suggesting highly constrained firing geometries.
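
A minimal sketch of the feedforward picture (our sketch, with made-up periods and phases): a place cell thresholds the summed input from several grid modules, so fields appear wherever the module bumps happen to coincide:

```python
import numpy as np

# Feedforward place cell as a coincidence detector (toy parameters):
# sum grid-like inputs from four modules on a 1D track, then apply a
# high threshold so the cell fires only where bumps (nearly) coincide.
rng = np.random.default_rng(3)
x = np.linspace(0, 20.0, 4000)               # 20 m linear track
periods = [0.4, 0.56, 0.78, 1.1]             # grid module periods (m)
phases = rng.uniform(0, 1, size=4)           # this cell's input phases

drive = sum(np.cos(2 * np.pi * (x / p - ph))
            for p, ph in zip(periods, phases))
fields = drive > 2.8                         # threshold near the max of 4

# The cell expresses multiple fields, but their locations are dictated
# entirely by the grid inputs rather than freely chosen:
starts = np.flatnonzero(np.diff(fields.astype(int)) == 1)
print(len(starts), "fields, at x ≈", np.round(x[starts], 2))
```

The multiple fields come for free from coincidences, but their locations are fixed by the grid phases: the “little discretion” of the title.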