Representing high-dimensional cognitive variables with grid cells

Mirko & Marcus’ paper is out in PLoS Computational Biology!

If grid cells encode non-spatial cognitive variables, they should be able to represent spaces of dimension greater than two.

Can grid cells construct unambiguous representations of higher-dimensional inputs without rewiring their recurrent circuitry to form higher-dimensional grid responses (a cell-inefficient and inflexible mechanism)?

We show how they could do so by combining low-dimensional random projections with “classical” two-dimensional hexagonal grid cell responses.

Without reconfiguring the recurrent circuit, the same network can flexibly encode multiple variables of different dimensions, automatically trading off dimension against an exponentially large coding range so that the range per dimension is maximized.

This model achieves high efficiency and flexibility by combining two powerful concepts, modularity and mixed selectivity, in what we call “mixed modular coding”.
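To make the scheme concrete, here is a minimal sketch of the idea (the parameters, module count, and variable names below are ours, for illustration, not from the paper): each module applies its own fixed random projection of the d-dimensional variable down to 2D, then represents the projected point only by its phase within the module's hexagonal unit cell.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 5          # dimension of the encoded cognitive variable (assumed)
n_modules = 6  # number of grid modules (assumed)

# Hexagonal lattice basis vectors (rows), unit period.
lattice = np.array([[1.0, 0.0],
                    [0.5, np.sqrt(3) / 2]])

# Each module applies a fixed random projection from R^d to R^2 and has
# its own grid period (here, a geometric series across modules).
projections = [rng.standard_normal((2, d)) for _ in range(n_modules)]
periods = 1.4 ** np.arange(n_modules)

def encode(x):
    """Grid-phase representation of a d-dimensional variable x."""
    phases = []
    for P, period in zip(projections, periods):
        y = (P @ x) / period               # project to 2D and rescale
        c = np.linalg.solve(lattice.T, y)  # coordinates in the lattice basis
        phases.append(c % 1.0)             # keep only the phase in the unit cell
    return np.array(phases)                # shape (n_modules, 2)

x = rng.standard_normal(d)
print(encode(x))
```

Each module's phase is ambiguous on its own, but because the random projections and periods differ across modules, the joint phase pattern can disambiguate the input over a large range, just as classical 2D grid codes do.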

The causal inference trap and digging out with far-out-of-equilibrium sampling

Abhranil’s work on the intrinsic difficulty of inferring connectivity from activity in strongly recurrent networks (read: highly correlated variables) has just been accepted for publication (Nat. Neurosci., March 2020; bioRxiv, Jan 2019)!

We show that attempting causal inference in systems with strongly correlated variables will typically result in an overestimation of connectivity, drawing causal connections where there are none.

Even in fully observed systems, this failure to explain away correlations occurs whenever there is a mismatch between the dynamical model generating the data and the statistical model doing the inference, an inevitable situation in the real world.

Thus, causal inference from activity is especially problematic in systems with strong correlations.

In short, not only are correlations not causation, correlations hurt causation.
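Here is a toy demonstration of the trap (entirely ours, for illustration): simulate a sparsely connected, strongly recurrent nonlinear network, then infer connectivity with a mismatched linear regression on the fully observed equilibrium activity. Weight typically leaks onto connections that do not exist.

```python
import numpy as np

rng = np.random.default_rng(1)
n, T = 20, 5000

# Sparse ground-truth connectivity, scaled to be strongly recurrent.
W = rng.standard_normal((n, n)) * (rng.random((n, n)) < 0.1)
W *= 0.95 / np.max(np.abs(np.linalg.eigvals(W)))

# Fully observed activity from a (mildly nonlinear) dynamical model.
X = np.zeros((T, n))
for t in range(T - 1):
    X[t + 1] = np.tanh(W @ X[t]) + 0.1 * rng.standard_normal(n)

# Mismatched statistical model: linear one-step regression.
W_hat = np.linalg.lstsq(X[:-1], X[1:], rcond=None)[0].T

# Inferred weights on connections that are truly absent are not zero.
absent = W == 0
print("mean |inferred weight| on absent connections:",
      np.abs(W_hat[absent]).mean())
```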

The good news: we find that sampling data shortly after kicking the system far out of equilibrium with a low-dimensional suppressive drive could mitigate the inference problem, without the need to induce and track detailed high-dimensional (holographic) perturbations.
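Continuing the same toy example, here is a cartoon of such a protocol (the details below are our guess at an illustration, not the paper's procedure): repeatedly silence the network with a strong uniform (one-dimensional) suppressive drive, then regress only on the first few relaxation steps after each release, where activity is weakly correlated.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20

# Same sparse, strongly recurrent ground truth as in the sketch above.
W = rng.standard_normal((n, n)) * (rng.random((n, n)) < 0.1)
W *= 0.95 / np.max(np.abs(np.linalg.eigvals(W)))

# Protocol cartoon: a uniform suppressive drive repeatedly pushes the
# network near silence; we record only the first few relaxation steps
# after each release, far from the correlated equilibrium.
pairs = []
for _ in range(2000):
    x = 0.01 * rng.standard_normal(n)  # state just after suppression ends
    for _ in range(3):
        x_next = np.tanh(W @ x) + 0.1 * rng.standard_normal(n)
        pairs.append((x, x_next))
        x = x_next

X0, X1 = (np.array(a) for a in zip(*pairs))
W_kick = np.linalg.lstsq(X0, X1, rcond=None)[0].T

absent = W == 0
print("after-kick mean |inferred weight| on absent connections:",
      np.abs(W_kick[absent]).mean())
```

In this toy version, the post-kick transients are both less correlated across neurons and closer to the regime where the fitted model matches the dynamics, so the spurious weights shrink.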

Place cells as feedforward readouts: huge capacity but little discretion

Manyi’s new paper on the capacity and combinatorics of place cell representations is on bioRxiv.

Beautiful theoretical work from Samsonovich, McNaughton, Battista, Monasson and others shows that recurrent models of place field generation are severely capacity-limited. What if place cells are instead largely feedforward-driven, computing coincidences between grid cell and landmark inputs?

We show through analytical calculations and simulation that feedforward place cells can have a huge capacity, but at the same time little flexibility or discretion in where to place their multiple fields, suggesting highly constrained firing geometries.
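A caricature of the feedforward picture (all tuning curves and parameters here are ours, for illustration): a place cell sums rectified periodic grid inputs of several periods along a 1D track and fires only where many of them coincide, so its fields land at locations dictated by the geometry of the grid inputs rather than chosen freely.

```python
import numpy as np

rng = np.random.default_rng(2)

# A 1D track discretized into bins.
x = np.linspace(0.0, 20.0, 2000)

# Grid inputs: a few modules of rectified periodic tuning curves, each
# module containing cells at several spatial phases.
rates = []
for period in (1.0, 1.4, 2.0, 2.8):
    for phase in np.arange(0.0, 1.0, 0.1):
        rates.append(np.cos(2 * np.pi * (x / period - phase)))
rates = np.clip(np.array(rates), 0.0, None)

# A feedforward place cell: random positive weights onto the grid inputs,
# then a high threshold, so it fires only where many inputs coincide.
w = rng.random(rates.shape[0])
drive = w @ rates
fields = drive > np.percentile(drive, 99)

print("track positions above threshold:", np.round(x[fields], 2))
```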