I realize I didn’t announce Rylan’s work on neural circuits performing hierarchical latent inference when it appeared at NeurIPS last December! Here’s a brief summary:
Often we must decide between two alternatives whose relative payoff changes over time. The International Brain Laboratory trains animals in a setting where the payoff balance between the two options switches discretely, and the switches are uncued. To decide well, animals must infer the unknown balance of payoff probabilities by accumulating information over multiple trials; in addition, each trial requires within-trial integration of evidence. By training RNNs, Rylan showed how a neural circuit can perform this hierarchical latent inference task. The trained RNNs find solutions that are not Bayesian but rather simple linear filters over time. They also use the same coupled activity subspace to integrate evidence both within trials and across trials, albeit with different time constants.
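To make the across-trial part of the problem concrete, here is a minimal sketch of a task with uncued block switches and a leaky (exponential) filter over trial outcomes, the kind of simple linear non-Bayesian solution described above. All numbers (block lengths, outcome reliability, the filter time constant) are illustrative assumptions, not the IBL task parameters or the paper's fitted filter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical task sketch (not the IBL implementation): a latent "block"
# sets which side is more likely rewarded, and it switches uncued.
n_trials = 500
block = np.zeros(n_trials)            # +1: right side favored, -1: left
side = 1.0
t = 0
while t < n_trials:
    length = rng.integers(20, 60)     # assumed range of block lengths
    block[t:t + length] = side
    side *= -1
    t += length

# Per-trial outcomes: noisy evidence about the current block
# (matches the latent block's sign 80% of the time, by assumption).
obs = np.where(rng.random(n_trials) < 0.8, block, -block)

# A simple leaky integrator over trial outcomes -- a linear filter,
# not a Bayesian posterior update.
alpha = 0.1                           # illustrative across-trial time constant
belief = np.zeros(n_trials)
b = 0.0
for i in range(n_trials):
    b += alpha * (obs[i] - b)
    belief[i] = b

# The sign of the filtered belief tracks the latent block on most trials,
# lagging briefly after each uncued switch.
accuracy = np.mean(np.sign(belief) == block)
```

A faster filter (larger `alpha`) recovers more quickly after switches but is noisier within a block; the trade-off is what makes the choice of time constant interesting.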
We also introduce a new method for network “distillation”, RADD: training a small network on the hidden states of a bigger network yields a highly compact network that solves the same latent inference problem, even when an equally small network trained directly on the task cannot. Next up: comparing the model’s predictions with neural data.
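The core idea, training the small network to reproduce the big network's hidden states rather than the task output, can be sketched as follows. This is an illustrative toy, not the paper's RADD procedure: the teacher is a low-rank linear RNN (so its states occupy a small subspace), the distillation targets are PCA projections of its states, and the student is a small linear recurrence fit by least squares.

```python
import numpy as np

rng = np.random.default_rng(1)

T, n_big, rank = 300, 64, 4

# "Teacher": a big linear RNN whose recurrent weights happen to be low rank,
# so its hidden-state trajectory lives in a small subspace.
W = rng.normal(size=(n_big, rank)) @ rng.normal(size=(rank, n_big)) / n_big
W *= 0.8 / np.max(np.abs(np.linalg.eigvals(W)))   # make the dynamics stable
w_in = rng.normal(size=n_big)
x = rng.normal(size=T)                            # input evidence stream

H = np.zeros((T, n_big))
h = np.zeros(n_big)
for t in range(T):
    h = W @ h + w_in * x[t]
    H[t] = h

# Distillation targets: teacher hidden states projected onto their top PCs.
n_small = rank + 1                    # the input direction adds one dimension
Hc = H - H.mean(0)
_, _, Vt = np.linalg.svd(Hc, full_matrices=False)
targets = Hc @ Vt[:n_small].T                     # shape (T, n_small)

# "Student": a small linear recurrence fit by least squares so that its
# state matches the teacher's projected state at every step.
prev = np.vstack([np.zeros(n_small), targets[:-1]])
design = np.hstack([prev, x[:, None], np.ones((T, 1))])
coef, *_ = np.linalg.lstsq(design, targets, rcond=None)
A, b_in, b0 = coef[:n_small].T, coef[n_small], coef[n_small + 1]

# Roll the student out on the same input and measure trajectory match.
hs = np.zeros(n_small)
pred = np.zeros((T, n_small))
for t in range(T):
    hs = A @ hs + b_in * x[t] + b0
    pred[t] = hs

r2 = 1 - np.sum((pred - targets) ** 2) / np.sum(targets ** 2)
```

Because the student is supervised by the teacher's internal trajectory at every time step, it gets a much denser training signal than the sparse task reward, which is one intuition for why a distilled small network can succeed where a directly trained one fails.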