Normalization for probabilistic inference with neurons

Abstract: Recently, there have been a number of proposals regarding how biologically plausible neural networks might perform probabilistic inference (Rao, 2004; Eliasmith and Anderson, 2003; Ma et al., 2006; Sahani and Dayan, 2003). To be able to repeatedly perform such inference, it is essential that the represented distributions be appropriately normalized. Past approaches have considered normalization mechanisms independently of inference, often leaving them unexplored, or appealing to a notion of divisive normalization that requires pooling across many neurons. Here we demonstrate how normalization and inference can be combined into an appropriate connection matrix, eliminating the need for pooling or a division-like operation. We algebraically demonstrate that such a solution is available regardless of the inference being performed. We show that such a solution is relevant to neural computation by implementing it in a recurrent spiking neural network.
Peer Reviewed: 
Eliasmith, C., Martens, J. (2011). Normalization for probabilistic inference with neurons. Biological Cybernetics. 104(4), 251-262.
Like other methods, our neural implementation originally left the important step of normalizing the represented probability density functions to other mechanisms. In this work we include normalization in the inference transformation itself, which takes place in the connection weights. The supporting Nengo and Matlab code can be downloaded below.
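As a neuron-free illustration of the kind of computation being absorbed into the connection weights, the following sketch performs one inference step on a discretized density with normalization folded into the same update. All variable names and parameter values here are hypothetical and are not taken from the paper or its code.

```python
import numpy as np

# Hypothetical discretized state space.
x = np.linspace(-1.0, 1.0, 50)

# Prior represented as samples of an (unnormalized) Gaussian density.
prior = np.exp(-0.5 * (x / 0.5) ** 2)

# Likelihood of a hypothetical observation centered at x = 0.3.
likelihood = np.exp(-0.5 * ((x - 0.3) / 0.2) ** 2)

# Inference step: pointwise product (the numerator of Bayes' rule).
posterior_unnorm = likelihood * prior

# Normalization applied as part of the same update: dividing by the sum
# keeps the represented density a proper distribution, so the inference
# can be applied repeatedly without the representation drifting in scale.
posterior = posterior_unnorm / posterior_unnorm.sum()

assert np.isclose(posterior.sum(), 1.0)
```

The paper's contribution is to realize this kind of normalized update without an explicit division or pooling stage, by building it into the network's recurrent connection matrix; the division here simply shows the ideal result that implementation must approximate.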

Open Normalization.3k.good.nef and run it to regenerate the data in the paper. To supply an input to the model, run the included script, which reads input_matrix.txt and generates a node that provides the correct input.

To generate the plots, run nengo_plotgen (see the instructions in that file).

To obtain just the 'ideal' solution (no neurons), run normalization_noneurons_paper.m; this is the simplest place to start.