Summary
The HTM cortical learning algorithm embodies what we believe is a basic building block of neural organization in the neocortex. It shows how a layer of horizontally connected neurons learns sequences of sparse distributed representations. Variations of the HTM cortical learning algorithm are used in different layers of the neocortex for related, but different, purposes.
We propose that feed-forward input to a neocortical region, whether to layer 4 or layer 3, projects predominantly to proximal dendrites, which, with the assistance of inhibitory cells, create a sparse distributed representation of the input. We propose that cells in layers 2, 3, 4, 5, and 6 share this sparse distributed representation. This is accomplished by forcing all cells in a column that spans the layers to respond to the same feed-forward input.
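The role of inhibition described above can be sketched as a k-winners-take-all competition over columns: each column's proximal dendrite computes an overlap with the input, and inhibitory cells let only the best-matching columns become active. The sketch below is a simplification under assumed parameters (column count, input size, sparsity level are illustrative), not the whitepaper's spatial pooler implementation:

```python
import numpy as np

def sparse_representation(input_bits, proximal_synapses, active_count):
    """Form a sparse distributed representation of a binary input.

    Each row of `proximal_synapses` is one column's proximal dendrite.
    Inhibition is modeled as keeping only the `active_count` columns
    with the highest overlap (k-winners-take-all).
    """
    # Overlap: how many active input bits each column's synapses connect to.
    overlaps = proximal_synapses.astype(int) @ input_bits.astype(int)
    # Inhibition: only the columns with the highest overlap become active.
    winners = np.argsort(overlaps)[-active_count:]
    sdr = np.zeros(len(proximal_synapses), dtype=bool)
    sdr[winners] = True
    return sdr

rng = np.random.default_rng(0)
n_columns, n_inputs = 2048, 512                      # illustrative sizes
synapses = rng.random((n_columns, n_inputs)) < 0.1   # random proximal connections
x = rng.random(n_inputs) < 0.05                      # sparse binary input
sdr = sparse_representation(x, synapses, active_count=40)  # ~2% of columns active
```

Because every column competes over the same input, any two similar inputs activate overlapping sets of columns, which is what makes the representation both sparse and distributed.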
We propose that layer 4 cells, when they are present, use the HTM cortical learning algorithm to learn first-order temporal transitions, which make representations that are invariant to spatial transformations. Layer 3 cells use the HTM cortical learning algorithm to learn variable-order temporal transitions and form stable representations that are passed up the cortical hierarchy. Layer 5 cells learn variable-order transitions with timing. We don’t have specific proposals for layers 2 and 6. However, due to the typical horizontal connectivity in these layers, it is likely that they, too, are learning some form of sequence memory.
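The distinction between first-order and variable-order transitions can be illustrated with a toy sequence model. This is a deliberately naive sketch using explicit transition dictionaries, not the HTM mechanism (which achieves variable-order memory with distributed cell states rather than stored contexts); the sequences and class names are invented for illustration:

```python
from collections import defaultdict

class FirstOrderMemory:
    """Predicts the next element from the current element alone.

    Ambiguous elements (e.g. 'B' in both A-B-C and X-B-Y) collapse
    into one state, so their predictions merge.
    """
    def __init__(self):
        self.transitions = defaultdict(set)

    def learn(self, sequence):
        for prev, nxt in zip(sequence, sequence[1:]):
            self.transitions[prev].add(nxt)

    def predict(self, current):
        return self.transitions[current]

class VariableOrderMemory:
    """Predicts using the full preceding context seen so far,
    so the same element in different sequences stays distinct."""
    def __init__(self):
        self.transitions = defaultdict(set)

    def learn(self, sequence):
        for i in range(1, len(sequence)):
            context = tuple(sequence[:i])  # entire preceding context
            self.transitions[context].add(sequence[i])

    def predict(self, context):
        return self.transitions[tuple(context)]

fo, vo = FirstOrderMemory(), VariableOrderMemory()
for seq in ("ABC", "XBY"):
    fo.learn(seq)
    vo.learn(seq)

fo.predict("B")    # {'C', 'Y'}: first-order memory cannot disambiguate
vo.predict("AB")   # {'C'}: variable-order context resolves the ambiguity
```

The first-order model's merging of contexts is what makes it useful for forming transformation-invariant representations, while the variable-order model's context sensitivity is what a layer needs to represent long, branching sequences unambiguously.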