
Hypothesis on what the different layers do






 

We propose that layers 3, 4 and 5 are all feed-forward layers and are all learning sequences. Layer 4 is learning first order sequences. Layer 3 is learning variable order sequences. And layer 5 is learning variable order sequences with timing. Let’s look at each of these in more detail.

 

Layer 4

It is easy to learn first order sequences using the HTM cortical learning algorithm. If we don't force the cells in a column to inhibit each other, that is, if the cells in a column don't differentiate in the context of prior inputs, then first order learning will occur. In the neocortex this would likely be accomplished by removing an inhibitory effect between cells in the same column. In our computer models of the HTM cortical learning algorithm, we just assign one cell per column, which produces a similar result.

 

First order sequences are what are needed to form invariant representations for spatial transformations of an input. In vision, for example, x-y translation, scale, and rotation are all spatial transformations. When an HTM region with first order memory is trained on moving objects, it learns that different spatial patterns are equivalent. The resulting HTM cells will behave like what are called “complex cells” in the neocortex. The HTM cells will stay active (in the predictive state) over a range of spatial transformations.
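As an illustrative aside (a toy example, not from the original text), training such a first order memory on a bar sweeping across a small input shows the complex-cell-like behavior described above: once trained, each shifted pattern predicts its neighbors, so a unit that pools over all the shifted patterns remains active or predictive throughout the translation.

# Toy illustration of spatial invariance from first order learning (hypothetical).
from collections import defaultdict

transitions = defaultdict(set)
sweep = ["pos0", "pos1", "pos2", "pos3", "pos4"]     # a bar at five positions
for seq in (sweep, list(reversed(sweep))):           # sweep in both directions
    for prev, nxt in zip(seq, seq[1:]):
        transitions[prev].add(nxt)

pooled_inputs = set(sweep)         # a unit that pools over every bar position
for current in sweep:
    predicted = transitions[current]
    # The pooled unit is active now and predicted to remain active next step,
    # because every prediction stays within the set of patterns it pools over.
    assert current in pooled_inputs and predicted <= pooled_inputs
print("pooled unit stays active across the whole translation")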

 

At Numenta we have done vision experiments that verify this mechanism works as expected, and that some spatial invariance is achieved within each level. The details of these experiments are beyond the scope of this appendix.


Learning first order sequences in layer 4 is consistent with finding complex cells in layer 4, and explains why layer 4 disappears in higher regions of the neocortex. As you ascend the hierarchy, at some point it will no longer be possible to learn further spatial invariances, as the representations will already be invariant to them.

 

Layer 3

Layer 3 is closest to the HTM cortical learning algorithm that we described in Chapter 2. It learns variable order sequences and forms predictions that are more stable than its input. Layer 3 always projects to the next region in the hierarchy and therefore contributes to increased temporal stability within the hierarchy. Variable order sequence memory leads to neurons called “directionally-tuned complex cells”, which are first observed in layer 3. Directionally-tuned complex cells differentiate by temporal context, such as a line moving left vs. a line moving right.
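As a sketch of how temporal context changes things (again a toy model with hypothetical names, and with only one step of context rather than true variable order memory), the transition idea above can be extended so that the representation of an input also depends on what preceded it, which is roughly what multiple cells per column provide:

# Minimal sketch of context-dependent sequence memory (toy model).
from collections import defaultdict

class ContextualMemory:
    def __init__(self):
        self.transitions = defaultdict(set)   # (previous input, current input) -> next inputs

    def learn(self, sequence):
        context = None
        for prev, nxt in zip(sequence, sequence[1:]):
            self.transitions[(context, prev)].add(nxt)
            context = prev

    def predict(self, context, current):
        return self.transitions[(context, current)]

memory = ContextualMemory()
memory.learn(["A", "B", "C"])
memory.learn(["D", "B", "E"])
# Unlike the first order case, "B" is disambiguated by its temporal context:
print(memory.predict("A", "B"))   # {'C'}
print(memory.predict("D", "B"))   # {'E'}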

 

Layer 5

The final feed-forward layer is layer 5. We propose that layer 5 is similar to layer 3, with three differences. The first difference is that layer 5 adds a concept of timing. Layer 3 predicts “what” will happen next, but it doesn’t tell you “when” it will happen. However, many tasks require timing, such as recognizing spoken words, where the relative timing between sounds is important. Motor behavior is another example; coordinated timing between muscle activations is essential. We propose that layer 5 neurons predict the next state only after the expected time. Several biological details support this hypothesis. One is that layer 5 is the motor output layer of the neocortex. Another is that layer 5 receives input from layer 1 that originates in a part of the thalamus (not shown in the diagram). We propose that this thalamic input to layer 1 is how timing information is encoded and distributed to many cells.

 

The second difference between layer 3 and layer 5 is that we want layer 3 to make predictions as far into the future as possible, gaining temporal stability. The HTM cortical learning algorithm described in Chapter 2 does this. In contrast, we only want layer 5 to predict the next element (at a specific time). We have not modeled this difference, but it would naturally occur if transitions were always stored with an associated time.
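A sketch of the “transitions stored with an associated time” idea (hypothetical, not part of the original algorithm description): each learned transition keeps the interval observed during training, and a prediction is reported only once that interval has roughly elapsed, so the memory predicts just the next element, and only at the expected time.

# Minimal sketch of a timed transition memory (toy model, hypothetical names).
from collections import defaultdict

class TimedSequenceMemory:
    def __init__(self, tolerance=0.05):
        self.transitions = defaultdict(list)   # current -> [(next element, expected interval)]
        self.tolerance = tolerance

    def learn(self, timed_sequence):
        # timed_sequence is a list of (element, timestamp) pairs
        for (prev, t0), (nxt, t1) in zip(timed_sequence, timed_sequence[1:]):
            self.transitions[prev].append((nxt, t1 - t0))

    def predict(self, current, elapsed):
        """Return next elements whose expected interval has (roughly) elapsed."""
        return [nxt for nxt, dt in self.transitions[current]
                if abs(elapsed - dt) <= self.tolerance]

memory = TimedSequenceMemory()
memory.learn([("ba", 0.00), ("na", 0.20), ("na", 0.45)])
print(memory.predict("ba", 0.05))   # [] - too early, nothing is predicted yet
print(memory.predict("ba", 0.20))   # ['na'] - the expected time has arrived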

 

The third difference between layer 3 and layer 5 can be seen in the diagram. The output of layer 5 always projects to sub-cortical motor centers, and the feed-forward path is gated by the thalamus. The output of layer 5 is sometimes passed to the next region and sometimes it is blocked. We (and others) propose this gating is related to covert attention (covert attention is when you attend to an input without motor behavior).

 

In summary, layer 5 combines specific timing, attention, and motor behavior. There are many mysteries relating to how these play together. The point we want to make is that a variation of the HTM cortical learning algorithm could easily incorporate specific timing and justify a separate layer in the cortex.


 

Layer 2 and layer 6

Layer 6 is the origin of axons that feed back to lower regions. Much less is known about layer 2. As mentioned above, the very existence of layer 2 as distinct from layer 3 is sometimes debated. We don't have anything further to say about this question now, other than to point out that layers 2 and 6, like all the other layers, exhibit the pattern of massive horizontal connections and columnar response properties, so we propose that they, too, are running a variant of the HTM cortical learning algorithm.

 

What does an HTM region correspond to in the neocortex?

We have implemented the HTM cortical learning algorithm in two flavors, one with multiple cells per column for variable order memory, and one with a single cell per column for first order memory. We believe these two flavors correspond to layer 3 and layer 4 in the neocortex. We have not attempted to combine these two variants in a single HTM region.

 

Although the HTM cortical learning algorithm (with multiple cells per column) is closest to layer 3 in the neocortex, we have flexibility in our models that the brain doesn’t have. Therefore we can create hybrid cellular layers that don’t correspond to specific neocortical layers. For example, in our model we know the order in which synapses are formed on dendrite segments. We can use this information to extract what is predicted to happen next from the more general prediction of all the things that will happen in the future. We can probably add specific timing in the same way. Therefore it should be possible to create a single layer HTM region that combines the functions of layer 3 and layer 5.
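As a rough sketch of that last idea (hypothetical, not Numenta's implementation): if each stored prediction is tagged with how many steps ahead it lies, the same structure can serve both the layer 3 style role (predict everything that lies ahead, for stability) and the layer 5 style role (report only the immediate next element), and an expected time could be attached to the one-step transitions in the same way.

# Sketch of a hybrid memory combining far-reaching and next-step prediction.
from collections import defaultdict

class HybridSequenceMemory:
    def __init__(self):
        self.predictions = defaultdict(set)   # element -> {(future element, steps ahead)}

    def learn(self, sequence):
        for i, start in enumerate(sequence):
            for steps, future in enumerate(sequence[i + 1:], start=1):
                self.predictions[start].add((future, steps))

    def predict_all(self, current):
        """Layer 3 style: everything expected to happen from here on."""
        return {future for future, _ in self.predictions[current]}

    def predict_next(self, current):
        """Layer 5 style: only the immediate next element."""
        return {future for future, steps in self.predictions[current] if steps == 1}

memory = HybridSequenceMemory()
memory.learn(["A", "B", "C", "D"])
print(memory.predict_all("B"))    # {'C', 'D'}
print(memory.predict_next("B"))   # {'C'}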

 






