2014 | OriginalPaper | Chapter
One-Shot Learning with Feedback for Multi-layered Convolutional Network
Author: Kunihiko Fukushima
Published in: Artificial Neural Networks and Machine Learning – ICANN 2014
Publisher: Springer International Publishing
This paper proposes an improved add-if-silent rule, which is suited for training intermediate layers of a multi-layered convolutional network, such as a neocognitron. Under the add-if-silent rule, a new cell is generated if all postsynaptic cells are silent. The generated cell learns the activity of the presynaptic cells in one shot, and its input connections are never modified afterward. To use this learning rule in a convolutional network, one must decide at which retinotopic location the rule is to be applied. In the conventional add-if-silent rule, we chose the location where the activity of the presynaptic cells is largest. In the proposed new learning rule, negative feedback is introduced from postsynaptic cells to presynaptic cells, and a new cell is generated at the location where the presynaptic activity fails to be suppressed by the feedback. We apply this learning rule to a neocognitron for hand-written digit recognition and demonstrate a decrease in recognition error.
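The mechanism described in the abstract can be illustrated with a minimal sketch. Everything below is an assumption-laden toy, not the paper's implementation: presynaptic activity is modeled as a vector per retinotopic location, postsynaptic cells as stored weight vectors, and the negative feedback as each active postsynaptic cell subtracting the pattern it represents from the presynaptic layer. A new cell is generated in one shot at the location whose residual (unsuppressed) activity remains above a threshold, and its weights are then frozen.

```python
import numpy as np

def add_if_silent_with_feedback(presyn, cells, thresh=0.5):
    """One training step of a hypothetical sketch of the improved
    add-if-silent rule with negative feedback.

    presyn : (L, D) array, presynaptic activity at L retinotopic locations
    cells  : list of D-dim weight vectors (existing postsynaptic cells);
             mutated in place when a new cell is generated
    Returns the location index where a cell was added, or None.
    """
    if cells:
        W = np.stack(cells)                 # (C, D) existing cells
        resp = np.maximum(presyn @ W.T, 0)  # postsynaptic responses (L, C)
        # Negative feedback: each responding postsynaptic cell suppresses
        # the presynaptic pattern it represents.
        residual = presyn - resp @ W
    else:
        residual = presyn.copy()            # no cells yet: nothing suppressed
    strength = np.linalg.norm(residual, axis=1)
    loc = int(np.argmax(strength))          # least-suppressed location
    if strength[loc] > thresh:              # all postsynaptic cells "silent" here
        v = presyn[loc]
        cells.append(v / (np.linalg.norm(v) + 1e-12))  # one-shot learning;
        return loc                          # connections fixed afterward
    return None
```

With two orthogonal input patterns and an initially empty cell list, the first two calls each recruit one cell; a third call finds all activity suppressed by feedback and adds nothing, which is the intended behavior of the rule.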