Guest Editorial: Recent Trends in Reuse and Integration
Authors: Thouraya Bouabana-Tebibel, Stuart H. Rubin, Lydia Bouzar-Benlabiod
Published in: Information Systems Frontiers, Issue 1/2019 (12-02-2019)
Excerpt
The term “reuse” means far more than code reuse, which it subsumes. It includes all manner of abstractions – from models to dynamic plans, simulations, knowledge bases, software, firmware, and hardware. In particular, the term includes semantic repositories, which enable the reuse of knowledge stored in episodic memory, rules, neural weight sets, connections in hardware, and the like. A problem with deep learning and neural networks in general is that they don’t facilitate transfer learning, which is a very human way to learn. Hosseini et al. (2017) recently demonstrated that, unlike humans, neural networks are unable to learn to recognize images that are equivalent under a simple transformation (e.g., digit images and their photographic negatives). Without transfer learning, neural networks cannot take advantage of randomization for accelerating their learning and increasing the space of what they know. Moreover, Lin and Vitter (1991) proved that if a neural network has at least one hidden layer, then it is NP-hard to train. Geoffrey Hinton, a co-developer of the backpropagation algorithm, has stated that we need to reinvent the neural net if we ever hope to close in on how the brain works and the capability to emulate its diverse abilities. …
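The "simple transformation" at issue can be made concrete: a photographic negative merely inverts each pixel intensity, a change humans undo effortlessly but that defeats a network trained only on positive images. A minimal sketch of the transformation itself (using NumPy and a toy 3x3 "image"; both are illustrative assumptions, not part of the cited study):

```python
import numpy as np

# A toy 8-bit grayscale "image": a vertical stroke, loosely digit-like.
image = np.array([[0, 255, 0],
                  [0, 255, 0],
                  [0, 255, 0]], dtype=np.uint8)

def negative(img: np.ndarray) -> np.ndarray:
    """Pixel-wise photographic negative: invert each 8-bit intensity."""
    return 255 - img

neg = negative(image)

# The transform is an involution: applying it twice restores the original,
# so the positive and negative images carry exactly the same shape information.
assert np.array_equal(negative(neg), image)
```

A human sees the same stroke in `image` and `neg`; a classifier trained only on positives typically does not, which is the transfer-learning gap the editorial points to.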