
Deep Belief Nets in C++ and CUDA C: Volume 2

Autoencoding in the Complex Domain

About this book

Discover the essential building blocks of a common and powerful form of deep belief net: the autoencoder. You’ll take this topic beyond current usage by extending it to the complex domain for signal and image processing applications. Deep Belief Nets in C++ and CUDA C: Volume 2 also covers several algorithms for preprocessing time series and image data. These algorithms focus on the creation of complex-domain predictors that are suitable for input to a complex-domain autoencoder. Finally, you’ll learn a method for embedding class information in the input layer of a restricted Boltzmann machine. This facilitates generative display of samples from individual classes rather than the entire data distribution. The ability to see the features that the model has learned for each class separately can be invaluable.
At each step this book provides you with intuitive motivation, a summary of the most important equations relevant to the topic, and highly commented code for threaded computation on modern CPUs as well as massively parallel processing on computers with CUDA-capable video display cards.

What You'll Learn
Code for deep learning, neural networks, and AI using C++ and CUDA C
Carry out signal preprocessing using simple transformations, Fourier transforms, Morlet wavelets, and more (see the sketch after this list)
Use the Fourier transform for image preprocessing
Implement autoencoding via activation in the complex domain
Work with algorithms for CUDA gradient computation
Use the DEEP operating manual
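As a flavor of the signal-preprocessing material listed above, the sketch below shows one common way to obtain a complex-valued predictor from a raw series: correlating it with a simplified Morlet wavelet, i.e., a complex exponential under a Gaussian envelope. The function name, parameter choices, and edge handling are illustrative assumptions for this page, not the book's DEEP code.

```cpp
// Minimal sketch: a complex-domain predictor from a simplified Morlet wavelet.
// The wavelet is exp(-i*omega*k) * exp(-k*k / (2*sigma*sigma)); names and
// parameter values are illustrative, not taken from the book's DEEP program.
#include <complex>
#include <vector>
#include <cmath>

std::complex<double> morlet_response(const std::vector<double> &x, // input series
                                     int center,                   // sample at which to evaluate
                                     double omega,                 // center frequency (radians/sample)
                                     double sigma)                 // Gaussian width in samples
{
    std::complex<double> sum(0.0, 0.0);
    int half = static_cast<int>(std::ceil(3.0 * sigma));  // wavelet is negligible beyond ~3 sigma
    for (int k = -half; k <= half; ++k) {
        int i = center + k;
        if (i < 0 || i >= static_cast<int>(x.size()))
            continue;                                      // crude edge handling for the sketch
        double envelope = std::exp(-0.5 * k * k / (sigma * sigma));
        std::complex<double> carrier(std::cos(omega * k), -std::sin(omega * k));
        sum += x[i] * envelope * carrier;                  // correlate the series with the wavelet
    }
    return sum;  // real and imaginary parts together form one complex-domain predictor
}
```

The real and imaginary parts of the returned value are the kind of paired inputs a complex-domain autoencoder works with.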

Who This Book Is For
Those who have at least a basic knowledge of neural networks and some prior programming experience; familiarity with C++ and CUDA C is recommended.

Table of Contents

Frontmatter
Chapter 1. Embedded Class Labels
Abstract
A picture is worth a thousand words. Sometimes a lot more. In many applications, the ability to see what a classification model is seeing is invaluable. This is especially true when the model is processing signals or images, which by nature have a visual representation. If the developer can study examples of the features that the model is associating with each class, this lucky developer may be clued in to strengths and weaknesses of the model. In this chapter, we will see how this can be done.
Timothy Masters
Chapter 2. Signal Preprocessing
Timothy Masters
Chapter 3. Image Preprocessing
Abstract
An enormous variety of algorithms exist for preprocessing images for presentation to a model. This chapter will discuss only one such algorithm, though it is an important one whose computational details are often glossed over in other references. Here we will downplay the deep theory, which is widely available, and focus on the practical implementation details, which are not so widely available.
Timothy Masters
Chapter 4. Autoencoding
Abstract
The most basic autoencoder is an ordinary feedforward network that has a single hidden layer and is trained to reproduce its inputs. It’s a prediction model in which the targets are the inputs. The idea is that if the hidden layer is in some sense relatively weak (perhaps by virtue of having few neurons or having limited weight magnitudes or some other form of regularization), this hidden layer will learn to encapsulate the “important” features of the training data, those that are most consistent and have highest information content. These significant features, which are defined by the activation pattern of the hidden layer, can then be used for classification or prediction, or they can be used as inputs to yet another autoencoder for further pattern extraction. An unlimited number of simple one-hidden-layer autoencoders can be stacked into a deep network.
Timothy Masters
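To make the abstract above concrete, here is a minimal sketch of the forward pass such a one-hidden-layer autoencoder performs: the input is squashed into a deliberately weak hidden layer and decoded back into a reconstruction whose squared error is what training minimizes. The class layout, names, and choice of tanh activation are illustrative assumptions, not the DEEP implementation.

```cpp
// Minimal sketch of a one-hidden-layer autoencoder forward pass and its
// reconstruction error; structure and names are illustrative, not the book's code.
#include <vector>
#include <cmath>

struct Autoencoder {
    int n_in, n_hid;
    std::vector<double> w_enc;   // n_hid x n_in encoder weights (row-major)
    std::vector<double> b_hid;   // n_hid hidden biases
    std::vector<double> w_dec;   // n_in x n_hid decoder weights (row-major)
    std::vector<double> b_out;   // n_in output biases

    // Reconstruct the input and return the squared reconstruction error,
    // which is what training (e.g., gradient descent) would minimize.
    double reconstruct(const std::vector<double> &x, std::vector<double> &x_hat) const {
        std::vector<double> h(n_hid);
        for (int j = 0; j < n_hid; ++j) {            // encode: hidden-layer activations
            double s = b_hid[j];
            for (int i = 0; i < n_in; ++i)
                s += w_enc[j * n_in + i] * x[i];
            h[j] = std::tanh(s);                     // any squashing activation will do here
        }
        double err = 0.0;
        x_hat.assign(n_in, 0.0);
        for (int i = 0; i < n_in; ++i) {             // decode: try to reproduce the inputs
            double s = b_out[i];
            for (int j = 0; j < n_hid; ++j)
                s += w_dec[i * n_hid + j] * h[j];
            x_hat[i] = s;                            // linear output for real-valued inputs
            double d = x_hat[i] - x[i];
            err += d * d;                            // the targets are the inputs themselves
        }
        return err;
    }
};
```

Stacking follows directly from this picture: the hidden activations produced here become the "inputs" that the next autoencoder in the stack is trained to reproduce.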
Chapter 5. DEEP Operating Manual
Abstract
This chapter presents a concise operating manual for DEEP 2.0. The first section lists every menu option along with a short description of its purpose and the page number on which more details can be found if the short description is not sufficient.
Timothy Masters
Backmatter
Metadata
Title
Deep Belief Nets in C++ and CUDA C: Volume 2
Author
Timothy Masters
Copyright Year
2018
Publisher
Apress
Electronic ISBN
978-1-4842-3646-8
Print ISBN
978-1-4842-3645-1
DOI
https://doi.org/10.1007/978-1-4842-3646-8
