
About this book

Use Java to develop neural network applications in this practical book. After learning the rules involved in neural network processing, you will manually process the first neural network example. This covers the internals of forward and backward propagation and facilitates the understanding of the main principles of neural network processing. Artificial Neural Networks with Java also teaches you how to prepare the data to be used in neural network development and suggests various techniques of data preparation for many unconventional tasks.
The next big topic discussed in the book is using Java for neural network processing. You will use the Encog Java framework and discover how to do rapid development with Encog, allowing you to create large-scale neural network applications.
The book also discusses the inability of neural networks to approximate complex noncontinuous functions, and it introduces the micro-batch method that solves this issue. The step-by-step approach includes plenty of examples, diagrams, and screenshots to help you grasp the concepts quickly and easily.

What You Will Learn

Prepare your data for many different tasks
Carry out some unusual neural network tasks
Create neural networks to process noncontinuous functions
Select and improve the development model

Who This Book Is For
Intermediate machine learning and deep learning developers who are interested in switching to Java.

Table of Contents

Frontmatter

Chapter 1. Learning About Neural Networks

Abstract
The architecture of an artificial neural network schematically mimics the neuron network of the human brain. It consists of layers of neurons directionally connected to each other. Figure 1-1 shows a schematic image of a human neuron.
Igor Livshin
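The basic unit the chapter describes, a neuron that passes the weighted sum of its inputs through an activation function, can be sketched in plain Java. The sigmoid activation and the sample weights below are illustrative assumptions, not values from the book.

```java
// A single artificial neuron: the weighted sum of the inputs plus a bias
// is passed through an activation function (sigmoid here).
public class Neuron {
    static double sigmoid(double z) { return 1.0 / (1.0 + Math.exp(-z)); }

    static double fire(double[] inputs, double[] weights, double bias) {
        double z = bias;
        for (int i = 0; i < inputs.length; i++) {
            z += inputs[i] * weights[i];
        }
        return sigmoid(z);
    }

    public static void main(String[] args) {
        double out = fire(new double[]{1.0, 0.5}, new double[]{0.4, -0.2}, 0.1);
        System.out.println(out);  // a value strictly between 0 and 1
    }
}
```

A layer is simply a set of such neurons fed the same inputs, and a network is layers of them connected in sequence.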

Chapter 2. Internal Mechanics of Neural Network Processing

Abstract
This chapter discusses the inner workings of neural network processing. It shows how a network is built, trained, and tested.
Igor Livshin

Chapter 3. Manual Neural Network Processing

Abstract
In this chapter, you’ll learn about the internals of neural network processing by seeing a simple example. I’ll provide a detailed step-by-step explanation of the calculations involved in processing the forward and backward propagation passes.
Igor Livshin
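The forward and backward passes the chapter walks through by hand can be condensed to their smallest case: one sigmoid neuron, one training record, one gradient step. The input, target, initial weight, and learning rate below are illustrative assumptions, not the book's example values.

```java
// One forward pass and one backward (gradient) step for a single sigmoid
// neuron with squared error E = 0.5 * (target - out)^2.
public class ManualPass {
    static double sigmoid(double z) { return 1.0 / (1.0 + Math.exp(-z)); }

    // Returns {error before the weight update, error after it}.
    static double[] oneStep(double x, double target, double w, double rate) {
        double out = sigmoid(w * x);                          // forward pass
        double errBefore = 0.5 * (target - out) * (target - out);

        // Backward pass: dE/dw = -(target - out) * out * (1 - out) * x
        double gradW = -(target - out) * out * (1.0 - out) * x;
        w -= rate * gradW;                                    // weight update

        double outAfter = sigmoid(w * x);
        double errAfter = 0.5 * (target - outAfter) * (target - outAfter);
        return new double[]{errBefore, errAfter};
    }

    public static void main(String[] args) {
        double[] err = oneStep(1.0, 1.0, 0.5, 0.7);
        System.out.println(err[0] + " -> " + err[1]);         // error shrinks
    }
}
```

A real network repeats this update for every weight in every layer, propagating the error term backward from the output.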

Chapter 4. Configuring Your Development Environment

Abstract
This book is about neural network processing using Java. Before you can start developing any neural network program, you need to learn several Java tools. If you are a Java developer and are familiar with the tools discussed in this chapter, you can skip this chapter. Just make sure that all the necessary tools are installed on your Windows machine.
Igor Livshin

Chapter 5. Neural Network Development Using the Java Encog Framework

Abstract
To facilitate your learning of network program development using Java, you will develop your first simple program using the function from Example 1 in Chapter 2.
Igor Livshin

Chapter 6. Neural Network Prediction Outside the Training Range

Abstract
Preparing data for neural network processing is typically the most difficult and time-consuming task you’ll encounter when working with neural networks. In addition to the enormous volume of data, which can easily reach millions or even billions of records, the main difficulty is preparing the data in the correct format for the task in question. In this and the following chapters, I will demonstrate several techniques of data preparation and transformation.
Igor Livshin
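One routine data-preparation step is scaling raw values into the range the activation function expects. A minimal min-max normalization sketch, with illustrative raw data and an assumed target range of [-1, 1]:

```java
// Min-max normalization: linearly rescale raw values into [lo, hi].
public class Normalize {
    static double[] minMax(double[] raw, double lo, double hi) {
        double min = Double.MAX_VALUE, max = -Double.MAX_VALUE;
        for (double v : raw) {
            min = Math.min(min, v);
            max = Math.max(max, v);
        }
        double[] out = new double[raw.length];
        for (int i = 0; i < raw.length; i++) {
            out[i] = lo + (raw[i] - min) * (hi - lo) / (max - min);
        }
        return out;
    }

    public static void main(String[] args) {
        double[] scaled = minMax(new double[]{10, 20, 40}, -1.0, 1.0);
        System.out.println(java.util.Arrays.toString(scaled));
    }
}
```

Remember to keep the min and max used here so that network outputs can be denormalized back to the original scale.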

Chapter 7. Processing Complex Periodic Functions

Abstract
This chapter continues the discussion of how to process periodic functions, concentrating on more complex periodic functions.
Igor Livshin

Chapter 8. Approximating Noncontinuous Functions

Abstract
This chapter will discuss the neural network approximation of noncontinuous functions. Currently, this is a problematic area for neural networks because network processing is based on calculating partial function derivatives (using the gradient descent algorithm), and calculating them for noncontinuous functions at the points where the function value suddenly jumps or drops leads to questionable results. You will dig deeper into this issue in this chapter. The chapter also includes a method I developed that solves this issue.
Igor Livshin
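The derivative problem at a jump can be seen numerically: a finite-difference slope of a step function is zero away from the jump but grows without bound as the step size shrinks across it. The step function and step sizes below are illustrative.

```java
// Why gradient-based training struggles at a discontinuity: the
// finite-difference "derivative" of a step function explodes near the jump.
public class JumpDerivative {
    static double step(double x) { return x < 0 ? 0.0 : 1.0; }

    // Central finite difference: (f(x+h) - f(x-h)) / (2h)
    static double finiteDiff(double x, double h) {
        return (step(x + h) - step(x - h)) / (2.0 * h);
    }

    public static void main(String[] args) {
        System.out.println(finiteDiff(5.0, 1e-6));  // 0.0 away from the jump
        System.out.println(finiteDiff(0.0, 1e-6));  // ~5e5 straddling it
        System.out.println(finiteDiff(0.0, 1e-9));  // ~5e8, diverging as h shrinks
    }
}
```

Since no finite slope exists at the jump, a gradient-descent update there has no stable value to follow, which is what makes such functions hard to approximate.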

Chapter 9. Approximating Continuous Functions with Complex Topology

Abstract
This chapter shows that the micro-batch method substantially improves the approximation results of continuous functions with complex topologies.
Igor Livshin

Chapter 10. Using Neural Networks to Classify Objects

Abstract
In this chapter, you’ll use a neural network to classify objects. Classification means recognizing various objects and determining the class to which those objects belong. As with many areas of artificial intelligence, classification is easily done by humans but can be quite difficult for computers.
Igor Livshin
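In a typical classification setup, the network has one output neuron per class, and the predicted class is the one with the highest activation. The class labels and output vector below are hypothetical, not from the book.

```java
// Turning network outputs into a class label via argmax.
public class Classify {
    static int argmax(double[] outputs) {
        int best = 0;
        for (int i = 1; i < outputs.length; i++) {
            if (outputs[i] > outputs[best]) best = i;
        }
        return best;
    }

    public static void main(String[] args) {
        String[] classes = {"circle", "square", "triangle"};
        double[] outputs = {0.12, 0.81, 0.07};  // hypothetical network outputs
        System.out.println(classes[argmax(outputs)]); // square
    }
}
```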

Chapter 11. The Importance of Selecting the Correct Model

Abstract
The example discussed in this chapter will end up showing a negative result. However, you can learn a lot from mistakes like this.
Igor Livshin

Chapter 12. Approximation of Functions in 3D Space

Abstract
This chapter discusses how to approximate functions in 3D space. Such function values depend on two variables (instead of the single variable discussed in the preceding chapters). Everything discussed in this chapter is also correct for functions that depend on more than two variables. Figure 12-1 shows the chart of the 3D function considered in this chapter.
Igor Livshin
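For a two-variable function, each training record becomes a triple (x, y, z) with z = f(x, y), typically sampled on a grid. The sample function and grid spacing below are illustrative, not the function from Figure 12-1.

```java
// Building (x, y, z) training records for a two-input function by
// sampling it on an n-by-n grid.
public class Grid3D {
    static double f(double x, double y) { return Math.sin(x) * Math.cos(y); }

    static double[][] sampleGrid(int n, double step) {
        double[][] records = new double[n * n][3];
        int k = 0;
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                double x = i * step, y = j * step;
                records[k++] = new double[]{x, y, f(x, y)};
            }
        }
        return records;
    }

    public static void main(String[] args) {
        double[][] records = sampleGrid(5, 0.5);
        System.out.println(records.length + " training records"); // 25 training records
    }
}
```

The same pattern extends to more inputs: a function of k variables yields records of length k + 1.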

Backmatter
