
2021 | Book

Artificial Neural Networks with TensorFlow 2

ANN Architecture Machine Learning Projects


About This Book

Develop machine learning models across various domains. This book offers a single source that provides comprehensive coverage of the capabilities of TensorFlow 2 through the use of realistic, scenario-based projects.
After learning what's new in TensorFlow 2, you'll dive right into developing machine learning models through applicable projects. This book covers a wide variety of ANN architectures, starting from a simple sequential network and moving on to advanced CNNs, RNNs, LSTMs, DCGANs, and more. A full chapter is devoted to each kind of network, and each chapter consists of a full project describing the network architecture used, the theory behind that architecture, the data set used, the pre-processing of the data, model training, testing, performance optimization, and analysis.
This practical approach can be used from beginning to end or, if you're already familiar with basic ML models, you can dive right into the application that interests you. Line-by-line explanations of major code segments help fill in the details as you work, and the entire project source is available online for learning and further experimentation. With Artificial Neural Networks with TensorFlow 2 you'll see just how wide the range of TensorFlow's capabilities is.
What You'll Learn

- Develop machine learning applications
- Translate languages using neural networks
- Compose images with style transfer

Who This Book Is For

Beginners, practitioners, and hard-core developers who want to master machine and deep learning with TensorFlow 2. The reader should have a working knowledge of ML basics and terminology.

Table of Contents

Frontmatter
Chapter 1. TensorFlow Jump Start
Abstract
TensorFlow is an end-to-end open source platform for developing and deploying machine learning applications. We can call it the complete machine learning (ML) ecosystem. All of us have seen face tagging in our photos on Facebook; this is a machine learning application. Autonomous cars use object detection to avoid collisions on the road. Machines now translate Spanish to English. Human voices are converted into text to create digital documents. All of these are machine learning applications. Even the humble OCR (optical character recognition) applications we use so often rely on machine learning. Many more advanced applications are developed today, such as captioning images, generating images, translating images, forecasting a time series, and understanding human languages. All such applications, and many more, can be developed and deployed on the TensorFlow platform. And that is exactly what you are going to learn in this book.
Poornachandra Sarang
Chapter 2. A Closer Look at TensorFlow
Abstract
In the previous chapter, you saw the capabilities of the TensorFlow platform. Having glimpsed TensorFlow's power, it is now time to start learning how to harness that power in your own real-world applications.
Poornachandra Sarang
Chapter 3. Deep Dive in tf.keras
Abstract
Keras is a high-level neural networks API that runs on top of TensorFlow. For many years, you used the Keras API with TensorFlow running as the backend. With TensorFlow 2.x, this has changed: TensorFlow now integrates Keras in the tf.keras API, TensorFlow's implementation of the Keras API specification. This change was made mainly to bring consistency to using Keras with TF. It also lets Keras take advantage of TensorFlow features such as eager execution and distributed training. The latest Keras release as of this writing is 2.3.0. This release adds support for TensorFlow 2.x and is also the last major release of multi-backend Keras. Henceforth, you will be using only tf.keras in all your deep learning applications. You have already used tf.keras in Chapter 2 while getting started with TensorFlow. This chapter will take you deeper into the use of tf.keras.
Poornachandra Sarang
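
To give a flavor of the tf.keras workflow this chapter expands on, here is a minimal, illustrative sketch (not the book's code): build, compile, train, and evaluate a small sequential classifier on MNIST.

```python
# Minimal tf.keras sketch (illustrative, not the book's code):
# build, compile, train, and evaluate a small MNIST classifier.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, validation_split=0.1)
model.evaluate(x_test, y_test)
```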
Chapter 4. Transfer Learning
Abstract
In the previous chapter, you developed a binary image classifier. With 60,000 images, it took a while to train the model, and the accuracy achieved was about 80 to 90%. If you want higher accuracy, more images would be required for training; as a matter of fact, a deep learning network learns better with a higher number of data points. ImageNet (https://devopedia.org/imagenet), the first data set of its kind in terms of scale, consists of 14,197,122 images organized into 21,841 subcategories, which are grouped under 27 high-level subtrees. Many machine learning models were developed to classify the images in ImageNet, mainly through research and competitions. In 2017, one such model achieved an error rate as low as 2.3%. The underlying network was very complex. Considering the number of trainable parameters in such a complex network, imagine the resources and time it would have taken to train the model.
Poornachandra Sarang
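
The core trick the chapter builds on can be sketched in a few lines: reuse a network pretrained on ImageNet as a frozen feature extractor. The choice of MobileNetV2 and the binary head below are illustrative assumptions, not necessarily the book's setup.

```python
# Transfer-learning sketch (illustrative): an ImageNet-pretrained
# MobileNetV2 serves as a frozen feature extractor for a new
# binary classifier trained on a much smaller data set.
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(input_shape=(224, 224, 3),
                                         include_top=False,
                                         weights="imagenet")
base.trainable = False  # keep the pretrained weights fixed

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # binary output
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
```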
Chapter 5. Neural Networks for Regression
Abstract
So far, we have looked at classification models in deep learning. Can we apply the techniques you have learned so far to a regression problem, probably the simplest problem in data analytics? Is it even worth attempting to use deep learning for regression, considering its overheads? Is there an advantage in using deep learning over traditional statistical techniques, especially in the case of regression modeling? You will find answers to these and similar questions in this chapter.
Poornachandra Sarang
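
Structurally, moving from classification to regression is a small change, as this minimal sketch suggests (the 13-feature input shape is an assumption for illustration): a single linear output unit and a mean-squared-error loss.

```python
# Regression sketch (illustrative): the network ends in one linear
# unit and is trained with mean squared error instead of a
# classification loss.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu",
                          input_shape=(13,)),  # 13 numeric features (assumed)
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),  # linear output for a continuous target
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
```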
Chapter 6. Estimators
Abstract
Any machine learning project consists of many stages, including training, evaluation, prediction, and finally exporting the model for serving on a production server. You learned these stages in the previous chapters, where classification and regression machine learning projects were discussed. To develop the best-performing model, you played around with different ANN architectures; basically, you experimented with several different prototypes to achieve the desired results. Prior to TF 2.0, this experimentation was not so easy: for every change you made in the code, you were required to build a computational graph and run it in a session, a time-consuming process that posed lots of challenges in debugging. The Estimators that you are going to study in this chapter were designed to handle all this plumbing.
Poornachandra Sarang
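
For a sense of the plumbing Estimators hide, here is a hedged sketch of a premade tf.estimator.DNNClassifier; the feature column and the toy input function are illustrative assumptions.

```python
# Premade Estimator sketch (illustrative): no manual graph or
# session handling; training is driven by an input function.
import numpy as np
import tensorflow as tf

feature_columns = [tf.feature_column.numeric_column("x", shape=(4,))]

estimator = tf.estimator.DNNClassifier(
    feature_columns=feature_columns,
    hidden_units=[32, 16],
    n_classes=3,
)

def input_fn():
    # toy in-memory data; a real project would stream from files
    features = {"x": np.random.rand(120, 4).astype("float32")}
    labels = np.random.randint(0, 3, size=120)
    dataset = tf.data.Dataset.from_tensor_slices((features, labels))
    return dataset.shuffle(120).batch(16)

estimator.train(input_fn=input_fn, steps=100)
```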
Chapter 7. Text Generation
Abstract
In this book, so far you have used vanilla neural networks, networks that have no ability to remember their past. They accept a fixed-size vector as input and produce a fixed-size output. Consider the case of image classification, where the input is an image and the output is one of the classes for which the model has been trained. Now, consider a situation where a prediction requires knowledge of the previous predictions. For example, suppose you are watching a movie. Your mind keeps guessing what the next scene will be, and the guess depends on what happened not just in the near past but also 15 minutes ago, or even an hour ago in a long movie. Vanilla neural networks, in the way they work, have no memory to retain what happened in the past and apply that knowledge to the current guess.
Poornachandra Sarang
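
The memory the abstract alludes to comes from recurrent layers, which carry a hidden state across time steps. Here is a minimal sketch of a character-level generator; the vocabulary size is an assumed example.

```python
# Recurrent "memory" sketch (illustrative): the LSTM's hidden state
# carries information from earlier characters to later predictions.
import tensorflow as tf

vocab_size = 65  # assumed character vocabulary size

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, 256),
    tf.keras.layers.LSTM(512, return_sequences=True),  # hidden state = memory
    tf.keras.layers.Dense(vocab_size),  # logits over the next character
])
```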
Chapter 8. Language Translation
Abstract
The first time I transited through Frankfurt International Airport, I had a tough time following the signs, as I do not understand German. That was many years ago. Today, you can simply point your mobile at these signs, and an app on your phone will provide the translation in English or a language of your choice. How are these translations done? More than one technology is involved. At the core is a machine learning model that provides a word-to-word translation using a huge vocabulary of predefined words. Obviously, this kind of word-to-word, or in machine learning terms sequence-to-sequence, translation works with great accuracy for airport and road signs, but it may not produce acceptable translations of natural language sentences. To give you an example, a question like "How are you today?" cannot be translated simply by translating each word of the sentence independently of the others. Sophisticated models are built to perform such translations. Google initially used statistical language translation; in 2016, it switched to NMT (neural machine translation). You will learn how to develop such a model in this chapter.
Poornachandra Sarang
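
The sequence-to-sequence idea can be sketched compactly: an encoder folds the source sentence into a state vector, and a decoder unrolls the translation from that state. Vocabulary sizes and layer widths below are illustrative assumptions.

```python
# Compact seq2seq sketch (illustrative): the encoder's final state
# initializes the decoder; the attention mechanism covered in the
# chapter is omitted here for brevity.
import tensorflow as tf

src_vocab, tgt_vocab, units = 8000, 8000, 256  # assumed sizes

# encoder: source tokens -> final hidden state
enc_in = tf.keras.Input(shape=(None,))
enc_emb = tf.keras.layers.Embedding(src_vocab, units)(enc_in)
_, state_h, state_c = tf.keras.layers.LSTM(units, return_state=True)(enc_emb)

# decoder: target tokens + encoder state -> next-token logits
dec_in = tf.keras.Input(shape=(None,))
dec_emb = tf.keras.layers.Embedding(tgt_vocab, units)(dec_in)
dec_out = tf.keras.layers.LSTM(units, return_sequences=True)(
    dec_emb, initial_state=[state_h, state_c])
logits = tf.keras.layers.Dense(tgt_vocab)(dec_out)

model = tf.keras.Model([enc_in, dec_in], logits)
```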
Chapter 9. Natural Language Understanding
Abstract
In the last chapter, you used the SEQ2SEQ model along with attention to perform language translation. In this chapter, I will show you a more sophisticated technique for Natural Language Processing. You will learn to use the latest innovation in natural language modeling, the Transformer. The Transformer model eliminates the need for LSTMs and produces far better results than the SEQ2SEQ model that uses them. So, let us understand what a Transformer model is.
Poornachandra Sarang
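
The building block that lets the Transformer drop LSTMs is self-attention, available in tf.keras as MultiHeadAttention (TensorFlow 2.4+). A tiny illustrative sketch:

```python
# Self-attention sketch (illustrative, requires TF 2.4+): every token
# attends to every other token, with no recurrence involved.
import tensorflow as tf

seq = tf.random.normal((1, 10, 64))  # (batch, tokens, embedding dim)
attention = tf.keras.layers.MultiHeadAttention(num_heads=4, key_dim=16)
out = attention(query=seq, value=seq, key=seq)  # self-attention
print(out.shape)  # (1, 10, 64)
```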
Chapter 10. Image Captioning
Abstract
When you are vacationing, you capture several pictures of beautiful landscapes and places, then caption the photos and publish them on your social network. Imagine a mobile app that does the photo captioning for you; wouldn't that be a wonderful thing? In this chapter, you will learn how to create and train a neural network that generates captions for your photos.
Poornachandra Sarang
Chapter 11. Time Series Forecasting
Abstract
Forecasting has always been a topic of interest for every human being. What does my future hold? Will I become a millionaire in the next 5 years? When will I get married? These are questions many of us raise, and there are people in this world who do forecasting and at least try to provide answers to them. Neural networks have so far not been successful at that kind of forecast, but they certainly can forecast futures where the past data contains discoverable patterns. The topic of this chapter is how to train neural networks to perform such forecasts, known as time series forecasting. Let me first describe what we mean by a time series, followed by how to forecast its future.
Poornachandra Sarang
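
The key preprocessing step behind such forecasts is turning a series into (window, next value) training pairs; here is an illustrative sliding-window sketch using tf.data on a toy sine series.

```python
# Sliding-window sketch (illustrative): slice a series into
# (past window, next value) pairs for supervised training.
import numpy as np
import tensorflow as tf

series = np.sin(np.arange(0, 100, 0.1)).astype("float32")  # toy series
window = 20

dataset = tf.data.Dataset.from_tensor_slices(series)
dataset = dataset.window(window + 1, shift=1, drop_remainder=True)
dataset = dataset.flat_map(lambda w: w.batch(window + 1))
dataset = dataset.map(lambda w: (w[:-1], w[-1]))  # inputs, target
dataset = dataset.shuffle(1000).batch(32)
```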
Chapter 12. Style Transfer
Abstract
Ever wish you could paint like Picasso or the famous Indian painter M.F. Husain? It looks like neural networks can make that wish come true. In this chapter, you will learn a technique that uses neural networks to compose a picture you have taken in the style of a famous artist, or rather in a style of your own choice. The technique is called neural style transfer and is outlined in Leon A. Gatys' famous paper, "A Neural Algorithm of Artistic Style." Though the paper is a great read, you will not need all the details given in it to understand this chapter.
Poornachandra Sarang
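
The Gatys approach rests on a pretrained CNN supplying content and style representations; here is a hedged sketch of that extraction step (the VGG19 layer choices follow common practice and are assumptions, not necessarily the book's).

```python
# Style-transfer feature extraction sketch (illustrative): a frozen
# ImageNet-pretrained VGG19 provides the activations compared by the
# content and style losses.
import tensorflow as tf

vgg = tf.keras.applications.VGG19(include_top=False, weights="imagenet")
vgg.trainable = False

content_layers = ["block5_conv2"]  # a deep layer captures content
style_layers = ["block1_conv1", "block2_conv1", "block3_conv1",
                "block4_conv1", "block5_conv1"]  # shallow-to-deep style

outputs = [vgg.get_layer(name).output
           for name in content_layers + style_layers]
extractor = tf.keras.Model(inputs=vgg.input, outputs=outputs)
```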
Chapter 13. Image Generation
Abstract
Did you ever imagine that neural networks could be used to generate complex color images? How about anime? The faces of celebrities? A bedroom? Doesn't that sound interesting? All of this is possible with one of the most interesting ideas in neural networks: Generative Adversarial Networks (GANs). The idea was introduced and developed by Ian J. Goodfellow in 2014. The images created by a GAN look so real that it becomes practically impossible to differentiate between a fake and a real image. Be warned: to generate complex images of this nature, you will require lots of resources to train the network, but it certainly works, as you will see when you study this chapter. So let us look at what a GAN is.
Poornachandra Sarang
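
The adversarial setup can be sketched with two small networks (sizes here target 28x28 grayscale images and are illustrative assumptions): a generator that maps noise to images and a discriminator that scores real versus fake.

```python
# GAN sketch (illustrative, sized for 28x28 grayscale images):
# the generator maps a noise vector to an image; the discriminator
# outputs the probability that its input image is real.
import tensorflow as tf

generator = tf.keras.Sequential([
    tf.keras.layers.Dense(7 * 7 * 128, activation="relu",
                          input_shape=(100,)),  # 100-dim noise vector
    tf.keras.layers.Reshape((7, 7, 128)),
    tf.keras.layers.Conv2DTranspose(64, 4, strides=2, padding="same",
                                    activation="relu"),   # -> 14x14
    tf.keras.layers.Conv2DTranspose(1, 4, strides=2, padding="same",
                                    activation="tanh"),   # -> 28x28x1
])

discriminator = tf.keras.Sequential([
    tf.keras.layers.Conv2D(64, 4, strides=2, padding="same",
                           input_shape=(28, 28, 1)),
    tf.keras.layers.LeakyReLU(0.2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # real (1) vs fake (0)
])
```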
Chapter 14. Image Translation
Abstract
Have you ever thought of colorizing an old B&W photograph of your granny? You would probably approach a Photoshop artist to do the job for you, paying a hefty fee and waiting days or weeks for them to finish. If I told you that you could do this with a deep neural network, wouldn't you be excited to learn how? Well, this chapter teaches you the technique of converting your B&W images to color almost instantaneously. The technique is simple and uses a network architecture known as AutoEncoders. So, let us first look at AutoEncoders.
Poornachandra Sarang
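
The AutoEncoder shape behind colorization can be sketched as follows (image size and layer widths are illustrative assumptions): the encoder compresses a one-channel grayscale input, and the decoder reconstructs three color channels.

```python
# Colorizing AutoEncoder sketch (illustrative): grayscale in,
# RGB out, trained against the color originals with MSE.
import tensorflow as tf

autoencoder = tf.keras.Sequential([
    # encoder: compress the grayscale image into feature maps
    tf.keras.layers.Conv2D(32, 3, strides=2, padding="same",
                           activation="relu", input_shape=(128, 128, 1)),
    tf.keras.layers.Conv2D(64, 3, strides=2, padding="same",
                           activation="relu"),
    # decoder: upsample back to full resolution, 3 color channels
    tf.keras.layers.Conv2DTranspose(64, 3, strides=2, padding="same",
                                    activation="relu"),
    tf.keras.layers.Conv2DTranspose(32, 3, strides=2, padding="same",
                                    activation="relu"),
    tf.keras.layers.Conv2D(3, 3, padding="same",
                           activation="sigmoid"),  # RGB output
])
autoencoder.compile(optimizer="adam", loss="mse")
```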
Backmatter
Metadata
Title
Artificial Neural Networks with TensorFlow 2
Author
Poornachandra Sarang
Copyright Year
2021
Publisher
Apress
Electronic ISBN
978-1-4842-6150-7
Print ISBN
978-1-4842-6149-1
DOI
https://doi.org/10.1007/978-1-4842-6150-7