State-of-the-Art Deep Learning Models in TensorFlow
Modern Machine Learning in the Google Colab Ecosystem
- 2021
- Book
- Written by
- David Paper
- Publisher
- Apress
About This Book
Use TensorFlow 2.x in the Google Colab ecosystem to create state-of-the-art deep learning models guided by hands-on examples. The Colab ecosystem provides a free cloud service with easy access to on-demand GPU (and TPU) hardware acceleration for fast execution of the models you learn to build. This book teaches you state-of-the-art deep learning models in an applied manner with the only requirement being an Internet connection. The Colab ecosystem provides everything else that you need, including Python, TensorFlow 2.x, GPU and TPU support, and Jupyter Notebooks.
The book begins with an example-driven approach to building input pipelines that feed all machine learning models. You will learn how to provision a workspace on the Colab ecosystem to enable construction of effective input pipelines in a step-by-step manner. From there, you will progress into data augmentation techniques and TensorFlow datasets to gain a deeper understanding of how to work with complex datasets. You will find coverage of Tensor Processing Units (TPUs) and transfer learning followed by state-of-the-art deep learning models, including autoencoders, generative adversarial networks, fast style transfer, object detection, and reinforcement learning.
Author Dr. Paper provides all the applied math, programming, and concepts you need to master the content. Examples range from relatively simple to very complex when necessary, and each is carefully explained, concise, accurate, and complete. Every topic is walked through with clear examples written in Python that you can try out and experiment with in the Google Colab ecosystem from the comfort of your own home or office.
What You Will Learn
- Take advantage of the built-in support of the Google Colab ecosystem
- Work with TensorFlow Datasets
- Create input pipelines to feed state-of-the-art deep learning models
- Create pipelined state-of-the-art deep learning models with clean and reliable Python code
- Leverage pre-trained deep learning models to solve complex machine learning tasks
- Create a simple environment to teach an intelligent agent to make automated decisions
Who This Book Is For
Readers who want to learn the highly popular TensorFlow deep learning platform, those who wish to master the basics of state-of-the-art deep learning models, and those looking to build competency with a modern cloud service tool such as Google Colab.
Table of Contents
- Frontmatter
Chapter 1. Build TensorFlow Input Pipelines
David Paper. Abstract: We introduce you to TensorFlow input pipelines with the tf.data API, which enables you to build complex input pipelines from simple, reusable pieces. Input pipelines are the lifeblood of any deep learning experiment because learning models expect data in a TensorFlow-consumable form. It is very easy to create high-performance pipelines with the tf.data.Dataset abstraction (a component of the tf.data API) because it represents a sequence of elements from a dataset in a simple format.
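The chaining idea behind tf.data can be sketched in plain Python. This is a conceptual illustration only, not the tf.data API itself: the helper names `from_list`, `map_fn`, and `batch` are stand-ins for the real `tf.data.Dataset` methods (`from_tensor_slices`, `map`, `batch`).

```python
# Conceptual sketch of a tf.data-style input pipeline in plain Python.
# The real tf.data.Dataset API chains map/batch/shuffle/prefetch the same
# way; the names here are illustrative only.

def from_list(items):
    """Yield elements one at a time, like Dataset.from_tensor_slices."""
    yield from items

def map_fn(dataset, fn):
    """Apply a transformation to every element, like Dataset.map."""
    for item in dataset:
        yield fn(item)

def batch(dataset, size):
    """Group consecutive elements into fixed-size batches, like Dataset.batch."""
    buf = []
    for item in dataset:
        buf.append(item)
        if len(buf) == size:
            yield buf
            buf = []
    if buf:
        yield buf

# Build a pipeline: normalize raw pixel values, then batch them.
pipeline = batch(map_fn(from_list([0, 51, 102, 153, 204, 255]),
                        lambda x: x / 255.0), size=2)
batches = list(pipeline)
print(batches[0])  # [0.0, 0.2]
```

Because each stage is a generator, elements flow through lazily, one at a time, which is the same streaming behavior that makes tf.data pipelines memory-efficient on large datasets.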
Chapter 2. Increase the Diversity of Your Dataset with Data Augmentation
David Paper. Abstract: We guide you in the creation of augmented data experiments to increase the diversity of a training set by applying random (but realistic) transformations. Data augmentation is very useful for small datasets because deep learning models crave a lot of data to perform well.
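The "random but realistic transformation" idea can be shown with the simplest augmentation, a random horizontal flip. Real TensorFlow code would typically use `tf.image` ops or Keras preprocessing layers; this plain-Python sketch only illustrates the concept.

```python
import random

# Conceptual sketch of data augmentation: a random horizontal flip
# applied to a tiny "image" (a list of pixel rows). Real TensorFlow code
# would use tf.image.random_flip_left_right or a Keras preprocessing layer.

def random_flip_left_right(image, rng):
    """With 50% probability, mirror the image by reversing each row."""
    if rng.random() < 0.5:
        return [row[::-1] for row in image]
    return image

rng = random.Random(0)  # seeded so the run is reproducible
image = [[1, 2, 3],
         [4, 5, 6]]

# Each call independently decides whether to flip, so over many epochs
# the model sees both orientations of the same training example.
augmented = [random_flip_left_right(image, rng) for _ in range(4)]
```

The label stays the same while the input varies, which is exactly why augmentation effectively enlarges a small training set.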
Chapter 3. TensorFlow Datasets
David Paper. Abstract: We introduce TensorFlow Datasets by discussing and demonstrating their many facets with code examples. Although TensorFlow Datasets are not ML models, we include this chapter because we use them in many of the chapters in this book. These datasets are created by the TensorFlow team to provide a diverse set of data for practicing ML experiments.
Chapter 4. Deep Learning with TensorFlow Datasets
David Paper. Abstract: In the previous chapter, we demonstrated how to work with TFDS objects such as the Fashion-MNIST and beans datasets, which are small with simple images. In this chapter, we work through two end-to-end deep learning experiments with large and complex TFDS objects.
Chapter 5. Introduction to Tensor Processing Units
David Paper. Abstract: We introduce you to Tensor Processing Units with code examples. A Tensor Processing Unit (TPU) is an application-specific integrated circuit (ASIC) designed to accelerate ML workloads. The TPUs available in TensorFlow are custom-developed from the ground up by the Google Brain team based on its plethora of experience and leadership in the ML community. Google Brain is a deep learning artificial intelligence (AI) research team at Google that researches ways to make machines intelligent for the improvement of people's lives.
Chapter 6. Simple Transfer Learning with TensorFlow Hub
David Paper. Abstract: Transfer learning is the process of creating new learning models by fine-tuning previously trained neural networks. Instead of training a network from scratch, we download a pre-trained open source learning model and fine-tune it for our own purpose. A pre-trained model is one that is created by someone else to solve a similar problem. We can use one of these instead of building our own model. A big advantage is that a pre-trained model has been crafted by experts, so we can be confident that it performs at a high level (in most cases). Another advantage is that we don't have to have a lot of data to use a pre-trained model.
Chapter 7. Advanced Transfer Learning
David Paper. Abstract: We introduce advanced transfer learning with code examples based on several transfer learning architectures. The code examples train learning models with these architectures.
Chapter 8. Stacked Autoencoders
David Paper. Abstract: The first seven chapters focused on supervised learning algorithms. Supervised learning is a subcategory of ML that uses labeled datasets to train algorithms to classify data and predict outcomes accurately. The remaining chapters focus on unsupervised learning algorithms. Unsupervised learning uses ML algorithms to analyze and cluster unlabeled datasets. Such algorithms discover hidden patterns or data groupings without the need for human intervention.
Chapter 9. Convolutional and Variational Autoencoders
David Paper. Abstract: Autoencoders don't typically work well with images unless the images are very small. Convolutional and variational autoencoders, however, work much better than feedforward dense ones with large color images.
Chapter 10. Generative Adversarial Networks
David Paper. Abstract: Generative modeling is an unsupervised learning technique that involves automatically discovering and learning the regularities (or patterns) in input data so that a trained model can generate new examples that plausibly could have been drawn from the original dataset. A popular type of generative model is a generative adversarial network. Generative adversarial networks (GANs) are generative models that create new data instances that resemble the training data.
Chapter 11. Progressive Growing Generative Adversarial Networks
David Paper. Abstract: GANs are effective at generating crisp synthetic images, but are limited in size to about 64 × 64 pixels. A Progressive Growing GAN is an extension of the GAN that enables training generator models to generate large high-quality images up to about 1024 × 1024 pixels (as of this writing). The approach has proven effective at generating high-quality synthetic faces that are startlingly realistic.
Chapter 12. Fast Style Transfer
David Paper. Abstract: Neural style transfer (NST) is a computer vision technique that takes two images, a content image and a style reference image, and blends them together so that the resulting output image retains the core elements of the content image but appears to be painted in the style of the style reference image. The output image from an NST network is called a pastiche. A pastiche is a work of visual art, literature, theater, or music that imitates the style (or character) of the work of one or more other artists. Unlike a parody, a pastiche celebrates rather than mocks the work it imitates.
Chapter 13. Object Detection
David Paper. Abstract: Object detection is an automated computer vision technique for locating instances of objects in digital photographs or videos. Specifically, object detection draws bounding boxes around one or more effective targets located in a still image or video data. An effective target is the object of interest in the image or video data that is being investigated. The effective target (or targets) should be known at the beginning of the task.
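The bounding boxes mentioned above are usually compared against ground truth with intersection over union (IoU), the standard overlap score in object detection. A minimal sketch follows; the `[x_min, y_min, x_max, y_max]` corner format is an assumption for illustration (detection libraries vary in box conventions).

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as [x_min, y_min, x_max, y_max]."""
    # Corners of the overlapping region (if any).
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A predicted box overlapping half of a 10x10 ground-truth box:
# intersection 50, union 150, so IoU = 1/3.
print(iou([0, 0, 10, 10], [5, 0, 15, 10]))
```

Detectors typically count a prediction as correct when its IoU with a ground-truth box exceeds a threshold such as 0.5.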
Chapter 14. An Introduction to Reinforcement Learning
David Paper. Abstract: Reinforcement learning (RL) is an area of machine learning that focuses on teaching intelligent agents how to take actions in an environment in order to maximize cumulative reward. Cumulative reward in RL is the sum of all rewards as a function of the number of training steps.
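The action-reward loop described above can be illustrated with the simplest RL setting, a two-armed bandit solved by an epsilon-greedy agent. This is a conceptual plain-Python sketch, not code from the book; the arm payouts and hyperparameters are made up for illustration.

```python
import random

# Minimal illustration of the RL loop: an epsilon-greedy agent learns
# which arm of a two-armed bandit pays best by balancing exploration
# (random actions) against exploitation (the best arm found so far).

def run_bandit(true_means, steps=2000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    counts = [0] * len(true_means)    # pulls per arm
    values = [0.0] * len(true_means)  # running mean reward per arm
    total_reward = 0.0
    for _ in range(steps):
        if rng.random() < epsilon:            # explore: pick a random arm
            arm = rng.randrange(len(true_means))
        else:                                 # exploit: best current estimate
            arm = values.index(max(values))
        reward = true_means[arm] + rng.gauss(0.0, 0.1)  # noisy payout
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
        total_reward += reward
    return values, total_reward

# Arm 1 pays 0.8 on average, arm 0 only 0.2; the agent should find arm 1.
values, total = run_bandit([0.2, 0.8])
```

The incremental-mean update is the same value-estimation idea that scales up, via neural networks, to the deep RL agents covered in the chapter.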
Backmatter
- Title
- State-of-the-Art Deep Learning Models in TensorFlow
- Written by
- David Paper
- Copyright Year
- 2021
- Publisher
- Apress
- Electronic ISBN
- 978-1-4842-7341-8
- Print ISBN
- 978-1-4842-7340-1
- DOI
- https://doi.org/10.1007/978-1-4842-7341-8