
State-of-the-Art Deep Learning Models in TensorFlow

Modern Machine Learning in the Google Colab Ecosystem

  • 2021
  • Book

About this book

Use TensorFlow 2.x in the Google Colab ecosystem to create state-of-the-art deep learning models guided by hands-on examples. The Colab ecosystem provides a free cloud service with easy access to on-demand GPU (and TPU) hardware acceleration for fast execution of the models you learn to build. This book teaches you state-of-the-art deep learning models in an applied manner, with the only requirement being an Internet connection. The Colab ecosystem provides everything else you need, including Python, TensorFlow 2.x, GPU and TPU support, and Jupyter Notebooks. The book begins with an example-driven approach to building input pipelines that feed all machine learning models. You will learn, step by step, how to provision a workspace within the Colab ecosystem to enable the construction of effective input pipelines. From there, you will progress into data augmentation techniques and TensorFlow Datasets to gain a deeper understanding of how to work with complex datasets. You will then learn about Tensor Processing Units (TPUs) and transfer learning, followed by state-of-the-art deep learning models, including autoencoders, generative networks, fast style transfer, object detection, and reinforcement learning. Author Dr. Paper provides all the applied math, programming, and concepts you need to master the content. Examples range from relatively simple to very complex where warranted. Examples are carefully explained, concise, accurate, and complete. Care is taken to walk you through each topic through clear examples written in Python that you can try out and experiment with in the Google Colab ecosystem from the comfort of your home or office.

What You Will Learn

  • Take advantage of the built-in support of the Google Colab ecosystem
  • Work with TensorFlow Datasets
  • Build input pipelines to feed state-of-the-art deep learning models
  • Create state-of-the-art deep learning models with clean and reliable Python code
  • Use pre-trained deep learning models to solve complex machine learning tasks
  • Build a simple environment to teach an intelligent agent to make automated decisions

Who This Book Is For

Readers who want to learn the highly popular TensorFlow deep learning platform, those who wish to master the basics of state-of-the-art deep learning models, and those who want to build competence with a modern cloud-service tool such as Google Colab.

Table of contents

  1. Frontmatter

  2. Chapter 1. Build TensorFlow Input Pipelines

    David Paper
    Abstract
    We introduce you to TensorFlow input pipelines with the tf.data API, which enables you to build complex input pipelines from simple, reusable pieces. Input pipelines are the lifeblood of any deep learning experiment because learning models expect data in a TensorFlow consumable form. It is very easy to create high-performance pipelines with the tf.data.Dataset abstraction (a component of the tf.data API) because it represents a sequence of elements from a dataset in a simple format.
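The tf.data pattern this chapter describes can be sketched in a few lines. The dataset contents below are made-up placeholder values, not data from the book; the pipeline stages (shuffle, batch, prefetch) are the reusable pieces the abstract refers to.

```python
import tensorflow as tf

# Illustrative only: a tiny in-memory dataset standing in for real training data.
features = tf.constant([[1.0], [2.0], [3.0], [4.0]])
labels = tf.constant([0, 1, 0, 1])

# Build a pipeline from simple, reusable pieces: source -> shuffle -> batch -> prefetch.
dataset = (
    tf.data.Dataset.from_tensor_slices((features, labels))
    .shuffle(buffer_size=4, seed=42)   # randomize sample order
    .batch(2)                          # group samples into batches of 2
    .prefetch(tf.data.AUTOTUNE)        # overlap input preparation with training
)

for batch_features, batch_labels in dataset:
    print(batch_features.shape, batch_labels.shape)
```

Each stage returns a new `tf.data.Dataset`, which is why the pieces compose so cleanly.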
  3. Chapter 2. Increase the Diversity of Your Dataset with Data Augmentation

    David Paper
    Abstract
    We guide you in the creation of augmented data experiments to increase the diversity of a training set by applying random (but realistic) transformations. Data augmentation is very useful for small datasets because deep learning models crave a lot of data to perform well.
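A minimal sketch of the idea, assuming `tf.image` random ops as the transformation source (the specific ops and parameter values here are illustrative, not taken from the chapter):

```python
import tensorflow as tf

# A single dummy 32x32 RGB "image"; in practice this would come from a dataset.
image = tf.random.uniform([32, 32, 3], seed=7)

def augment(img):
    # Apply random (but realistic) transformations to diversify a small dataset.
    img = tf.image.random_flip_left_right(img)
    img = tf.image.random_brightness(img, max_delta=0.2)
    img = tf.image.random_contrast(img, lower=0.8, upper=1.2)
    return tf.clip_by_value(img, 0.0, 1.0)  # keep pixel values in a valid range

augmented = augment(image)
print(augmented.shape)  # shape is preserved: (32, 32, 3)
```

Mapping `augment` over a training `tf.data.Dataset` yields a different random variant of each image every epoch.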
  4. Chapter 3. TensorFlow Datasets

    David Paper
    Abstract
    We introduce TensorFlow Datasets by discussing and demonstrating their many facets with code examples. Although TensorFlow Datasets are not ML models, we include this chapter because we use them in many of the chapters in this book. These datasets are created by the TensorFlow team to provide a diverse set of data for practicing ML experiments.
  5. Chapter 4. Deep Learning with TensorFlow Datasets

    David Paper
    Abstract
    In the previous chapter, we demonstrated how to work with TFDS objects. In this chapter, we work through two end-to-end deep learning experiments with large and complex TFDS objects. The Fashion-MNIST and beans datasets are small with simple images.
  6. Chapter 5. Introduction to Tensor Processing Units

    David Paper
    Abstract
    We introduce you to Tensor Processing Units with code examples. A Tensor Processing Unit (TPU) is an application-specific integrated circuit (ASIC) designed to accelerate ML workloads. The TPUs available in TensorFlow are custom-developed from the ground up by the Google Brain team based on its wealth of experience and leadership in the ML community. Google Brain is a deep learning artificial intelligence (AI) research team at Google that researches ways to make machines intelligent in order to improve people’s lives.
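A common pattern for using TPUs in Colab is to detect one if attached and fall back to the default strategy otherwise, so the same notebook runs on CPU, GPU, or TPU. This is a sketch of that detection idiom, not code from the chapter:

```python
import tensorflow as tf

# Try to connect to a TPU (available in Colab when a TPU runtime is selected);
# fall back to the default strategy so the code still runs on CPU/GPU.
try:
    resolver = tf.distribute.cluster_resolver.TPUClusterResolver()
    tf.config.experimental_connect_to_cluster(resolver)
    tf.tpu.experimental.initialize_tpu_system(resolver)
    strategy = tf.distribute.TPUStrategy(resolver)
except Exception:  # no TPU attached
    strategy = tf.distribute.get_strategy()

print("Replicas in sync:", strategy.num_replicas_in_sync)

# Models built inside the strategy scope are replicated across the devices.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(4,)),
        tf.keras.layers.Dense(1),
    ])
```

On a Colab TPU runtime `num_replicas_in_sync` is typically 8; on CPU it is 1.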
  7. Chapter 6. Simple Transfer Learning with TensorFlow Hub

    David Paper
    Abstract
    Transfer learning is the process of creating new learning models by fine-tuning previously trained neural networks. Instead of training a network from scratch, we download a pre-trained open source learning model and fine-tune it for our own purpose. A pre-trained model is one that is created by someone else to solve a similar problem. We can use one of these instead of building our own model. A big advantage is that a pre-trained model has been crafted by experts, so we can be confident that it performs at a high level (in most cases). Another advantage is that we don’t have to have a lot of data to use a pre-trained model.
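The chapter uses TensorFlow Hub; the same freeze-the-base, train-a-new-head pattern can be sketched with `tf.keras.applications` instead, since that stays self-contained. The input size, head width, and `weights=None` are illustrative choices here; pass `weights="imagenet"` to actually download the pre-trained weights.

```python
import tensorflow as tf

# Transfer-learning pattern: reuse a pre-trained base, freeze it, and train a
# small new head. weights=None keeps this sketch offline-friendly; use
# weights="imagenet" to load the pre-trained feature extractor.
base = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False, weights=None
)
base.trainable = False  # freeze the (pre-trained) feature extractor

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation="softmax"),  # new task-specific head
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
print(model.output_shape)  # (None, 5)
```

Only the pooling layer and the 5-way head are trained; the frozen base supplies the expert-crafted features the abstract mentions.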
  8. Chapter 7. Advanced Transfer Learning

    David Paper
    Abstract
    We introduce advanced transfer learning with code examples based on several transfer learning architectures. The code examples train learning models with these architectures.
  9. Chapter 8. Stacked Autoencoders

    David Paper
    Abstract
    The first seven chapters focused on supervised learning algorithms. Supervised learning is a subcategory of ML that uses labeled datasets to train algorithms to classify data and predict outcomes accurately. The remaining chapters focus on unsupervised learning algorithms. Unsupervised learning uses ML algorithms to analyze and cluster unlabeled datasets. Such algorithms discover hidden patterns or data groupings without the need for human intervention.
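The stacked-autoencoder idea can be sketched as two small Keras models trained end to end on a reconstruction loss. The 784-dimensional input and the layer widths below are illustrative, not taken from the book.

```python
import tensorflow as tf

# Encoder: compress a 784-dim input down to a 32-dim latent code.
encoder = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),   # latent code
])
# Decoder: reconstruct the 784-dim input from the latent code.
decoder = tf.keras.Sequential([
    tf.keras.Input(shape=(32,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(784, activation="sigmoid"),
])

inputs = tf.keras.Input(shape=(784,))
autoencoder = tf.keras.Model(inputs, decoder(encoder(inputs)))
# Unsupervised: the reconstruction is compared against the input itself,
# so no labels are needed (fit with x as both input and target).
autoencoder.compile(optimizer="adam", loss="mse")
print(autoencoder.output_shape)  # (None, 784)
```

Training would call `autoencoder.fit(x, x, ...)`, which is what makes this unsupervised.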
  10. Chapter 9. Convolutional and Variational Autoencoders

    David Paper
    Abstract
    Autoencoders don’t typically work well with images unless they are very small. But convolutional and variational autoencoders work much better than feedforward dense ones with large color images.
  11. Chapter 10. Generative Adversarial Networks

    David Paper
    Abstract
    Generative modeling is an unsupervised learning technique that involves automatically discovering and learning the regularities (or patterns) in input data so that a trained model can generate new examples that plausibly could have been drawn from the original dataset. A popular type of generative model is a generative adversarial network. Generative adversarial networks (GANs) are generative models that create new data instances that resemble the training data.
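The two adversaries can be sketched as a pair of small Keras models; the layer sizes and the 28×28 output shape below are illustrative assumptions, and the training loop that pits them against each other is omitted.

```python
import tensorflow as tf

latent_dim = 16  # size of the random noise vector (illustrative)

# Generator: maps random noise to fake samples (a flattened 28x28 "image").
generator = tf.keras.Sequential([
    tf.keras.Input(shape=(latent_dim,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(28 * 28, activation="sigmoid"),
])
# Discriminator: scores samples as real or fake (a single logit).
discriminator = tf.keras.Sequential([
    tf.keras.Input(shape=(28 * 28,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),
])

noise = tf.random.normal([4, latent_dim], seed=0)
fake = generator(noise)        # generator proposes new data instances
scores = discriminator(fake)   # discriminator judges them
print(fake.shape, scores.shape)
```

During training the generator is updated to raise these scores while the discriminator is updated to lower them, which is the adversarial game.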
  12. Chapter 11. Progressive Growing Generative Adversarial Networks

    David Paper
    Abstract
    GANs are effective at generating crisp synthetic images, but are limited in size to about 64 × 64 pixels. A Progressive Growing GAN is an extension of the GAN that enables training generator models to generate large high-quality images up to about 1024 × 1024 pixels (as of this writing). The approach has proven effective at generating high-quality synthetic faces that are startlingly realistic.
  13. Chapter 12. Fast Style Transfer

    David Paper
    Abstract
    Neural style transfer (NST) is a computer vision technique that takes two images – a content image and a style reference image – and blends them together so that the resulting output image retains the core elements of the content image but appears to be painted in the style of the style reference image. The output image from a NST network is called a pastiche. A pastiche is a work of visual art, literature, theater or music that imitates the style (or character) of the work of one or more other artists. Unlike a parody, a pastiche celebrates rather than mocks the work it imitates.
  14. Chapter 13. Object Detection

    David Paper
    Abstract
    Object detection is an automated computer vision technique for locating instances of objects in digital photographs or videos. Specifically, object detection draws bounding boxes around one or more effective targets located in a still image or video data. An effective target is the object of interest in the image or video data that is being investigated. The effective target (or targets) should be known at the beginning of the task.
  15. Chapter 14. An Introduction to Reinforcement Learning

    David Paper
    Abstract
    Reinforcement learning (RL) is an area of machine learning that focuses on teaching intelligent agents how to take actions in an environment in order to maximize cumulative reward. Cumulative reward in RL is the sum of all rewards as a function of the number of training steps.
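The definition of cumulative reward given above can be made concrete with a few lines of plain Python; the reward values are made-up numbers for illustration.

```python
# Cumulative reward: the running sum of per-step rewards, tracked as a
# function of the training step. Reward values here are illustrative.
rewards = [1.0, 0.0, 2.0, 1.0]

cumulative = []
total = 0.0
for r in rewards:
    total += r
    cumulative.append(total)

print(cumulative)  # [1.0, 1.0, 3.0, 4.0]
```

An agent that maximizes the final entry of this curve is maximizing cumulative reward.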
  16. Backmatter

Title
State-of-the-Art Deep Learning Models in TensorFlow
Authored by
David Paper
Copyright year
2021
Publisher
Apress
Electronic ISBN
978-1-4842-7341-8
Print ISBN
978-1-4842-7340-1
DOI
https://doi.org/10.1007/978-1-4842-7341-8

