
2021 | Book

Deep Learning for Hydrometeorology and Environmental Science


About this Book

This book provides a step-by-step methodology and derivation of deep learning algorithms such as Long Short-Term Memory (LSTM) and Convolutional Neural Network (CNN), with particular attention to parameter estimation via back-propagation, together with examples using real datasets from hydrometeorology (e.g., streamflow and temperature) and environmental science (e.g., water quality).

Deep learning is a branch of machine learning based on artificial neural networks. Increasing data availability and computing power have expanded the applications of deep learning to hydrometeorological and environmental fields. However, books that specifically focus on applications to these fields are limited.

Most deep learning books present theoretical background and mathematics, but examples with real data and step-by-step explanations of the algorithms in hydrometeorology and environmental science are very rare.

This book focuses on explaining deep learning techniques and their applications to hydrometeorological and environmental studies with real hydrological and environmental data. It covers the major deep learning algorithms, Long Short-Term Memory (LSTM) and Convolutional Neural Network (CNN), as well as the conventional artificial neural network model.

Table of Contents

Frontmatter
Chapter 1. Introduction
Abstract
Deep learning has been widely employed for analysis and forecasting in various fields. This chapter presents a brief introduction to deep learning, including its definition and pros and cons, followed by recent applications of deep learning models in hydrological and environmental fields. The structure of the remaining chapters of this book is also explained.
Taesam Lee, Vijay P. Singh, Kyung Hwa Cho
Chapter 2. Mathematical Background
Abstract
This chapter presents the fundamental mathematical background for a deep learning model. Simple and multiple linear regression models are explained, including the definition of error terms and the parameter estimation procedure, since both are used similarly in deep learning models. The basic concept of the time series model is also explained; this part is mainly referred to in the LSTM chapter.
Taesam Lee, Vijay P. Singh, Kyung Hwa Cho
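The parameter estimation idea summarized in this abstract can be sketched in a few lines; the data below are synthetic and the variable names are illustrative, not taken from the book.

```python
import numpy as np

# Synthetic data: y = 2x + 1 plus noise (illustrative only)
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, size=x.size)

# Ordinary least squares via the normal equations:
# beta = (X^T X)^{-1} X^T y, with a column of ones for the intercept
X = np.column_stack([np.ones_like(x), x])
beta = np.linalg.solve(X.T @ X, X.T @ y)
intercept, slope = beta

# Error terms (residuals), as defined for regression models
residuals = y - X @ beta
```

The same least-squares view of parameters and errors carries over to neural network training, where the estimation is iterative rather than closed-form.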
Chapter 3. Data Preprocessing
Abstract
Before applying a neural network model, the data must be preprocessed. Data normalization is needed to remove differences in variability between inputs and outputs; it also simplifies the parameter range of a network model. Furthermore, the data should be split for different purposes, namely training, validation, and testing. In this chapter, data normalization and data splitting are explained in detail.
Taesam Lee, Vijay P. Singh, Kyung Hwa Cho
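As a brief sketch of the normalization and splitting steps described above (the function names and 60/20/20 split ratios are illustrative choices, not taken from the book):

```python
import numpy as np

def minmax_normalize(x):
    """Rescale data to [0, 1] so inputs and outputs share a comparable range."""
    return (x - x.min()) / (x.max() - x.min())

def split_data(x, train=0.6, val=0.2):
    """Split a sequence into training, validation, and testing sets."""
    n = len(x)
    n_train = int(n * train)
    n_val = int(n * val)
    return x[:n_train], x[n_train:n_train + n_val], x[n_train + n_val:]

data = np.arange(100, dtype=float)
scaled = minmax_normalize(data)
train, val, test = split_data(scaled)
```

For time series such as streamflow, a sequential split like this one is common so that the test period follows the training period in time.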
Chapter 4. Neural Network
Abstract
A deep learning model is composed of several layers of neural networks. Therefore, the basic concepts and terminology of a neural network are introduced first. The simplest neural network model is presented and used later in this book, followed by a full neural network model. The parameter estimation procedure employing backward propagation is also explained.
Taesam Lee, Vijay P. Singh, Kyung Hwa Cho
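A minimal sketch of the forward pass through a small fully connected network; the layer sizes and weight values here are illustrative, not trained parameters from the book.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A tiny network: 2 inputs -> 3 hidden nodes -> 1 output.
W1 = np.array([[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]])  # hidden weights (3x2)
b1 = np.zeros(3)
W2 = np.array([[0.7, 0.8, 0.9]])                     # output weights (1x3)
b2 = np.zeros(1)

def forward(x):
    h = sigmoid(W1 @ x + b1)     # hidden-layer activations
    return sigmoid(W2 @ h + b2)  # network output in (0, 1)

y = forward(np.array([1.0, 0.5]))
```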
Chapter 5. Training a Neural Network
Abstract
A neural network contains a number of parameters, also called weights, and training is the procedure of estimating those parameters. In this chapter, the training procedure of a neural network is described based on the gradient descent method and the backpropagation algorithm.
Taesam Lee, Vijay P. Singh, Kyung Hwa Cho
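The gradient descent and backpropagation idea can be illustrated on a single sigmoid neuron; the toy data, learning rate, and iteration count below are illustrative assumptions, not values from the book.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy dataset: classify small x as 0 and large x as 1.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])

w, b, lr = 0.0, 0.0, 0.5
for _ in range(5000):
    z = X[:, 0] * w + b
    yhat = sigmoid(z)
    # Backpropagation for squared-error loss L = mean((yhat - y)^2):
    # dL/dz = 2*(yhat - y) * sigmoid'(z), with sigmoid'(z) = yhat*(1 - yhat)
    dz = 2.0 * (yhat - y) * yhat * (1.0 - yhat) / len(y)
    w -= lr * np.sum(dz * X[:, 0])  # gradient descent step for the weight
    b -= lr * np.sum(dz)            # and for the bias

preds = sigmoid(X[:, 0] * w + b)    # fitted probabilities after training
```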
Chapter 6. Updating Weights
Abstract
Training a neural network means updating the weights to minimize a specified loss function, traditionally with the gradient descent method. However, the number of weights grows dramatically, especially in deep learning models. In recent years, several weight-updating methods have been developed to improve the speed of convergence and to find a better trajectory toward the optimum of the employed loss function. In this chapter, those weight-updating methods are explained.
Taesam Lee, Vijay P. Singh, Kyung Hwa Cho
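A minimal sketch of two common update rules, plain gradient descent and gradient descent with momentum, on the simple loss f(w) = w², whose gradient is 2w; the step sizes are illustrative.

```python
def grad(w):
    return 2.0 * w  # gradient of f(w) = w^2

w_gd, w_mom, v = 5.0, 5.0, 0.0
lr, beta = 0.1, 0.9
for _ in range(50):
    w_gd -= lr * grad(w_gd)          # plain gradient descent update
    v = beta * v + lr * grad(w_mom)  # accumulate a velocity term
    w_mom -= v                       # momentum update
```

Adaptive methods such as Adam extend this idea by also rescaling each weight's step with running gradient statistics.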
Chapter 7. Improving Model Performance
Abstract
In order to improve the performance of a neural network model, a number of approaches have been studied. In this chapter, minibatch training and k-fold cross-validation are explained. The basic idea of these two methods is to control the dataset, since repeated use of the same dataset for training and validation may result in overfitting. Furthermore, regularization of neural network training through L-norm regularization and dropout of hidden nodes is explained, also to avoid overfitting.
Taesam Lee, Vijay P. Singh, Kyung Hwa Cho
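Two of the ideas above, k-fold cross-validation and an L2-norm penalty, can be sketched as follows; the function names are illustrative, not from the book.

```python
import numpy as np

def kfold_indices(n, k):
    """Yield (train_idx, val_idx) pairs for k-fold cross-validation."""
    idx = np.arange(n)
    folds = np.array_split(idx, k)
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, val

def l2_penalty(weights, lam):
    """L2-norm regularization term added to the loss to discourage large weights."""
    return lam * np.sum(weights ** 2)

splits = list(kfold_indices(10, 5))
```

Each of the k folds serves once as the validation set, so every observation is validated exactly once.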
Chapter 8. Advanced Neural Network Algorithms
Abstract
In this chapter, recently developed neural network algorithms are introduced, including the extreme learning machine and the autoencoder. These algorithms are widely adopted in deep learning models. In particular, the autoencoder can be used to shrink the dimension of inputs and outputs as well as of hidden nodes.
Taesam Lee, Vijay P. Singh, Kyung Hwa Cho
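The dimension-shrinking role of the autoencoder can be sketched with a minimal linear encoder/decoder pair; the weights below are random and untrained, so the reconstruction is approximate at best, and all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
W_enc = rng.normal(size=(2, 4))  # encoder: 4 -> 2 (dimension shrinking)
W_dec = rng.normal(size=(4, 2))  # decoder: 2 -> 4 (reconstruction)

def encode(x):
    return W_enc @ x

def decode(code):
    return W_dec @ code

x = np.array([1.0, 0.5, -0.3, 2.0])
code = encode(x)       # compressed 2-dimensional representation
x_hat = decode(code)   # reconstructed 4-dimensional output
```

Training would adjust the encoder and decoder weights to minimize the reconstruction error between `x` and `x_hat`.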
Chapter 9. Deep Learning for Time Series
Abstract
One of the major applications of deep learning models is forecasting the future. In recent years, time series forecasting with deep learning models has been developed and applied in a number of fields. Recurrent neural network (RNN) models allow better forecasting of future values, and long short-term memory (LSTM) is a breakthrough that overcomes the shortcomings of earlier RNN models. These algorithms are explained in detail in this chapter.
Taesam Lee, Vijay P. Singh, Kyung Hwa Cho
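One standard formulation of the LSTM cell, which underlies this chapter, can be sketched in plain NumPy; the random weights and dimensions below are illustrative, not trained values.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step. W, U, b stack the input, forget, output, and
    candidate transformations, each of hidden size n."""
    n = h_prev.size
    z = W @ x + U @ h_prev + b
    i = sigmoid(z[0*n:1*n])   # input gate
    f = sigmoid(z[1*n:2*n])   # forget gate
    o = sigmoid(z[2*n:3*n])   # output gate
    g = np.tanh(z[3*n:4*n])   # candidate cell state
    c = f * c_prev + i * g    # cell state carries long-term memory
    h = o * np.tanh(c)        # hidden state / output
    return h, c

rng = np.random.default_rng(0)
n_in, n_hid = 3, 4
W = rng.normal(scale=0.1, size=(4 * n_hid, n_in))
U = rng.normal(scale=0.1, size=(4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)

h, c = np.zeros(n_hid), np.zeros(n_hid)
for t in range(5):  # run the cell over a short random input sequence
    h, c = lstm_step(rng.normal(size=n_in), h, c, W, U, b)
```

The forget gate `f` is what lets the cell retain or discard long-range information, the key improvement over the plain RNN.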
Chapter 10. Deep Learning for Spatial Datasets
Abstract
Among recent developments of deep learning models, the ability to handle spatial datasets and images is one of the most significant contributions. In this chapter, the convolutional neural network (CNN), which can analyze spatial datasets, is described. The training procedure of a CNN is illustrated with a simple example.
Taesam Lee, Vijay P. Singh, Kyung Hwa Cho
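The convolution operation at the heart of a CNN can be sketched as follows; the image and the horizontal-difference filter are illustrative choices, not an example from the book.

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2-D convolution (cross-correlation, as used in CNNs):
    slide the kernel over the image and sum elementwise products."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(image[r:r+kh, c:c+kw] * kernel)
    return out

image = np.arange(16, dtype=float).reshape(4, 4)
edge_kernel = np.array([[1.0, -1.0]])  # horizontal difference filter
feature_map = conv2d(image, edge_kernel)
```

In a trained CNN, the kernel values themselves are the weights estimated during training, so each filter learns which spatial pattern to detect.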
Chapter 11. Tensorflow and Keras Programming for Deep Learning
Abstract
Tensorflow is an end-to-end open-source platform for machine learning containing a comprehensive, flexible ecosystem of tools, libraries, and community resources (https://www.tensorflow.org/). It provides multiple levels of abstraction so that users can choose the right one for their task. The high-level Keras API can be used to build and train models, making it easy to get started with Tensorflow; Keras allows employing Tensorflow without losing its flexibility and capability. In the following, two applications (i.e., temporal and spatial deep learning) are presented to illustrate how to use Keras with Python.
Taesam Lee, Vijay P. Singh, Kyung Hwa Cho
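A minimal Keras sketch of the temporal case, assuming TensorFlow 2.x is installed; the layer sizes, sequence length, and random data are illustrative placeholders, not the book's case-study setup.

```python
import numpy as np
from tensorflow import keras

# High-level Keras API: stack layers into a Sequential model.
model = keras.Sequential([
    keras.layers.Input(shape=(10, 1)),  # 10 time steps, 1 feature
    keras.layers.LSTM(8),               # temporal deep learning layer
    keras.layers.Dense(1),              # single forecast value
])
model.compile(optimizer="adam", loss="mse")

# One short training call on random data, just to show the workflow.
x = np.random.normal(size=(4, 10, 1))
y = np.random.normal(size=(4, 1))
model.fit(x, y, epochs=1, verbose=0)
pred = model.predict(x, verbose=0)
```

For the spatial case, the `LSTM` layer would be replaced by `Conv2D` and pooling layers, with images as inputs.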
Chapter 12. Hydrometeorological Applications of Deep Learning
Abstract
Deep learning models have been applied to hydrometeorological problems, and a few such applications are explained in this chapter. In the field of hydrometeorology, time series deep learning models are mainly employed. The development procedure of a time series deep learning model for stochastic simulation, producing a long sequence that mimics the historical series, is explained. Furthermore, a case study on daily maximum temperature with an LSTM model is presented.
Taesam Lee, Vijay P. Singh, Kyung Hwa Cho
Chapter 13. Environmental Applications of Deep Learning
Abstract
A case study of the CNN, a spatial deep learning model, is presented in this chapter for analyzing water quality with remote sensing data. Airborne remote sensing of cyanobacteria with multispectral/hyperspectral sensors provides the input data, and the corresponding water quality measurements serve as the output data.
Taesam Lee, Vijay P. Singh, Kyung Hwa Cho
Metadata
Title
Deep Learning for Hydrometeorology and Environmental Science
Authors
Prof. Taesam Lee
Prof. Dr. Vijay P. Singh
Kyung Hwa Cho
Copyright Year
2021
Electronic ISBN
978-3-030-64777-3
Print ISBN
978-3-030-64776-6
DOI
https://doi.org/10.1007/978-3-030-64777-3