
2020 | Book

Machine Learning and Artificial Intelligence


About this Book

This book provides comprehensive coverage of combined Artificial Intelligence (AI) and Machine Learning (ML) theory and applications. Rather than looking at the field from only a theoretical or only a practical perspective, it unifies both perspectives to give a holistic understanding. The first part introduces the concepts of AI and ML, their origin, and their current state. The second and third parts delve into the conceptual and theoretical aspects of static and dynamic ML techniques. The fourth part describes the practical applications where the presented techniques can be applied. The fifth part introduces the reader to some of the implementation strategies for solving real-life ML problems.

The book is appropriate for students in graduate and upper-undergraduate courses, as well as for researchers and professionals. It makes minimal use of mathematics to keep the topics intuitive and accessible.

- Presents a full reference to artificial intelligence and machine learning techniques, in theory and application
- Provides a guide to AI and ML with minimal use of mathematics to make the topics more intuitive and accessible
- Connects all ML and AI techniques to applications and introduces implementations

Table of Contents

Frontmatter

Part I

Frontmatter
Chapter 1. Introduction to AI and ML
Abstract
In this chapter, I will introduce the concepts of artificial intelligence and machine learning and discuss how these topics have evolved over the last several decades. I will also lay out the outline of the book and describe how its parts are organized and how they unfold the topics in sequence. This layout will help readers plan how to get the most out of the book. Readers who want to focus on specific aspects can skip directly to the corresponding parts. Those coming to the subject with a fresh perspective are advised to read sequentially, while readers who are already familiar with certain aspects of the area and want to expand their knowledge of specific topics are free to jump between parts.
Ameet V Joshi
Chapter 2. Essential Concepts in Artificial Intelligence and Machine Learning
Abstract
This chapter is a treatise on various fundamental principles that lie at the heart of machine learning theory and are often used in practice, implicitly or explicitly. A newcomer to machine learning will find this chapter of immense value and will feel a lot more confident in understanding the multitude of concepts built on top of these principles. I strongly advise readers not to skip this chapter. All of these concepts will be revisited multiple times in the rest of the book, as well as in many other books and articles the reader will encounter in the field.
Ameet V Joshi
Chapter 3. Data Understanding, Representation, and Visualization
Abstract
This chapter introduces the concepts of understanding, representing, and visualizing data. These are essential steps before one starts to build a machine learning model or an artificially intelligent application. Although these concepts might appear trivial, once the dimensionality of the data exceeds 3 they quickly become non-trivial and difficult. This chapter introduces dimensionality reduction techniques such as principal component analysis and linear discriminant analysis for better visualization of high-dimensional data. Better visualization gives the user insight into the data distribution and the relation of the various features to each other and to the output. These insights are valuable when making choices in the subsequent machine learning pipeline.
Ameet V Joshi
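The dimensionality reduction the abstract mentions can be sketched in a few lines. The following is an illustrative example, not taken from the book, using scikit-learn's PCA on the classic Iris data set (the choice of data set and number of components are assumptions for the example):

```python
# Illustrative sketch (not from the book): projecting 4-D data down to 2-D
# with principal component analysis so it can be visualized in a scatter plot.
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X, y = load_iris(return_X_y=True)     # 150 samples, 4 features
pca = PCA(n_components=2)             # keep the 2 directions of largest variance
X_2d = pca.fit_transform(X)           # project the data onto those directions

print(X_2d.shape)                     # now plottable in 2-D
print(pca.explained_variance_ratio_)  # fraction of variance each component keeps
```

Inspecting `explained_variance_ratio_` tells you how much information the 2-D picture preserves, which is exactly the kind of insight into the data distribution the chapter discusses.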

Part II

Frontmatter
Chapter 4. Linear Methods
Abstract
In this chapter, we will focus on the simplest methods in machine learning, viz. linear methods. Even though linear methods are relatively easy to understand, they illustrate the fundamental concepts in machine learning. Linear methods also represent a nice cross section of supervised and unsupervised methods. In this chapter, we will study these concepts followed by linear regression. Regularization techniques also mark a crucial aspect of machine learning, and we will study them in the context of linear methods. Then we will see how these methods generalize through nonlinear link functions.
Ameet V Joshi
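The combination of linear regression and regularization described above can be sketched with the closed-form ridge solution. This is an illustrative example under assumed synthetic data, not code from the book:

```python
# Illustrative sketch (not from the book): linear regression fit by the
# closed-form least-squares solution, with a ridge regularization term.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))            # 100 samples, 3 features
true_w = np.array([2.0, -1.0, 0.5])      # weights we hope to recover
y = X @ true_w + 0.01 * rng.normal(size=100)  # targets with a little noise

lam = 0.1  # ridge penalty; lam = 0 reduces this to plain least squares
# w = (X^T X + lam * I)^{-1} X^T y
w = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)
print(np.round(w, 2))  # close to the true weights
```

The penalty `lam` shrinks the weights slightly toward zero, trading a little bias for stability, which is the essence of the regularization discussed in the chapter.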
Chapter 5. Perceptron and Neural Networks
Abstract
In this chapter we will study the classical theory of neural networks based on the multilayer perceptron. We will learn the architecture of neural networks and the feedforward operation, and about the activation functions and their importance. We will learn different methods to train neural networks and compare their advantages and disadvantages. Then we will see a different type of architecture in the form of radial basis function networks and understand its conceptual interpretation. We will conclude the chapter with the concepts of overfitting and regularization.
Ameet V Joshi
Chapter 6. Decision Trees
Abstract
Decision trees take a fundamentally different approach to machine learning compared to options like neural networks or support vector machines. Those approaches deal with strictly numerical data that may increase or decrease monotonically, and the equations that define them are designed to work only when the data is numerical. The theory of decision trees, however, does not rely on the assumption of numerical data. In this chapter, we will study the theory of decision trees along with some advanced topics, such as ensemble methods. We will focus on bagging and boosting as the two main types of ensemble methods and learn how they work and what their advantages and disadvantages are.
Ameet V Joshi
Chapter 7. Support Vector Machines
Abstract
In this chapter we are going to study the concept of support vector machines (SVMs) as developed by Vapnik and others. The concept was first proposed as an alternative to neural networks, at a time when neural networks were not living up to the grand expectations they had come with. The SVM proposed a very targeted mathematical approach to finding the optimal solution for classification or regression. We will first study the original SVM theory, which solves the problem of linear classification. Then we will see how it can be generalized to nonlinear problems with the use of kernels, and how it is extended to solve regression problems. The theory of SVMs offered an elegant solution to optimization and generalization and, more importantly, was extremely successful in getting results that neural-network-based methods could only hope for at the time.
Ameet V Joshi
Chapter 8. Probabilistic Models
Abstract
In this chapter we will study a new type of algorithm based on probability and statistics. We will study the two fundamental approaches in the form of discriminative and generative models. Within the discriminative models we will study the Bayesian approach and the maximum likelihood approach, and we will derive the solution of the same problem using both approaches to illustrate their differences, advantages, and disadvantages. Then we will study the probability density functions and cumulative distribution functions of some commonly used distributions. Although it might feel repetitive and unimportant to study this wide variety of distributions one after another, I highly recommend going through them nonetheless. These distributions and their specific properties are quite important in understanding the variety of ways in which one can expect the data to be distributed. This knowledge can be invaluable in data exploration.
Ameet V Joshi
Chapter 9. Dynamic Programming and Reinforcement Learning
Abstract
In this chapter we will study dynamic programming. Starting with the fundamental equation of dynamic programming as defined by Bellman, we will dive deep into its generalization and understand the class of problems that can be solved within the dynamic programming framework. Then we will study reinforcement learning, a subcategory of dynamic programming, in detail. We will study the concepts of exploration and exploitation and the optimal tradeoff between them needed to achieve the best performance. We will also look at some variations of reinforcement learning in the form of Q-learning and SARSA.
Ameet V Joshi
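The exploration/exploitation tradeoff and the Q-learning variation mentioned above can be sketched on a toy problem. The following is an illustrative example, not from the book; the tiny chain environment, its reward of 1 at the last state, and all hyperparameters are assumptions made up for the sketch:

```python
# Illustrative sketch (not from the book): tabular Q-learning with
# epsilon-greedy exploration on a 4-state deterministic chain.
import random

n_states, n_actions = 4, 2   # actions: 0 = move left, 1 = move right
goal = n_states - 1
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, eps = 0.5, 0.9, 0.1   # learning rate, discount, exploration rate

def step(s, a):
    """Deterministic transition; reaching the last state pays reward 1."""
    s2 = min(s + 1, goal) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == goal else 0.0)

random.seed(0)
for _ in range(500):                 # episodes
    s = 0
    while s != goal:
        # epsilon-greedy: usually exploit the best known action, sometimes explore
        if random.random() < eps:
            a = random.randrange(n_actions)
        else:
            a = max(range(n_actions), key=lambda b: Q[s][b])
        s2, r = step(s, a)
        # Q-learning update: bootstrap from the greedy value of the next state
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

print([round(max(q), 2) for q in Q[:goal]])  # state values grow toward the goal
```

Because Q-learning bootstraps from `max(Q[s2])` regardless of the action actually taken, it is off-policy; SARSA, by contrast, would bootstrap from the action the epsilon-greedy policy actually selects next.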
Chapter 10. Evolutionary Algorithms
Abstract
This chapter introduces the concepts of evolutionary algorithms, which are based on Darwin's theory of evolution by natural selection. All the algorithms described in this chapter are conceptually based on this idea; each interprets it in a slightly different manner and proposes a different framework to solve certain types of problems. Specifically, we will discuss the following algorithms: (1) genetic algorithms, (2) simulated annealing, (3) ant colony optimization, and (4) swarm intelligence. These algorithms use biologically inspired techniques to improve the convergence of optimization when the problem is almost impossible to solve completely using most of the techniques described in the other chapters.
Ameet V Joshi
Chapter 11. Time Series Models
Abstract
In this chapter we introduce time series models in machine learning. These models differ in principle from most other models in that the data they work on is dynamic and changes as a function of time. We will study some non-probabilistic techniques used to process such data, including ARMA and ARIMA, along with probabilistic techniques like hidden Markov models (HMMs) and conditional random fields (CRFs). Handling dynamic data is fundamentally different from dealing with static data and needs a whole new perspective. In some cases dynamic data can be treated as static by taking snapshots of the data at specific times. However, this approach only helps to a certain extent, and to solve problems with dynamic data the user ultimately needs to employ one of the techniques described here, or the deep neural networks described in the chapter dedicated to that topic.
Ameet V Joshi
Chapter 12. Deep Learning
Abstract
In this chapter we will focus on a specific type of neural network called deep neural networks, or deep networks. Deep networks and deep learning have become extremely popular tools in modern machine learning due to the tremendous success they have achieved using the distributed and parallel computing technology now at our disposal. We will study two specific types of deep networks: convolutional neural networks (CNNs) and recurrent neural networks (RNNs). For RNNs, we will study the most popular implementation, called long short-term memory, or LSTM.
Ameet V Joshi
Chapter 13. Emerging Trends in Machine Learning
Abstract
In this chapter we will look at some of the emerging trends in the field of machine learning. Some of these trends represent incremental improvements to existing techniques, while others may seem outright crazy and futuristic. Most of the techniques discussed here are in their infancy and need significant research effort to mature. However, each one represents an area of active research, and any of them, if successful in delivering on its promise, is likely to change the way we look at machine learning in general. I only touch upon these areas to give the reader a glimpse into the future of machine learning and artificial intelligence.
Ameet V Joshi
Chapter 14. Unsupervised Learning
Abstract
All the machine learning techniques we have seen so far have one thing in common: the availability of labelled training data. However, there are numerous cases when such data is too expensive or unrealistic to obtain. In this chapter we are going to study algorithms that can work without labelled training data and still produce certain insights into the data or reduce its dimensionality. These algorithms are called unsupervised algorithms, and their application is called unsupervised learning. Unsupervised learning marks an important pillar of modern machine learning.
Ameet V Joshi

Part III

Frontmatter
Chapter 15. Featurization
Abstract
In this chapter, we will study different aspects of featurization, or feature engineering; sometimes it is referred to as feature wrangling as well. Featurization represents one of the most important aspects of building a machine learning model. We will use the Adult Salary data set from the UCI repository to illustrate each step of featurization. The specific steps we will study are: (1) identifying the raw data, (2) building the feature set, and (3) handling missing values. We will then look at various methods of visualizing the data. With the widespread use of open source machine learning models, featurization is going to remain the most crucial step in building a successful pipeline that maximizes accuracy.
Ameet V Joshi
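Step (3) above, handling missing values, can be sketched briefly. The following is an illustrative example, not from the book; the tiny data frame and its column names are made up for the sketch, and median/mode imputation is just one common strategy among several:

```python
# Illustrative sketch (not from the book): imputing missing values --
# numeric columns with the median, categorical columns with the most
# frequent value. Column names here are invented for the example.
import pandas as pd

df = pd.DataFrame({
    "age":        [39, None, 50, 28],
    "hours":      [40, 45, None, 40],
    "occupation": ["sales", None, "tech", "sales"],
})

df["age"] = df["age"].fillna(df["age"].median())
df["hours"] = df["hours"].fillna(df["hours"].median())
df["occupation"] = df["occupation"].fillna(df["occupation"].mode()[0])

print(df.isna().sum().sum())  # 0 -- no missing values remain
```

Median imputation is robust to outliers, which matters for skewed columns like income in the Adult Salary data set; other options include dropping rows or adding an explicit "missing" indicator feature.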
Chapter 16. Designing and Tuning Model Pipelines
Abstract
This chapter focuses on the various design elements of building a machine learning pipeline. We will study the individual aspects of the process, from preprocessing the data and choosing the right algorithm to training the model and finally tuning the hyperparameters to optimize model performance. We will also discuss some of the lesser known aspects of designing a successful machine learning pipeline, in the form of data leakage, handling missing data, and coincidence vs. causality of the features, which are crucial in practical design, and how to address them.
Ameet V Joshi
Chapter 17. Performance Measurement
Abstract
Performance can be measured qualitatively, by looking subjectively at a set of results, or quantitatively, by looking objectively at the value of an expression. Whenever the size of the data is large, subjective and qualitative tests can no longer give reliable information about the general performance of the system, and objective methods are the only way to go. There are various such mathematical expressions, called metrics, defined in the field for assessing the performance of different types of machine learning systems. In this chapter we are going to focus on the theory of performance measurement and metrics.
Ameet V Joshi
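As a taste of the objective metrics the chapter covers, the standard classification metrics can be computed by hand from the four confusion-matrix counts. This is an illustrative example, not from the book, using invented labels:

```python
# Illustrative sketch (not from the book): accuracy, precision, recall, and
# F1 derived from the entries of a binary confusion matrix.
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]   # invented ground-truth labels
y_pred = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]   # invented predictions

tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # true positives
tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))  # true negatives
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # false positives
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # false negatives

accuracy  = (tp + tn) / len(y_true)
precision = tp / (tp + fp)                 # of predicted positives, how many were right
recall    = tp / (tp + fn)                 # of actual positives, how many were found
f1        = 2 * precision * recall / (precision + recall)
print(accuracy, precision, recall, round(f1, 3))
```

Precision and recall pull in opposite directions, which is why a single summary such as F1 (their harmonic mean) is often reported alongside accuracy.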

Part IV

Frontmatter
Chapter 18. Classification
Abstract
In this chapter we are going to look at the problem of classification from the perspective of applying machine learning theory. We will take email spam detection as an example application of classification and go through the steps of building an end-to-end machine learning pipeline to solve the problem. This process will illustrate a general application of classification as well as introduce the reader to the various practical aspects of solving a real-life problem using machine learning theory.
Ameet V Joshi
Chapter 19. Regression
Abstract
In this chapter we will look at a real-life example of building an artificially intelligent solution for a regression problem. We will start by defining the problem of predicting the price of a property. Then we will drill down into the details of the possible features and, in the process, refine the problem statement to make it more concrete and solvable by a machine learning model. We will also establish the expected performance metrics. Finally, we will lay out the full pipeline for building a machine learning model that solves the given problem and predicts the price of a property.
Ameet V Joshi
Chapter 20. Ranking
Abstract
This chapter describes yet another important aspect of artificially intelligent systems, called ranking. In basic terms, ranking is sorting items in ascending or descending order of some measure. However, the concepts underlying the creation of such measures are not trivial and have evolved quite a bit in recent years. In this chapter, we will look at some such measures along with the metrics used to compare multiple ranking systems.
Ameet V Joshi
Chapter 21. Recommendations Systems
Abstract
In this chapter we will study an application of AI in building recommendation systems. We will look at the concept of collaborative filtering, which lies at the heart of recommendation systems. We will also look at the real-life examples of Netflix and Amazon and how they have used the technique to deliver personalized experiences in vastly different applications. These problems illustrate relatively novel concepts that were not well known to the field a few decades ago. The application of existing mathematical concepts to solving these problems, and seeing the solutions in action in day-to-day life, is quite exciting and satisfying and marks one of the greatest success stories of modern machine learning.
Ameet V Joshi

Part V

Frontmatter
Chapter 22. Azure Machine Learning
Abstract
In this chapter we will discuss using Azure Machine Learning Studio (abbreviated as AML Studio) as a free resource for getting started with implementing full end-to-end machine learning pipelines. We will learn how to access AML Studio, create a free account, import data, and build a model. We will look into the details of the options available in AML Studio and how to use them. To illustrate the process, we will use the Iris data set from the UCI repository and build machine learning models for the task of multiclass classification. We will build two models using the built-in features of AML Studio and compare the performance of the two models using metrics like accuracy and the confusion matrix.
Ameet V Joshi
Chapter 23. Open Source Machine Learning Libraries
Abstract
In this chapter we will continue the discussion of implementing machine learning models, using free and open source alternatives. We will list the top options in the open source domain and select one of them to illustrate the process. We will then use the same Iris data to solve the problem of multiclass classification using multiple models. Instead of AML Studio, we will use the Python-based open source library scikit-learn and its dependencies to build a full end-to-end pipeline along lines similar to the one built with AML Studio in the previous chapter. Then we will compare and contrast the two alternatives.
Ameet V Joshi
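The kind of scikit-learn pipeline the chapter builds can be sketched as follows. This is an illustrative example, not the book's actual code; the choice of logistic regression, the 70/30 split, and the random seed are assumptions for the sketch:

```python
# Illustrative sketch (not from the book): an end-to-end multiclass
# classification pipeline on the Iris data with scikit-learn.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, confusion_matrix

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# Chain preprocessing and model so the scaler is fit only on training data,
# which avoids the data leakage pitfall discussed earlier in the book.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_tr, y_tr)
y_hat = model.predict(X_te)

print("accuracy:", accuracy_score(y_te, y_hat))
print(confusion_matrix(y_te, y_hat))
```

Swapping `LogisticRegression` for another estimator (e.g. a decision tree or SVM) changes one line, which is what makes comparing multiple models on the same data so convenient in this library.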
Chapter 24. Amazon’s Machine Learning Toolkit: Sagemaker
Abstract
In this chapter we will learn how to set up Amazon's machine learning service for the first time. The service is called Sagemaker, and it presents a fairly comprehensive set of tools for building a complete end-to-end pipeline to solve a machine learning problem. We will build a pipeline using Sagemaker to solve a problem similar to the one we solved using Azure ML, and in the process we will compare and contrast the pros and cons of this system with Azure ML.
Ameet V Joshi

Part VI

Frontmatter
Chapter 25. Conclusion and Next Steps
Abstract
This chapter gives some concluding remarks and suggests the next steps in the journey towards building better artificial intelligence.
Ameet V Joshi
Backmatter
Metadata
Title
Machine Learning and Artificial Intelligence
Author
Ameet V Joshi
Copyright Year
2020
Electronic ISBN
978-3-030-26622-6
Print ISBN
978-3-030-26621-9
DOI
https://doi.org/10.1007/978-3-030-26622-6