
2022 | Book

Kernel Methods for Machine Learning with Math and Python

100 Exercises for Building Logic


About this Book

The most crucial ability in machine learning and data science is the mathematical logic needed to grasp their essence, rather than reliance on memorized knowledge or experience. This textbook addresses the fundamentals of kernel methods for machine learning by working through the relevant math problems and building Python programs.

The book’s main features are as follows:

- The content is written in an easy-to-follow and self-contained style.
- The book includes 100 exercises, which have been carefully selected and refined. As their solutions are provided in the main text, readers can solve all of the exercises by reading the book.
- The mathematical premises of kernels are proven and the correct conclusions are provided, helping readers to understand the nature of kernels.
- Source programs and running examples are presented to help readers acquire a deeper understanding of the mathematics used.
- Once readers have a basic understanding of the functional analysis topics covered in Chapter 2, the applications are discussed in the subsequent chapters. No prior knowledge of mathematics beyond that is assumed.
- This book considers both the kernel for reproducing kernel Hilbert space (RKHS) and the kernel for the Gaussian process; a clear distinction is made between the two.

Table of Contents

Frontmatter
Chapter 1. Positive Definite Kernels
Abstract
In data analysis and various information processing tasks, we use kernels to evaluate the similarities between pairs of objects. In this book, we deal with mathematically defined kernels called positive definite kernels. Let the elements x, y of a set E correspond to the elements (functions) \(\Psi (x), \Psi (y)\) of a linear space H called the reproducing kernel Hilbert space. The kernel k(x, y) corresponds to the inner product \(\langle \Psi (x), \Psi (y) \rangle _H \) in the linear space H. By choosing a nonlinear map \(\Psi \), this kernel can be applied to various problems. The set E may consist of strings, trees, or graphs rather than real-valued vectors, as long as the kernel satisfies positive definiteness. After defining probability and Lebesgue integrals in the second half of the chapter, we learn about kernels via characteristic functions (Bochner’s theorem).
Joe Suzuki
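As an illustration of the chapter's central definition, the sketch below builds the Gram matrix of the Gaussian kernel on a few real sample points and verifies numerically that it is symmetric with nonnegative eigenvalues, which is what positive definiteness requires. The sample points and bandwidth are arbitrary choices for this example, not taken from the book.

```python
import numpy as np

def gauss_kernel(x, y, sigma=1.0):
    # Gaussian (RBF) kernel k(x, y) = exp(-(x - y)^2 / (2 sigma^2))
    return np.exp(-(x - y) ** 2 / (2 * sigma ** 2))

# Gram matrix K[i, j] = k(x_i, x_j) over sample points x_1, ..., x_n
x = np.array([-1.0, 0.0, 0.5, 2.0])
K = np.array([[gauss_kernel(a, b) for b in x] for a in x])

# Positive definiteness of k means K is symmetric positive semidefinite
# for every choice of points: all eigenvalues are >= 0 (up to rounding).
eigvals = np.linalg.eigvalsh(K)
print(np.allclose(K, K.T), eigvals.min())
```

The same check works for any candidate kernel; a single Gram matrix with a negative eigenvalue is enough to show a kernel is not positive definite.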
Chapter 2. Hilbert Spaces
Abstract
When considering machine learning and data science issues, in many cases the calculus and linear algebra courses taken during the first year of university provide sufficient background. For kernels, however, we also require knowledge of metric spaces and their completeness, as well as linear algebra in infinite dimensions. If your major is not mathematics, you may have had few opportunities to study these topics, and it can be challenging to learn them in a short period. This chapter covers Hilbert spaces, the projection theorem, linear operators, and (some of) the compact operators necessary for understanding kernels. Unlike finite-dimensional linear spaces, general Hilbert spaces require scrutiny of their completeness.
Joe Suzuki
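The projection theorem mentioned in this abstract can be illustrated in the simplest Hilbert space, \({\mathbb R}^n\): the closest point to y in a closed subspace M is the orthogonal projection of y, and the residual is orthogonal to M. This is a finite-dimensional sketch only; the chapter's point is that the theorem extends to infinite dimensions once completeness is established.

```python
import numpy as np

# Subspace M of R^3 spanned by the columns of A (an arbitrary example)
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 2.0]])
y = np.array([1.0, 2.0, 3.0])

# Orthogonal projection onto the column space: P = A (A^T A)^{-1} A^T
P = A @ np.linalg.inv(A.T @ A) @ A.T
proj = P @ y
residual = y - proj

# Projection theorem: the residual y - Py is orthogonal to all of M,
# so in particular to each spanning column of A.
print(np.allclose(A.T @ residual, 0.0))
```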
Chapter 3. Reproducing Kernel Hilbert Space
Abstract
Thus far, we have learned that a feature map \(\Psi : E\ni x\mapsto k(x,\cdot )\) is obtained from the positive definite kernel \(k: E\times E\rightarrow {\mathbb R}\). In this chapter, we generate a linear space \(H_0\) based on its image \(k(x,\cdot )\) (\(x\in E\)) and construct a Hilbert space H by completing this linear space, where H is called the reproducing kernel Hilbert space (RKHS), which satisfies the reproducing property of the kernel k (k is the reproducing kernel of H).
Joe Suzuki
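A small numerical sketch of the construction described above, under an assumed Gaussian kernel on \(E = {\mathbb R}\): elements of \(H_0\) are finite sums \(f = \sum _i a_i k(x_i,\cdot )\), and the reproducing property says \(\langle f, k(y,\cdot )\rangle _H = f(y)\).

```python
import numpy as np

def k(x, y, sigma=1.0):
    # Gaussian kernel on E = R (a hypothetical choice for illustration)
    return np.exp(-(x - y) ** 2 / (2 * sigma ** 2))

# An element of H_0: f = sum_i a_i k(x_i, .)
x_pts = np.array([0.0, 1.0])
a = np.array([2.0, -1.0])

def f(y):
    return float(a @ k(x_pts, y))

# Reproducing property: <f, k(y, .)>_H = f(y), where the inner product
# of the spanning functions is <k(x_i, .), k(y, .)> = k(x_i, y).
y = 0.3
inner = sum(a_i * k(x_i, y) for a_i, x_i in zip(a, x_pts))
print(np.isclose(inner, f(y)))
```

In code the two sides coincide by construction; the mathematical content of the chapter is that this inner product is well defined and survives the completion from \(H_0\) to H.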
Chapter 4. Kernel Computations
Abstract
In Chapter 1, we learned that the kernel \(k(x,y)\in {\mathbb R}\) represents the similarity between two elements x, y of a set E.
Joe Suzuki
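This abstract is brief, so as one representative kernel computation, here is a hedged kernel ridge regression sketch: the fitted function has the form \(f(z) = \sum _i \alpha _i k(x_i, z)\) with \(\alpha = (K + \lambda I)^{-1} y\). The data, bandwidth, and regularization constant are illustrative assumptions, not taken from the chapter.

```python
import numpy as np

rng = np.random.default_rng(0)

def gauss_gram(x, z, sigma=1.0):
    # Matrix of Gaussian kernel values k(x_i, z_j)
    return np.exp(-(x[:, None] - z[None, :]) ** 2 / (2 * sigma ** 2))

# Noisy samples from a nonlinear target (hypothetical data)
x = np.linspace(-3, 3, 50)
y = np.sin(x) + 0.1 * rng.standard_normal(50)

# Kernel ridge regression: alpha = (K + lambda I)^{-1} y
lam = 0.1
K = gauss_gram(x, x)
alpha = np.linalg.solve(K + lam * np.eye(len(x)), y)

# Predict on a grid: f(z) = sum_i alpha_i k(x_i, z)
z = np.linspace(-3, 3, 200)
f_hat = gauss_gram(z, x) @ alpha
mse = np.mean((f_hat - np.sin(z)) ** 2)
print(mse)
```

Note that every quantity here is computed from Gram matrices alone, which is the practical point of kernel methods: the feature map never appears explicitly.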
Chapter 5. The MMD and HSIC
Abstract
In this chapter, we introduce the concept of random variables \(X: E\rightarrow {\mathbb R}\) in an RKHS and discuss testing problems in RKHSs. In particular, we define a statistic and its null hypothesis for the two-sample problem and the corresponding independence test.
Joe Suzuki
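The two-sample statistic referred to above is the maximum mean discrepancy (MMD). A minimal sketch, assuming a Gaussian kernel and the biased \(\mathrm {MMD}^2\) estimator: under the null hypothesis (both samples from the same distribution) the statistic is near zero, and it grows when the distributions differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def k(x, y, sigma=1.0):
    return np.exp(-(x[:, None] - y[None, :]) ** 2 / (2 * sigma ** 2))

def mmd2_biased(x, y):
    # Biased MMD^2 estimate:
    # mean k(x, x') + mean k(y, y') - 2 * mean k(x, y)
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

# Null hypothesis holds: both samples are standard normal
same = mmd2_biased(rng.standard_normal(500), rng.standard_normal(500))

# Null hypothesis fails: the second sample is shifted by 1.0
diff = mmd2_biased(rng.standard_normal(500),
                   rng.standard_normal(500) + 1.0)
print(same < diff)
```

In an actual test, the rejection threshold would come from a permutation procedure or an asymptotic null distribution rather than the raw comparison shown here.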
Chapter 6. Gaussian Processes and Functional Data Analyses
Abstract
A stochastic process may be defined either as a sequence of random variables \(\{X_t\}_{t\in T}\), where T is a set of times, or as a function \(X_t(\omega ): T\rightarrow {\mathbb R}\) of \(\omega \in \Omega \).
Joe Suzuki
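The two views of a stochastic process mentioned in the abstract can be made concrete for a Gaussian process: fixing a mean function and a covariance kernel determines the finite-dimensional distributions, and each draw \(\omega \) yields a sample path \(t\mapsto X_t(\omega )\). The zero mean, Gaussian kernel, grid, and jitter below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Covariance kernel evaluated on a grid of times T = [0, 1]
t = np.linspace(0.0, 1.0, 100)
sigma = 0.2
K = np.exp(-(t[:, None] - t[None, :]) ** 2 / (2 * sigma ** 2))

# Draw sample paths from N(0, K); a small jitter on the diagonal
# keeps the Cholesky factorization numerically stable.
jitter = 1e-6 * np.eye(len(t))
L = np.linalg.cholesky(K + jitter)
paths = L @ rng.standard_normal((len(t), 3))  # three sample paths
print(paths.shape)
```

Each column of `paths` is one realization \(X_t(\omega )\) evaluated on the grid; rerunning with a different seed corresponds to choosing a different \(\omega \in \Omega \).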
Backmatter
Metadata
Title
Kernel Methods for Machine Learning with Math and Python
Author
Joe Suzuki
Copyright Year
2022
Publisher
Springer Nature Singapore
Electronic ISBN
978-981-19-0401-1
Print ISBN
978-981-19-0400-4
DOI
https://doi.org/10.1007/978-981-19-0401-1