
2019 | Book

Multiview Machine Learning

Authors: Prof. Shiliang Sun, Dr. Liang Mao, Ziang Dong, Lidan Wu

Publisher: Springer Singapore


About this book

This book provides a unique, in-depth discussion of multiview learning, one of the fastest developing branches of machine learning. Multiview learning has solid theoretical underpinnings and a record of practical success. This book describes the models and algorithms of multiview learning for real data analysis. Because it incorporates multiple views to improve generalization performance, multiview learning is also known as data fusion or data integration from multiple feature sets. The book is self-contained, requiring only minimal prior knowledge of the basic concepts of the field, and is suitable for multimodal learning research. It is also a valuable reference for researchers working in machine learning and in various application domains.

Table of Contents

Frontmatter
Chapter 1. Introduction
Abstract
In this chapter, we first give the background for writing this monograph. Then, we provide a formal definition of multiview machine learning and discuss its differences from and similarities to the related concepts of data fusion and multimodal learning. After showcasing four typical application fields in artificial intelligence, we explain the underlying philosophy of why multiview learning is useful. Finally, we describe the organization of the book.
Shiliang Sun, Liang Mao, Ziang Dong, Lidan Wu
Chapter 2. Multiview Semi-supervised Learning
Abstract
Semi-supervised learning is concerned with learning scenarios in which only a small portion of the training data is labeled. In multiview settings, unlabeled data can be used to regularize the prediction functions and thus reduce the search space. In this chapter, we introduce two categories of multiview semi-supervised learning methods. The first contains the co-training style methods, where the prediction functions from different views are trained with their own objectives and each prediction function is improved by the others. The second contains the co-regularization style methods, where a single objective function is used to train the prediction functions from different views simultaneously.
Shiliang Sun, Liang Mao, Ziang Dong, Lidan Wu
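To make the co-training style concrete, here is a minimal Python sketch, assuming synthetic two-view data, logistic-regression base learners, and illustrative hyperparameters (none of which are taken from the book): each view trains its own classifier, and the most confident pseudo-labels from each view enlarge the labeled pool used by the other.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def co_training(X1, X2, y, labeled, n_rounds=10, k=2):
    """X1, X2: per-view feature matrices; y: labels (only the entries indexed
    by `labeled` are treated as known). Returns one classifier per view."""
    labeled = set(labeled)
    y_work = y.copy()
    models = [LogisticRegression(), LogisticRegression()]
    for _ in range(n_rounds):
        for m, X in zip(models, (X1, X2)):
            idx = sorted(labeled)
            m.fit(X[idx], y_work[idx])
        # Each view pseudo-labels its k most confident unlabeled examples,
        # which enlarge the labeled pool used by the other view.
        for m, X in zip(models, (X1, X2)):
            unlabeled = [i for i in range(len(y)) if i not in labeled]
            if not unlabeled:
                return models
            proba = m.predict_proba(X[unlabeled])
            for j in np.argsort(proba.max(axis=1))[-k:]:
                y_work[unlabeled[j]] = proba[j].argmax()
                labeled.add(unlabeled[j])
    return models

# Toy two-view data: both views are noisy copies of a shared latent signal.
rng = np.random.default_rng(0)
z = rng.normal(size=(60, 1))
X1 = z + 0.3 * rng.normal(size=(60, 1))
X2 = z + 0.3 * rng.normal(size=(60, 1))
y = (z[:, 0] > 0).astype(int)
seeds = list(np.where(y == 0)[0][:3]) + list(np.where(y == 1)[0][:3])
m1, m2 = co_training(X1, X2, y, labeled=seeds)
```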
Chapter 3. Multiview Subspace Learning
Abstract
In multiview settings, observations from different views are assumed to share the same subspace, and the abundance of views can be exploited to explore that subspace better. In this chapter, we consider two kinds of multiview subspace learning problems. The first is general unsupervised multiview subspace learning, for which we focus on canonical correlation analysis as well as some of its extensions. The second is supervised multiview subspace learning, where label information is available; in this case, representations better suited to the task at hand can be obtained by utilizing the labels. We also briefly introduce some other methods at the end of this chapter.
Shiliang Sun, Liang Mao, Ziang Dong, Lidan Wu
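As a concrete reference point for the unsupervised case, the following NumPy sketch implements plain linear CCA by whitening each view and taking an SVD of the whitened cross-covariance; the ridge term `reg` and the toy data are illustrative assumptions, not the book's formulation.

```python
import numpy as np

def cca(X1, X2, reg=1e-3):
    """Linear CCA: returns projection matrices (W1, W2) and the canonical
    correlations between the projected views."""
    X1 = X1 - X1.mean(axis=0)
    X2 = X2 - X2.mean(axis=0)
    n = X1.shape[0]
    C11 = X1.T @ X1 / n + reg * np.eye(X1.shape[1])  # ridge for stability
    C22 = X2.T @ X2 / n + reg * np.eye(X2.shape[1])
    C12 = X1.T @ X2 / n
    L1 = np.linalg.cholesky(np.linalg.inv(C11))  # whitening: L1.T C11 L1 = I
    L2 = np.linalg.cholesky(np.linalg.inv(C22))
    U, s, Vt = np.linalg.svd(L1.T @ C12 @ L2, full_matrices=False)
    return L1 @ U, L2 @ Vt.T, s

# Two views that share one latent dimension plus independent noise features.
rng = np.random.default_rng(0)
z = rng.normal(size=(500, 1))
X1 = np.hstack([z, rng.normal(size=(500, 2))])
X2 = np.hstack([-z, rng.normal(size=(500, 3))])
W1, W2, corrs = cca(X1, X2)
print(corrs)  # the leading canonical correlation is close to 1
```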
Chapter 4. Multiview Supervised Learning
Abstract
Multiview supervised learning algorithms can exploit the multiview nature of the data through the consensus of the views, that is, by seeking predictors from different views that agree on the same examples. In this chapter, we introduce three categories of multiview supervised learning methods. The first contains the multiview large margin-based classifiers, which regularize the classifiers from different views by their agreement on classification margins to enforce view consensus. The second contains multiple kernel learning, where the feature mappings underlying multiple kernels map the views to new feature spaces in which the classifiers are more likely to agree. The third contains Gaussian process related models, in which the prediction functions themselves are treated as random variables. We also briefly introduce some other methods at the end of this chapter.
Shiliang Sun, Liang Mao, Ziang Dong, Lidan Wu
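A minimal sketch of the multiple kernel learning idea follows, with one base kernel per view. Note the simplifying assumption: the kernel weights are fixed to be uniform here, whereas the methods in this chapter learn them; the synthetic data are also illustrative.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
z = rng.normal(size=(200, 1))
X1 = z + 0.5 * rng.normal(size=(200, 1))   # view 1
X2 = z + 0.5 * rng.normal(size=(200, 1))   # view 2
y = (z[:, 0] > 0).astype(int)

# One base kernel per view; a convex combination of kernels is itself a
# valid kernel, so the combined Gram matrix can be fed to a standard SVM.
K = 0.5 * rbf_kernel(X1) + 0.5 * rbf_kernel(X2)
clf = SVC(kernel="precomputed").fit(K, y)
print(clf.score(K, y))  # training accuracy on the combined kernel
```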
Chapter 5. Multiview Clustering
Abstract
This chapter introduces three kinds of multiview clustering methods. We begin with multiview spectral clustering, where the clustering is carried out by partitioning a relationship graph of the data and depends on the eigenvectors of the adjacency matrix. We then consider multiview subspace clustering, which aims to recover the underlying subspace of the multiview data and performs clustering on it. Finally, we introduce distributed multiview clustering, which first learns patterns from each view individually and then combines them to learn optimal patterns for clustering, and multiview clustering ensembles, which combine the results of multiple clustering algorithms to obtain better performance. We also briefly introduce some other methods at the end of this chapter.
Shiliang Sun, Liang Mao, Ziang Dong, Lidan Wu
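For orientation, here is a simple multiview spectral clustering baseline in Python. Averaging the per-view affinity matrices is our illustrative choice for combining the views (the co-regularized variants discussed in this chapter couple the views more carefully); the rest is the standard spectral clustering pipeline on the normalized Laplacian.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import rbf_kernel

def multiview_spectral(views, n_clusters=2):
    # Average the views' affinity (adjacency) matrices into one graph.
    W = sum(rbf_kernel(X) for X in views) / len(views)
    d = W.sum(axis=1)
    # Symmetric normalized Laplacian: L = I - D^{-1/2} W D^{-1/2}.
    L = np.eye(len(W)) - W / np.sqrt(np.outer(d, d))
    # Embed with the eigenvectors of the smallest eigenvalues, then k-means.
    vals, vecs = np.linalg.eigh(L)
    U = vecs[:, :n_clusters]
    U /= np.linalg.norm(U, axis=1, keepdims=True)
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(U)

# Toy data: two noisy views of the same two well-separated groups.
rng = np.random.default_rng(0)
centers = rng.choice([-3.0, 3.0], size=(100, 1))
views = [centers + rng.normal(size=(100, 1)) for _ in range(2)]
labels = multiview_spectral(views)
```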
Chapter 6. Multiview Active Learning
Abstract
Active learning is motivated by the fact that manually labeled examples are expensive; it therefore picks the most informative points to label so as to improve learning efficiency. Combined with multiview learning, it constructs multiple learners and selects contention points among the different views. In this chapter, we introduce five multiview active learning algorithms as examples. First, we introduce co-testing, the first algorithm to apply active learning to multiview learning, and discuss how to handle contradictions between the multiple learners. Bayesian co-training is formulated under the mutual information framework; it treats the unobserved labels as latent variables and marginalizes them out. We then focus on multiview multi-learner active learning, which introduces the ambiguity of an example to measure its confidence. For active learning with extremely sparse labeled examples, we give a detailed derivation of two-view CCA. Finally, we present a practical active learning algorithm combined with semi-supervised learning. Other methods are briefly mentioned at the end of this chapter.
Shiliang Sun, Liang Mao, Ziang Dong, Lidan Wu
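The core of co-testing is the contention point: an unlabeled example on which the view-specific learners disagree. The Python sketch below, on synthetic data with logistic-regression learners (both illustrative assumptions), queries the first contention point for determinism; the naive co-testing variant would pick one at random.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def query_contention_point(models, views, unlabeled):
    """Return the index of one contention point, i.e., an unlabeled example
    on which the view-specific learners disagree (None if none exists)."""
    preds = [m.predict(X[unlabeled]) for m, X in zip(models, views)]
    disagree = np.nonzero(preds[0] != preds[1])[0]
    return unlabeled[disagree[0]] if len(disagree) else None

# Toy loop: train a learner per view, query a contention point, label it.
rng = np.random.default_rng(0)
z = rng.normal(size=(100, 1))
views = [z + 0.5 * rng.normal(size=(100, 1)) for _ in range(2)]
y = (z[:, 0] > 0).astype(int)
labeled = list(np.where(y == 0)[0][:3]) + list(np.where(y == 1)[0][:3])
for _ in range(5):
    models = [LogisticRegression().fit(X[labeled], y[labeled]) for X in views]
    unlabeled = [i for i in range(len(y)) if i not in labeled]
    q = query_contention_point(models, views, unlabeled)
    if q is None:
        break          # the views agree everywhere; nothing left to query
    labeled.append(q)  # the "oracle" reveals y[q]
```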
Chapter 7. Multiview Transfer Learning and Multitask Learning
Abstract
Transfer learning aims to transfer knowledge learned from source domains to target domains that have fewer training data. Multitask learning learns multiple tasks simultaneously and makes use of the relationships among these tasks. Both can be combined with multiview learning, which exploits the consistency of information across diverse views. In this chapter, we introduce four multiview transfer learning methods and three multiview multitask learning methods. We review research on multiview transfer learning under the large margin framework, discuss multiview discriminant transfer learning in detail, and introduce how to adapt AdaBoost to multiview transfer learning. The three multiview multitask learning methods concentrate on the structures shared between tasks and views. The most natural way is to represent the relationships with a bipartite graph and optimize the objective function with an iterative algorithm. Another method constructs an additional regularization function to ensure view consistency. In general, the convex shared structure learning algorithm provides structure parameters for sharing information. As supplements, multi-transfer, multitask multiview discriminant analysis, and clustering are briefly mentioned.
Shiliang Sun, Liang Mao, Ziang Dong, Lidan Wu
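To illustrate the view-consistency regularization mentioned above, here is a minimal sketch: each task keeps one linear predictor per view, and a penalty encourages the two views' predictions to agree on each task's data. The objective and its plain gradient-descent solver are our illustrative assumptions, not the book's algorithms, and the cross-task structure sharing is omitted to keep the sketch short.

```python
import numpy as np

def fit(tasks, lam=1.0, lr=0.01, steps=500):
    """tasks: list of (X1, X2, y) triples, one per task.
    Minimizes ||X1 w1 - y||^2 + ||X2 w2 - y||^2 + lam ||X1 w1 - X2 w2||^2
    for each task by gradient descent; returns per-task (w1, w2) pairs."""
    Ws = [[np.zeros(X1.shape[1]), np.zeros(X2.shape[1])]
          for X1, X2, y in tasks]
    for _ in range(steps):
        for (X1, X2, y), (w1, w2) in zip(tasks, Ws):
            p1, p2 = X1 @ w1, X2 @ w2
            # Gradients of the squared losses plus the agreement penalty.
            g1 = X1.T @ (2 * (p1 - y) + 2 * lam * (p1 - p2))
            g2 = X2.T @ (2 * (p2 - y) - 2 * lam * (p1 - p2))
            w1 -= lr * g1 / len(y)
            w2 -= lr * g2 / len(y)
    return Ws

# Two toy tasks whose views share a latent regression signal.
rng = np.random.default_rng(0)
def make_task():
    z = rng.normal(size=(50, 1))
    return (np.hstack([z, rng.normal(size=(50, 1))]),
            np.hstack([z, rng.normal(size=(50, 2))]),
            z[:, 0])
Ws = fit([make_task(), make_task()])
```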
Chapter 8. Multiview Deep Learning
Abstract
The multiview deep learning described in this chapter deals with multiview data, or models its intrinsic structure, using deep learning methods. We highlight three major categories of multiview deep learning methods, reflecting three different lines of thought. The first category of approaches focuses on obtaining a shared joint representation of the different views by building a hierarchical structure. The second category focuses on constructing structured spaces with separate representations for the multiple views, imposing constraints between the representations of the different views. The third category focuses on explicitly constructing connections or relationships between different views or representations, which allows the views to be translated or mapped to each other.
Shiliang Sun, Liang Mao, Ziang Dong, Lidan Wu
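A skeletal example of the first category is shown below: a two-branch network that encodes each view separately and fuses the encodings into a shared joint representation. It is written in PyTorch as an illustrative assumption; the layer sizes and fusion-by-concatenation design are ours, not a specific model from the book.

```python
import torch
import torch.nn as nn

class JointRepresentationNet(nn.Module):
    def __init__(self, d1, d2, hidden=32, joint=16):
        super().__init__()
        # One encoder per view maps each view into its own hidden space ...
        self.enc1 = nn.Sequential(nn.Linear(d1, hidden), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Linear(d2, hidden), nn.ReLU())
        # ... and a fusion layer maps the concatenation to the joint space.
        self.fuse = nn.Linear(2 * hidden, joint)
        self.head = nn.Linear(joint, 1)  # e.g., a binary classification logit

    def forward(self, x1, x2):
        h = torch.cat([self.enc1(x1), self.enc2(x2)], dim=-1)
        z = self.fuse(h)  # shared joint representation of both views
        return self.head(torch.relu(z)), z

net = JointRepresentationNet(d1=10, d2=20)
logit, z = net(torch.randn(4, 10), torch.randn(4, 20))
```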
Chapter 9. View Construction
Abstract
In most real applications, data are represented by a single view, which makes multiview learning difficult to apply directly. In this chapter, we introduce six view construction methods that generate new views from the original view. All of them satisfy the assumptions of multiview models: ideally, the hypotheses from two views ought to agree on the same example, corresponding to the view consensus principle, and the views are assumed to be conditionally independent of each other. The simplest method is to partition the feature set into disjoint subsets, each of which represents one view. The second method is to reduce the high-dimensional data to small sets of features that serve as new views. Conversely, one can also generate a new view by adding noise to the original data. The next three methods are based on neural networks. A reversed sequence can be regarded as another view in sequential models. New views can also be constructed using different modules, such as kernel functions, neural networks, filters, and other structures that extract specific features from the original data. Finally, we introduce how to generate a new view conditioned on auxiliary information using conditional generative models.
Shiliang Sun, Liang Mao, Ziang Dong, Lidan Wu
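The simplest construction named above, partitioning the feature set into disjoint subsets, fits in a few lines of Python. The random, equal-sized split below is one common heuristic (our illustrative choice, not the book's only recipe).

```python
import numpy as np

def split_views(X, n_views=2, seed=0):
    """Partition the columns of X into n_views disjoint feature subsets,
    each of which serves as one constructed view."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(X.shape[1])
    return [X[:, idx] for idx in np.array_split(perm, n_views)]

X = np.arange(24).reshape(4, 6)  # toy data: 4 examples, 6 features
X1, X2 = split_views(X)          # two disjoint three-feature views
```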
Metadata
Title
Multiview Machine Learning
Authors
Prof. Shiliang Sun
Dr. Liang Mao
Ziang Dong
Lidan Wu
Copyright Year
2019
Publisher
Springer Singapore
Electronic ISBN
978-981-13-3029-2
Print ISBN
978-981-13-3028-5
DOI
https://doi.org/10.1007/978-981-13-3029-2