
About this book

This book equips readers to handle complex multi-view data representation, centered on several major visual applications, and shares many tips and insights through a unified learning framework. This framework can model most existing multi-view learning and domain adaptation methods, enriching readers' understanding of their similarities and differences in terms of data organization, problem settings, and research goals.

A comprehensive review covers the key recent research on multi-view data analysis, i.e., multi-view clustering, multi-view classification, zero-shot learning, and domain adaptation. Further practical challenges in multi-view data analysis are discussed, including incomplete, unbalanced, and large-scale multi-view learning. Learning Representation for Multi-View Data Analysis covers a wide range of applications in the research fields of big data, human-centered computing, pattern recognition, digital marketing, web mining, and computer vision.

Table of Contents

Frontmatter

Chapter 1. Introduction

Multi-view data generated from various viewpoints or multiple sensors are commonly seen in real-world applications. For example, the popular commercial depth sensor Kinect uses both visible-light and near-infrared sensors for depth estimation; autopilot systems use both visual and radar sensors to produce real-time 3D information on the road; and face analysis algorithms prefer face images from different views for high-fidelity reconstruction and recognition. However, such data pose an enormous challenge: the large divergence across views prevents a fair comparison between them. Generally, different views tend to be treated as different domains drawn from different distributions. Thus, there is an urgent need to mitigate the view divergence when facing specific problems, either by fusing the knowledge across multiple views or by adapting knowledge from some views to others. Since different terms are used for "multi-view" data analysis and its aliases, we first give a formal definition and narrow down our research focus to differentiate it from related but distinct lines of work.
Zhengming Ding, Handong Zhao, Yun Fu

Unsupervised Multi-view Learning

Frontmatter

Chapter 2. Multi-view Clustering with Complete Information

Multi-view Clustering (MVC) has garnered more attention recently since many real-world data comprise different representations or views. The key is to explore complementary information to benefit the clustering problem. In this chapter, we consider the conventional complete-view scenario. Specifically, in the first section, we present a deep matrix factorization framework for MVC, where semi-nonnegative matrix factorization is adopted to learn the hierarchical semantics of multi-view data in a layer-wise fashion. In the second section, we extend this setting and treat differently sampled feature sets as multi-view data. We propose a novel graph-based method, Ensemble Subspace Segmentation under Block-wise constraints (ESSB), which is jointly formulated in the ensemble learning framework.
Zhengming Ding, Handong Zhao, Yun Fu
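
As a rough illustration of the layer-wise factorization idea mentioned in this abstract, the sketch below applies semi-nonnegative matrix factorization greedily, layer by layer, to a single view. It is only a minimal, hypothetical sketch (function names, update rule, and layer sizes are illustrative assumptions), not the book's multi-view model, which additionally ties the deepest representation across views.

```python
import numpy as np

def _pos(a):
    return (np.abs(a) + a) / 2.0          # element-wise positive part

def _neg(a):
    return (np.abs(a) - a) / 2.0          # element-wise negative part

def semi_nmf(X, k, n_iter=200, eps=1e-9, seed=0):
    """Factor X (d x n) as Z @ H with Z unconstrained and H >= 0."""
    rng = np.random.default_rng(seed)
    H = np.abs(rng.standard_normal((k, X.shape[1])))
    for _ in range(n_iter):
        Z = X @ H.T @ np.linalg.pinv(H @ H.T)            # closed-form basis update
        ZtX, ZtZ = Z.T @ X, Z.T @ Z
        H *= np.sqrt((_pos(ZtX) + _neg(ZtZ) @ H) /       # multiplicative update
                     (_neg(ZtX) + _pos(ZtZ) @ H + eps))  # keeps H nonnegative
    return Z, H

def layerwise_semi_nmf(X, layer_sizes):
    """Greedy decomposition X ~ Z1 @ Z2 @ ... @ Zm @ Hm with Hm >= 0."""
    Zs, H = [], X
    for k in layer_sizes:
        Z, H = semi_nmf(H, k)
        Zs.append(Z)
    return Zs, H                                         # Hm: deepest representation

# Hypothetical usage on one toy view: 100 features, 500 samples, two layers.
X = np.random.rand(100, 500)
Zs, Hm = layerwise_semi_nmf(X, layer_sizes=[64, 10])
```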

Chapter 3. Multi-view Clustering with Partial Information

Nowadays, multi-modal visual data are much easier to access as technology develops. Nevertheless, an underlying problem hides behind the emerging multi-modality techniques: what if the data from one or more modalities are missing? Motivated by this question, we propose an unsupervised method that handles incomplete multi-modal data well by transforming the original, incomplete data into a new, complete representation in a latent space.
Zhengming Ding, Handong Zhao, Yun Fu
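
To make the idea of mapping incomplete observations into one complete latent space more concrete, here is a minimal, hypothetical sketch (not the chapter's actual formulation): each view's basis is fitted only on the samples that view observes, while every sample receives a latent code aggregated from whichever views do observe it, via alternating ridge-regularized least squares. All names and parameters are illustrative.

```python
import numpy as np

def incomplete_mv_latent(Xs, masks, k, n_iter=50, lam=1e-2, seed=0):
    """Learn a complete latent representation Z (k x n) from incomplete views.

    Xs[v]    : d_v x n data matrix (missing columns may hold zeros or NaNs).
    masks[v] : boolean length-n vector, True where view v observes a sample.
    """
    rng = np.random.default_rng(seed)
    n = Xs[0].shape[1]
    Z = rng.standard_normal((k, n))
    Ps = [rng.standard_normal((X.shape[0], k)) for X in Xs]
    I = np.eye(k)
    for _ in range(n_iter):
        # Update each view's basis on its observed samples only.
        for v, (X, m) in enumerate(zip(Xs, masks)):
            Zo = Z[:, m]
            Ps[v] = X[:, m] @ Zo.T @ np.linalg.inv(Zo @ Zo.T + lam * I)
        # Update every sample's latent code from whichever views observe it.
        for j in range(n):
            A, b = lam * I, np.zeros(k)
            for v, (X, m) in enumerate(zip(Xs, masks)):
                if m[j]:
                    A = A + Ps[v].T @ Ps[v]
                    b = b + Ps[v].T @ X[:, j]
            Z[:, j] = np.linalg.solve(A, b)
    return Z, Ps
```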

Chapter 4. Multi-view Outlier Detection

Identifying different types of multi-view outliers with abnormal behaviors is an interesting yet challenging unsupervised learning task, due to the complicated data distributions across different views. Conventional approaches achieve this by learning a new latent feature representation with pairwise constraints on data from different views. We argue that existing methods are expensive to generalize from two-view data to three or more views, in terms of both the number of introduced variables and detection performance. In this chapter, we propose a novel multi-view outlier detection method with a consensus regularization on the latent representations.
Zhengming Ding, Handong Zhao, Yun Fu
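
The sketch below is one hypothetical reading of the consensus-regularization idea, not the chapter's model: each view is factorized into a latent code that is pulled toward a shared consensus, and samples are scored by reconstruction error plus cross-view disagreement. The scoring rule, names, and hyperparameters are illustrative assumptions.

```python
import numpy as np

def consensus_outlier_scores(Xs, k, n_iter=50, beta=1.0, lam=1e-2, seed=0):
    """Score samples by how much their per-view latent codes disagree.

    Each view v is factorized as X_v ~ P_v Z_v while every Z_v is pulled
    toward a shared consensus Zc; higher scores suggest outliers.
    """
    rng = np.random.default_rng(seed)
    n = Xs[0].shape[1]
    Zs = [rng.standard_normal((k, n)) for _ in Xs]
    Ps = [rng.standard_normal((X.shape[0], k)) for X in Xs]
    I = np.eye(k)
    for _ in range(n_iter):
        Zc = np.mean(Zs, axis=0)                      # consensus representation
        for v, X in enumerate(Xs):
            Ps[v] = X @ Zs[v].T @ np.linalg.inv(Zs[v] @ Zs[v].T + lam * I)
            Zs[v] = np.linalg.solve(Ps[v].T @ Ps[v] + beta * I,
                                    Ps[v].T @ X + beta * Zc)
    Zc = np.mean(Zs, axis=0)
    recon = sum(((X - P @ Z) ** 2).sum(axis=0) for X, P, Z in zip(Xs, Ps, Zs))
    disagree = sum(((Z - Zc) ** 2).sum(axis=0) for Z in Zs)
    return recon + beta * disagree                    # per-sample outlier score
```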

Supervised Multi-view Classification

Frontmatter

Chapter 5. Multi-view Transformation Learning

In this chapter, we propose two multi-view transformation learning algorithms to solve the classification problem. First, we consider that multi-view data have two kinds of manifold structures, i.e., class structure and view structure, and design a dual low-rank decomposition algorithm. Second, we assume that the domain divergence involves more than one dominant factor, e.g., different viewpoints, various resolutions, and changing illumination, and observe that an intermediate domain can often be found to build a bridge across domains and facilitate learning. Based on this, we propose a Coupled Marginalized Denoising Auto-encoders framework to address the cross-domain problem.
Zhengming Ding, Handong Zhao, Yun Fu
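
Since the second algorithm builds on marginalized denoising auto-encoders, here is a minimal sketch of that single building block in its well-known closed form (one layer, with feature corruption marginalized out analytically). It is only an illustrative component under assumed names and defaults; the chapter's coupled framework adds cross-domain coupling terms that are omitted here.

```python
import numpy as np

def mda_layer(X, p=0.5, reg=1e-5):
    """One marginalized denoising autoencoder layer in closed form.

    X : d x n data matrix; p : feature corruption probability.
    Returns the mapping W and the hidden representation tanh(W [X; 1]).
    """
    d, n = X.shape
    Xb = np.vstack([X, np.ones((1, n))])            # append a bias row
    S = Xb @ Xb.T                                   # scatter matrix
    q = np.concatenate([np.full(d, 1.0 - p), [1.0]])
    Q = S * np.outer(q, q)                          # E[corrupted x corrupted^T]
    np.fill_diagonal(Q, q * np.diag(S))
    P = S * q[np.newaxis, :]                        # E[clean x corrupted^T]
    W = P[:-1, :] @ np.linalg.inv(Q + reg * np.eye(d + 1))
    return W, np.tanh(W @ Xb)

# Hypothetical usage: stack two layers on toy data (50 features, 300 samples).
X = np.random.rand(50, 300)
W1, H1 = mda_layer(X, p=0.5)
W2, H2 = mda_layer(H1, p=0.5)
```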

Chapter 6. Zero-Shot Learning

Zero-shot learning aims to precisely recognize unseen categories through a shared visual-semantic function, which is built on the seen categories and expected to adapt well to unseen categories. However, the semantic gap between visual features and their underlying semantics is still the most challenging obstacle. In this chapter, we tackle this issue by exploiting the intrinsic relationship in the semantic manifold and enhancing the transferability of the visual-semantic function. Specifically, we propose an Adaptive Latent Semantic Representation (ALSR) model in a sparse dictionary learning scheme, where a generic semantic dictionary is learned to connect the latent semantic space with the visual feature space. To build a fast inference model, we explore a non-linear network to approximate the latent sparse semantic representation, which lies in the semantic manifold space. Consequently, our model can extract a variety of visual characteristics from seen classes that generalize well to unobserved classes.
Zhengming Ding, Handong Zhao, Yun Fu
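
For orientation, the sketch below shows the bare-bones shared visual-semantic function that zero-shot learning relies on: a ridge-regression map from visual features to class semantic vectors, fitted on seen classes and applied to unseen classes by nearest semantic prototype. This is only a hypothetical baseline, not the ALSR model; all names and parameters are assumptions.

```python
import numpy as np

def fit_visual_semantic_map(X_seen, S_seen, lam=1.0):
    """Ridge regression W mapping visual features (n x d) to semantic vectors (n x a)."""
    d = X_seen.shape[1]
    return np.linalg.solve(X_seen.T @ X_seen + lam * np.eye(d), X_seen.T @ S_seen)

def predict_unseen(X_test, W, S_unseen_classes):
    """Assign each test sample to the unseen class with the closest semantic prototype."""
    S_hat = X_test @ W                                   # projected semantics, n x a
    S_hat = S_hat / (np.linalg.norm(S_hat, axis=1, keepdims=True) + 1e-12)
    P = S_unseen_classes / (np.linalg.norm(S_unseen_classes, axis=1, keepdims=True) + 1e-12)
    return np.argmax(S_hat @ P.T, axis=1)                # cosine similarity
```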

Transfer Learning

Frontmatter

Chapter 7. Missing Modality Transfer Learning

In reality, however, we often confront the problem that no target data are available, especially when data are multi-modal. In this situation, the target modality is unseen in the training stage, and only the source modality can be obtained. We define this problem as the Missing Modality Problem in transfer learning.
Zhengming Ding, Handong Zhao, Yun Fu

Chapter 8. Multi-source Transfer Learning

Nowadays, it is common to see multiple sources available for knowledge transfer, each of which, however, may not include complete class information of the target domain. Naively merging multiple sources together leads to inferior results due to the large divergence among the sources. In this chapter, we attempt to utilize incomplete multiple sources for effective knowledge transfer to facilitate the learning task in the target domain.
Zhengming Ding, Handong Zhao, Yun Fu

Chapter 9. Deep Domain Adaptation

Learning with limited labeled data is always a challenge in AI problems, and one promising way to address it is transferring well-established source domain knowledge to the target domain, i.e., domain adaptation. Recent research on transfer learning exploits deep structures for discriminative feature representation to tackle cross-domain disparity. However, few methods are able to perform feature learning and knowledge transfer jointly in a unified deep framework. In this chapter, we develop three novel deep domain adaptation approaches for knowledge transfer. First, we propose a Deep Low-Rank Coding framework (DLRC) for transfer learning. The core idea of DLRC is to jointly learn a deep structure of feature representation and transfer knowledge via an iterative structured low-rank constraint, which aims to deal with the mismatch between source and target domains layer by layer. Second, we propose a novel Deep Transfer Low-rank Coding (DTLC) framework to uncover more shared knowledge across source and target in a multi-layer manner. Specifically, we extend traditional low-rank coding with one dictionary to multi-layer dictionaries by jointly building multiple latent common dictionaries shared by the two domains. Third, we propose a novel deep model called "Deep Adaptive Exemplar AutoEncoder", where we build a spectral bisection tree to generate source-target data compositions as the training pairs fed to autoencoders, and impose a low-rank coding regularizer to ensure the transferability of the learned hidden layer.
Zhengming Ding, Handong Zhao, Yun Fu
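
As a rough, hypothetical illustration of the single low-rank coding step that such frameworks stack layer by layer (not DLRC or DTLC themselves), the sketch below codes target data over the source data as a dictionary and enforces a low-rank coding matrix via proximal gradient with singular value thresholding; all names and parameters are assumptions.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: proximal operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def low_rank_code(X_src, X_tgt, tau=1.0, n_iter=200):
    """Toy low-rank coding of target data over a source dictionary.

    Proximal gradient on  min_Z 0.5*||X_tgt - X_src Z||_F^2 + tau*||Z||_*,
    so the coding matrix Z couples the two domains through low-rank structure.
    """
    step = 1.0 / (np.linalg.norm(X_src, 2) ** 2 + 1e-12)   # 1 / Lipschitz constant
    Z = np.zeros((X_src.shape[1], X_tgt.shape[1]))
    for _ in range(n_iter):
        grad = X_src.T @ (X_src @ Z - X_tgt)               # gradient of the fit term
        Z = svt(Z - step * grad, step * tau)               # nuclear-norm proximal step
    return Z
```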

Chapter 10. Deep Domain Generalization

Conventional domain adaptation assumes that target data are still accessible in the training stage. However, in reality we often confront cases where the target data are completely unseen during training. This is extremely challenging since we have no prior knowledge of the target. Most recently, domain generalization has been exploited to address this challenge by capturing knowledge from multiple source domains and generalizing it to unseen target domains. However, existing domain generalization research efforts all employ shallow structures, so it is difficult for them to uncover the rich information within complex data. As a result, they easily ignore the useful knowledge shared by multiple sources and struggle to adapt that knowledge to unseen target domains in the test stage.
Zhengming Ding, Handong Zhao, Yun Fu