
About this book

This book proposes applications of tensor decomposition to unsupervised feature extraction and feature selection. The author posits that although supervised methods, including deep learning, have become popular, unsupervised methods retain advantages of their own: because tensor decomposition is a conventional linear methodology, unsupervised methods built on it are easy to learn. The book starts from very basic linear algebra and reaches cutting-edge methodologies applicable to difficult situations in which there are many features (variables) but only a small number of samples. It includes advanced treatments of tensor decomposition, covering Tucker decomposition via higher-order singular value decomposition and higher-order orthogonal iteration, as well as tensor train decomposition. The author concludes by showing unsupervised methods and their applications to a wide range of topics.

Allows readers to analyze data sets with small samples and many features;
Provides a fast algorithm, based upon linear algebra, to analyze big data;
Includes several applications to multi-view data analyses, with a focus on bioinformatics.

Table of Contents

Frontmatter

Part I

Frontmatter

Chapter 1. Introduction to Linear Algebra

Abstract
Although the content of this chapter should have been taught to most readers at a much earlier stage of their lives, at the undergraduate or graduate level, because this book mainly deals with data-science-oriented matters, it might not be a bad idea to reintroduce the fundamental concepts in a data-science-oriented manner.
Y-h. Taguchi

Chapter 2. Matrix Factorization

Abstract
Matrix factorization generally aims to represent a matrix as a product of two or more matrices. It serves multiple purposes. For example, if the two matrices whose product represents the original matrix are small enough (i.e., of lower rank), the factorization can be regarded as a reduction of the degrees of freedom. Even if the matrix cannot be represented exactly as a product of two lower-rank matrices, a product of lower-rank matrices that approximates it closely can still be useful. Matrix factorization also has a geometrical interpretation: the generated matrices can be regarded as a projection onto a lower dimensional space.
Y-h. Taguchi
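As a hedged illustration of the idea (not code from the book), a rank-k factorization of a matrix can be obtained from the truncated singular value decomposition; the factor names W and H below are hypothetical:

```python
import numpy as np

# A minimal sketch: approximate a matrix by a product of two
# lower-rank matrices via truncated SVD.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20)) @ rng.standard_normal((20, 50))  # rank <= 20

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 20                                   # target rank
W = U[:, :k] * s[:k]                     # 100 x k factor
H = Vt[:k, :]                            # k x 50 factor
X_approx = W @ H                         # rank-k product approximating X

print(np.allclose(X, X_approx))          # exact here, since rank(X) <= 20
```

The 100 x 50 matrix has 5000 entries, while the two factors together have only 2000, which is the reduction of degrees of freedom the abstract describes.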

Chapter 3. Tensor Decomposition

Abstract
Tensor decomposition (TD) is a natural extension of matrix factorization (MF), introduced for matrices in the previous chapter, to tensors. In contrast to MF, which is usually represented as a product of two matrices, TD takes various forms. In contrast to matrices, which have been studied extensively over a long period, tensors have a much shorter history of intensive investigation, especially from the application point of view. Thus there is no de facto standard form of TD for a given application. As with MF, the aim of TD is to reduce the degrees of freedom; nevertheless, there are many ways in which the degrees of freedom can be reduced. In this chapter, we introduce three principal realizations of TD: a sum of outer products of vectors, a product summation of a (smaller) tensor and matrices, and a product summation of (smaller) tensors. These three methods have their own pros and cons. In addition to the algorithm for performing each TD, we also discuss their respective pros and cons.
Y-h. Taguchi
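A minimal sketch of the second form, Tucker decomposition computed by higher-order SVD, assuming a three-way numpy array; the helper names unfold and hosvd are hypothetical, not the book's code:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move the given mode to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """Tucker decomposition by higher-order SVD: T ~ core x1 U1 x2 U2 x3 U3."""
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    core = T
    for mode, U in enumerate(factors):
        # contract this mode with U^T, then move the new axis back into place
        core = np.moveaxis(np.tensordot(core, U, axes=(mode, 0)), -1, mode)
    return core, factors

rng = np.random.default_rng(1)
# a toy tensor with multilinear rank (2, 3, 2)
G0 = rng.standard_normal((2, 3, 2))
A, B, C = (rng.standard_normal((10, 2)),
           rng.standard_normal((8, 3)),
           rng.standard_normal((6, 2)))
T = np.einsum('abc,ia,jb,kc->ijk', G0, A, B, C)

core, Us = hosvd(T, ranks=(2, 3, 2))
T_hat = np.einsum('abc,ia,jb,kc->ijk', core, *Us)
print(np.allclose(T, T_hat))   # exact because the ranks match the true multilinear rank
```

When the requested ranks are smaller than the true multilinear rank, the same procedure instead yields a low-rank approximation, which higher-order orthogonal iteration then refines.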

Part II

Frontmatter

Chapter 4. PCA Based Unsupervised FE

Abstract
Principal component analysis (PCA) is generally considered a statistical tool for visualizing the relationships between sample objects, especially when the number of features attributed to individual samples is too large to interpret. Mathematically, PCA is nothing but a linear projection of objects in a high dimensional space onto a low dimensional space. Alternatively, PCA can be considered a tool that performs feature extraction (FE), because the principal components (PCs) that PCA generates can be used as new features attributed to individual objects. In this chapter, I would like to add one more function to PCA: feature selection. I demonstrate how we can make use of PCA to select features and in which situations it works well. This also serves as a good introduction to TD based unsupervised FE, which is in some sense an extension of the method proposed in this chapter.
Y-h. Taguchi
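One hedged way to sketch the idea, as a simplified stand-in for the chapter's actual procedure rather than its exact statistics: treat the feature-side singular vectors as loadings and keep the features with the most extreme loadings on a chosen component.

```python
import numpy as np

rng = np.random.default_rng(2)
n_features, n_samples = 1000, 10          # many features, few samples
X = rng.standard_normal((n_features, n_samples))
X[:20] += 5.0                             # plant a strong shared pattern in 20 features

U, s, Vt = np.linalg.svd(X, full_matrices=False)
loadings = U[:, 0]                        # feature-side loadings on the first PC

selected = np.argsort(-loadings**2)[:20]  # features with the most extreme loadings
print(sorted(selected))                   # recovers the planted features 0..19
```

The point of the toy example is that the selection is unsupervised: no class labels are used, only the structure that the first PC captures across samples.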

Chapter 5. TD Based Unsupervised FE

Abstract
In the previous chapter, I introduced PCA based unsupervised FE as a tool that can identify features having favorable properties without prior knowledge, e.g., class labels or periods. In this chapter, I introduce TD based unsupervised FE as a natural extension of PCA based unsupervised FE toward tensors. In contrast to PCA, which can deal with only one kind of feature, TD can deal with multiple kinds of features, e.g., gene expression and miRNA expression measured simultaneously for the same samples. If we consider the case I and case II tensor approaches, we can perform integrated analysis of multi-omics data sets, too.
Y-h. Taguchi
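As a hedged sketch of the idea, using one plausible construction rather than the book's exact case I/case II definitions: two omics matrices sharing the same samples can be combined into a three-way tensor, whose gene-mode singular vectors then play the role that PC loadings play in the previous chapter.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((50, 6))   # genes x shared samples (toy sizes)
Y = rng.standard_normal((30, 6))   # miRNAs x the same samples

# combine the two matrices into a tensor T[gene, miRNA, sample]
T = np.einsum('is,js->ijs', X, Y)
print(T.shape)                     # (50, 30, 6)

# gene-mode singular vectors from the mode-1 unfolding of T
U = np.linalg.svd(T.reshape(50, -1), full_matrices=False)[0]
print(U.shape)                     # (50, 50)
```

The construction here (an entrywise product over the shared sample mode) is only one way to couple two matrices into a tensor; the chapter itself defines which construction applies in which situation.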

Part III

Frontmatter

Chapter 6. Applications of PCA Based Unsupervised FE to Bioinformatics

Abstract
Although PCA is often dismissed as an old technology, if it is useful, no other reason is required to use it. In this chapter, I apply PCA based unsupervised FE to various bioinformatics problems. As discussed in the earlier chapter, PCA based unsupervised FE is suited to situations where there are more features than samples. This situation is very common, because the features are genes, of which there are as many as several tens of thousands, while the number of samples equals the number of patients, often only a few tens. The applications of PCA based unsupervised FE range from biomarker identification and the identification of disease-causing genes to in silico drug discovery. I try to cover as many of my published studies applying PCA based unsupervised FE as possible.
Y-h. Taguchi

Chapter 7. Application of TD Based Unsupervised FE to Bioinformatics

Abstract
Although the purpose of data science is to understand complicated things, if everything complicated were understood, it might no longer be interesting. Thus, it is better for something complicated to remain not fully understood.
In the previous chapter, we demonstrated that PCA based unsupervised FE is applicable to a wide range of bioinformatics problems. Nevertheless, in some specific cases, TD is more suitable than PCA. There are two such possible situations. The first is when the data itself should be formatted as a tensor rather than a matrix. The second is the integrated analysis of more than two matrices. In this chapter, we demonstrate in which situations TD based unsupervised FE is the better choice.
Y-h. Taguchi

Backmatter
