2022 | Book

Latent Factor Analysis for High-dimensional and Sparse Matrices

A particle swarm optimization-based approach

About this book

Latent factor analysis models are an effective type of machine learning model for addressing high-dimensional and sparse matrices, which arise in many big-data-related industrial applications. The performance of a latent factor analysis model relies heavily on appropriate hyper-parameters. However, most hyper-parameters are data-dependent, and tuning them via grid search is laborious and computationally expensive. Hence, achieving efficient hyper-parameter adaptation for latent factor analysis models has become an important question.

This is the first book to focus on how particle swarm optimization can be incorporated into latent factor analysis for efficient hyper-parameter adaptation, an approach that offers high scalability in real-world industrial applications.
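To make the idea concrete, here is a minimal, illustrative Python sketch: each particle of a small swarm encodes one candidate learning rate, its fitness is the RMSE of an SGD-trained latent factor model over the known entries, and standard PSO velocity and position updates move the swarm toward better settings. The one-dimensional encoding, the function names (lfa_rmse, pso_adapt_lr), and all constants are assumptions made for illustration, not the algorithms developed in the book.

```python
# Illustrative sketch only: PSO tuning the SGD learning rate of a tiny
# latent factor model. Names and constants are assumptions, not the
# book's algorithms.
import numpy as np

rng = np.random.default_rng(0)

def lfa_rmse(triples, eta, rank=5, epochs=20, lam=0.05):
    """Train a latent factor model with SGD at learning rate `eta`
    on (user, item, rating) triples; return RMSE over those triples."""
    local = np.random.default_rng(1)          # fixed seed: same init for every candidate eta
    n_users = max(u for u, _, _ in triples) + 1
    n_items = max(i for _, i, _ in triples) + 1
    P = 0.1 * local.standard_normal((n_users, rank))
    Q = 0.1 * local.standard_normal((n_items, rank))
    for _ in range(epochs):
        for u, i, r in triples:               # only the known entries are visited
            err = r - P[u] @ Q[i]
            P[u], Q[i] = (P[u] + eta * (err * Q[i] - lam * P[u]),
                          Q[i] + eta * (err * P[u] - lam * Q[i]))
    sq = [(r - P[u] @ Q[i]) ** 2 for u, i, r in triples]
    return float(np.sqrt(np.mean(sq)))

def pso_adapt_lr(triples, n_particles=8, iters=10):
    """Standard PSO over a single dimension (the learning rate)."""
    x = rng.uniform(1e-4, 0.1, n_particles)   # positions = candidate learning rates
    v = np.zeros(n_particles)                 # velocities
    pbest_x = x.copy()
    pbest_f = np.array([lfa_rmse(triples, e) for e in x])
    g = int(np.argmin(pbest_f))               # index of the global best particle
    for _ in range(iters):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        v = 0.7 * v + 1.5 * r1 * (pbest_x - x) + 1.5 * r2 * (pbest_x[g] - x)
        x = np.clip(x + v, 1e-5, 0.1)
        f = np.array([lfa_rmse(triples, e) for e in x])
        improved = f < pbest_f
        pbest_x[improved], pbest_f[improved] = x[improved], f[improved]
        g = int(np.argmin(pbest_f))
    return pbest_x[g], pbest_f[g]

if __name__ == "__main__":
    # A toy HiDS-style rating list: (user, item, rating) for known entries only.
    data = [(0, 0, 5.0), (0, 2, 3.0), (1, 1, 4.0), (2, 0, 1.0), (3, 3, 4.5)]
    best_eta, best_rmse = pso_adapt_lr(data)
    print(f"best learning rate {best_eta:.4f}, RMSE {best_rmse:.4f}")
```

Compared with a grid search over candidate learning rates, the swarm spends its fitness evaluations adaptively, which is the efficiency argument the book develops in detail.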

The book will help students, researchers, and engineers fully understand the basic methodologies of hyper-parameter adaptation via particle swarm optimization in latent factor analysis models. Further, it will enable them to conduct research and experiments on real-world applications of the methods discussed.

Table of Contents

Frontmatter
Chapter 1. Introduction
Abstract
With the rapid development of basic computing and storage facilities, big-data-related industrial applications, e.g., recommender systems [1–5], cloud services [6–10], social networks [11–16], and wireless sensor networks [17–22], continue to expand. Consequently, the data involved in these applications grow exponentially. Such data commonly involve a massive number of entities, and as the number of entities increases, it becomes impossible to observe their whole interaction mapping. Hence, a high-dimensional and sparse (HiDS) matrix [23–31, 60] is commonly adopted to describe such relationships among entities. For instance, the Douban matrix [32], collected by China's largest online book, movie, and music database, covers 129,490 users and 58,541 items, yet contains only 16,830,839 known ratings, for a density of just 0.22% (see the sketch below).
Ye Yuan, Xin Luo
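To make the density figure above concrete, the following tiny Python sketch recomputes it from the quoted Douban statistics and notes how such a matrix is typically stored by its known entries only; the SciPy line is merely an indicative comment, not code taken from the book.

```python
# Illustrative sketch: an HiDS matrix is usually stored as a list of
# known (row, column, value) entries rather than a dense array.
n_users, n_items = 129_490, 58_541
known_entries = 16_830_839

density = known_entries / (n_users * n_items)
print(f"density = {density:.2%}")          # -> 0.22%

# Dense storage would need roughly 7.6e9 cells, so only the known
# entries are kept, e.g. as a SciPy COO matrix:
# from scipy.sparse import coo_matrix
# R = coo_matrix((values, (user_idx, item_idx)), shape=(n_users, n_items))
```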
Chapter 2. Learning Rate-Free Latent Factor Analysis via PSO
Abstract
With the explosive growth of the Internet, data generated by numerous industrial applications grow exponentially. It is therefore difficult for people to extract useful information from such big data, which contain a wealth of knowledge but are high-dimensional and sparse (HiDS) [1–5], e.g., node interactions in sensor networks [6–8], user-service invocations in cloud computing [9–15], protein interactions in bioinformatics [16–18], user interactions in social network service systems [19–21], and user-item preferences in recommender systems [22–25].
Ye Yuan, Xin Luo
Chapter 3. Learning Rate and Regularization Coefficient-Free Latent Factor Analysis via PSO
Abstract
A big-data-related application commonly involves numerous nodes, e.g., users and items in a recommender system [1–5]. With the exponential growth of involved nodes, it is impossible to obtain their whole interaction relationships; e.g., an individual user of a recommender system cannot touch all of its items [6, 7]. Hence, the resulting interactions can be described by a high-dimensional and sparse (HiDS) matrix [9–18]. For instance, the Epinions matrix [8] is a typical HiDS matrix: it contains 631,064 ratings by 51,670 users on 83,509 items, with a data density of only 0.015%.
Ye Yuan, Xin Luo
Chapter 4. Regularization and Momentum Coefficient-Free Non-negative Latent Factor Analysis via PSO
Abstract
A big-data-related application [1–11] commonly involves numerous nodes with inherently non-negative interaction relationships, e.g., user-item interactions in a recommender system [12–14]. Owing to the exponential growth of involved nodes, it is impossible to obtain their whole interaction relationships (e.g., a user touches only a tiny subset of items). Hence, such inherently non-negative interaction relationships can be described by a high-dimensional and sparse (HiDS) matrix [15–19], which has only a few known entries (describing the known interactions) while most of the others are unknown rather than zero (describing the unknown ones); a minimal illustration of training on the known entries only follows below.
Ye Yuan, Xin Luo
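The sketch below illustrates the two points from the abstract: updates touch only the known entries of the HiDS matrix, and the factors are kept non-negative. The clipping-based projection and all names are assumptions made for illustration, not the non-negative update rules developed in this chapter.

```python
# Illustrative sketch: a latent factor update over known entries only,
# with non-negativity enforced by clipping (an assumption for
# illustration, not the book's non-negative learning scheme).
import numpy as np

rng = np.random.default_rng(0)

def nnlf_step(P, Q, known, eta=0.01, lam=0.05):
    """One SGD pass over the known (user, item, value) triples,
    followed by projection onto the non-negative orthant."""
    for u, i, r in known:                  # unknown cells are never visited
        err = r - P[u] @ Q[i]
        P[u] += eta * (err * Q[i] - lam * P[u])
        Q[i] += eta * (err * P[u] - lam * Q[i])
    np.clip(P, 0.0, None, out=P)           # enforce non-negativity
    np.clip(Q, 0.0, None, out=Q)
    return P, Q

known = [(0, 0, 5.0), (0, 2, 3.0), (1, 1, 4.0), (2, 0, 1.0)]
P = rng.random((3, 4))                     # 3 users, rank-4 non-negative factors
Q = rng.random((3, 4))                     # 3 items, rank-4 non-negative factors
for _ in range(100):
    P, Q = nnlf_step(P, Q, known)
print(P[0] @ Q[0])                         # moves toward the known rating 5.0
```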
Chapter 5. Advanced Learning Rate-Free Latent Factor Analysis via P2SO
Abstract
Numerous entities are frequently encountered in various big-data-related applications such as wireless sensor networks [1–3], bioinformatics applications [4–9], social networks [10–13], user-service QoS data [14, 15], and electronic commerce systems [16–18]. With the increasing number of involved entities, it becomes impossible to observe their whole interaction mapping. Hence, the resulting interaction mapping can be described by a high-dimensional and sparse (HiDS) matrix [19–23] with a few entries known (describing the observed interactions) while most of the others are unknown (describing the unobserved ones).
Ye Yuan, Xin Luo
Chapter 6. Conclusion and Future Directions
Abstract
This book aims to advance latent factor analysis for high-dimensional and sparse matrices. In particular, it introduces how to incorporate the principle of particle swarm optimization into latent factor analysis, thereby achieving effective hyper-parameter adaptation.
Ye Yuan, Xin Luo
Metadata
Title
Latent Factor Analysis for High-dimensional and Sparse Matrices
Authors
Ye Yuan
Xin Luo
Copyright Year
2022
Publisher
Springer Nature Singapore
Electronic ISBN
978-981-19-6703-0
Print ISBN
978-981-19-6702-3
DOI
https://doi.org/10.1007/978-981-19-6703-0
