
About this Book

This book focuses on recent advances, approaches, theories, and applications related to mixture models. In particular, it presents recent unsupervised and semi-supervised frameworks that use mixture models as their main tool. The chapters consider mixture models in the context of several interesting and challenging problems, such as parameter estimation, model selection, and feature selection. The goal of this book is to summarize the recent advances and modern approaches related to these problems. Each contributor presents novel research, a practical study, novel applications based on mixture models, or a survey of the literature.
Reports advances on classic problems in mixture modeling such as parameter estimation, model selection, and feature selection;
Presents theoretical and practical developments in mixture-based modeling and their importance in different applications;
Discusses perspectives and challenging future work related to mixture modeling.

Table of Contents

Frontmatter

Gaussian-Based Models

Frontmatter

Chapter 1. A Gaussian Mixture Model Approach to Classifying Response Types

Abstract
Visual perception is influenced by prior experiences and learned expectations. One example of this is the ability to rapidly resume visual search after an interruption to the stimuli. The occurrence of this phenomenon within an interrupted search task has been referred to as rapid resumption. Previous attempts to quantify individual differences in the extent to which rapid resumption occurs across participants relied on an operationally defined cutoff criterion to classify response types within the task. This approach is potentially limited in its accuracy and could be improved by turning to data-driven alternatives for classifying response types. In this chapter, I present an alternative approach to classifying participant responses on the interrupted search task by fitting a Gaussian mixture model to response distributions. The parameter estimates obtained from fitting this model can then be used in a naïve Bayesian classifier to allow for probabilistic classification of individual responses. The theoretical basis and practical application of this approach are covered, detailing the use of the Expectation-Maximisation algorithm to estimate the parameters of the Gaussian mixture model as well as applying a naïve classifier to data and interpreting the results.
Owen E. Parsons
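
The pipeline the abstract describes (EM fitting followed by posterior-based classification) can be sketched minimally as follows. This is an illustrative sketch, not the chapter's code: the two-component univariate model, the quartile initialisation, and the synthetic "response time" data are all assumptions made for the example.

```python
import numpy as np

def em_gmm_1d(x, n_iter=100):
    """Fit a two-component 1-D Gaussian mixture by Expectation-Maximisation."""
    # Crude initialisation: place the components at the data quartiles.
    mu = np.array([np.percentile(x, 25), np.percentile(x, 75)])
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point.
        dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted maximum-likelihood updates.
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        pi = nk / len(x)
    return pi, mu, var, resp

# Synthetic response times for two hypothetical response types.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(300, 40, 200), rng.normal(700, 90, 200)])
pi, mu, var, resp = em_gmm_1d(x)
labels = resp.argmax(axis=1)  # probabilistic classification of each response
```

The responsibilities `resp` are exactly the class posteriors a naïve Bayes classifier would produce under the fitted component densities, so thresholding them (rather than a fixed cutoff) gives the probabilistic classification the chapter advocates.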

Chapter 2. Interactive Generation of Calligraphic Trajectories from Gaussian Mixtures

Abstract
The chapter presents an approach for the interactive definition of curves and motion paths based on a Gaussian mixture model (GMM) and optimal control. The input of our method is a mixture of multivariate Gaussians defined by the user, whose centers define a sparse sequence of key-points, and whose covariances define the precision required to pass through these key-points. The output is a dynamical system generating curves that are natural looking and reflect the kinematics of a movement, similar to those produced by human drawing or writing. In particular, the stochastic nature of the GMM combined with optimal control is exploited to generate paths with natural variations, which are defined by the user within a simple interactive interface. Several properties of the Gaussian mixture are exploited in this application. First, there is a direct link between multivariate Gaussian distributions and optimal control formulations based on quadratic objective functions (linear quadratic tracking), which is exploited to extend the GMM representation to a controller. We then exploit the option of tying the covariances in the GMM to modulate the style of the calligraphic trajectories. The approach is tested to generate curves and traces that are geometrically and dynamically similar to the ones that can be seen in art forms such as calligraphy or graffiti.
Daniel Berio, Frederic Fol Leymarie, Sylvain Calinon
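
The link between Gaussian covariances and quadratic tracking objectives can be illustrated with a stripped-down batch formulation (an editorial sketch, not the authors' controller): a 1-D path minimises an acceleration penalty plus precision-weighted attraction to key-points, where a tight covariance forces the path through the point and a loose one lets it pass nearby. The key-point values and weights below are hypothetical.

```python
import numpy as np

def precision_weighted_path(T, keypoints, smooth_weight=1.0):
    """Solve for a smooth 1-D path x[0..T-1] minimising a quadratic cost:
    second-difference (acceleration) smoothness plus precision-weighted
    attraction to key-points given as (time_index, mean, variance)."""
    # Second-difference operator: penalises acceleration along the path.
    D = np.zeros((T - 2, T))
    for t in range(T - 2):
        D[t, t:t + 3] = [1.0, -2.0, 1.0]
    A = smooth_weight * D.T @ D
    b = np.zeros(T)
    for t_k, mu_k, var_k in keypoints:
        A[t_k, t_k] += 1.0 / var_k  # tight covariance -> strong pull
        b[t_k] += mu_k / var_k
    return np.linalg.solve(A, b)

# Hypothetical key-points: pass exactly through the ends, loosely near the middle.
kps_x = [(0, 0.0, 1e-4), (50, 5.0, 1.0), (99, 0.0, 1e-4)]
path_x = precision_weighted_path(100, kps_x)
```

Running one such solve per coordinate yields a 2-D trace; sampling the key-point means from the GMM instead of fixing them is what produces the natural stroke-to-stroke variation described above.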

Chapter 3. Mixture Models for the Analysis, Edition, and Synthesis of Continuous Time Series

Abstract
This chapter presents an overview of techniques used for the analysis, edition, and synthesis of continuous time series, with a particular emphasis on motion data. The use of mixture models allows the decomposition of time signals as a superposition of basis functions. It provides a compact representation that aims at keeping the essential characteristics of the signals. Various types of basis functions have been proposed, with developments originating from different fields of research, including computer graphics, human motion science, robotics, control, and neuroscience. Examples of applications with radial, Bernstein, and Fourier basis functions are presented, with associated source codes to get familiar with these techniques.
Sylvain Calinon
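
The decomposition of a signal as a superposition of basis functions can be sketched with radial basis functions (one of the three families the chapter covers); the signal, basis count, and bandwidth below are illustrative choices, not values from the chapter.

```python
import numpy as np

def rbf_encode(t, y, n_basis=15, bandwidth=0.06):
    """Encode a 1-D signal as a weighted superposition of Gaussian radial basis
    functions, solving for the weights by regularised least squares."""
    centers = np.linspace(0, 1, n_basis)
    Phi = np.exp(-0.5 * (t[:, None] - centers) ** 2 / bandwidth ** 2)
    w = np.linalg.solve(Phi.T @ Phi + 1e-6 * np.eye(n_basis), Phi.T @ y)
    return centers, w, Phi @ w  # compact representation + reconstruction

t = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * t) + 0.3 * np.sin(6 * np.pi * t)  # a toy motion signal
centers, w, y_hat = rbf_encode(t, y)
err = np.sqrt(np.mean((y - y_hat) ** 2))
```

The 15 weights `w` are the compact representation: editing or blending signals then amounts to operating on the weights rather than on the raw samples.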

Generalized Gaussian-Based Models

Frontmatter

Chapter 4. Multivariate Bounded Asymmetric Gaussian Mixture Model

Abstract
In this chapter, a bounded asymmetric Gaussian mixture model (BAGMM) is proposed. In the described model, parameter estimation is performed by maximizing the log-likelihood via the expectation–maximization (EM) and Newton–Raphson algorithms. The model is applied to several data clustering applications. As a first step, to validate our model, we chose the Spambase dataset for clustering spam and non-spam emails. Another application selected for validating our algorithm is object data clustering, for which we used two popular datasets (Caltech 101 and Corel). Finally, we performed clustering on texture data using the VisTex dataset. To evaluate the clustering in all the above applications, several performance metrics are employed, and the experimental results are compared in similar settings with the asymmetric Gaussian mixture model (AGMM). The experiments and results across all applications show that BAGMM outperforms AGMM in the clustering task.
Muhammad Azam, Basim Alghabashi, Nizar Bouguila
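
The component density underlying the BAGMM can be sketched as follows (an illustrative construction, not the chapter's exact parameterisation): an asymmetric Gaussian with distinct left/right standard deviations, truncated to a bounded support and renormalised numerically.

```python
import numpy as np

def asym_gauss(x, mu, sl, sr):
    """Asymmetric Gaussian: different standard deviations left/right of the mode."""
    c = np.sqrt(2.0 / np.pi) / (sl + sr)
    s = np.where(x < mu, sl, sr)
    return c * np.exp(-0.5 * (x - mu) ** 2 / s ** 2)

def bounded_asym_gauss(x, mu, sl, sr, lo, hi):
    """Bounded version: truncate to [lo, hi] and renormalise by the mass inside."""
    grid = np.linspace(lo, hi, 10001)
    dx = grid[1] - grid[0]
    z = asym_gauss(grid, mu, sl, sr).sum() * dx  # numeric mass on the support
    f = asym_gauss(x, mu, sl, sr) / z
    return np.where((x >= lo) & (x <= hi), f, 0.0)

# Illustrative parameters: mode at 0.3, sharp left tail, heavy right tail, on [0, 1].
xs = np.linspace(0.0, 1.0, 5001)
pdf = bounded_asym_gauss(xs, 0.3, 0.05, 0.2, 0.0, 1.0)
mass = pdf.sum() * (xs[1] - xs[0])  # ≈ 1 by construction
```

Mixing several such bounded components, with EM responsibilities computed from these densities, gives the clustering model the abstract describes.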

Chapter 5. Online Recognition via a Finite Mixture of Multivariate Generalized Gaussian Distributions

Abstract
The huge amount of data expanding day by day entails creating powerful real-time algorithms. Such algorithms allow reactive processing between the input multimedia data and the system. In particular, we are mainly concerned with active learning and clustering of images and videos for the purpose of pattern recognition. In this chapter, we propose a novel online recognition algorithm based on multivariate generalized Gaussian distributions. We first estimate the generative model's parameters within a discriminative framework (fixed-point, Riemannian averaged fixed-point, and Fisher scoring). Then, we propose an online recognition algorithm in accordance with those estimators. Finally, we apply our proposed framework to three challenging problems, namely human action recognition, facial expression recognition, and pedestrian detection from infrared images. Experiments demonstrate the robustness of our approach through comparisons with state-of-the-art algorithms and offline learning techniques.
Fatma Najar, Sami Bourouis, Rula Al-Azawi, Ali Al-Badi
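
As a minimal sketch of the building block involved (the univariate special case, not the multivariate model or the online estimators of the chapter), the generalized Gaussian density and mixture responsibilities can be written as:

```python
import numpy as np
from math import gamma

def gen_gauss_pdf(x, mu, alpha, beta):
    """Univariate generalized Gaussian: beta=2 recovers a Gaussian
    (with alpha = sigma * sqrt(2)), beta=1 the Laplace distribution."""
    c = beta / (2.0 * alpha * gamma(1.0 / beta))
    return c * np.exp(-np.abs((x - mu) / alpha) ** beta)

def responsibilities(x, weights, params):
    """Posterior component probabilities under a generalized Gaussian mixture."""
    dens = np.stack([w * gen_gauss_pdf(x, *p)
                     for w, p in zip(weights, params)], axis=1)
    return dens / dens.sum(axis=1, keepdims=True)

# Two illustrative components: one Gaussian-shaped (beta=2), one Laplace (beta=1).
x = np.array([-2.0, 0.0, 2.0])
r = responsibilities(x, [0.5, 0.5], [(-1.5, 1.0, 2.0), (1.5, 1.0, 1.0)])
```

The shape parameter beta is what lets the mixture adapt to peaky or heavy-tailed feature distributions that a plain Gaussian mixture fits poorly.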

Spherical and Count Data Clustering

Frontmatter

Chapter 6. L2-Normalized Data Clustering Through the Dirichlet Process Mixture Model of von Mises Distributions with Localized Feature Selection

Abstract
In this chapter, we propose a probabilistic model-based approach for clustering L2-normalized data. Our approach is based on the Dirichlet process mixture model of von Mises (VM) distributions. Since it assumes an infinite number of clusters (i.e., mixture components), the Dirichlet process mixture model of VM distributions can also be considered the infinite VM mixture model. Compared with finite mixture models, in which the number of mixture components has to be determined through extra effort, the infinite VM mixture model is a nonparametric model: the number of mixture components is assumed to be infinite initially and is inferred automatically during the learning process. To improve clustering performance for high-dimensional data, a localized feature selection scheme is integrated into the infinite VM mixture model, which can effectively detect irrelevant features based on the estimated feature saliencies. To learn the proposed infinite mixture model with localized feature selection, we develop an effective approach using variational inference that can estimate model parameters and feature saliencies with closed-form solutions. Our model-based clustering approach is validated through two challenging applications, namely topic novelty detection and unsupervised image categorization.
Wentao Fan, Nizar Bouguila, Yewang Chen, Ziyi Chen
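
The basic ingredient can be sketched in the two-dimensional case (an editorial illustration, not the chapter's infinite model): L2-normalized 2-D vectors live on the unit circle, where the von Mises density applies directly; the two fixed components below are assumptions for the example.

```python
import numpy as np

def von_mises_pdf(theta, mu, kappa):
    """Von Mises density on the circle; np.i0 is the modified Bessel function I0."""
    return np.exp(kappa * np.cos(theta - mu)) / (2.0 * np.pi * np.i0(kappa))

def vm_mixture_resp(theta, weights, mus, kappas):
    """Posterior component probabilities under a von Mises mixture."""
    dens = np.stack([w * von_mises_pdf(theta, m, k)
                     for w, m, k in zip(weights, mus, kappas)], axis=1)
    return dens / dens.sum(axis=1, keepdims=True)

# L2-normalised 2-D vectors reduce to angles on the unit circle.
vecs = np.array([[1.0, 0.1], [-1.0, 0.2], [0.9, -0.1]])
vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)
theta = np.arctan2(vecs[:, 1], vecs[:, 0])
r = vm_mixture_resp(theta, [0.5, 0.5], [0.0, np.pi], [4.0, 4.0])
```

In the chapter's setting the number of such components is not fixed in advance but inferred by the Dirichlet process prior during variational learning.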

Chapter 7. Deriving Probabilistic SVM Kernels from Exponential Family Approximations to Multivariate Distributions for Count Data

Abstract
This work proposes a robust hybrid probabilistic learning approach that appropriately combines the advantages of both generative and discriminative models for modeling count data. We build new probabilistic kernels based on information divergences and the Fisher score from efficient approximations to multivariate distributions for support vector machines (SVMs). More precisely, we derive probabilistic kernels from the mixture of exponential family approximations to two powerful generative models for count data, namely the Dirichlet compound multinomial (DCM) and the generalized Dirichlet multinomial (GDM). The developed hybrid models are introduced as effective SVM kernels able to incorporate prior knowledge about the nature of the data involved in the problem at hand and, therefore, permit good data discrimination. We demonstrate the flexibility and the merits of the proposed frameworks on the problem of analyzing activities in surveillance scenes.
Nuha Zamzami, Nizar Bouguila
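
The general idea of a distribution-based SVM kernel can be illustrated with a much simpler member of the same family than the DCM/GDM kernels derived in the chapter: the probability-product (Bhattacharyya) kernel between smoothed multinomial estimates of two count vectors. The documents and smoothing constant below are illustrative.

```python
import numpy as np

def bhattacharyya_kernel(x, y, smoothing=1.0):
    """Probability-product kernel between two count vectors: estimate a smoothed
    multinomial for each and take K(p, q) = sum_i sqrt(p_i * q_i)."""
    p = (x + smoothing) / (x + smoothing).sum()
    q = (y + smoothing) / (y + smoothing).sum()
    return float(np.sum(np.sqrt(p * q)))

def gram_matrix(X):
    """Kernel (Gram) matrix to hand to any kernel SVM implementation."""
    n = len(X)
    return np.array([[bhattacharyya_kernel(X[i], X[j]) for j in range(n)]
                     for i in range(n)])

# Three toy "documents" as word-count vectors; the first two are similar.
docs = np.array([[10, 0, 2], [9, 1, 1], [0, 12, 3]], dtype=float)
K = gram_matrix(docs)
```

Replacing the plain multinomial with a DCM or GDM fit, and the Bhattacharyya form with an information-divergence or Fisher-score kernel, yields the hybrid generative-discriminative kernels the abstract describes.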

Chapter 8. Toward an Efficient Computation of Log-Likelihood Functions in Statistical Inference: Overdispersed Count Data Clustering

Abstract
This work presents an unsupervised learning algorithm that uses the mesh method for computing the log-likelihood function. The multinomial Dirichlet distribution (MDD) is one of the widely used methods for modeling multicategorical count data with overdispersion. Recently, it has been shown that traditional numerical computation of the MDD log-likelihood function either results in instability or leads to long run times that make its use infeasible for large datasets. Thus, we propose to use the mesh algorithm, which approximates the MDD log-likelihood function based on Bernoulli polynomials. Moreover, we extend the mesh algorithm approach to computing the log-likelihood function of a more flexible distribution, namely the multinomial generalized Dirichlet (MGD). We demonstrate the efficiency of this method in statistical inference, i.e., maximum likelihood estimation, for fitting finite mixture models based on MDD and MGD as efficient distributions for count data. Through a set of experiments, the proposed approach shows its merits in two real-world clustering problems, namely natural scenes categorization and facial expression recognition.
Masoud Daghyani, Nuha Zamzami, Nizar Bouguila
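
For context on what is being approximated, the MDD (Dirichlet-multinomial) log-likelihood can be written directly with log-gamma functions; this is the standard baseline form, not the chapter's mesh approximation, and the parameter values are illustrative.

```python
from math import lgamma

def dcm_loglik(counts, alpha):
    """Log-likelihood of one count vector under the Dirichlet-multinomial,
    computed with log-gamma for stability (the multinomial coefficient is
    omitted, as it does not depend on alpha)."""
    A, n = sum(alpha), sum(counts)
    ll = lgamma(A) - lgamma(A + n)
    for a, x in zip(alpha, counts):
        ll += lgamma(a + x) - lgamma(a)
    return ll

# Large counts stay finite here, where naive products of Gamma values overflow.
ll_small = dcm_loglik([1, 0], [1.0, 1.0])    # symmetric prior, one draw -> log(1/2)
ll_large = dcm_loglik([500, 300], [2.0, 3.0])
```

Even in log-space this form requires many `lgamma` evaluations per data point, which is the cost the mesh/Bernoulli-polynomial approximation targets.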

Bounded and Semi-bounded Data Clustering

Frontmatter

Chapter 9. A Frequentist Inference Method Based on Finite Bivariate and Multivariate Beta Mixture Models

Abstract
Modern technological improvements, revolutionized computing, progress in scientific methods, and other related factors have led to the generation of massive volumes of structured and unstructured data. Such valuable data have the potential to be mined for information retrieval and analyzed computationally to reveal patterns, trends, and associations that lead to better decisions and strategies. Thus, machine learning and, specifically, unsupervised learning methods have become the topic of interest of much recent research in data engineering. Finite mixture models, as unsupervised learning (namely, clustering) methods, are considered capable techniques for the discovery, extraction, and analysis of knowledge from data. Traditionally, the Gaussian mixture model (GMM) has drawn much attention in the literature and has been studied extensively. However, other distributions demonstrate more flexibility and convenience in modeling and describing data.
The novel aspect of this work is to develop a framework to learn mixture models based on bivariate and multivariate Beta distributions. Moreover, we simultaneously tackle the problems of parameter estimation and cluster validation (model selection), which are principal challenges in the deployment of mixture models. The effectiveness, utility, and advantages of the proposed method are illustrated through extensive empirical results using real datasets and challenging applications involving image segmentation, sentiment analysis, credit approval, and medical inference.
Narges Manouchehri, Nizar Bouguila

Chapter 10. Finite Inverted Beta-Liouville Mixture Models with Variational Component Splitting

Abstract
The use of mixture models to statistically approximate data has been an interesting topic of research in unsupervised learning. Mixture models based on the exponential family of distributions have gained popularity in recent years. In this chapter, we introduce a finite mixture model based on the Inverted Beta-Liouville distribution, which has a higher degree of freedom and provides a better fit for the data. We use a variational learning framework to estimate the parameters, which decreases the computational complexity of the model. We handle the problem of model selection with a component splitting approach, an added advantage as it is done within the variational framework. We evaluate our model on challenging applications such as image clustering, speech clustering, spam image detection, and software defect detection.
Kamal Maanicshah, Muhammad Azam, Hieu Nguyen, Nizar Bouguila, Wentao Fan

Chapter 11. Online Variational Learning for Medical Image Data Clustering

Abstract
Data mining is an extensive area of research involving pattern discovery and feature extraction which is applied in various critical domains. In clinical aspect, data mining has emerged to assist the clinicians in early detection, diagnosis, and prevention of diseases. Advances in computational methods have led to implementation of machine learning in multi-modal clinical image analysis. One recent method is online learning where data become available in a sequential order, thus sequentially updating the best predictor for the future data at each step, as opposed to batch learning techniques which generate the best predictor by learning the entire data set at once.
In this chapter, we examine and analyse multi-modal medical images by developing an unsupervised machine learning algorithm based on online variational inference for the finite inverted Dirichlet mixture model. Our prime focus is to validate the developed approach on medical images. We do so by implementing the algorithm on both synthetic and real data sets. We test the algorithm's ability to detect challenging real-world diseases, namely brain tumour, lung tuberculosis, and melanomic skin lesions.
Meeta Kalra, Michael Osadebey, Nizar Bouguila, Marius Pedersen, Wentao Fan
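
The online-versus-batch distinction drawn above can be sketched generically (a univariate Gaussian mixture with stochastic sufficient-statistic updates; this stands in for the chapter's inverted Dirichlet model and variational updates, which are more involved). The data, initial state, and step-size schedule are illustrative assumptions.

```python
import numpy as np

def online_gmm_step(x_t, state, rho):
    """One stochastic update: blend this observation's sufficient statistics
    into the running averages with step size rho, then refresh parameters."""
    pi, mu, var, s = state["pi"], state["mu"], state["var"], state["stats"]
    dens = pi * np.exp(-0.5 * (x_t - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    r = dens / dens.sum()                    # E-step on a single point
    s["w"] = (1 - rho) * s["w"] + rho * r    # running sufficient statistics
    s["wx"] = (1 - rho) * s["wx"] + rho * r * x_t
    s["wx2"] = (1 - rho) * s["wx2"] + rho * r * x_t ** 2
    state["pi"] = s["w"] / s["w"].sum()      # M-step from the running stats
    state["mu"] = s["wx"] / s["w"]
    state["var"] = np.maximum(s["wx2"] / s["w"] - state["mu"] ** 2, 1e-6)
    return state

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-3, 1, 2000), rng.normal(3, 1, 2000)])
rng.shuffle(data)
state = {"pi": np.array([0.5, 0.5]), "mu": np.array([-1.0, 1.0]),
         "var": np.array([4.0, 4.0]),
         "stats": {"w": np.array([0.5, 0.5]), "wx": np.array([-0.5, 0.5]),
                   "wx2": np.array([2.5, 2.5])}}
for t, x_t in enumerate(data):
    state = online_gmm_step(x_t, state, rho=1.0 / (t + 10) ** 0.6)
```

Each observation is touched once and then discarded, which is what makes such updates attractive when medical image data arrive sequentially.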

Image Modeling and Segmentation

Frontmatter

Chapter 12. Color Image Segmentation Using Semi-bounded Finite Mixture Models by Incorporating Mean Templates

Abstract
Finite mixture models (FMM) are very popular for image segmentation. However, FMM assumes that pixels are independent of each other; it therefore ignores the spatial information of the pixels, which makes FMM sensitive to noise. Generally, the traditional FMM consists of a prior probability (PP) and a component conditional probability (CP). In this chapter, we incorporate mean templates, namely the weighted geometric mean template (WGMT) and the weighted arithmetic mean template (WAMT), to compute the CP. For estimating the PP, the weighted geometric mean prior probability (WGMPP) and weighted arithmetic mean prior probability (WAMPP) templates are used. Lastly, the Expectation-Maximization (EM) algorithm is used to estimate the hyper-parameters of the FMM. Our models are based on the inverted Dirichlet (ID), generalized inverted Dirichlet (GID), and inverted Beta-Liouville (IBL) mixture models using the mean templates. For experimentation, the Berkeley 500 (BSD500) and MIT Computational Visual Cognition Laboratory (CVCL) datasets are used. We employ eight image segmentation performance evaluation metrics, such as the adjusted Rand index and homogeneity score, to validate the segmentation results on BSD500. Additionally, we compare the segmentation outputs for the CVCL dataset computed using the traditional RGB and l1l2l3 color spaces. The results obtained from the IBL mixture model (IBLMM) are more promising than those from the ID mixture model (IDMM) and the GID mixture model (GIDMM).
Jaspreet Singh Kalsi, Muhammad Azam, Nizar Bouguila
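
The effect of mean templates can be sketched on a toy posterior map (uniform 3x3 weights; the chapter's weighted templates and their integration into EM are more elaborate, and the noisy map below is fabricated for illustration only).

```python
import numpy as np

def arithmetic_mean_template(post, k=3):
    """Arithmetic mean over a k x k neighbourhood of each pixel's posterior map
    (uniform weights in this sketch)."""
    pad = k // 2
    p = np.pad(post, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros_like(post)
    for di in range(k):
        for dj in range(k):
            out += p[di:di + post.shape[0], dj:dj + post.shape[1]]
    return out / (k * k)

def geometric_mean_template(post, k=3, eps=1e-12):
    """Geometric mean template: average log-probabilities, then renormalise."""
    logp = arithmetic_mean_template(np.log(post + eps), k)
    g = np.exp(logp)
    return g / g.sum(axis=2, keepdims=True)

# A two-class posterior map: left half favours class 0, right half class 1.
post = np.full((8, 8, 2), 0.5)
post[:, :4, 0], post[:, :4, 1] = 0.8, 0.2
post[:, 4:, 0], post[:, 4:, 1] = 0.2, 0.8
post[3, 1] = [0.1, 0.9]  # an isolated noisy pixel in the left half
smoothed = geometric_mean_template(post)
```

The isolated pixel's label flips back to agree with its neighbourhood, which is exactly the noise robustness that motivates feeding such templates into the PP and CP terms of the mixture.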

Chapter 13. Medical Image Segmentation Based on Spatially Constrained Inverted Beta-Liouville Mixture Models

Abstract
In this chapter, we propose an image segmentation method based on a spatially constrained inverted Beta-Liouville (IBL) mixture model for segmenting medical images. Our method adopts the IBL distribution as the basic distribution, which can demonstrate better performance than commonly used distributions (such as Gaussian distribution) in image segmentation. To improve the robustness of our image segmentation method against noise, the spatial relationship among nearby pixels is imposed into our model by using generalized means. We develop a variational Bayes inference algorithm to learn the proposed model, such that model parameters can be efficiently estimated in closed form. In our experiments, we use both simulated and real brain magnetic resonance imaging (MRI) data to validate our model.
Wenmin Chen, Wentao Fan, Nizar Bouguila, Bineng Zhong

Chapter 14. Flexible Statistical Learning Model for Unsupervised Image Modeling and Segmentation

Abstract
We propose in this work to improve the tasks of image segmentation and modeling through an unsupervised, flexible learning approach. Our focus is to develop an alternative mixture model based on the bounded generalized Gaussian distribution, which is less sensitive to over-segmentation and offers more flexibility in data modeling than the Gaussian distribution, which is certainly not the best approximation for image segmentation. A maximum likelihood (ML) based algorithm is applied for estimating the resulting model parameters. We investigate the integration of both spatial information (prior information between neighboring pixels) and the minimum description length (MDL) principle into the model learning step in order to deal with the major problems of finding the optimal number of classes and selecting the model that best describes the dataset. The proposed model therefore has the advantage of maintaining the balance between model complexity and goodness of fit. Results obtained on a large database of medical MR images confirm the effectiveness of the proposed approach and demonstrate its superior performance compared to some conventional methods.
Ines Channoufi, Fatma Najar, Sami Bourouis, Muhammad Azam, Alrence S. Halibas, Roobaea Alroobaea, Ali Al-Badi
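
The complexity/goodness-of-fit trade-off via MDL can be sketched as follows; the log-likelihood values and parameter counts below are fabricated for illustration, and the chapter's criterion may differ in its exact penalty term.

```python
import numpy as np

def mdl_score(loglik, n_params, n_points):
    """Minimum description length: negative log-likelihood plus a complexity
    penalty of (n_params / 2) * log(n_points); lower is better."""
    return -loglik + 0.5 * n_params * np.log(n_points)

def select_k(logliks, params_per_component, n_points):
    """Pick the number of mixture components minimising the MDL score.
    logliks: dict mapping K -> fitted log-likelihood; the parameter count is
    K components plus K-1 free mixing weights."""
    scores = {k: mdl_score(ll, k * params_per_component + (k - 1), n_points)
              for k, ll in logliks.items()}
    return min(scores, key=scores.get), scores

# Illustrative fits: large likelihood gains up to K=3, marginal ones after.
logliks = {1: -5200.0, 2: -4700.0, 3: -4500.0, 4: -4495.0, 5: -4493.0}
best_k, scores = select_k(logliks, params_per_component=4, n_points=1000)
```

The penalty grows linearly in the number of components while the likelihood gains flatten, so the score bottoms out at the class count that balances the two, which is how the approach avoids over-segmentation.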

Backmatter

Further Information