
Open Access 25.04.2024 | Original Article

A robust multi-view knowledge transfer-based rough fuzzy C-means clustering algorithm

Authors: Feng Zhao, Yujie Yang, Hanqiang Liu, Chaofei Wang

Published in: Complex & Intelligent Systems


Abstract

Rough fuzzy clustering algorithms have received extensive attention due to their excellent ability to handle overlapping and uncertain data. However, existing rough fuzzy clustering algorithms generally consider single-view clustering, which neglects the clustering requirements of multiple views and fails to identify diverse data structures in practical applications. In addition, rough fuzzy clustering algorithms are always sensitive to the initialized cluster centers and easily fall into local optima. To solve the above problems, multi-view learning and transfer learning are introduced into rough fuzzy clustering, and a robust multi-view knowledge transfer-based rough fuzzy c-means clustering algorithm (MKT-RFCCA) is proposed in this paper. First, multiple distance metrics are adopted as multiple views to effectively recognize different data structures and thus contribute positively to clustering. Second, a novel multi-view transfer-based rough fuzzy clustering objective function is constructed by using fuzzy memberships as transfer knowledge. This objective function can fully explore and utilize the potential information between multiple views and characterize the uncertainty information. Then, combining the statistical information of color histograms, an initialized centroids selection strategy is presented for image segmentation to overcome the instability and sensitivity caused by the random distribution of the initialized cluster centers. Finally, to reduce manual intervention, a distance-based adaptive threshold determination mechanism is designed to determine the threshold parameter for dividing the lower approximation and boundary region of rough fuzzy clusters during the iteration process. Experiments on synthetic datasets, real-world datasets, and noise-contaminated Berkeley and Weizmann images show that MKT-RFCCA obtains favorable clustering results. In particular, it provides satisfactory segmentation results on images corrupted by different types of noise and preserves more detailed information of the images.
Notes

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Introduction

Clustering is an unsupervised learning technique that is widely used in data mining [1, 2], pattern recognition [3, 4], etc. It aims to partition unlabeled data into groups or clusters such that similar data points belong to the same class [5]. In general, clustering methods mainly include prototype-based clustering [6–8], density-based clustering [9–11], graph-based clustering [12–14], and so on. Among them, prototype-based clustering algorithms fall into two main types: soft clustering and hard clustering.
Hard clustering algorithms fail to obtain satisfactory results since data in real scenes usually contain ambiguous information and noise. In contrast, soft clustering algorithms are more flexible and allow samples to belong to multiple specific regions or classes. In particular, fuzzy c-means (FCM) [15], rough c-means (RCM) [16], and rough fuzzy c-means (RFCM) [17, 18] have been successfully applied in many fields. To effectively deal with ambiguity and overlapping partitions, FCM utilizes fuzzy membership degrees to measure the belongingness of each sample to all clusters. RCM introduces the concepts of upper and lower approximations from rough set theory to divide the samples and thereby handle the incompleteness and uncertainty of clusters. RFCM synthesizes the advantages of FCM and RCM: it combines fuzzy membership degrees with the upper and lower approximations, which enhances the ability of the algorithm to handle the uncertainty and vagueness of clusters and also improves its robustness to outliers in data. However, the following issues are associated with these classical clustering algorithms: (1) clustering results are sensitive to the initialized cluster centers and susceptible to noise; (2) these algorithms are designed for single-view clustering, ignoring the clustering requirements of multiple views and restricting their application in practice.
To solve the sensitivity to the initialized cluster centers in the first issue, Murugesan et al. [19] proposed a new initialization and performance measure method based on maximizing the ratio of inter-cluster variance to intra-cluster variance to improve the clustering accuracy of RCM. Inspired by the average diversity of datasets, Wu et al. [20] presented an improved cluster centroids initialization method to avoid randomization of cluster centers in FCM. Some scholars also use evolutionary strategies to optimize one or more fitness functions to find the optimal cluster centers. For example, Liu et al. [21] proposed a multiobjective fuzzy clustering algorithm with multiple spatial information (MFCMSI) by combining evolutionary operations and fuzzy clustering. Kumar et al. [22] developed a particle swarm optimization improved fuzzy c-means to deal with noisy data and initialization problem.
To overcome the influence of noise on the performance of clustering algorithms, many improved clustering algorithms have been proposed to resolve uncertainty more accurately [23–26]. For image segmentation, one of the most popular ideas is to incorporate local spatial information into clustering algorithms to improve the segmentation effect. Lei et al. [27] proposed a fast and robust fuzzy c-means clustering algorithm (FRFCM) that improves robustness by using morphological reconstruction (MR) and membership filtering. Wang et al. [28] presented an improved FCM with adaptive spatial and intensity constraints and membership linking (FCM-SICM) for noisy image segmentation. Roy and Maji [29] proposed a spatially constrained rough fuzzy c-means (sRFCM) clustering algorithm, which incorporates the local spatial neighborhood information of the image into RFCM to avoid the effect of noise on brain magnetic resonance image segmentation. Recently, in order to reduce fuzzification and smooth over-preserved noisy pixels, Wang et al. [30] devised a fuzzy adaptive local and region-level information c-means (FALRCM) by introducing Kullback–Leibler information with locally median membership degrees. Wu et al. [31] proposed a full-parameter adaptive fuzzy clustering, which improves robustness by integrating the spatial information of the image and computing the parameters adaptively.
To overcome the limitation of single-view clustering, some scholars have proposed multi-view clustering algorithms [32–34]. Generally speaking, the term "multi-view clustering" indicates a clustering algorithm that makes use of multiple different feature sets of the data. However, feature sets are not always available. In this case, the term "multi-view" can also be extended to the collaborative interaction of multiple different matrices characterized by various distance measures based on a single feature space [35]. Therefore, we argue that multi-view clustering algorithms possess the potential to deal with different forms of data. To highlight the advantages of multi-view clustering, Wang et al. [36] proposed a multi-view clustering method combining minimax optimization and FCM, called MinimaxFCM. Recently, Hu et al. [37] developed a two-level weighted collaborative multi-view fuzzy clustering (TW-Co-MFC) algorithm, which can simultaneously consider the importance of views and features. However, the above methods are limited to feature sets. Some methods based on multiple relational descriptions or dissimilarity matrices have subsequently been proposed. Liu et al. [38] proposed a multi-objective evolutionary clustering method combining multiple distance measures (MOECDM) that considers two different distance functions simultaneously to overcome the effect of view weights. García et al. [35] proposed a multi-objective evolutionary multi-view clustering (MVMC) method, which can consider multiple views that are either different feature sets or different matrices, and also overcomes the limitation on the number of views in existing multi-view clustering methods.
Compared with single-view methods, an important feature of multi-view methods is that complementary and consistent information is usually generated in different views [31]. The greater the difference in clustering results between arbitrary views, the greater the need to integrate this information. However, the potential information among views is frequently neglected, resulting in poor clustering results. In recent years, transfer learning, as an efficient learning strategy, has been widely applied in clustering [8, 39–41]. Gargees et al. [42] designed a transfer-learning possibilistic c-means (TLPCM), which takes the cluster prototypes of the source domain as references for target data clustering to solve the problem of insufficient data. Shi et al. [43] proposed the transfer clustering ensemble selection algorithm (TECS) by combining transfer learning and clustering ensemble selection; it adaptively selects clustering members based on the tradeoff between quality and diversity and transfers them to a target dataset based on three objective functions. Jiao et al. [44] proposed a transfer evidential c-means algorithm (TECM), which uses the cluster prototypes of the source data as knowledge to design a new objective function that addresses incompleteness and uncertainty in clustering.
On the basis of the issues discussed above, a robust multi-view knowledge transfer-based rough fuzzy c-means clustering algorithm (MKT-RFCCA) is proposed in this paper. The main contributions are highlighted as follows: (1) An enhanced version of RFCM combined with multi-view learning is presented to break through the limitation of traditional single-view algorithms. It employs different dissimilarity matrices as multiple views to improve the capability of the algorithm to explore multiple data structures; (2) Inspired by the knowledge transfer mechanism in transfer learning, a novel multi-view knowledge transfer-based rough fuzzy clustering objective function is constructed, which uses fuzzy memberships as the transfer knowledge to exchange complementary and consistent information between views and further promote the clustering performance; (3) Based on the statistical information of color histograms, an initialized centroids selection strategy is proposed to overcome the instability and sensitivity of random cluster centroids in the image segmentation field; (4) A distance-based adaptive threshold determination mechanism is designed to determine the threshold parameter during the rough fuzzy clustering iteration process and improve the robustness of the algorithm.
The remainder of this paper is organized as follows. The brief description of related work is summarized in Sect. “Related work”. Sect. “A robust multi-view knowledge transfer-based rough fuzzy C-means clustering algorithm” describes the details of the proposed MKT-RFCCA. In addition, experimental results and analysis are provided in Sect. “Experimental study”. Finally, Sect. “Conclusion” gives some concluding remarks and discusses future works.
RFCM [18] introduces the concepts of upper and lower approximations from rough set theory into FCM. It combines fuzzy membership with the lower and upper approximations of rough set theory, which not only effectively handles overlapping partitions but also deals with uncertainty, ambiguity, and incompleteness in class definitions. Let \(X{ = }\left\{ {x_{1} ,x_{2} , \ldots ,x_{N} } \right\}\) be a dataset with N data points and \(v_{i} (1 \le i \le K)\) denote the center of the cluster \(C_{i}\). The objective function of RFCM is given as follows:
$$ J_{RFCM} = \begin{cases} w_{low} \times J_{P} + w_{bou} \times J_{Q} , & \text{if } L(C_{i}) \ne \emptyset ,\ B(C_{i}) \ne \emptyset \\ J_{P} , & \text{if } L(C_{i}) \ne \emptyset ,\ B(C_{i}) = \emptyset \\ J_{Q} , & \text{if } L(C_{i}) = \emptyset ,\ B(C_{i}) \ne \emptyset \end{cases} $$
(1)
where \(J_{P}\) and \(J_{Q}\) are defined as follows:
$$ J_{P} = \sum\limits_{i = 1}^{K} {\sum\limits_{{x_{j} \in L(C_{i} )}} {\mu_{ij}^{m} \left\| {x_{j} - v_{i} } \right\|^{2} } } $$
(2)
$$ J_{Q} = \sum\limits_{i = 1}^{K} {\sum\limits_{{x_{j} \in B(C_{i} )}} {\mu_{ij}^{m} \left\| {x_{j} - v_{i} } \right\|^{2} } } $$
(3)
The cluster centroid is updated as follows:
$$ v_{i} = \begin{cases} w_{low} \frac{\sum_{x_{j} \in L(C_{i})} \mu_{ij}^{m} x_{j}}{\sum_{x_{j} \in L(C_{i})} \mu_{ij}^{m}} + w_{bou} \frac{\sum_{x_{j} \in B(C_{i})} \mu_{ij}^{m} x_{j}}{\sum_{x_{j} \in B(C_{i})} \mu_{ij}^{m}} , & \text{if } L(C_{i}) \ne \emptyset ,\ B(C_{i}) \ne \emptyset \\ \frac{\sum_{x_{j} \in L(C_{i})} \mu_{ij}^{m} x_{j}}{\sum_{x_{j} \in L(C_{i})} \mu_{ij}^{m}} , & \text{if } L(C_{i}) \ne \emptyset ,\ B(C_{i}) = \emptyset \\ \frac{\sum_{x_{j} \in B(C_{i})} \mu_{ij}^{m} x_{j}}{\sum_{x_{j} \in B(C_{i})} \mu_{ij}^{m}} , & \text{if } L(C_{i}) = \emptyset ,\ B(C_{i}) \ne \emptyset \end{cases} $$
(4)
where \(\mu_{ij}\) has the same meaning as the fuzzy membership in FCM and m is the fuzzification coefficient. \(L(C_{i} )\) and \(B(C_{i} )\) denote the lower approximation and the boundary region of the cluster \(C_{i}\), respectively. The boundary region \(B(C_{i} ) = \left\{ {UP(C_{i} ) - L(C_{i} )} \right\}\), where \(UP(C_{i} )\) is the upper approximation of the cluster \(C_{i}\). The parameters \(w_{low}\) and \(w_{bou}\) weight the importance of the lower approximation and the boundary region, respectively. It is notable that the membership values of objects in the lower approximation of RFCM are assigned to 1. Moreover, since data points in the lower approximation definitely belong to a cluster, they should be assigned a higher weight than the data points in the boundary region, that is, \(0 < w_{bou} < w_{low} < 1,\ w_{low} + w_{bou} = 1\).
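The centroid rule in Eq. (4) can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation; the function name and the representation of the lower-approximation and boundary index sets are our own choices:

```python
import numpy as np

def rfcm_centroid(X, mu_i, lower_idx, boundary_idx, w_low=0.8, m=2.0):
    """Sketch of the RFCM centroid update in Eq. (4) for one cluster i.

    X            : (N, D) data matrix
    mu_i         : (N,) fuzzy memberships of all points to cluster i
    lower_idx    : indices of points in the lower approximation L(C_i)
    boundary_idx : indices of points in the boundary region B(C_i)
    """
    def weighted_mean(idx):
        w = mu_i[idx] ** m                     # mu_ij^m
        return (w[:, None] * X[idx]).sum(axis=0) / w.sum()

    # Eq. (4): three branches depending on which regions are non-empty
    if len(lower_idx) and len(boundary_idx):
        return (w_low * weighted_mean(lower_idx)
                + (1 - w_low) * weighted_mean(boundary_idx))
    if len(lower_idx):
        return weighted_mean(lower_idx)
    return weighted_mean(boundary_idx)
```

Note that the boundary weight is written as \(1 - w_{low}\), which follows directly from the constraint \(w_{low} + w_{bou} = 1\).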

A robust multi-view knowledge transfer-based rough fuzzy C-means clustering algorithm

To effectively recognize the clustering structure and overcome the limitation of single view, this paper adopts different dissimilarity matrices as multiple views and fuzzy memberships as transfer knowledge, and proposes a robust multi-view knowledge transfer-based rough fuzzy c-means clustering algorithm to explore the potential information between views and subsequently improve the clustering performance. In addition, an initialized centroids selection strategy and a distance-based adaptive threshold determination mechanism are designed to enhance the robustness. The details of MKT-RFCCA are as follows.

Initialized centroids selection strategy

It is well known that most clustering algorithms are sensitive to the selection of the initialized centroids, and clustering algorithms with unsuitable initialized centroids may easily fall into local optima. Therefore, choosing appropriate initialized centroids is an important task for clustering algorithms. In Ref. [19], a heuristic centroids initialization mechanism is proposed that uses the ratio of between-cluster variance to within-cluster variance. This mechanism can ensure the separability of clusters and improve clustering performance. It should be noted that this initialization method first requires constructing a pairwise distance matrix between samples, so it is suitable only for small datasets. When applied to image segmentation, clustering algorithms with this initialization strategy may suffer from the storage and computation cost of the huge distance matrix. To solve this problem, a new centroids initialization strategy is designed for color image segmentation in this section.
A color image consists of three color channels, namely red, green, and blue, and the range of pixel values for each channel is [0, 255]. The proposed centroids initialization strategy obtains the corresponding centroids for each color channel; the details are presented in Algorithm 1. In this algorithm, the centroids initialization for the red color channel is presented as an example, and the other color channels adopt the same operation. First, the histogram of the red color channel is calculated and denoted as \(h_{R} (s)\), and then the cumulative sum of the histogram is computed by \(H_{R} (s) = \sum\nolimits_{l = 0}^{s} {h_{R} (l)}\), where s and l are the pixel values. Second, all the pixels are initially divided into K parts \(C_{kR} (1 \le k \le K)\) utilizing the histogram cumulative sum. Finally, the initialized centroid \(\tilde{v}_{kR} (1 \le k \le K)\) for the red color channel is obtained by calculating the mean value of the pixels within the kth part. Similarly, the initialized centroids for the other two color channels can be obtained, and the initialized centroid of the image is \(\tilde{v}_{k} = (\tilde{v}_{kR} ,\tilde{v}_{kG} ,\tilde{v}_{kB} )(1 \le k \le K)\). Figure 1 shows an example of this initialization method. The cumulative sum of the histogram is shown in Fig. 1d, and the obtained initialized centroids are presented in Fig. 1e.
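The channel-wise initialization described above can be sketched as follows. Since Algorithm 1 is not reproduced in the text, the equal-frequency split of the cumulative histogram below is our reading of the partition rule and should be treated as an assumption:

```python
import numpy as np

def init_centroids_channel(channel, K):
    """Sketch of the histogram-based centroid initialization for one color
    channel (per the paper's Algorithm 1; the equal-frequency split of the
    cumulative histogram is an assumption of this sketch)."""
    pixels = channel.ravel()
    hist = np.bincount(pixels, minlength=256)      # h_R(s)
    cum = np.cumsum(hist)                          # H_R(s)
    total = cum[-1]
    centroids = []
    prev = 0
    for k in range(1, K + 1):
        # smallest gray level whose cumulative count reaches k/K of the pixels
        cut = np.searchsorted(cum, k * total / K)
        levels = np.arange(prev, cut + 1)
        weights = hist[prev:cut + 1]
        if weights.sum() > 0:                      # empty segments are skipped
            centroids.append((levels * weights).sum() / weights.sum())
        prev = cut + 1
    return np.array(centroids)
```

Running this once per channel and stacking the results yields the initial RGB centroids \(\tilde{v}_{k}\).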

Distance-based adaptive threshold determination mechanism

Classical rough clustering algorithms generally utilize the relationship between the distance difference and the threshold \(\varepsilon\) to determine the upper and lower approximations of each cluster. The threshold \(\varepsilon\) is essential for rough clustering and is usually set manually from experience. However, different values of \(\varepsilon\) lead to different clustering results, so finding a suitable threshold is a significant task for rough clustering. In this paper, a distance-based adaptive threshold determination mechanism is designed to reduce manual intervention. Suppose \(d(x_{j} ,v_{n} )\) and \(d(x_{j} ,v_{m} )\) denote the closest and second-closest distances between the data point \(x_{j}\) and the cluster centroids among all the clusters, respectively. The adaptive threshold \(\varepsilon\) is determined as follows:
$$ \varepsilon = \sum\limits_{j = 1}^{N} {\frac{{d(x_{j} ,v_{m} ) - d(x_{j} ,v_{n} )}}{N}} $$
(5)
where \(N\) is the number of data points. It is worth noting that the threshold \(\varepsilon\) will adaptively change in each iteration. In one iteration, if \(d(x_{j} ,v_{m} ) - d(x_{j} ,v_{n} ) > \varepsilon\), \(x_{j}\) belongs to the lower approximation of the cluster \(C_{n}\). Otherwise, \(x_{j}\) belongs to the upper approximations of the cluster \(C_{n}\) and \(C_{m}\), respectively, and it does not belong to any lower approximation of clusters.
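A minimal sketch of Eq. (5) and the resulting lower-approximation/boundary assignment, assuming a precomputed point-to-centroid distance matrix (the function name and return layout are our own):

```python
import numpy as np

def adaptive_threshold_assign(D):
    """Sketch of the distance-based adaptive threshold of Eq. (5) and the
    resulting lower/boundary split.

    D : (N, K) matrix of distances d(x_j, v_i) to the K cluster centroids.
    Returns the threshold eps, the nearest-cluster labels, and a boolean
    mask marking points that fall in a lower approximation.
    """
    order = np.argsort(D, axis=1)
    nearest = order[:, 0]                            # cluster index n
    d_n = D[np.arange(len(D)), nearest]              # closest distance
    d_m = D[np.arange(len(D)), order[:, 1]]          # second-closest distance
    eps = np.mean(d_m - d_n)                         # Eq. (5)
    # margin above eps -> lower approximation of C_n;
    # otherwise the point joins the upper approximations of C_n and C_m
    in_lower = (d_m - d_n) > eps
    return eps, nearest, in_lower
```

Because `eps` is recomputed from the current centroids, it adapts at every iteration, as the text describes.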
The segmentation results of RFCM utilizing the fixed thresholds and the adaptive threshold determined by this mechanism are shown in Figs. 2 and 3. It is obvious that the adaptive threshold provides better segmentation results compared to the fixed thresholds.

Multi-view knowledge transfer-based rough fuzzy clustering objective function

For most clustering algorithms, it is crucial to choose a distance measure to identify particular types of cluster structures [35]. Euclidean distance is very common and more suitable for spherically shaped clusters, Mahalanobis distance is able to eliminate the limitation of correlation among data, and Manhattan distance is more robust in dealing with outliers. Therefore, a single distance measure may be unsuited to different data. In this work, two or more distance matrices are utilized as multiple views and introduced into the objective function of rough fuzzy clustering to improve the clustering performance. In addition, a transfer learning strategy that employs fuzzy memberships as transfer knowledge is introduced into the objective function to fully utilize the information between views and benefit clustering.
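As an illustration of using several distance measures as views, the following sketch builds squared Euclidean and squared Mahalanobis distance matrices, the two views MKT-RFCCA adopts later in this section; the function name and the `(view, point, cluster)` array layout are our own conventions:

```python
import numpy as np

def view_distances(X, centroids, covs):
    """Sketch of view-specific distance matrices: view 1 is squared
    Euclidean distance, view 2 is squared Mahalanobis distance with one
    covariance matrix per cluster."""
    N, K = len(X), len(centroids)
    D = np.empty((2, N, K))
    for i, (v, S) in enumerate(zip(centroids, covs)):
        diff = X - v                                   # (N, D)
        D[0, :, i] = np.sum(diff ** 2, axis=1)         # ||x_j - v_i||^2
        S_inv = np.linalg.inv(S)
        # (x_j - v_i)^T S_i^{-1} (x_j - v_i) for every point j
        D[1, :, i] = np.einsum('nd,de,ne->n', diff, S_inv, diff)
    return D
```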
In MKT-RFCCA, a novel rough fuzzy clustering objective function \(J_{MKT - RFCCA}\) is constructed based on multi-view knowledge transfer. It consists of two parts: the intra-cluster compactness term within each view and the knowledge transfer term between views. In detail, \(J_{MKT - RFCCA}\) is defined by
$$ J_{MKT - RFCCA} = \sum\limits_{l = 1}^{L} {J_{1} } + \lambda \sum\limits_{l = 1}^{L} {\sum\limits_{{l^{\prime } \ne l}} {J_{2} } } $$
(6)
where L is the number of views and \(\lambda\) is a nonnegative transfer learning factor that controls the degree of knowledge transfer between views. \(J_{1}\) in the first term ensures the intra-cluster compactness in each view, and \(J_{2}\) in the second term utilizes the potential information between views. They are defined as follows:
$$ J_{1} = \begin{cases} w_{low} J_{L1} + (1 - w_{low}) J_{B1} , & \text{if } L(C_{i}) \ne \emptyset ,\ B(C_{i}) \ne \emptyset \\ J_{L1} , & \text{if } L(C_{i}) \ne \emptyset ,\ B(C_{i}) = \emptyset \\ J_{B1} , & \text{if } L(C_{i}) = \emptyset ,\ B(C_{i}) \ne \emptyset \end{cases} $$
(7)
$$ J_{2} = \begin{cases} w_{low} J_{L2} + (1 - w_{low}) J_{B2} , & \text{if } L(C_{i}) \ne \emptyset ,\ B(C_{i}) \ne \emptyset \\ J_{L2} , & \text{if } L(C_{i}) \ne \emptyset ,\ B(C_{i}) = \emptyset \\ J_{B2} , & \text{if } L(C_{i}) = \emptyset ,\ B(C_{i}) \ne \emptyset \end{cases} $$
(8)
where \(L(C_{i} )\) and \(B(C_{i} )\) denote the lower approximation and the boundary region of the cluster \(C_{i}\), respectively. The parameter \(w_{low}\) weights the lower approximation, and the boundary region receives the complementary weight \(1 - w_{low}\). \(J_{L1}\), \(J_{B1}\), \(J_{L2}\) and \(J_{B2}\) are defined as follows:
$$ J_{L1} = \sum\limits_{i = 1}^{K} {\sum\limits_{{x_{j} \in L(C_{i} )}} {\mu_{lij}^{m} D_{l}^{2} (x_{j} ,v_{i} )} } $$
(9)
$$ J_{B1} = \sum\limits_{i = 1}^{K} {\sum\limits_{{x_{j} \in B(C_{i} )}} {\mu_{lij}^{m} D_{l}^{2} (x_{j} ,v_{i} )} } $$
(10)
$$ J_{L2} = \sum\limits_{i = 1}^{K} {\sum\limits_{{x_{j} \in L(C_{i} )}} {\left( {\mu_{{l^{\prime}ij}}^{m} - \mu_{lij}^{m} } \right)D_{l}^{2} (x_{j} ,v_{i} )} } $$
(11)
$$ J_{B2} = \sum\limits_{i = 1}^{K} {\sum\limits_{{x_{j} \in B(C_{i} )}} {\left( {\mu_{{l^{\prime}ij}}^{m} - \mu_{lij}^{m} } \right)D_{l}^{2} (x_{j} ,v_{i} )} } $$
(12)
where K is the number of clusters, \(v_{i} \, (1 \le i \le K)\) denotes the center of the cluster \(C_{i}\). \(\mu_{lij}\) indicates the membership of the jth data point \(x_{j}\) belonging to the ith cluster in the lth view. It should satisfy \(\sum\nolimits_{i = 1}^{K} {\mu_{lij} } = 1\) and \({0} \le \mu_{lij} \le 1\). m is the fuzzification coefficient. When the proposed method is applied to segment the image, a morphological reconstruction method [27] is utilized to filter the image in advance to avoid the influence of the image noise.
In MKT-RFCCA, the Euclidean distance \(D_{1}^{2} {(}x_{j} ,v_{i} {) = }\left\| {x_{j} - v_{i} } \right\|^{2}\) and Mahalanobis distance \(D_{2}^{2} {(}x_{j} ,v_{i} {) = }(x_{j} - v_{i} )^{T} \Sigma_{i}^{{{ - }1}} (x_{j} - v_{i} )\) are adopted as two views, where \(\Sigma_{i}\) is the covariance matrix of the ith cluster. According to the analysis of the weights assigned to each view in other multi-view clustering methods [36, 37], the importance of each view is equal and the transfer factor satisfies \(\lambda = 1/L\) in our study. By utilizing the Lagrange multiplier method to minimize Eq. (6), the updating formulas of \(v_{i} \, \) and \(\mu_{lij}\) are obtained and presented in detail in Appendix. The centroid updating formula is given as:
$$ v_{i} = \begin{cases} w_{low} F_{L} + (1 - w_{low}) F_{B} , & \text{if } L(C_{i}) \ne \emptyset ,\ B(C_{i}) \ne \emptyset \\ F_{L} , & \text{if } L(C_{i}) \ne \emptyset ,\ B(C_{i}) = \emptyset \\ F_{B} , & \text{if } L(C_{i}) = \emptyset ,\ B(C_{i}) \ne \emptyset \end{cases} $$
(13)
where \(F_{L}\) and \(F_{B}\) are defined as follows:
$$ \begin{gathered} F_{L} = \frac{{\sum\nolimits_{l = 1}^{L} {\sum\nolimits_{{x_{j} \in L(C_{i} )}} {\{ [1 - (L - 1)\lambda ]\mu_{lij}^{m} + \sum\nolimits_{l^{\prime} \ne l} {\lambda \mu_{l^{\prime}ij}^{m} } \} } \cdot x_{j} } }}{{\sum\nolimits_{l = 1}^{L} {\sum\nolimits_{{x_{j} \in L(C_{i} )}} {\{ [1 - (L - 1)\lambda ]\mu_{lij}^{m} + \sum\nolimits_{l^{\prime} \ne l} {\lambda \mu_{l^{\prime}ij}^{m} } \} } } }} \hfill \\ F_{B} = \frac{{\sum\nolimits_{l = 1}^{L} {\sum\nolimits_{{x_{j} \in B(C_{i} )}} {\{ [1 - (L - 1)\lambda ]\mu_{lij}^{m} + \sum\nolimits_{l^{\prime} \ne l} {\lambda \mu_{l^{\prime}ij}^{m} } \} } \cdot x_{j} } }}{{\sum\nolimits_{l = 1}^{L} {\sum\nolimits_{{x_{j} \in B(C_{i} )}} {\{ [1 - (L - 1)\lambda ]\mu_{lij}^{m} + \sum\nolimits_{l^{\prime} \ne l} {\lambda \mu_{l^{\prime}ij}^{m} } \} } } }} \hfill \\ \end{gathered} $$
(14)
The membership updating formula is given as follows:
$$ \mu_{lij} = \left[ {\sum\limits_{c = 1}^{K} {\left( {\frac{{D_{l}^{2} (x_{j} ,v_{i} )}}{{D_{l}^{2} (x_{j} ,v_{c} )}}} \right)^{{\frac{1}{m - 1}}} } } \right]^{ - 1} $$
(15)
Because the importance of views is the same in our algorithm, the final membership degree of the jth data sample belonging to the ith cluster is calculated by
$$ \tilde{\mu }_{ij} = \frac{1}{L}\sum\limits_{l = 1}^{L} {\mu_{lij} } $$
(16)
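Equations (15) and (16) can be sketched together as follows. The per-view update in Eq. (15) has the familiar FCM form, applied with the view-specific distances \(D_{l}^{2}\); the small distance floor guarding against division by zero at a centroid is our addition:

```python
import numpy as np

def update_memberships(D, m=2.0):
    """Sketch of the per-view membership update (Eq. 15) and the
    equal-weight fusion of views (Eq. 16).

    D : (L, N, K) array of squared distances D_l^2(x_j, v_i).
    """
    D = np.maximum(D, 1e-12)                           # avoid division by zero
    # mu_lij = [ sum_c (D_l(x_j,v_i) / D_l(x_j,v_c))^(1/(m-1)) ]^(-1)
    ratio = (D[:, :, :, None] / D[:, :, None, :]) ** (1.0 / (m - 1))
    mu = 1.0 / ratio.sum(axis=3)                       # (L, N, K), rows sum to 1
    mu_final = mu.mean(axis=0)                         # Eq. (16): average over views
    return mu, mu_final
```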

Algorithm procedure

The details of the proposed method are described in Algorithm 2.

Time complexity analysis

Assume that the dataset contains N samples, the feature dimension is D, the number of views is L, the number of clusters is K, and the number of iterations is T. In MKT-RFCCA, the initialized cluster centers are obtained in advance using the initialized centroids selection strategy, whose time complexity is \(O\left( {KND} \right)\). Then, the time complexity of updating the membership degrees and the cluster centers in one iteration is \(O\left( {LKND} \right)\) each. Over T complete iterations, the objective function computation consumes \(O\left( {TLKND} \right)\). Therefore, the total time complexity of MKT-RFCCA is \(O\left( {TLKND} \right)\).

Experimental study

In order to verify the effectiveness and superiority of the proposed algorithm, we conduct experiments on synthetic datasets [38], real-world datasets [45], Berkeley images [46], and Weizmann images [47]. A total of seven prevalent clustering algorithms are adopted as comparative algorithms for MKT-RFCCA: FCM [15], RCM [16], RFCM [18], transfer fuzzy c-means (TFCM) [39], MVMC [35], FCM-SICM [28], and sRFCM [29]. Among these, FCM-SICM and sRFCM are designed for image segmentation and can overcome the influence of noise in the image, so these two comparative methods are used only in the image segmentation experiments. All experiments are implemented on a server with an Intel Core i7-12700 processor, 16 GB of RAM, and Windows 11. Parameter settings of all algorithms are shown in Table 1. To ensure the fairness of the experiments, the lower approximation weight \(w_{low}\) in RCM, RFCM, sRFCM, and MKT-RFCCA is set to the same value.
Table 1
Parameter settings for all the algorithms
| Algorithms | Input parameters | Appearance in |
|---|---|---|
| FCM | \(m = 2,\ T = 50,\ \eta = 10^{-5}\) | Bezdek et al. [15] |
| RCM | \(T = 50,\ \varepsilon = 50,\ w_{low} = 0.8\) | Lingras et al. [16] |
| RFCM | \(m = 2,\ T = 50,\ w_{low} = 0.8\) | Maji et al. [18] |
| TFCM | \(m_{1} = 2,\ m_{2} = 2,\ T = 50,\ \eta = 10^{-5},\ \lambda = 1\) | Deng et al. [39] |
| MVMC | \(T = 50,\ Popsize = 50,\ p_{c} = 0.5,\ p_{m} = 0.03,\ Niche = 10\) | Adán José-García et al. [35] |
| FCM-SICM | \(m = 2,\ T = 50,\ \eta = 10^{-5},\ sigma\_d = 1,\ sigma\_r = 7\) | Wang et al. [28] |
| sRFCM | \(m_{1} = 2,\ m_{2} = 2,\ T = 50,\ w_{low} = 0.8,\ w = 3,\ \alpha = 0.2\) | Roy et al. [29] |
| MKT-RFCCA | \(m = 2,\ T = 50,\ w_{low} = 0.8,\ \lambda = 0.5,\ L = 2\) | Proposed in this paper |
To measure the clustering performance of MKT-RFCCA and the comparative algorithms, this paper employs two well-accepted validity indicators: clustering accuracy (CA) [48] and normalized mutual information (NMI) [49]. The greater the values of CA and NMI, the better the performance of the corresponding algorithm. In addition, to compare all algorithms fairly, the indicator results reported in this paper are computed by averaging the five maximum values.
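For reference, CA is commonly computed as the accuracy under the best one-to-one relabeling of the predicted clusters. The brute-force sketch below, which enumerates label permutations, is one common realization (practical only for small K) and is our assumption about the indicator in [48], not code from the paper:

```python
from itertools import permutations
import numpy as np

def clustering_accuracy(y_true, y_pred):
    """Sketch of the CA indicator: accuracy under the best one-to-one
    relabeling of predicted clusters (brute force over K! permutations)."""
    labels = np.unique(np.concatenate([y_true, y_pred]))
    best = 0.0
    for perm in permutations(labels):
        mapping = dict(zip(labels, perm))      # candidate relabeling
        acc = np.mean([mapping[p] == t for p, t in zip(y_pred, y_true)])
        best = max(best, acc)
    return best
```

For larger K, the same matching is usually solved with the Hungarian algorithm instead of enumeration.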

Validation of adaptive threshold determination mechanism in MKT-RFCCA

In Sect. “Distance-based adaptive threshold determination mechanism”, we verified the effectiveness of the adaptive threshold determination mechanism under the framework of RFCM. Its positive contribution within MKT-RFCCA is confirmed in this section. The experiments are performed on synthetic datasets and images using two fixed thresholds and the adaptive threshold. In particular, white Gaussian noise with a normalized variance (NV) of 0.004 and Salt & Pepper noise with a noise percentage (NP) of 0.02 are added to the images, respectively. The values of CA and NMI on the synthetic datasets are shown in Figs. 4, 5, 6, and the corresponding clustering results are given in Figs. 7, 8, where the red stars represent the obtained cluster centers. It can be seen that the smaller fixed threshold provides clear clusters, while the larger one leads to over-shifting of cluster centers and misclassification. In contrast, the adaptive threshold not only achieves equally favorable clustering results but also yields more accurate cluster center locations. The two evaluation index results on the images are given in Figs. 5, 6, and the visual segmentation results are presented in Figs. 9, 10, 11, 12. Likewise, the outcomes show that different thresholds significantly affect the clustering results, and the adaptive threshold gives better results. In summary, the adaptive threshold determination mechanism in MKT-RFCCA can automatically determine the threshold parameter, which improves the feasibility and robustness of the algorithm.

Validation of multi-view learning in MKT-RFCCA

The effectiveness of multi-view learning in MKT-RFCCA is validated in this section. In this experiment, MKT-RFCCA using only the Euclidean distance is taken as the single-view case and compared with the multi-view version. Some synthetic datasets and noisy Berkeley images are selected as test data, and the corresponding experimental results are provided in Figs. 13, 14, 15. The results reveal that single-view clustering struggles to maintain optimal results across different types of data, since the Euclidean distance only captures spherical relationships. By comparison, the multi-view results are significantly better: combining different distance metrics and utilizing transfer learning further explores the potential complementary information between views, thus effectively enhancing the clustering performance of the algorithm. Therefore, the introduction of multi-view learning plays a crucial role in the superiority and effectiveness of MKT-RFCCA.

Clustering experiments on synthetic datasets

This section studies the capability of MKT-RFCCA to generate high-quality clustering solutions on synthetic datasets with different sizes, dimensions, and degrees of overlap. The synthetic datasets used in this section are shown in Fig. 16. According to the data distribution, these synthetic datasets fall into two categories: well-separated clusters and overlapping clusters. The details of the synthetic datasets and the corresponding clustering results of all algorithms are shown in Table 2, where N is the number of samples, D is the dimension, and K is the number of clusters.
Table 2
Index values of MKT-RFCCA and comparative algorithms on synthetic datasets
Algorithms: FCM | RCM | RFCM | TFCM | MVMC | MKT-RFCCA

Data43 (N = 400, D = 3, K = 4)
CA:  1.0000 | 0.8945 | 1.0000 | 1.0000 | 1.0000 | 1.0000
NMI: 1.0000 | 0.8506 | 1.0000 | 1.0000 | 1.0000 | 1.0000
Sizes5 (N = 1000, D = 2, K = 4)
CA:  0.6272 | 0.8430 | 0.6540 | 0.6238 | 0.6324 | 0.9510
NMI: 0.5721 | 0.6281 | 0.5734 | 0.5725 | 0.6052 | 0.8031
TwoDiamonds (N = 800, D = 2, K = 2)
CA:  1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000
NMI: 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 1.0000
Square1 (N = 1000, D = 2, K = 4)
CA:  0.9790 | 0.6912 | 0.9790 | 0.9790 | 0.9790 | 0.9810
NMI: 0.9197 | 0.6574 | 0.9198 | 0.9197 | 0.9197 | 0.9263
Data92 (N = 800, D = 2, K = 9)
CA:  0.9256 | 0.5896 | 0.8867 | 0.9258 | 0.8978 | 0.9167
NMI: 0.8576 | 0.6461 | 0.8289 | 0.8579 | 0.8294 | 0.8490
Bold indicates the best value
As shown in Table 2, most methods obtain desirable clustering results on the well-separated datasets Data43 and TwoDiamonds. On the Square1 and Sizes5 datasets, FCM, RCM, RFCM, and TFCM, which rely on a single Euclidean distance, perform poorly because of the overlapping distribution of the data. MVMC and MKT-RFCCA consider different distance metrics and are able to recognize various data structures, and MKT-RFCCA performs better than MVMC because it introduces a transfer mechanism. FCM and TFCM, which use the Euclidean distance metric, obtain better results on the dataset Data92 with spherical and overlapping distributions, but TFCM outperforms FCM owing to the guidance of the transfer mechanism. These experimental results confirm the effectiveness of MKT-RFCCA in handling different data distributions.

Clustering experiments on real-world datasets

Here, we apply MKT-RFCCA to real-world datasets to test its performance. Five datasets from the UCI repository are selected: Iris, Glass, Haberman, German, and Zoo. The CA and NMI values of MKT-RFCCA and the comparative algorithms are shown in Table 3. The evaluation index values of the multi-view algorithms MKT-RFCCA and MVMC are clearly superior to those of the single-view comparison algorithms on most datasets. Specifically, on the Glass, Haberman, and German datasets, MKT-RFCCA obtains competitive clustering results by using the transfer mechanism between views, while on the Iris and Zoo datasets, MVMC achieves the best performance by virtue of its evolutionary strategy.
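For reference, the two evaluation indices can be computed as follows. This is a generic, numpy-only sketch (brute-force label matching for CA, which is fine for the small K used in these experiments, and arithmetic-mean normalization for NMI), not code from the paper.

```python
import numpy as np
from itertools import permutations

def clustering_accuracy(y_true, y_pred):
    """CA: accuracy under the best one-to-one relabelling of the
    predicted clusters (brute force over permutations; the paper's
    experiments use at most K = 9 clusters)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    labels = np.unique(y_pred)
    best = 0.0
    for perm in permutations(np.unique(y_true)):
        mapping = dict(zip(labels, perm))
        relabelled = np.array([mapping[p] for p in y_pred])
        best = max(best, float((relabelled == y_true).mean()))
    return best

def nmi(y_true, y_pred):
    """Normalized mutual information with arithmetic-mean normalization."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    n = len(y_true)
    cm = np.zeros((y_true.max() + 1, y_pred.max() + 1))
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    pij = cm / n
    pi, pj = pij.sum(axis=1), pij.sum(axis=0)
    nz = pij > 0
    mi = (pij[nz] * np.log(pij[nz] / np.outer(pi, pj)[nz])).sum()
    h = lambda p: -(p[p > 0] * np.log(p[p > 0])).sum()
    return mi / ((h(pi) + h(pj)) / 2)

y_true = [0, 0, 1, 1, 2, 2]
y_pred = [1, 1, 0, 0, 2, 2]      # same partition, permuted labels
ca = clustering_accuracy(y_true, y_pred)   # 1.0
score = nmi(y_true, y_pred)                # 1.0
```

Both indices are invariant to a permutation of the cluster labels, which is why the label-swapped prediction above still scores perfectly.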
Table 3
Evaluation index of MKT-RFCCA and comparative algorithms on UCI datasets

Algorithms: FCM | RCM | RFCM | TFCM | MVMC | MKT-RFCCA

Iris
CA:  0.8933 | 0.8693 | 0.8973 | 0.8933 | 0.9187 | 0.9133
NMI: 0.7433 | 0.7378 | 0.7480 | 0.7433 | 0.7802 | 0.7491
Glass
CA:  0.4206 | 0.4869 | 0.4906 | 0.4206 | 0.4393 | 0.4907
NMI: 0.2963 | 0.3048 | 0.3467 | 0.2957 | 0.3080 | 0.3589
Haberman
CA:  0.5196 | 0.6052 | 0.5268 | 0.5196 | 0.5569 | 0.8652
NMI: 0.0024 | 0.0028 | 0.0015 | 0.0024 | 0.0007 | 0.6786
German
CA:  0.5732 | 0.6250 | 0.6598 | 0.5888 | 0.6060 | 0.7020
NMI: 0.0130 | 0.0108 | 0.0163 | 0.0123 | 0.0004 | 0.0054
Zoo
CA:  0.6733 | 0.7426 | 0.6653 | 0.6911 | 0.7307 | 0.6832
NMI: 0.8018 | 0.7819 | 0.7510 | 0.8102 | 0.8345 | 0.7320
Bold indicates the best value

Segmentation experiments on Berkeley images

In this section, several images from the Berkeley dataset are selected to demonstrate the segmentation performance of MKT-RFCCA. Although MVMC shows better clustering performance on some datasets in the previous sections, it is not applicable to image segmentation because of the large number of pixels in an image. Therefore, FCM, RCM, RFCM, TFCM, FCM-SICM, and sRFCM are used as the comparative algorithms. In this experiment, White Gaussian noise and Salt & Pepper noise are added to the Berkeley images, respectively. The CA and NMI values of all algorithms on the noisy images are shown in Tables 4 and 5. As can be seen, MKT-RFCCA outperforms the comparative algorithms on the majority of images with different noise types and levels.
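The two noise models can be reproduced on a [0, 1]-scaled image along the following lines. This is a hypothetical numpy sketch that mirrors MATLAB-style `imnoise` parameters (normalized variance for Gaussian noise, corrupted-pixel fraction for Salt & Pepper); it is not the authors' preprocessing code.

```python
import numpy as np

def add_gaussian(img, nv, rng):
    """White Gaussian noise with normalized variance nv on an image
    scaled to [0, 1], clipped back into range afterwards."""
    noisy = img + rng.normal(0.0, np.sqrt(nv), img.shape)
    return np.clip(noisy, 0.0, 1.0)

def add_salt_pepper(img, npct, rng):
    """Salt & Pepper noise on a fraction npct of the pixels,
    roughly half set to 1 (salt) and half to 0 (pepper)."""
    noisy = img.copy()
    mask = rng.random(img.shape) < npct     # pixels to corrupt
    salt = rng.random(img.shape) < 0.5
    noisy[mask & salt] = 1.0
    noisy[mask & ~salt] = 0.0
    return noisy

rng = np.random.default_rng(1)
img = np.full((64, 64), 0.5)                # flat gray test image
g = add_gaussian(img, 0.004, rng)           # NV = 0.004, as in the experiments
sp = add_salt_pepper(img, 0.02, rng)        # NP = 0.02, as in the experiments
```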
Table 4
Evaluation Index of MKT-RFCCA and comparative algorithms on Berkeley images contaminated by Gaussian noise
Algorithms: FCM | RCM | RFCM | TFCM | FCM-SICM | sRFCM | MKT-RFCCA

#3063, NV = 0.004
CA:  0.7679 | 0.9727 | 0.9052 | 0.7786 | 0.8986 | 0.7827 | 0.9921
NMI: 0.2739 | 0.7373 | 0.4808 | 0.2847 | 0.3642 | 0.8584 | 0.8998
#3063, NV = 0.04
CA:  0.7295 | 0.7391 | 0.8079 | 0.7295 | 0.6062 | 0.8557 | 0.9813
NMI: 0.2235 | 0.2299 | 0.2867 | 0.2235 | 0.1493 | 0.3759 | 0.8052
#3096, NV = 0.004
CA:  0.9702 | 0.9776 | 0.9774 | 0.9703 | 0.9853 | 0.8584 | 0.9918
NMI: 0.5596 | 0.6193 | 0.6175 | 0.5601 | 0.7264 | 0.2267 | 0.8294
#3096, NV = 0.04
CA:  0.6061 | 0.6170 | 0.6866 | 0.6061 | 0.9343 | 0.6360 | 0.9786
NMI: 0.0749 | 0.0775 | 0.0954 | 0.0749 | 0.3807 | 0.0834 | 0.6402
#15088, NV = 0.004
CA:  0.9120 | 0.9138 | 0.9128 | 0.9120 | 0.9277 | 0.8634 | 0.9331
NMI: 0.3180 | 0.3204 | 0.3191 | 0.3180 | 0.3176 | 0.1963 | 0.4066
#15088, NV = 0.04
CA:  0.7051 | 0.7207 | 0.7937 | 0.7051 | 0.8124 | 0.7657 | 0.9340
NMI: 0.1091 | 0.1147 | 0.1453 | 0.1091 | 0.1967 | 0.1235 | 0.4189
#24063, NV = 0.004
CA:  0.9534 | 0.9449 | 0.9510 | 0.9534 | 0.8475 | 0.9622 | 0.9699
NMI: 0.7873 | 0.7802 | 0.7822 | 0.7873 | 0.6137 | 0.8230 | 0.8553
#24063, NV = 0.04
CA:  0.7323 | 0.6896 | 0.7689 | 0.7323 | 0.8008 | 0.7974 | 0.9584
NMI: 0.3891 | 0.3779 | 0.4028 | 0.3891 | 0.5742 | 0.4647 | 0.8175
#33044, NV = 0.004
CA:  0.9227 | 0.9265 | 0.9079 | 0.9227 | 0.9554 | 0.9383 | 0.9412
NMI: 0.5776 | 0.5893 | 0.5367 | 0.5776 | 0.6940 | 0.6315 | 0.6384
#33044, NV = 0.04
CA:  0.8049 | 0.8066 | 0.8200 | 0.8049 | 0.9172 | 0.9138 | 0.9172
NMI: 0.3480 | 0.3501 | 0.3675 | 0.3480 | 0.5633 | 0.5549 | 0.5629
#8068, NV = 0.004
CA:  0.9455 | 0.9452 | 0.9471 | 0.9455 | 0.9549 | 0.9503 | 0.9509
NMI: 0.6216 | 0.6204 | 0.6290 | 0.6216 | 0.6695 | 0.6452 | 0.6487
#8068, NV = 0.04
CA:  0.9260 | 0.9264 | 0.9259 | 0.9260 | 0.9455 | 0.9310 | 0.9355
NMI: 0.5446 | 0.5457 | 0.5442 | 0.5446 | 0.6235 | 0.5674 | 0.5866
#118035, NV = 0.004
CA:  0.9350 | 0.9342 | 0.9391 | 0.9350 | 0.9141 | 0.9188 | 0.9419
NMI: 0.7587 | 0.7507 | 0.7576 | 0.7587 | 0.6909 | 0.7326 | 0.7691
#118035, NV = 0.04
CA:  0.6935 | 0.7395 | 0.6484 | 0.6935 | 0.8635 | 0.8036 | 0.9520
NMI: 0.4380 | 0.4466 | 0.4289 | 0.4380 | 0.5893 | 0.5136 | 0.7979
#101027, NV = 0.004
CA:  0.8689 | 0.8694 | 0.8625 | 0.8689 | 0.6275 | 0.7068 | 0.9135
NMI: 0.3965 | 0.4075 | 0.4140 | 0.3965 | 0.1206 | 0.1808 | 0.5883
#101027, NV = 0.04
CA:  0.8091 | 0.8120 | 0.8272 | 0.8091 | 0.8489 | 0.8299 | 0.9161
NMI: 0.2310 | 0.2349 | 0.2668 | 0.2310 | 0.3438 | 0.2918 | 0.5950
#124084, NV = 0.004
CA:  0.7688 | 0.7770 | 0.8314 | 0.7688 | 0.8186 | 0.6965 | 0.8859
NMI: 0.4885 | 0.4866 | 0.5163 | 0.4885 | 0.4229 | 0.5067 | 0.6308
#124084, NV = 0.04
CA:  0.7225 | 0.7170 | 0.7489 | 0.7225 | 0.6733 | 0.6557 | 0.8886
NMI: 0.3436 | 0.3598 | 0.3427 | 0.3436 | 0.4118 | 0.4135 | 0.5884
#167062, NV = 0.004
CA:  0.9067 | 0.9764 | 0.9890 | 0.9064 | 0.9859 | 0.9363 | 0.9908
NMI: 0.7879 | 0.8898 | 0.9012 | 0.7877 | 0.9183 | 0.8447 | 0.9433
#167062, NV = 0.04
CA:  0.7976 | 0.8301 | 0.8531 | 0.7979 | 0.9528 | 0.8733 | 0.9899
NMI: 0.7132 | 0.7319 | 0.7566 | 0.7133 | 0.8194 | 0.7684 | 0.9403
#147091, NV = 0.004
CA:  0.9293 | 0.9296 | 0.9181 | 0.9293 | 0.8793 | 0.9317 | 0.9441
NMI: 0.6329 | 0.6351 | 0.5862 | 0.6329 | 0.5133 | 0.6475 | 0.6854
#147091, NV = 0.04
CA:  0.8777 | 0.8787 | 0.8666 | 0.8777 | 0.9229 | 0.9073 | 0.9320
NMI: 0.4594 | 0.4620 | 0.4339 | 0.4594 | 0.6173 | 0.5558 | 0.6385
#113016, NV = 0.004
CA:  0.7737 | 0.7973 | 0.8019 | 0.7737 | 0.9126 | 0.8977 | 0.9026
NMI: 0.2225 | 0.2458 | 0.2511 | 0.2225 | 0.5335 | 0.4431 | 0.4753
#113016, NV = 0.04
CA:  0.6938 | 0.6985 | 0.7201 | 0.6938 | 0.8834 | 0.7296 | 0.8767
NMI: 0.1264 | 0.1282 | 0.1386 | 0.1264 | 0.3865 | 0.1675 | 0.3889
#238011, NV = 0.004
CA:  0.6168 | 0.6220 | 0.9762 | 0.6168 | 0.9710 | 0.8648 | 0.9806
NMI: 0.5599 | 0.5611 | 0.8152 | 0.5599 | 0.8033 | 0.6606 | 0.8424
#238011, NV = 0.04
CA:  0.5766 | 0.5171 | 0.6275 | 0.5766 | 0.8644 | 0.6631 | 0.6822
NMI: 0.2273 | 0.2096 | 0.2274 | 0.2273 | 0.6039 | 0.3347 | 0.5141
#196027, NV = 0.004
CA:  0.8082 | 0.8172 | 0.8862 | 0.8082 | 0.9393 | 0.7247 | 0.9567
NMI: 0.2543 | 0.2640 | 0.3613 | 0.2543 | 0.4555 | 0.1727 | 0.6009
#196027, NV = 0.04
CA:  0.6533 | 0.6581 | 0.7029 | 0.6533 | 0.7745 | 0.6849 | 0.8939
NMI: 0.1211 | 0.1230 | 0.1369 | 0.1211 | 0.1894 | 0.1281 | 0.3744
#241004, NV = 0.004
CA:  0.8073 | 0.8629 | 0.8513 | 0.8073 | 0.8012 | 0.7054 | 0.8785
NMI: 0.6975 | 0.7099 | 0.7203 | 0.6976 | 0.6409 | 0.6343 | 0.7799
#241004, NV = 0.04
CA:  0.5565 | 0.5798 | 0.5634 | 0.5565 | 0.6222 | 0.6381 | 0.7245
NMI: 0.4204 | 0.4110 | 0.4255 | 0.4204 | 0.5369 | 0.4772 | 0.6704
Bold indicates the best value
Table 5
Evaluation index of MKT-RFCCA and comparative algorithms on Berkeley images contaminated by Salt & Pepper noise

Algorithms: FCM | RCM | RFCM | TFCM | FCM-SICM | sRFCM | MKT-RFCCA

#3063, NP = 0.02
CA:  0.7525 | 0.8301 | 0.8961 | 0.7670 | 0.8325 | 0.6965 | 0.9897
NMI: 0.2516 | 0.4390 | 0.4502 | 0.2650 | 0.0984 | 0.3170 | 0.8751
#3063, NP = 0.05
CA:  0.7641 | 0.8055 | 0.8674 | 0.7645 | 0.8211 | 0.7048 | 0.9503
NMI: 0.2480 | 0.3211 | 0.3765 | 0.2483 | 0.0900 | 0.2380 | 0.6417
#3096, NP = 0.02
CA:  0.9628 | 0.9689 | 0.9728 | 0.9628 | 0.9863 | 0.7331 | 0.9911
NMI: 0.5070 | 0.5426 | 0.5690 | 0.5070 | 0.7384 | 0.1495 | 0.8179
#3096, NP = 0.05
CA:  0.8864 | 0.9199 | 0.9472 | 0.8866 | 0.9668 | 0.8774 | 0.9903
NMI: 0.2618 | 0.3255 | 0.4013 | 0.2618 | 0.5719 | 0.2623 | 0.8041
#15088, NP = 0.02
CA:  0.9062 | 0.9077 | 0.9074 | 0.9062 | 0.9313 | 0.8367 | 0.9332
NMI: 0.2979 | 0.2994 | 0.2988 | 0.2979 | 0.3344 | 0.2525 | 0.3993
#15088, NP = 0.05
CA:  0.8834 | 0.8859 | 0.8864 | 0.8834 | 0.9216 | 0.8992 | 0.9331
NMI: 0.2481 | 0.2499 | 0.2505 | 0.2481 | 0.2794 | 0.2659 | 0.3920
#24063, NP = 0.02
CA:  0.9433 | 0.9403 | 0.9412 | 0.9433 | 0.8256 | 0.9533 | 0.9693
NMI: 0.7490 | 0.7503 | 0.7440 | 0.7490 | 0.5818 | 0.7861 | 0.8513
#24063, NP = 0.05
CA:  0.9020 | 0.8921 | 0.9020 | 0.9020 | 0.8360 | 0.9126 | 0.9651
NMI: 0.6269 | 0.6219 | 0.6277 | 0.6269 | 0.5685 | 0.6661 | 0.8347
#33044, NP = 0.02
CA:  0.9197 | 0.9228 | 0.9074 | 0.9197 | 0.9695 | 0.8327 | 0.9419
NMI: 0.5539 | 0.5629 | 0.5204 | 0.5539 | 0.7599 | 0.4433 | 0.6410
#33044, NP = 0.05
CA:  0.8896 | 0.8917 | 0.8869 | 0.8896 | 0.9615 | 0.7288 | 0.9458
NMI: 0.4613 | 0.4656 | 0.4556 | 0.4613 | 0.7203 | 0.3222 | 0.6547
#8068, NP = 0.02
CA:  0.9364 | 0.9358 | 0.9394 | 0.9364 | 0.9564 | 0.9305 | 0.9527
NMI: 0.5804 | 0.5782 | 0.5921 | 0.5804 | 0.6815 | 0.5709 | 0.6572
#8068, NP = 0.05
CA:  0.9202 | 0.9193 | 0.9240 | 0.9202 | 0.9105 | 0.9257 | 0.9469
NMI: 0.5175 | 0.5150 | 0.5284 | 0.5175 | 0.4773 | 0.5416 | 0.6311
#118035, NP = 0.02
CA:  0.9242 | 0.9228 | 0.9291 | 0.9242 | 0.9056 | 0.8671 | 0.9402
NMI: 0.7237 | 0.7097 | 0.7203 | 0.7237 | 0.6829 | 0.6181 | 0.7650
#118035, NP = 0.05
CA:  0.8942 | 0.8921 | 0.9081 | 0.8942 | 0.8903 | 0.8725 | 0.9382
NMI: 0.6364 | 0.6102 | 0.6510 | 0.6364 | 0.6352 | 0.6449 | 0.7560
#101027, NP = 0.02
CA:  0.8595 | 0.8618 | 0.8543 | 0.8595 | 0.6625 | 0.7306 | 0.9249
NMI: 0.3583 | 0.3720 | 0.3720 | 0.3583 | 0.1969 | 0.2089 | 0.6266
#101027, NP = 0.05
CA:  0.8388 | 0.8411 | 0.8375 | 0.8388 | 0.8432 | 0.7782 | 0.9169
NMI: 0.2954 | 0.3019 | 0.3015 | 0.2954 | 0.3810 | 0.2322 | 0.5979
#124084, NP = 0.02
CA:  0.7665 | 0.7642 | 0.8170 | 0.7665 | 0.7966 | 0.7074 | 0.8768
NMI: 0.4733 | 0.4704 | 0.4960 | 0.4733 | 0.3881 | 0.4974 | 0.6110
#124084, NP = 0.05
CA:  0.7721 | 0.7503 | 0.7880 | 0.7721 | 0.7498 | 0.6638 | 0.8724
NMI: 0.4417 | 0.4357 | 0.4340 | 0.4417 | 0.3358 | 0.4894 | 0.6008
#167062, NP = 0.02
CA:  0.9651 | 0.9725 | 0.9669 | 0.9651 | 0.9761 | 0.9704 | 0.9919
NMI: 0.8557 | 0.8679 | 0.8570 | 0.8557 | 0.8919 | 0.8589 | 0.9506
#167062, NP = 0.05
CA:  0.9392 | 0.9474 | 0.9405 | 0.9392 | 0.9189 | 0.9649 | 0.8057
NMI: 0.7895 | 0.7979 | 0.7899 | 0.7895 | 0.8104 | 0.8118 | 0.7337
#147091, NP = 0.02
CA:  0.9200 | 0.9201 | 0.9091 | 0.9200 | 0.9113 | 0.9234 | 0.9448
NMI: 0.5976 | 0.5984 | 0.5551 | 0.5976 | 0.5816 | 0.6169 | 0.6882
#147091, NP = 0.05
CA:  0.9004 | 0.9007 | 0.8895 | 0.9004 | 0.9024 | 0.9086 | 0.9391
NMI: 0.5284 | 0.5300 | 0.4930 | 0.5284 | 0.5652 | 0.5658 | 0.6650
#113016, NP = 0.02
CA:  0.7967 | 0.8314 | 0.7999 | 0.7967 | 0.7480 | 0.8925 | 0.8968
NMI: 0.2456 | 0.2896 | 0.2487 | 0.2456 | 0.0522 | 0.4244 | 0.4614
#113016, NP = 0.05
CA:  0.7779 | 0.8006 | 0.7847 | 0.7779 | 0.7534 | 0.8763 | 0.9082
NMI: 0.2128 | 0.2356 | 0.2188 | 0.2128 | 0.0179 | 0.3627 | 0.4875
#238011, NP = 0.02
CA:  0.6407 | 0.9544 | 0.9130 | 0.7043 | 0.9719 | 0.9172 | 0.9791
NMI: 0.5296 | 0.7212 | 0.6962 | 0.5702 | 0.7979 | 0.6995 | 0.8331
#238011, NP = 0.05
CA:  0.9136 | 0.9044 | 0.9149 | 0.9136 | 0.8749 | 0.8954 | 0.9786
NMI: 0.5846 | 0.5657 | 0.5861 | 0.5846 | 0.4825 | 0.5788 | 0.8294
#196027, NP = 0.02
CA:  0.8137 | 0.8203 | 0.8765 | 0.8137 | 0.9350 | 0.7100 | 0.9509
NMI: 0.2544 | 0.2618 | 0.3364 | 0.2544 | 0.4108 | 0.1359 | 0.5743
#196027, NP = 0.05
CA:  0.7884 | 0.7944 | 0.8454 | 0.7884 | 0.9016 | 0.7653 | 0.9493
NMI: 0.2171 | 0.2227 | 0.2754 | 0.2171 | 0.2829 | 0.1706 | 0.5605
#241004, NP = 0.02
CA:  0.8269 | 0.8659 | 0.8501 | 0.8251 | 0.8345 | 0.8067 | 0.8941
NMI: 0.6837 | 0.6951 | 0.6967 | 0.6826 | 0.6954 | 0.6634 | 0.7944
#241004, NP = 0.05
CA:  0.7463 | 0.8103 | 0.7879 | 0.7463 | 0.7714 | 0.7111 | 0.8884
NMI: 0.5767 | 0.5905 | 0.5950 | 0.5767 | 0.6192 | 0.5645 | 0.7856
Bold indicates the best value
In addition, some segmentation results of Berkeley images with White Gaussian noise (NV = 0.004) and Salt & Pepper noise (NP = 0.02) are selected to compare the performance of MKT-RFCCA and the comparison algorithms more intuitively, as shown in Figs. 17, 18, 19, 20, 21, 22. These visual segmentation results indicate that FCM, RCM, and RFCM fail to effectively overcome the influence of noise and leave a large number of misclassified pixels in the segmentation results. Although TFCM considers a transfer mechanism, it provides unsatisfactory results because it ignores spatial information. FCM-SICM and sRFCM obtain poor results because the local spatial information has been contaminated by different types and levels of noise. In contrast, MKT-RFCCA not only provides excellent segmentation results on images with different types of noise, but also preserves the details of the images.
Figure 23 shows the average values of CA and NMI metrics obtained by MKT-RFCCA and the comparative algorithms on sixty Berkeley images with the addition of the White Gaussian noise and the Salt & Pepper noise, respectively. It also demonstrates the superior clustering performance of MKT-RFCCA compared to the comparative algorithms.

Segmentation experiments on Weizmann images

In this section, we choose some Weizmann images to further validate the segmentation performance of MKT-RFCCA on noisy images with different types and degrees of noise. These experiments use the same settings as the segmentation experiments on the Berkeley images. The CA and NMI values of MKT-RFCCA and the comparative algorithms on the noisy images are shown in Tables 6 and 7, from which it can be observed that MKT-RFCCA performs more effectively than the other algorithms on most images. To visually evaluate the effectiveness of MKT-RFCCA and the comparative algorithms, the corresponding results on four Weizmann images with White Gaussian noise (NV = 0.004) and Salt & Pepper noise (NP = 0.02) are illustrated in Figs. 24, 25, 26, 27. As can be seen from the results, FCM-SICM, sRFCM, and MKT-RFCCA perform better than the other comparative algorithms. However, FCM-SICM and sRFCM struggle to overcome the noise because the local spatial information is contaminated. In contrast, MKT-RFCCA performs well in terms of noise robustness and detail preservation. For instance, it is obvious from Fig. 26 that the results of FCM, RCM, RFCM, TFCM, and FCM-SICM contain misclassified pixels in both the background and the island. MKT-RFCCA and sRFCM outperform the other methods, and compared to sRFCM, MKT-RFCCA completely segments the island from the background and effectively suppresses the noise.
Table 6
Evaluation index of MKT-RFCCA and comparative algorithms on Weizmann images contaminated by Gaussian noise
Algorithms: FCM | RCM | RFCM | TFCM | FCM-SICM | sRFCM | MKT-RFCCA

1000109, NV = 0.004
CA:  0.9403 | 0.9433 | 0.9371 | 0.9403 | 0.9544 | 0.9421 | 0.9668
NMI: 0.6853 | 0.7033 | 0.6663 | 0.6853 | 0.7734 | 0.7119 | 0.7993
1000109, NV = 0.04
CA:  0.9152 | 0.9163 | 0.9128 | 0.9152 | 0.9075 | 0.9218 | 0.9348
NMI: 0.5833 | 0.5879 | 0.5722 | 0.5833 | 0.5779 | 0.6161 | 0.6509
b2chopper008, NV = 0.004
CA:  0.9897 | 0.8784 | 0.9892 | 0.9897 | 0.8620 | 0.8973 | 0.9901
NMI: 0.7469 | 0.5062 | 0.7369 | 0.7469 | 0.0150 | 0.2489 | 0.7523
b2chopper008, NV = 0.04
CA:  0.7433 | 0.9877 | 0.9868 | 0.7433 | 0.9598 | 0.8718 | 0.9898
NMI: 0.1122 | 0.7001 | 0.6840 | 0.1122 | 0.1726 | 0.1243 | 0.7445
bbmf_lancaster_july_06, NV = 0.004
CA:  0.6643 | 0.9858 | 0.9849 | 0.7923 | 0.9558 | 0.6786 | 0.9870
NMI: 0.0783 | 0.7346 | 0.7252 | 0.3334 | 0.3227 | 0.1686 | 0.7552
bbmf_lancaster_july_06, NV = 0.04
CA:  0.6916 | 0.9742 | 0.9677 | 0.6916 | 0.9778 | 0.7231 | 0.9854
NMI: 0.0835 | 0.5521 | 0.4929 | 0.0835 | 0.5924 | 0.0250 | 0.7288
beltaine_4_bg_050502, NV = 0.004
CA:  0.8107 | 0.8512 | 0.9277 | 0.8107 | 0.8579 | 0.6784 | 0.9344
NMI: 0.2412 | 0.2861 | 0.4306 | 0.2412 | 0.2835 | 0.0925 | 0.4505
beltaine_4_bg_050502, NV = 0.04
CA:  0.6958 | 0.7515 | 0.8788 | 0.6958 | 0.9212 | 0.6583 | 0.9531
NMI: 0.1488 | 0.1768 | 0.2925 | 0.1488 | 0.0036 | 0.0560 | 0.5183
leafpav, NV = 0.004
CA:  0.9642 | 0.9676 | 0.9571 | 0.9642 | 0.8637 | 0.9698 | 0.9929
NMI: 0.7185 | 0.7342 | 0.6874 | 0.7185 | 0.2361 | 0.7490 | 0.9152
leafpav, NV = 0.04
CA:  0.8247 | 0.8673 | 0.8770 | 0.8247 | 0.8103 | 0.9379 | 0.9805
NMI: 0.3572 | 0.4185 | 0.4340 | 0.3572 | 0.4098 | 0.5971 | 0.8112
sharp_image, NV = 0.004
CA:  0.8448 | 0.8458 | 0.8386 | 0.8448 | 0.8597 | 0.8670 | 0.8579
NMI: 0.5057 | 0.5075 | 0.4945 | 0.5057 | 0.4458 | 0.5490 | 0.5304
sharp_image, NV = 0.04
CA:  0.8290 | 0.8280 | 0.8271 | 0.8290 | 0.8214 | 0.8489 | 0.8453
NMI: 0.4507 | 0.4519 | 0.4469 | 0.4507 | 0.4446 | 0.4874 | 0.5066
europe_holiday_484, NV = 0.004
CA:  0.9168 | 0.9161 | 0.9240 | 0.9168 | 0.9055 | 0.9208 | 0.9402
NMI: 0.6016 | 0.6000 | 0.6160 | 0.6016 | 0.5297 | 0.6110 | 0.6716
europe_holiday_484, NV = 0.04
CA:  0.8936 | 0.8986 | 0.8858 | 0.8936 | 0.7844 | 0.9098 | 0.9228
NMI: 0.4703 | 0.4904 | 0.4447 | 0.4703 | 0.2405 | 0.5555 | 0.5876
imagen_072_1, NV = 0.004
CA:  0.9295 | 0.9288 | 0.9289 | 0.9295 | 0.8126 | 0.8753 | 0.9400
NMI: 0.5862 | 0.5907 | 0.5721 | 0.5862 | 0.3231 | 0.4593 | 0.6366
imagen_072_1, NV = 0.04
CA:  0.7890 | 0.8473 | 0.8604 | 0.7890 | 0.7310 | 0.8963 | 0.9212
NMI: 0.2075 | 0.2840 | 0.3080 | 0.2075 | 0.0070 | 0.4257 | 0.5730
san_andres_130, NV = 0.004
CA:  0.7831 | 0.7825 | 0.8928 | 0.7831 | 0.8674 | 0.8014 | 0.8805
NMI: 0.1638 | 0.1634 | 0.2885 | 0.1638 | 0.0022 | 0.2329 | 0.2582
san_andres_130, NV = 0.04
CA:  0.6697 | 0.7134 | 0.7557 | 0.6697 | 0.9158 | 0.8350 | 0.9154
NMI: 0.1041 | 0.1214 | 0.1415 | 0.1041 | 0.3250 | 0.2148 | 0.3288
yokohm060409_dyjsn191, NV = 0.004
CA:  0.7389 | 0.7291 | 0.8276 | 0.7389 | 0.6849 | 0.7523 | 0.8274
NMI: 0.1171 | 0.1136 | 0.1832 | 0.1171 | 0.0767 | 0.1269 | 0.2270
yokohm060409_dyjsn191, NV = 0.04
CA:  0.6788 | 0.7083 | 0.7341 | 0.6788 | 0.7390 | 0.7350 | 0.8260
NMI: 0.0881 | 0.0973 | 0.1067 | 0.0881 | 0.1233 | 0.1148 | 0.1694
Bold indicates the best value
Table 7
Evaluation index of MKT-RFCCA and comparative algorithms on Weizmann images contaminated by Salt & Pepper noise

Algorithms: FCM | RCM | RFCM | TFCM | FCM-SICM | sRFCM | MKT-RFCCA

1000109, NP = 0.02
CA:  0.9420 | 0.9435 | 0.9378 | 0.9420 | 0.9122 | 0.9448 | 0.9696
NMI: 0.7005 | 0.7172 | 0.6735 | 0.7005 | 0.6204 | 0.7263 | 0.8284
1000109, NP = 0.05
CA:  0.9306 | 0.9321 | 0.9276 | 0.9306 | 0.9267 | 0.9333 | 0.9693
NMI: 0.6528 | 0.6677 | 0.6327 | 0.6528 | 0.6794 | 0.6824 | 0.8231
b2chopper008, NP = 0.02
CA:  0.9787 | 0.9688 | 0.9876 | 0.9787 | 0.6996 | 0.9308 | 0.9873
NMI: 0.5727 | 0.4888 | 0.6989 | 0.5727 | 0.0371 | 0.5699 | 0.7035
b2chopper008, NP = 0.05
CA:  0.9352 | 0.9672 | 0.9872 | 0.9352 | 0.7118 | 0.9458 | 0.9864
NMI: 0.3205 | 0.4643 | 0.6904 | 0.3205 | 0.0357 | 0.4583 | 0.6873
bbmf_lancaster_july_06, NP = 0.02
CA:  0.5759 | 0.9752 | 0.9736 | 0.8164 | 0.6016 | 0.8594 | 0.9821
NMI: 0.0560 | 0.5627 | 0.5459 | 0.3710 | 0.0445 | 0.2538 | 0.6848
bbmf_lancaster_july_06, NP = 0.05
CA:  0.9473 | 0.9428 | 0.9592 | 0.9473 | 0.6085 | 0.8897 | 0.9816
NMI: 0.3697 | 0.3542 | 0.4298 | 0.3697 | 0.0445 | 0.3616 | 0.6774
beltaine_4_bg_050502, NP = 0.02
CA:  0.8314 | 0.8384 | 0.9220 | 0.8314 | 0.5704 | 0.7858 | 0.9918
NMI: 0.2616 | 0.2683 | 0.4061 | 0.2616 | 0.1098 | 0.2549 | 0.8407
beltaine_4_bg_050502, NP = 0.05
CA:  0.8070 | 0.8119 | 0.9090 | 0.8070 | 0.5572 | 0.7706 | 0.9910
NMI: 0.2303 | 0.2341 | 0.3613 | 0.2303 | 0.1056 | 0.1728 | 0.8286
leafpav, NP = 0.02
CA:  0.9601 | 0.9636 | 0.9592 | 0.9601 | 0.9832 | 0.8728 | 0.9926
NMI: 0.6927 | 0.7086 | 0.6907 | 0.6927 | 0.8302 | 0.4133 | 0.9162
leafpav, NP = 0.05
CA:  0.9413 | 0.9486 | 0.9423 | 0.9413 | 0.9769 | 0.9304 | 0.9921
NMI: 0.6034 | 0.6311 | 0.6051 | 0.6034 | 0.7861 | 0.5837 | 0.9094
sharp_image, NP = 0.02
CA:  0.8414 | 0.8424 | 0.8317 | 0.8414 | 0.8166 | 0.8698 | 0.8524
NMI: 0.4734 | 0.4752 | 0.4560 | 0.4734 | 0.4565 | 0.5195 | 0.5234
sharp_image, NP = 0.05
CA:  0.8313 | 0.8327 | 0.8192 | 0.8313 | 0.8144 | 0.8634 | 0.8608
NMI: 0.4317 | 0.4341 | 0.4105 | 0.4317 | 0.4530 | 0.4800 | 0.5362
europe_holiday_484, NP = 0.02
CA:  0.9043 | 0.9037 | 0.9123 | 0.9043 | 0.9115 | 0.9169 | 0.9406
NMI: 0.5402 | 0.5390 | 0.5573 | 0.5402 | 0.5911 | 0.5835 | 0.6889
europe_holiday_484, NP = 0.05
CA:  0.8885 | 0.8886 | 0.8955 | 0.8885 | 0.9114 | 0.9058 | 0.9374
NMI: 0.4677 | 0.4677 | 0.4838 | 0.4677 | 0.5913 | 0.5308 | 0.6771
imagen_072_1, NP = 0.02
CA:  0.9194 | 0.9192 | 0.9209 | 0.9194 | 0.6727 | 0.7254 | 0.9404
NMI: 0.5205 | 0.5216 | 0.5248 | 0.5205 | 0.1278 | 0.1958 | 0.6527
imagen_072_1, NP = 0.05
CA:  0.9022 | 0.9043 | 0.9018 | 0.9022 | 0.6642 | 0.8987 | 0.9383
NMI: 0.4353 | 0.4448 | 0.4335 | 0.4353 | 0.1234 | 0.4373 | 0.6435
san_andres_130, NP = 0.02
CA:  0.7595 | 0.7538 | 0.8954 | 0.7595 | 0.9405 | 0.8562 | 0.8836
NMI: 0.1442 | 0.1414 | 0.2862 | 0.1442 | 0.3920 | 0.2979 | 0.2622
san_andres_130, NP = 0.05
CA:  0.7374 | 0.7617 | 0.8766 | 0.7374 | 0.9378 | 0.8169 | 0.9288
NMI: 0.1282 | 0.1020 | 0.2389 | 0.1282 | 0.3721 | 0.2248 | 0.3610
yokohm060409_dyjsn191, NP = 0.02
CA:  0.6988 | 0.6948 | 0.8533 | 0.6988 | 0.6868 | 0.6998 | 0.8698
NMI: 0.0918 | 0.0931 | 0.2180 | 0.0918 | 0.1205 | 0.0964 | 0.2723
yokohm060409_dyjsn191, NP = 0.05
CA:  0.6877 | 0.6845 | 0.8398 | 0.6877 | 0.6821 | 0.8140 | 0.8895
NMI: 0.0833 | 0.0843 | 0.1813 | 0.0833 | 0.1187 | 0.0990 | 0.3074
Bold indicates the best value
Figure 28 shows the average values of CA and NMI metrics obtained by MKT-RFCCA and the comparative algorithms on thirty Weizmann images with the addition of the White Gaussian noise and the Salt & Pepper noise, respectively. It also reveals the superiority of MKT-RFCCA compared to the comparative algorithms.

Comprehensive evaluation for all algorithms

In this section, to comprehensively compare the significant differences among all algorithms, the technique for order preference by similarity to ideal solution (TOPSIS) [50] is applied to rank and evaluate them. CA and NMI are selected as the benchmark metrics, and the results in Tables 2, 3, 4, 5, 6, 7 are combined to calculate the proximity of all algorithms on the synthetic and real-world datasets and on the Berkeley and Weizmann images; a higher proximity indicates a better algorithm. Tables 8 and 9 show the proximity results of each algorithm. As can be seen from these tables, MVMC and MKT-RFCCA outperform the other algorithms on the datasets, and MKT-RFCCA is the better of the two. For images with noise, FCM-SICM and RFCM show slightly better results among the comparison algorithms; however, as the image information is contaminated by increasing noise, their results deteriorate. MKT-RFCCA provides the highest proximity in all cases and is the optimal choice compared with the other prevalent comparison algorithms.
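A minimal sketch of plain TOPSIS for benefit criteria such as CA is shown below. The weighting scheme used by the authors (if any) is not specified here, so the example assumes equal weights; `topsis` is an illustrative function, not the paper's code.

```python
import numpy as np

def topsis(scores):
    """Plain TOPSIS on a decision matrix (rows = algorithms, columns =
    benefit criteria such as CA on each dataset): vector-normalize each
    column, take the ideal best/worst per criterion, and return each
    row's relative closeness to the ideal solution (higher is better)."""
    M = scores / np.linalg.norm(scores, axis=0)   # vector normalization
    best, worst = M.max(axis=0), M.min(axis=0)    # ideal / anti-ideal
    d_best = np.linalg.norm(M - best, axis=1)
    d_worst = np.linalg.norm(M - worst, axis=1)
    return d_worst / (d_best + d_worst)

# Toy example: algorithm B dominates algorithm A on both criteria,
# so B gets proximity 1 and A gets proximity 0.
scores = np.array([[0.70, 0.60],
                   [0.90, 0.80]])
prox = topsis(scores)
```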
Table 8
Proximity results of all comparative algorithms for CA metrics
Algorithms: FCM | RCM | RFCM | TFCM | MVMC | FCM-SICM | sRFCM | MKT-RFCCA
(– indicates the algorithm is not evaluated in that setting: MVMC is not applied to images, and FCM-SICM and sRFCM are not applied to the datasets)

Synthetic and real-world datasets: 0.4769 | 0.3258 | 0.4862 | 0.4779 | 0.4921 | – | – | 0.9197
Berkeley and Weizmann images, NV = 0.004: 0.3962 | 0.5573 | 0.6896 | 0.4313 | – | 0.7114 | 0.5477 | 0.9727
Berkeley and Weizmann images, NV = 0.04: 0.1731 | 0.3431 | 0.4365 | 0.1731 | – | 0.5739 | 0.3997 | 0.8552
Berkeley and Weizmann images, NP = 0.02: 0.4621 | 0.6640 | 0.7131 | 0.5417 | – | 0.7127 | 0.5475 | 0.9403
Berkeley and Weizmann images, NP = 0.05: 0.5332 | 0.5675 | 0.6973 | 0.5333 | – | 0.5884 | 0.4809 | 0.8539
Bold indicates the best value
Table 9
Proximity results of all comparative algorithms for NMI metrics

Algorithms: FCM | RCM | RFCM | TFCM | MVMC | FCM-SICM | sRFCM | MKT-RFCCA
(– indicates the algorithm is not evaluated in that setting: MVMC is not applied to images, and FCM-SICM and sRFCM are not applied to the datasets)

Synthetic and real-world datasets: 0.3433 | 0.1880 | 0.3320 | 0.3444 | 0.3437 | – | – | 0.8822
Berkeley and Weizmann images, NV = 0.004: 0.4708 | 0.6203 | 0.6516 | 0.5054 | – | 0.6562 | 0.5848 | 0.9536
Berkeley and Weizmann images, NV = 0.04: 0.1668 | 0.3785 | 0.3897 | 0.1668 | – | 0.3998 | 0.3178 | 0.9518
Berkeley and Weizmann images, NP = 0.02: 0.4012 | 0.5057 | 0.5671 | 0.4406 | – | 0.5527 | 0.4445 | 0.9182
Berkeley and Weizmann images, NP = 0.05: 0.3169 | 0.3626 | 0.4621 | 0.3170 | – | 0.3552 | 0.3581 | 0.9475
Bold indicates the best value

Conclusion

In this paper, a robust multi-view knowledge transfer-based rough fuzzy c-means clustering algorithm (MKT-RFCCA) is proposed. First, to overcome the restriction of traditional single-view clustering algorithms, different distance metrics are adopted as multiple views to identify different data structures and satisfy the diverse clustering demands of the real world. Second, the objective function of MKT-RFCCA is constructed by introducing a transfer learning mechanism. The objective function fully exploits the potential complementary and consistent information between multiple views, thus improving the capability of the algorithm to handle uncertain information. In addition, to reduce the sensitivity of the algorithm to random cluster centroids, an initialized cluster centroids selection strategy for image segmentation is presented, which helps generate more accurate and stable clustering results. Finally, a distance-based adaptive threshold determination mechanism is proposed to determine the lower approximation and boundary region in rough fuzzy clustering. It eliminates the drawback of manually setting the threshold parameter and improves robustness. Satisfactory experimental results obtained on synthetic datasets, real-world datasets, and noise-contaminated Berkeley and Weizmann images validate the performance of MKT-RFCCA.
This work focuses on the clustering objective function of intra-cluster compactness and ignores other clustering criteria. One of our future studies aims to construct various complementary objective functions and perform multi-view rough fuzzy clustering under multiple objective functions. In addition, we will design more effective transfer mechanisms to further enhance the clustering performance and introduce evolutionary optimization strategies to find the global optimal scheme.

Acknowledgements

The authors thank both the editors and reviewers for their valuable suggestions. This work was supported by the National Natural Science Foundation of China (Grant Nos. 62071379, 62071378, 61901365 and 62106196) and the Youth Innovation Team of Shaanxi Universities.

Declarations

Conflict of interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Compliance with ethical standards

This article does not contain any studies with human participants or animals performed by any of the authors.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

This part gives the proof process of updating formulas in MKT-RFCCA. The objective function of MKT-RFCCA based on fuzzy clustering is as follows:
$$ \begin{gathered} J_{MKT - RFCCA}^{\prime} = \sum\limits_{l = 1}^{L} {\sum\limits_{i = 1}^{K} {\sum\limits_{j = 1}^{N} {\mu_{lij}^{m} D_{l}^{2} {(}x_{j} ,v_{i} {)}} } } \\ \qquad + \lambda \sum\limits_{l = 1}^{L} {\sum\limits_{{l^{\prime} \ne l}} {\sum\limits_{i = 1}^{K} {\sum\limits_{j = 1}^{N} {\left( {\mu_{{l^{\prime}ij}}^{m} - \mu_{lij}^{m} } \right)D_{l}^{2} {(}x_{j} ,v_{i} {)}} } } } \hfill \\ \quad s.t. \, \sum\limits_{i = 1}^{K} {\mu_{lij} } = 1,{ 0} \le \mu_{lij} \le 1 \hfill \\ \end{gathered} $$
(17)
Firstly, the proof process of Eq. (15) is as follows:
$$ \begin{gathered} J_{{MKT - RFCCA}}^{\prime } = \sum\limits_{{l = 1}}^{L} {\sum\limits_{{i = 1}}^{K} {\sum\limits_{{j = 1}}^{N} {[1 - (L - 1)\lambda ]\mu _{{lij}}^{m} D_{l} ^{2} } } } (x_{j} ,v_{i} )\\ + \lambda \sum\limits_{{l = 1}}^{L} {\sum\limits_{{l^{\prime} \ne l}} {\sum\limits_{{i = 1}}^{K} {\sum\limits_{{j = 1}}^{N} {\mu _{{l^{\prime}ij}}^{m} D_{l} ^{2} (x_{j} ,v_{i} )} } } } \hfill \\ s.t.{\text{ }}\sum\limits_{{i = 1}}^{K} {\mu _{{lij}} } = 1,{\text{ }}0 \le \mu _{{lij}} \le 1 \hfill \\ \end{gathered} $$
(18)
The above optimization problem is converted to the following unconstrained minimization problem with the Lagrange multiplier technique.
$$ \begin{aligned} G\left( {\mu _{{lij}} ,\eta } \right) & = \sum\limits_{{l = 1}}^{L} {\sum\limits_{{i = 1}}^{K} {\sum\limits_{{j = 1}}^{N} {\left[ {1 - (L - 1)\lambda } \right]\mu _{{lij}}^{m} D_{l} ^{2} {\text{(}}x_{j} ,v_{i} {\text{)}}} } } \\ & + \lambda \sum\limits_{{l = 1}}^{L} {\sum\limits_{{l^{\prime} \ne l}} {\sum\limits_{{i = 1}}^{K} {\sum\limits_{{j = 1}}^{N} {\mu _{{l^{\prime}ij}}^{m} D_{l} ^{2} {\text{(}}x_{j} ,v_{i} {\text{)}}} } } } \\ & - \sum\limits_{{l = 1}}^{L} {\sum\limits_{{j = 1}}^{N} {\eta _{{lj}} \left( {\sum\limits_{{i = 1}}^{K} {\mu _{{lij}} - 1} } \right)} } \\ \end{aligned} $$
(19)
where \(\eta\) is the Lagrange multiplier. Setting the partial derivatives of \(G\left( {\mu_{lij} ,\eta } \right)\) with respect to \(\mu_{lij}\) and \(\eta\) to zero gives
$$ \begin{aligned} &\frac{{\partial G\left( {\mu _{{lij}} ,\eta } \right)}}{{\partial \mu _{{lij}} }} = m\mu _{{lij}}^{{m - 1}} [1 - (L - 1)\lambda ]D_{l} ^{2} (x_{j} ,v_{i} ) - \eta _{{lj}} = 0 \hfill \\ & \frac{{\partial G\left( {\mu _{{lij}} ,\eta } \right)}}{{\partial \eta _{{lj}} }} = \sum\limits_{{i = 1}}^{K} {\mu _{{lij}} - 1 = 0} \hfill \\ \end{aligned} $$
(20)
From Eq. (20), we have
$$ 1 = \sum\limits_{i = 1}^{K} {\mu_{lij} = } \sum\limits_{i = 1}^{K} {\left[ {\frac{{\eta_{lj} }}{m}} \right]^{{\frac{1}{m - 1}}} } \cdot \left[ {\frac{1}{{[1 - (L - 1)\lambda ]D_{l}^{2} {(}x_{j} ,v_{i} {)}}}} \right]^{{\frac{1}{m - 1}}} $$
(21)
Thus, the update formula for \(\mu_{lij}\) is obtained as follows:
$$ \mu_{lij} = \left[ {\sum\limits_{c = 1}^{K} {\left( {\frac{{D_{l}^{2} {(}x_{j} ,v_{i} {)}}}{{D_{l}^{2} {(}x_{j} ,v_{c} {)}}}} \right)^{{\frac{1}{m - 1}}} } } \right]^{ - 1} $$
(22)
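Numerically, the update in Eq. (22) can be checked with a few lines of numpy; note that the factor [1-(L-1)λ] cancels in the ratio, exactly as the derivation shows. The function name below is illustrative, not the authors' code.

```python
import numpy as np

def membership_update(D2, m=2.0):
    """Closed-form update of Eq. (22) for one view l: given squared
    distances D2 of shape (K, N), return memberships mu of the same
    shape, with each column (one sample) summing to 1. The constant
    [1-(L-1)*lambda] cancels out of the distance ratios."""
    ratio = D2[:, None, :] / D2[None, :, :]         # (K, K, N): D_i^2 / D_c^2
    mu = 1.0 / (ratio ** (1.0 / (m - 1))).sum(axis=1)
    return mu

D2 = np.array([[1.0, 4.0],
               [1.0, 1.0]])     # K = 2 clusters, N = 2 samples
mu = membership_update(D2)
```

For the first sample the two distances are equal, so the memberships are 0.5 each; for the second, the closer cluster (squared distance 1 vs. 4) receives membership 0.8 under m = 2.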
Then, the derivation of Eq. (13) is as follows:
$$ \begin{aligned} & J_{MKT - RFCCA}^{\prime} = \sum\limits_{l = 1}^{L} {\sum\limits_{i = 1}^{K} {\sum\limits_{j = 1}^{N} {[1 - (L - 1)\lambda ]\mu_{lij}^{m} D_{l}^{2} } } } {(}x_{j} ,v_{i} {)} \\ & \quad + \lambda \sum\limits_{l = 1}^{L} {\sum\limits_{{l^{\prime} \ne l}} {\sum\limits_{i = 1}^{K} {\sum\limits_{j = 1}^{N} {\mu_{{l^{\prime}ij}}^{m} D_{l}^{2} {(}x_{j} ,v_{i} {)}} } } } \hfill \\ & = \sum\limits_{l = 1}^{L} {\sum\limits_{i = 1}^{K} {\sum\limits_{j = 1}^{N} {{\text{\{ }}[1 - (L - 1)\lambda ]\mu_{lij}^{m} } } } \\ & \quad + \lambda \sum\limits_{l^{\prime} \ne l} {\mu_{{l^{\prime}ij}}^{m} \} D_{l}^{2} {(}x_{j} ,v_{i} {)}} \hfill \\ & = \sum\limits_{i = 1}^{K} {\sum\limits_{j = 1}^{N} {{\text{\{ }}[1 - (L - 1)\lambda ]\mu_{1ij}^{m} + } } \lambda \mu_{2ij}^{m} \} \cdot \left\| {x_{j} - v_{i} } \right\|^{2} \\ &\quad + \sum\limits_{i = 1}^{K} {\sum\limits_{j = 1}^{N} {{\text{\{ }}[1 - (L - 1)\lambda ]\mu_{2ij}^{m} + } } \lambda \mu_{1ij}^{m} \} \\ & \quad \cdot (x_{j} - v_{i} )^{T} \Sigma_{i}^{ - 1} (x_{j} - v_{i} ) \hfill \\ \end{aligned} $$
(23)
Setting the partial derivative of \(J_{MKT - RFCCA}{\prime}\) with respect to \(v_{i}\) to zero gives
$$ \begin{aligned} & \frac{{\partial J_{MKT - RFCCA}^{\prime} }}{{\partial v_{i} }} \\ & \quad = \sum\limits_{j = 1}^{N} {{\text{\{ }}[1 - (L - 1)\lambda ]\mu_{1ij}^{m} { + }\lambda \mu_{2ij}^{m} \} } \cdot ( - 2)\left( {x_{j} - v_{i} } \right) \\ & \qquad + \sum\limits_{j = 1}^{N} {{\text{\{ }}[1 - (L - 1)\lambda ]\mu_{2ij}^{m} { + }\lambda \mu_{1ij}^{m} \} } \cdot ( - 2)\Sigma_{i}^{ - 1} (x_{j} - v_{i} ) \\ & \quad = 0 \end{aligned} $$
(24)
From Eq. (24), we have
$$ v_{i} = \frac{{\sum\nolimits_{j = 1}^{N} {{\text{\{ \{ }}[1 - (L - 1)\lambda ]\mu_{1ij}^{m} { + }\lambda \mu_{2ij}^{m} \} } + {\text{\{ }}[1 - (L - 1)\lambda ]\mu_{2ij}^{m} { + }\lambda \mu_{1ij}^{m} \} \Sigma_{i}^{ - 1} \} \cdot x_{j} }}{{\sum\nolimits_{j = 1}^{N} {{\text{\{ \{ }}[1 - (L - 1)\lambda ]\mu_{1ij}^{m} { + }\lambda \mu_{2ij}^{m} \} } + {\text{\{ }}[1 - (L - 1)\lambda ]\mu_{2ij}^{m} { + }\lambda \mu_{1ij}^{m} \} \Sigma_{i}^{ - 1} \} }} $$
(25)
When the transfer factor satisfies \(\lambda = 1/L\), the following equation holds, and Eq. (13) is obtained from the weighted average of the lower approximation and the fuzzy boundary.
$$ v_{i} = \frac{{\sum\nolimits_{l = 1}^{L} {\sum\nolimits_{j = 1}^{N} {{\text{\{ }}[1 - (L - 1)\lambda ]\mu_{lij}^{m} { + }\sum\nolimits_{l^{\prime} \ne l} {\lambda \mu_{l^{\prime}ij}^{m} } \} } \cdot x_{j} } }}{{\sum\nolimits_{l = 1}^{L} {\sum\nolimits_{j = 1}^{N} {{\text{\{ }}[1 - (L - 1)\lambda ]\mu_{lij}^{m} { + }\sum\nolimits_{l^{\prime} \ne l} {\lambda \mu_{l^{\prime}ij}^{m} } \} } } }} $$
(26)
References
Metadata
Title
A robust multi-view knowledge transfer-based rough fuzzy C-means clustering algorithm
Authors
Feng Zhao
Yujie Yang
Hanqiang Liu
Chaofei Wang
Publication date
25.04.2024
Publisher
Springer International Publishing
Published in
Complex & Intelligent Systems
Print ISSN: 2199-4536
Elektronische ISSN: 2198-6053
DOI
https://doi.org/10.1007/s40747-024-01431-1
