Published in: Granular Computing 3/2022

Open Access 28-10-2021 | Original Paper

The cognitive comparison enhanced hierarchical clustering

Authors: Chun Guan, Kevin Kam Fung Yuen


Abstract

The growth of online shopping is rapidly changing the buying behaviour of consumers. Today, there are challenges facing buyers in the selection of a preferred item from the numerous choices available in the market. To improve the consumer online shopping experience, recommender systems have been developed to reduce the information overload. In this paper, a cognitive comparison-enhanced hierarchical clustering (CCEHC) system is proposed to provide personalised product recommendations based on user preferences. A novel rating method, cognitive comparison rating (CCR), is applied to weigh the product attributes and measure the categorical scales of attributes according to expert knowledge and user preferences. Hierarchical clustering is used to cluster the products into different preference categories. The CCEHC model can be used to rank and cluster product data with the input of user preferences and produce reliable customised recommendations for the users. To demonstrate the advantages of the proposed model, the CCR method is compared with the rating approach of the analytic hierarchy process. Two recommendation cases are demonstrated in this paper with two datasets, one collected by this research for laptop recommendation and the other an open dataset for workstation recommendation. The simulation results demonstrate that the proposed system is feasible for providing personalised recommendations. The significance of this research is the provision of a recommendation solution that does not depend on historical purchase records; rather, the users’ rating preferences and expert knowledge, both of which are measured by CCR, are considered. The proposed CCEHC model could be further applied to other types of similar recommendation cases such as music, books, and movies.

1 Introduction

Online shopping has already influenced the purchasing behaviour of consumers. Today, buyers face an overload of information when selecting the most preferred goods. Recommender systems (RSs) have been developed to recommend appropriate products to consumers on the basis of their historical records. An effective RS service can boost sales by building and increasing customer loyalty (Aggarwal 2016). Reviews of RS technologies can be found in (Aggarwal 2016; Haruna et al. 2017; Adomavicius and Kwon 2015; Kunaver and Požrl 2017; Kotkov et al. 2016; Zhang et al. 2017; Ma et al. 2018). RSs are typically categorised into three types: collaborative filtering, content-based, and hybrid (Aggarwal 2016). Since these types rely on user profiles, including historical ratings and purchase records (Lika et al. 2014), RSs have insufficient information to learn the interests of new users. This lack of information for newly joined users is known as the cold-start problem, a critical challenge for RSs (Kunaver and Požrl 2017; Lika et al. 2014; Volkovs et al. 2017; Viktoratos et al. 2018). A discussion and review of the cold-start problem can be found in Lika et al. (2014).
The cold-start problem has a significant influence on recommendations for high-end consumer electronics such as smartphones, laptops, game consoles, and audio-video equipment. Since their electronic components and technologies are frequently updated, recommendations based on historical purchasing records may not be applicable to new products. The motivation of this research is to propose an expert system for product recommendations that is based on the current individual user's preferences and on expert knowledge elicited through the cognitive comparison rating (CCR) method. The proposed model does not suffer from the cold-start problem, as historical information is not used for the recommendations.
The evaluation of expert judgments and user preferences for products is complicated as numerous products such as the aforementioned high-end consumer electronics consist of different attributes. Multi-criteria decision making (MCDM) methods, which can measure both user preferences and expert judgments for multiple product attributes, have been used in RSs (van Capelleveen et al. 2019; Song 2018; Zhang et al. 2018). The analytic hierarchy process (AHP), a classical MCDM, has been adopted to evaluate user preferences for different product attributes (Hinduja and Pandey 2018; Karthikeyan et al. 2017; Pamučar et al. 2018; Wang and Tseng 2013). CCR, an improved alternative to AHP, is introduced in this study for evaluating expert judgments and user preferences. As an approach to rectify the mathematical representation problem of the perception of the paired differences in AHP, CCR is an ideal method for weighing product attributes and defining numerical values of nominal scales based on user preferences (Yuen 2009, 2012, 2014a; b).
To provide product recommendation services, the hierarchical clustering (HC) method is used to group the products based on the evaluation results of CCR. Different clustering analysis methods have been applied to identify groups of products that have similar attributes with respect to consumer preferences (Nilashi 2017; Frémal and Lecron 2017; Katarya and Verma 2017; Selvi and Sivasankar 2019). HC (Murtagh 1983; Ward Jr 1963; Han et al. 2011) is a popular clustering method and has been adopted in other RSs (Selvi and Sivasankar 2019; Gupta and Patil 2015; Zheng et al. 2013; de Aguiar Neto et al. 2020). HC builds a hierarchical decomposition of a dataset in the form of a tree graph (called a dendrogram). The major advantage of HC is that the dendrogram can be easily interpreted, since the distances between the objects are directly presented. However, HC has limitations when applied to product-recommendation cases. Firstly, all product attributes are weighted equally, whereas different consumers can have different preferences for each attribute. Secondly, product attributes on nominal scales cannot be directly used in clustering processes. To address these limitations, CCR is used to weigh product attributes and to define numerical values of nominal scales with respect to user preferences. A novel system, cognitive comparison-enhanced hierarchical clustering (CCEHC), is proposed to provide product recommendations with respect to the current individual user's rating preferences. The new method addresses the cold-start problem in RSs by using expert knowledge elicited from CCR instead of the users' historical data. In addition, non-specialist consumers can express their preferences when interacting with the system.
This paper offers a significant extension of the previous initial work (Guan and Yuen 2015; Guan 2018), especially for the sections of methods, experiments, comparisons, and discussions. The remainder of this paper is organised as follows. Section 2 proposes the novel CCEHC system. Section 3 demonstrates the validity and feasibility of the proposed method using a laptop recommendation case, for which the dataset was collected in this study. Section 4 discusses the advantages and limitations of the proposed approach. Section 5 presents the application of CCEHC for workstation recommendations using an open dataset. Finally, Sect. 6 concludes the study.

2 Cognitive comparison enhanced hierarchical clustering

The procedures of the CCEHC model are presented in Fig. 1. In Steps 1 and 2, the attributes of the products are structured as an attribute tree. According to the attribute tree, a raw data table is collected from different sources. In Step 3, CCR is applied to measure the nominal attribute values and attribute weights with user preferences. The resulting table is normalised in Step 4. In Step 5, the values of the products are produced by aggregating the normalised table and attribute weights. In Step 6, a personalised top-N recommendation is produced by ranking the product values. In the final step, the products are clustered by HC, and similar products can be recommended to the different users.

2.1 Specifying attributes

Detailed product information can be obtained from different sources including manufacturer websites, product engineers, and retailers. A product is represented as a group of attributes, \(\left\{ {\delta_{i} } \right\} = \left( {\delta_{1} , \delta_{2} , \ldots ,\delta_{i} , \ldots ,\delta_{n} } \right)\), where \(\delta_{i}\) is the ith attribute of the product. Attributes can have sub-attributes. For example, an attribute \(\delta_{i}\) is represented by ni sub-attributes, \(\left\{ {\delta_{i,j} } \right\} = \left( {\delta_{i,1} , \delta_{i,2} , \ldots ,\delta_{i,j} , \ldots ,\delta_{{i,n_{i} }} } \right),\) where \(\delta_{i,j}\) is represented by the jth sub-attribute of \(\delta_{i}\); the attribute \(\delta_{i,j}\) is represented by ni,j sub-attributes, \(\left\{ {\delta_{i,j,k} } \right\} = \left( {\delta_{i,j,1} , \delta_{i,j,2} , \ldots ,\delta_{i,j,k} , \ldots ,\delta_{{i,j,n_{i,j} }} } \right)\), where \(\delta_{i,j,k}\) is the kth sub-attribute of \(\delta_{i,j}\). The attributes of the different levels are structured as an attributes tree. A sample of the laptop attribute tree is presented in Fig. 2 in Sect. 3.
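For illustration, the attribute tree can be stored as nested mappings, where a leaf attribute maps to None and an inner attribute maps to its sub-attributes. This is a hypothetical sketch mirroring the laptop tree of Sect. 3.1, not the authors' code:

```python
# Hypothetical nested-dict representation of the laptop attribute tree (Fig. 2).
laptop_tree = {
    "CPU": None,
    "OS": None,
    "Storage": {"RAM": None, "Hard Drive": {"SSD": None, "Size": None}},
    "Brand": {"USA": None, "Asia": None},
    "Display": {"Screen": {"Size": None, "Resolution": None},
                "Graphics Card": None},
    "Portable": {"Weight": None, "Battery": None},
    "Price": None,
}

def leaf_attributes(tree, prefix=()):
    """Collect the leaf attributes L (attributes without sub-attributes)."""
    leaves = []
    for name, sub in tree.items():
        path = prefix + (name,)
        if sub is None:
            leaves.append(path)
        else:
            leaves.extend(leaf_attributes(sub, path))
    return leaves

print(len(leaf_attributes(laptop_tree)))  # 13 leaf attributes, as in Sect. 3.2
```
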

2.2 Preprocessing data

The leaf attributes, denoted as L, are attributes without sub-attributes. The measurable values of the leaf attributes are collected from different sources, as mentioned in Sect. 2.1. The product dataset D, consisting of m products and l leaf attributes, is denoted as \(D=\left\{{d}_{\alpha \beta }|\forall \alpha \in \left(1,\dots ,m\right),\forall \beta \in \left(1,\dots ,l\right)\right\}\). An example of a laptop data matrix is presented in Sect. 3.2. D cannot be directly clustered, since it could contain nominal scales that do not have a natural ordering. In the proposed CCEHC system, the nominal scales are substituted by numerical values measured using the CCR approach presented in the next step.

2.3 Evaluating user preferences by CCR

The user preferences for different attributes and nominal scales are measured using the CCR method. A sample of the CCR interface is displayed in Fig. 3.
Table 1 is a typical measurement scale schema \(\left( {\aleph ,\overline{X}} \right)\) applied to CCR (Yuen 2009, 2014a). The space of the linguistic labels \(\aleph\) of the paired interval scales is {Equally, Slightly, …, Outstandingly, Absolutely}. The numerical representation of the paired interval scales \(\overline{X}\) is as follows:
$$\overline{X} = \left\{ {\overline{x}_{q} = \frac{q\kappa }{\tau }|\forall q \in \left\{ { - \tau , \ldots , - 1,0,1, \ldots ,\tau } \right\},\quad \kappa > 0} \right\}.$$
(1)
Table 1
Measurement scale schema for CCR
| Label (\(\aleph\)) | \(q\) | Paired interval scale (\(\overline{x}_{q}\)) |
| Equally | 0 | 0 |
| Slightly | 1 | \(\kappa /8\) |
| Moderately | 2 | \(2\kappa /8\) |
| Fairly | 3 | \(3\kappa /8\) |
| Highly | 4 | \(4\kappa /8\) |
| Strongly | 5 | \(5\kappa /8\) |
| Significantly | 6 | \(6\kappa /8\) |
| Outstandingly | 7 | \(7\kappa /8\) |
| Absolutely | 8 | \(\kappa\) |
The subjective perception of the difference between pairs is represented by the normal utility \(\kappa\). By default, \(\kappa\) is set to \({\text{max}}\left( {\overline{X}} \right)\). Denoting the number of linguistic labels (excluding Equally) as \(\tau\), the number of scales is \(2\tau + 1\).
To measure the user preferences in paired interval scales, a pairwise opposite matrix (POM) is defined as follows.
$$B = \left[ {b_{ij} } \right] = \left[ {\begin{array}{*{20}c} 0 & {v_{1} - v_{2} } & \cdots & {v_{1} - v_{n} } \\ {v_{2} - v_{1} } & 0 & \cdots & {v_{2} - v_{n} } \\ \vdots & \vdots & \ddots & \vdots \\ {v_{n} - v_{1} } & {v_{n} - v_{2} } & \cdots & 0 \\ \end{array} } \right] \cong \left[ {\begin{array}{*{20}c} {\begin{array}{*{20}c} 0 & {b_{12} } \\ {b_{21} } & 0 \\ \end{array} } & {\begin{array}{*{20}c} \cdots & {b_{1n} } \\ \cdots & {b_{2n} } \\ \end{array} } \\ {\begin{array}{*{20}c} \vdots & \vdots \\ {b_{n1} } & {b_{n2} } \\ \end{array} } & {\begin{array}{*{20}c} \ddots & \vdots \\ \cdots & 0 \\ \end{array} } \\ \end{array} } \right] = \left[ {b_{ij} } \right] = B,$$
(2)
where B denotes a POM, \(v_{i}\) denotes the priority value of object i, and \(b_{ij} \cong v_{i} - v_{j}\) denotes the approximate comparison value between objects i and j. The values of \(b_{ij}\) are obtained from a questionnaire. For example, \(b_{13} = 3\) means that the customer considers the first object to be fairly more important than the third.
To verify the validity of the POM, an accordance index (AI) is defined in Eq. (3). AI = 0 indicates that B is absolutely accordant. If 0 < AI ≤ 0.1, B is acceptable. If AI > 0.1, B is unacceptable, and the survey should be rechecked.
$${\text{AI}} = \frac{1}{{n^{2} }}\mathop \sum \limits_{i = 1}^{n} \mathop \sum \limits_{j = 1}^{n} \sqrt {\frac{1}{n}\mathop \sum \limits_{p = 1}^{n} \left( {\frac{{b_{ip} + b_{pj} - b_{ij} }}{\kappa }} \right)^{2} } .$$
(3)
The priority values of objects are computed using the row average plus normal utility (RAU) as follows:
$${\text{RAU}}\left( {B,\kappa } \right) = \left\{ {v_{i} :v_{i} = {\frac{1}{n}\mathop \sum \limits_{j = 1}^{n} b_{ij} + \kappa , \forall i \in \left\{ {1, \ldots ,n} \right\}} } \right\}.$$
(4)
The RAU values are subsequently normalised as a vector W as follows:
$$W = \left\{ {w_{i} :w_{i} = \frac{{v_{i} }}{n\kappa },\forall i \in \left\{ {1, \ldots ,n} \right\}} \right\},{\text{where}}\mathop \sum \limits_{{i \in \{ 1, \ldots ,n\} }} v_{i} = n\kappa .$$
(5)
The vector W can represent a variety of items such as the priorities of options, item utilities, weights of features, and preferences for nominal values. In CCEHC, the weights of the product attributes and nominal scales in raw dataset D are substituted with their normalised RAU values.
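The CCR prioritisation above can be sketched in a few lines of Python. This is an illustrative sketch, not the authors' implementation, and the example POM is hypothetical:

```python
import math

def accordance_index(B, kappa=8.0):
    # Eq. (3): AI measures how far B is from perfect accordance,
    # i.e. b_ip + b_pj = b_ij for all i, j, p.
    n = len(B)
    total = 0.0
    for i in range(n):
        for j in range(n):
            mean_sq = sum(((B[i][p] + B[p][j] - B[i][j]) / kappa) ** 2
                          for p in range(n)) / n
            total += math.sqrt(mean_sq)
    return total / n ** 2

def rau_weights(B, kappa=8.0):
    # Eqs. (4)-(5): row average plus normal utility, normalised by n*kappa.
    n = len(B)
    v = [sum(row) / n + kappa for row in B]
    return [vi / (n * kappa) for vi in v]

# A perfectly accordant 3-object POM (hypothetical judgments).
B = [[ 0,  2,  4],
     [-2,  0,  2],
     [-4, -2,  0]]
print(accordance_index(B))   # 0.0 -> absolutely accordant
print(rau_weights(B))        # weights sum to 1, e.g. first object gets 10/24
```

The resulting vector W sums to 1 because the RAU values always sum to \(n\kappa\) (Eq. (5)).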

2.4 Normalising dataset

Two equations are introduced to normalise the raw dataset D. If a higher value indicates a higher preference for a leaf attribute, the dividing maximal function \(\Delta_{\max }\) defined in Eq. (6) is used to rescale the column of raw attribute values, i.e., \(D_{\beta }^{T} = \left\{ {d_{1,\beta } , \ldots ,d_{\alpha ,\beta } , \ldots ,d_{m,\beta } } \right\}\). If a lower value indicates a higher preference, the dividing minimal function \(\Delta_{\min }\) defined in Eq. (7) is applied. The normalised data matrix is denoted as \(D^{\prime} = \left\{ {x_{\alpha \beta } {|}\forall \alpha \in \left( {1, \ldots ,m} \right),\forall \beta \in \left( {1, \ldots ,l} \right)} \right\}\).
$$x_{\alpha \beta } = \Delta_{\max } \left( {d_{\alpha \beta } } \right) = \frac{{d_{\alpha \beta } }}{{\max \left( {D_{\beta }^{T} } \right)}} ,\forall \alpha \in \left( {1, \ldots ,m} \right),\forall \beta \in \left( {1, \ldots ,l} \right),$$
(6)
$$x_{\alpha \beta } = \Delta_{\min } \left( {d_{\alpha \beta } } \right) = \frac{{\min \left( {D_{\beta }^{T} } \right)}}{{d_{\alpha \beta } }} , \forall \alpha \in \left( {1, \ldots ,m} \right),\forall \beta \in \left( {1, \ldots ,l} \right).$$
(7)
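Eqs. (6) and (7) amount to a column-wise rescaling. A minimal sketch, with hypothetical column values:

```python
def delta_max(column):
    # Eq. (6): a higher raw value means a higher preference.
    top = max(column)
    return [d / top for d in column]

def delta_min(column):
    # Eq. (7): a lower raw value means a higher preference (e.g. price, weight).
    bottom = min(column)
    return [bottom / d for d in column]

prices = [7.0, 2.0, 4.0]      # hypothetical prices in thousand RMB
print(delta_min(prices))      # the cheapest product scores 1.0
```
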

2.5 Fusing data

The product values \(\left\{ {\rho^{\left( \alpha \right)} {|}\forall \alpha \in \left( {1, \ldots ,m} \right)} \right\}\) are the weighted summations of the product attribute values, where \(\alpha\) is the index of the product. Attribute values are the weighted summations of their sub-attribute values. The detailed calculations of the product and attribute values are presented in Eqs. (8)–(10), where ri, ri,j, and ri,j,k are the weights of \(\delta_{i}\), \(\delta_{i,j}\), and \(\delta_{i,j,k}\), respectively. The leaf attribute values are obtained from the normalised data matrix \(D^{\prime}\).
$$\delta_{i,j}^{\left( \alpha \right)} = \mathop \sum \limits_{k = 1}^{{n_{i,j} }} r_{i,j,k} \cdot \delta_{i,j,k}^{\left( \alpha \right)} ,\forall i \in \left( {1, \ldots ,n} \right),\forall j \in \left( {1, \ldots ,n_{i} } \right),\forall \alpha \in \left( {1, \ldots ,m} \right),$$
(8)
$$\delta_{i}^{\left( \alpha \right)} = \mathop \sum \limits_{j = 1}^{{n_{i} }} r_{i,j} \cdot \delta_{i,j}^{\left( \alpha \right)} ,\forall i \in \left( {1, \ldots ,n} \right),\forall \alpha \in \left( {1, \ldots ,m} \right),$$
(9)
$$\rho^{\left( \alpha \right)} = \mathop \sum \limits_{i = 1}^{n} r_{i} \cdot \delta_{i}^{\left( \alpha \right)} ,\forall \alpha \in \left( {1, \ldots ,m} \right).$$
(10)
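Eqs. (8)–(10) are one recursive weighted sum over the attribute tree. A sketch under that reading (the weights and leaf values below are the Storage figures from the worked example in Sect. 3.5, Eqs. (16)–(17)):

```python
def fuse(node):
    """Weighted bottom-up aggregation (Eqs. (8)-(10)).
    A node is either a normalised leaf value or a list of
    (weight, sub-node) pairs whose weights come from CCR."""
    if isinstance(node, (int, float)):
        return node
    return sum(w * fuse(child) for w, child in node)

# Storage subtree for laptop 1: RAM, and Hard Drive = {SSD, Size}.
storage = [(0.500, 0.250),                    # RAM
           (0.500, [(0.313, 1.000),           # SSD
                    (0.687, 0.169)])]         # Hard Drive Size
print(round(fuse(storage), 3))  # 0.34, matching Eq. (17)
```
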

2.6 Generating top-N list

According to the product values, a personalised top-N list consisting of the N highest value products in descending order is provided to the user; the calculation details are described in Algorithm 1. For different users, the top-N lists are different since the product values are calculated with respect to personal preferences.
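Algorithm 1 is not reproduced here, but the ranking step it describes can be sketched as follows (the product values in the example are illustrative):

```python
def top_n(product_values, n=10):
    # Sort products by value in descending order and keep the first n.
    # Product IDs are 1-based, as in the paper.
    ranked = sorted(enumerate(product_values, start=1),
                    key=lambda pair: pair[1], reverse=True)
    return ranked[:n]

values = [0.448, 0.553, 0.465, 0.403]   # hypothetical rho values
print(top_n(values, n=2))               # [(2, 0.553), (3, 0.465)]
```
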

2.7 Clustering products

HC is used to group products according to their product values. The aim of HC is to iteratively combine the two nearest clusters into a larger cluster until all the objects are in one cluster or a preset termination condition is reached (Han et al. 2011). Murtagh (1983) briefly described the steps of hierarchical clustering methods. The steps of HC applied to the CCEHC are described below.
Step 1: Each product starts as an atomic cluster, \({C}_{\sigma }=\left\{{\rho }^{(\alpha)}\right\}\). The distances between each pair of clusters are computed in the following form:
$$d_{\alpha ,\alpha^{\prime}} = \left| {\rho^{\left( \alpha \right)} - \rho^{{\left( {\alpha^{\prime}} \right)}} } \right|,\forall \alpha ,\forall \alpha^{\prime} \in \left( {1, \ldots ,m} \right),$$
(11)
where \(d_{\alpha ,\alpha^{\prime}}\) is the dissimilarity of the product values of any two different products \(\rho^{\left( \alpha \right)}\) and \(\rho^{{\left( {\alpha^{\prime}} \right)}}\).
Step 2: The two closest clusters, \({C}_{s}\) and \({C}_{t}\), where \(\left( {s,t} \right) = {\text{argmin}}\left( {\left\{ {d_{\alpha ,\alpha^{\prime}} } \right\}} \right)\), are combined into a larger cluster, i.e., \({C}_{s}={C}_{s}\cup {C}_{t}\), which means that \({C}_{s}\) is updated by merging \({C}_{t}\) into \({C}_{s}\). The distances between the updated cluster \({C}_{s}\) and the other clusters \({C}_{\neg s}\) are computed as the average distance (Han et al. 2011) in the following form:
$$d_{{{\text{avg}}}} \left( {C_{s} ,C_{\neg s} } \right) = \frac{1}{{\eta_{s} \eta_{\neg s} }}\mathop \sum \limits_{{\rho^{\left( \alpha \right)} \in C_{s} ,\rho^{{\left( {\alpha^{\prime}} \right)}} \in C_{\neg s} }} d_{{\rho^{\left( \alpha \right)} ,\rho^{{\left( {\alpha^{\prime}} \right)}} }} ,$$
(12)
where \({\eta }_{s}\) and \({\eta }_{\neg s}\) are the numbers of objects in clusters \({C}_{s}\) and \({C}_{\neg s}\), respectively, and \(d_{{\rho^{\left( \alpha \right)} ,\rho^{{\left( {\alpha^{\prime}} \right)}} }}\) is the distance between products \(\rho^{\left( \alpha \right)} \in C_{s}\) and \(\rho^{{\left( {\alpha^{\prime}} \right)}} \in C_{\neg s}\). Step 2 is repeated until all products are in one cluster.
Step 3: A dendrogram indicating the arrangement of the merged clusters is produced. Two examples of dendrograms for similar laptop clusters are displayed in Fig. 4. The products are grouped into different clusters by cutting the branches at an appropriate height, which represents the distance between the clusters. The clustering results can be used for similar-product recommendations. When a user searches for a product \({\rho }^{(\alpha)}\) such that \({\rho }^{(\alpha)}\in {C}_{\sigma }\), the other products in cluster \({C}_{\sigma }\), i.e., R, are recommended to the user. R is defined as follows:
$$R={C}_{\sigma }/\left\{{\rho }^{\left(a\right)}\right\},$$
(13)
where / denotes the set-difference operator.
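The three steps above can be sketched as a naive average-linkage agglomeration over scalar product values. This is an illustrative O(m³) sketch, not the authors' implementation; in practice a library routine (e.g. SciPy's `scipy.cluster.hierarchy.linkage` with `method="average"`) would be preferable:

```python
def agglomerate(values):
    """Average-linkage HC over scalar product values (Sect. 2.7).
    Returns the merge history as (cluster_a, cluster_b, distance) tuples."""
    clusters = [[i] for i in range(len(values))]
    merges = []
    while len(clusters) > 1:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # Eq. (12): average pairwise distance between the two clusters.
                d = sum(abs(values[i] - values[j])
                        for i in clusters[a] for j in clusters[b])
                d /= len(clusters[a]) * len(clusters[b])
                if best is None or d < best[0]:
                    best = (d, a, b)
        d, a, b = best
        merges.append((clusters[a][:], clusters[b][:], d))
        clusters[a] = clusters[a] + clusters[b]
        del clusters[b]
    return merges

# Hypothetical product values: the first two products merge first.
print(agglomerate([0.10, 0.12, 0.50]))
```

Cutting the resulting dendrogram at a chosen height then yields the flat clusters used for the recommendations.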

3 Application of laptop recommendation

Laptops can be represented by a set of attributes, and consumers look for a preferred combination of these attributes when choosing a laptop. To demonstrate the applicability and validity of the proposed CCEHC system, a laptop recommendation case for a consumer (denoted as User A) is illustrated. For the cases in Sects. 3 and 4, a dataset of 27 laptop configurations was manually collected from the websites of online retail shops and manufacturers in 2015.

3.1 Specifying attributes

A large number of laptop configurations can be found on websites that sell, introduce, and compare electronic products. The majority of consumers are likely unfamiliar with specific technical properties such as the wireless type and video output details. Certain laptop components, such as USB ports, DVD/CD burners, and speakers, could be unimportant to many consumers. These attributes are not considered in this recommendation case. The attributes selected for choosing an ideal laptop are structured as a 3-level attribute tree, as indicated in Fig. 2.
The attributes in the first level of the tree are CPU (\({\delta }_{1}\)), Operating System (\({\delta }_{2}\)), Storage (\({\delta }_{3}\)), Brand (\({\delta }_{4}\)), Display (\({\delta }_{5}\)), Portable (\({\delta }_{6}\)), and Price (\({\delta }_{7}\)). Four of these have sub-attributes. For example, Storage includes the Hard Drive and Random-Access Memory (RAM). The sub-attributes of the first-level attributes are structured in the second level: {RAM (\({\delta }_{3,1}\)), Hard Drive (\({\delta }_{\mathrm{3,2}}\))}, {USA (\({\delta }_{\mathrm{4,1}}\)), Asia (\({\delta }_{\mathrm{4,2}}\))}, {Screen (\({\delta }_{\mathrm{5,1}}\)), Graphics Card (\({\delta }_{\mathrm{5,2}}\))}, and {Weight (\({\delta }_{\mathrm{6,1}}\)), Battery (\({\delta }_{\mathrm{6,2}}\))}. The sub-attributes of the second-level attributes are in the third level of the tree: {SSD (\({\delta }_{\mathrm{3,2},1}\)), Size (\({\delta }_{\mathrm{3,2},2}\))} and {Size (\({\delta }_{\mathrm{5,1},1}\)), Resolution (\({\delta }_{\mathrm{5,1},2}\))}.

3.2 Preprocessing data

From the attribute tree presented in Fig. 2, a laptop has 13 leaf attributes. A raw data matrix D is obtained from the laptop configurations, as indicated in Table 15 of the Appendix. The quantification approaches used to preprocess the leaf attributes are summarised in Table 2. For example, the attribute values of the CPU and Graphics Card are quantified by their performance scores (3DMARK 2015). The SSD attribute has three nominal labels: SSD, indicating that the laptop has an SSD; No SSD, indicating that the laptop has no SSD; and Hybrid, indicating that the laptop has an SSD together with another type of hard disk. The three labels are replaced by “2”, “0”, and “1”, respectively. The screen resolution attribute is represented by the product of the width and height pixels of the screen. The nominal scales of the attributes OS and Brand are measured by CCR in Sect. 3.3.
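The quantification step for the non-CCR nominal attributes can be sketched as a simple lookup (the 1920×1080 resolution example is hypothetical):

```python
# Quantify the nominal SSD labels as described above (Table 2, row L4).
ssd_scores = {"SSD": 2, "Hybrid": 1, "No SSD": 0}

def resolution_score(width_px, height_px):
    # Screen resolution: product of width and height pixels.
    return width_px * height_px

print(ssd_scores["Hybrid"])          # 1
print(resolution_score(1920, 1080))  # 2073600
```
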
Table 2
Schema of laptop leaf attributes
 
| | Leaf attribute name | Measurement scale | Quantification approach | Normalisation function |
| \({L}_{1}\) | CPU \({\delta }_{1}\) | Nominal: CPU model | 3DMark06 score | \({\Delta }_{\mathrm{max}}\) |
| \({L}_{2}\) | OS \({\delta }_{2}\) | Nominal: Linux, OS X, Windows 7, Windows 8 | CCR | \({\Delta }_{\mathrm{max}}\) |
| \({L}_{3}\) | RAM \({\delta }_{\mathrm{3,1}}\) | GB | GB | \({\Delta }_{\mathrm{max}}\) |
| \({L}_{4}\) | SSD \({\delta }_{\mathrm{3,2},1}\) | Nominal: SSD, Hybrid, No SSD | SSD: 2; Hybrid: 1; No SSD: 0 | \({\Delta }_{\mathrm{max}}\) |
| \({L}_{5}\) | Hard Drive Size \({\delta }_{\mathrm{3,2},2}\) | GB | GB | \({\Delta }_{\mathrm{max}}\) |
| \({L}_{6}\) | Brand (USA) \({\delta }_{\mathrm{4,1}}\) | Nominal: Alienware, Apple, Dell, Microsoft | CCR | \({\Delta }_{\mathrm{max}}\) |
| \({L}_{7}\) | Brand (Asia) \({\delta }_{\mathrm{4,2}}\) | Nominal: Acer, ASUS, HP, Lenovo, Samsung | CCR | \({\Delta }_{\mathrm{max}}\) |
| \({L}_{8}\) | Screen Size \({\delta }_{\mathrm{5,1},1}\) | Inch | Inch | \({\Delta }_{\mathrm{max}}\) |
| \({L}_{9}\) | Screen Resolution \({\delta }_{\mathrm{5,1},2}\) | DPI | Width pixels \(\times\) height pixels | \({\Delta }_{\mathrm{max}}\) |
| \({L}_{10}\) | Graphics Card \({\delta }_{\mathrm{5,2}}\) | Nominal: graphics card model | 3DMark06 score | \({\Delta }_{\mathrm{max}}\) |
| \({L}_{11}\) | Weight \({\delta }_{\mathrm{6,1}}\) | Kg | Kg | \({\Delta }_{\mathrm{min}}\) |
| \({L}_{12}\) | Battery \({\delta }_{\mathrm{6,2}}\) | Hour | Hour | \({\Delta }_{\mathrm{max}}\) |
| \({L}_{13}\) | Price \({\delta }_{7}\) | RMB | Thousand RMB | \({\Delta }_{\mathrm{min}}\) |

3.3 Evaluating user preferences by CCR

The preferences of User A were gathered from a CCR questionnaire; an example questionnaire using CCR is presented in Fig. 3. The measurement scale schema defined in Table 1 is used in this case, and \(\kappa\) is set to 8. The POM for User A presented in Table 3 is obtained from the questionnaire results in Fig. 3 based on Eq. (2). The AI for the POM, computed by Eq. (3), is less than 0.1, which means that the POM is acceptable. Table 3 lists the weights of the 1st level laptop attributes computed by Eqs. (4) and (5), together with the detailed calculation steps. The POMs, AIs, and weights of the remaining sub-attributes are provided in Table 4. All the attribute weights for User A are given, together with the attribute tree, in Fig. 2. The nominal attribute labels of Operating System (\({L}_{2}\)), Brand (USA) (\({L}_{6}\)) and Brand (Asia) (\({L}_{7}\)) for User A are also measured by CCR. The POMs and prioritisation results (called preference values) are displayed in Table 5. The nominal attribute values in the raw dataset D are substituted with their preference values.
Table 3
Comparison matrices for 1st level laptop attributes (User A)
| \({B}_{0}\) | \({\delta }_{1}\) | \({\delta }_{2}\) | \({\delta }_{3}\) | \({\delta }_{4}\) | \({\delta }_{5}\) | \({\delta }_{6}\) | \({\delta }_{7}\) | \(\sum_{j=1}^{7}{b}_{ij}\) | \(\frac{1}{7}\sum_{j=1}^{7}{b}_{ij}\) | \({v}_{i}=\frac{1}{7}\sum_{j=1}^{7}{b}_{ij}+8\) | \({r}_{i}={w}_{i}=\frac{{v}_{i}}{7\times 8}\) |
| \({\delta }_{1}\) | 0 | 1 | −1 | 7 | −1 | 1 | 3 | 10 | 1.429 | 9.429 | 0.168 |
| \({\delta }_{2}\) | −1 | 0 | −3 | 5 | −2 | 0 | 2 | 1 | 0.143 | 8.143 | 0.145 |
| \({\delta }_{3}\) | 1 | 3 | 0 | 7 | 0 | 3 | 5 | 19 | 2.714 | 10.714 | 0.191 |
| \({\delta }_{4}\) | −7 | −5 | −7 | 0 | −7 | −5 | −3 | −34 | −4.857 | 3.143 | 0.056 |
| \({\delta }_{5}\) | 1 | 2 | 0 | 7 | 0 | 2 | 4 | 16 | 2.286 | 10.286 | 0.184 |
| \({\delta }_{6}\) | −1 | 0 | −3 | 5 | −2 | 0 | 2 | 1 | 0.143 | 8.143 | 0.145 |
| \({\delta }_{7}\) | −3 | −2 | −5 | 3 | −4 | −2 | 0 | −13 | −1.857 | 6.143 | 0.110 |

AI = 0.051
Table 4
Comparison matrices for 2nd and 3rd levels laptop attributes (User A)
| \({B}_{3}\) | \({\delta }_{\mathrm{3,1}}\) | \({\delta }_{\mathrm{3,2}}\) | \({r}_{3,i}\) |
| \({\delta }_{\mathrm{3,1}}\) | 0 | 0 | 0.5 |
| \({\delta }_{\mathrm{3,2}}\) | 0 | 0 | 0.5 |

AI = 0

| \({B}_{\mathrm{3,2}}\) | \({\delta }_{\mathrm{3,2},1}\) | \({\delta }_{\mathrm{3,2},2}\) | \({r}_{\mathrm{3,2},i}\) |
| \({\delta }_{\mathrm{3,2},1}\) | 0 | −6 | 0.313 |
| \({\delta }_{\mathrm{3,2},2}\) | 6 | 0 | 0.687 |

AI = 0

| \({B}_{4}\) | \({\delta }_{\mathrm{4,1}}\) | \({\delta }_{\mathrm{4,2}}\) | \({r}_{4,i}\) |
| \({\delta }_{\mathrm{4,1}}\) | 0 | −2 | 0.437 |
| \({\delta }_{\mathrm{4,2}}\) | 2 | 0 | 0.563 |

AI = 0

| \({B}_{5}\) | \({\delta }_{\mathrm{5,1}}\) | \({\delta }_{\mathrm{5,2}}\) | \({r}_{5,i}\) |
| \({\delta }_{\mathrm{5,1}}\) | 0 | −4 | 0.375 |
| \({\delta }_{\mathrm{5,2}}\) | 4 | 0 | 0.625 |

AI = 0

| \({B}_{\mathrm{5,1}}\) | \({\delta }_{\mathrm{5,1},1}\) | \({\delta }_{\mathrm{5,1},2}\) | \({r}_{\mathrm{5,1},i}\) |
| \({\delta }_{\mathrm{5,1},1}\) | 0 | −2 | 0.437 |
| \({\delta }_{\mathrm{5,1},2}\) | 2 | 0 | 0.563 |

AI = 0

| \({B}_{6}\) | \({\delta }_{\mathrm{6,1}}\) | \({\delta }_{\mathrm{6,2}}\) | \({r}_{6,i}\) |
| \({\delta }_{\mathrm{6,1}}\) | 0 | 0 | 0.5 |
| \({\delta }_{\mathrm{6,2}}\) | 0 | 0 | 0.5 |

AI = 0
Table 5
Comparison matrices for nominal attribute of \({L}_{2}\), \({L}_{6}\) and \({L}_{7}\) (User A)
| \({B}_{2}\) | Linux | OS X | Windows 7 | Windows 8 | Preference value |
| Linux | 0 | −2 | −3 | 0 | 0.211 |
| OS X | 2 | 0 | −1 | 2 | 0.273 |
| Windows 7 | 3 | 1 | 0 | 3 | 0.305 |
| Windows 8 | 0 | −2 | −3 | 0 | 0.211 |

AI = 0

| \({B}_{6}\) | Alienware | Apple | Dell | Microsoft | HP | Preference value |
| Alienware | 0 | 1 | 3 | 4 | 4 | 0.260 |
| Apple | −1 | 0 | 2 | 3 | 3 | 0.235 |
| Dell | −3 | −2 | 0 | 1 | 2 | 0.190 |
| Microsoft | −4 | −3 | −1 | 0 | 0 | 0.160 |
| HP | −4 | −3 | −2 | 0 | 0 | 0.155 |

AI = 0.043

| \({B}_{7}\) | Acer | ASUS | Lenovo | Samsung | Preference value |
| Acer | 0 | −1 | −3 | 2 | 0.234 |
| ASUS | 1 | 0 | −2 | 3 | 0.266 |
| Lenovo | 3 | 2 | 0 | 4 | 0.320 |
| Samsung | −2 | −3 | −4 | 0 | 0.180 |

AI = 0.042
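As a quick sanity check on Eqs. (4) and (5), the first-level weights in Table 3 can be recomputed from the POM rows. This is an editorial verification sketch, not the authors' code:

```python
# Recompute the Table 3 weights from its pairwise opposite matrix rows,
# following Eqs. (4)-(5) with n = 7 attributes and kappa = 8.
B0 = [
    [ 0,  1, -1,  7, -1,  1,  3],   # delta_1 (CPU)
    [-1,  0, -3,  5, -2,  0,  2],   # delta_2 (OS)
    [ 1,  3,  0,  7,  0,  3,  5],   # delta_3 (Storage)
    [-7, -5, -7,  0, -7, -5, -3],   # delta_4 (Brand)
    [ 1,  2,  0,  7,  0,  2,  4],   # delta_5 (Display)
    [-1,  0, -3,  5, -2,  0,  2],   # delta_6 (Portable)
    [-3, -2, -5,  3, -4, -2,  0],   # delta_7 (Price)
]
kappa, n = 8, 7
weights = [round((sum(row) / n + kappa) / (n * kappa), 3) for row in B0]
print(weights)  # matches the r_i column of Table 3
```
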

3.4 Normalising dataset

The normalisation equations suitable for each leaf attribute are listed in Table 2. For example, a CPU (\({L}_{1}\)) with a higher performance score is preferred; \({\Delta }_{\mathrm{max}}\), defined in Eq. (6), is therefore applied to normalise the CPU attribute values. Typically, consumers prefer a lower product price; therefore, \({\Delta }_{\mathrm{min}}\), defined in Eq. (7), is used to normalise Price (\({L}_{13}\)). The normalised data matrix \(D{^{\prime}}\) is provided in Table 16 in the Appendix. Two samples of the normalisation process, for the CPU and Price attribute values of laptop ID 1, are given below.
$${x}_{\mathrm{1,1}}={\Delta }_{\mathrm{max}}\left({d}_{\mathrm{1,1}}\right)=\frac{{d}_{\mathrm{1,1}}}{\mathrm{max}\left({D}_{1}^{T}\right)}=\frac{3367}{7060}=0.447,$$
(14)
$${x}_{\mathrm{1,13}}={\Delta }_{\mathrm{min}}\left({d}_{\mathrm{1,13}}\right)=\frac{\mathrm{min}\left({D}_{13}^{T}\right)}{{d}_{\mathrm{1,13}}} =\frac{2}{7}=0.285.$$
(15)

3.5 Fusing data

For each laptop, the 2nd level attribute values are calculated using Eq. (8), the weights in Tables 3 and 4, and the normalised data matrix \(D{^{\prime}}\) in Table 16 in the Appendix. An example of the calculation process for \({{\delta }_{\mathrm{3,2}}}^{(1)}\) is presented below.
$${\delta }_{\mathrm{3,2}}^{\left(1\right)}=\sum_{k=1}^{2}{r}_{\mathrm{3,2},k}\bullet {\delta }_{\mathrm{3,2},k}^{\left(1\right)}=\left({r}_{\mathrm{3,2},1}\bullet {\delta }_{\mathrm{3,2},1}^{\left(1\right)}\right)+\left({r}_{\mathrm{3,2},2}\bullet {\delta }_{\mathrm{3,2},2}^{\left(1\right)}\right)=\left(0.313\bullet {x}_{\mathrm{1,4}}\right)+\left(0.687\bullet {x}_{\mathrm{1,5}}\right)=\left(0.313\bullet 1.000\right)+\left(0.687\bullet 0.169\right)=0.429.$$
(16)
The value of attribute \({{\delta }_{\mathrm{5,2}}}^{(1)}\) is computed as 0.556. The 1st level attribute values are computed using Eq. (9). The calculation process for \({{\delta }_{3}}^{(1)}\) is given in Eq. (17) as an example.
$${\delta }_{3}^{\left(1\right)}=\sum_{j=1}^{2}{r}_{3,j}\bullet {\delta }_{3,j}^{\left(1\right)}=\left({r}_{\mathrm{3,1}}\bullet {\delta }_{\mathrm{3,1}}^{\left(1\right)}\right)+\left({r}_{\mathrm{3,2}}\bullet {\delta }_{\mathrm{3,2}}^{\left(1\right)}\right)=\left({r}_{\mathrm{3,1}}\bullet {x}_{\mathrm{1,5}}\right)+\left({r}_{\mathrm{3,2}}\bullet {\delta }_{\mathrm{3,2}}^{\left(1\right)}\right)=\left(0.500\bullet 0.250\right)+\left(0.500\bullet 0.429\right)=0.340.$$
(17)
The values of attributes \({\delta }_{4}^{(1)}\), \({\delta }_{5}^{(1)}\) and \({\delta }_{6}^{(1)}\) are 0.563, 0.327 and 0.550, respectively. The laptop product values are computed using Eq. (10). For example, the product value of the first laptop is 0.448; the detailed steps are presented in Eq. (18). All 27 laptop product values for User A are listed in Table 6.
Table 6
Laptop product values for 2 users
| ID (\(\alpha\)) | Product value \({\rho }^{(\alpha )}\) for User A | Product value \({\rho }^{(\alpha )}\) for User B |
| 1 | 0.448 | 0.392 |
| 2 | 0.553 | 0.479 |
| 3 | 0.465 | 0.347 |
| 4 | 0.403 | 0.398 |
| 5 | 0.447 | 0.443 |
| 6 | 0.459 | 0.431 |
| 7 | 0.507 | 0.503 |
| 8 | 0.660 | 0.661 |
| 9 | 0.478 | 0.507 |
| 10 | 0.476 | 0.437 |
| 11 | 0.662 | 0.609 |
| 12 | 0.475 | 0.423 |
| 13 | 0.427 | 0.382 |
| 14 | 0.419 | 0.374 |
| 15 | 0.603 | 0.548 |
| 16 | 0.431 | 0.408 |
| 17 | 0.535 | 0.377 |
| 18 | 0.380 | 0.386 |
| 19 | 0.378 | 0.356 |
| 20 | 0.431 | 0.361 |
| 21 | 0.488 | 0.414 |
| 22 | 0.445 | 0.400 |
| 23 | 0.493 | 0.409 |
| 24 | 0.457 | 0.414 |
| 25 | 0.643 | 0.658 |
| 26 | 0.462 | 0.512 |
| 27 | 0.676 | 0.680 |
$${\rho }^{\left(1\right)}=\sum_{i=1}^{7}{r}_{i}\bullet {\delta }_{i}^{\left(1\right)}=\left({r}_{1}\bullet {\delta }_{1}^{\left(1\right)}\right)+\left({r}_{2}\bullet {\delta }_{2}^{\left(1\right)}\right)+\left({r}_{3}\bullet {\delta }_{3}^{\left(1\right)}\right)+\left({r}_{4}\bullet {\delta }_{4}^{\left(1\right)}\right)+\left({r}_{5}\bullet {\delta }_{5}^{\left(1\right)}\right)+\left({r}_{6}\bullet {\delta }_{6}^{\left(1\right)}\right)+\left({r}_{7}\bullet {\delta }_{7}^{\left(1\right)}\right)=\left({r}_{1}\bullet {x}_{\mathrm{1,1}}\right)+\left({r}_{2}\bullet {x}_{\mathrm{1,2}}\right)+\left({r}_{3}\bullet {\delta }_{3}^{\left(1\right)}\right)+\left({r}_{4}\bullet {\delta }_{4}^{\left(1\right)}\right)+\left({r}_{5}\bullet {\delta }_{5}^{\left(1\right)}\right)+\left({r}_{6}\bullet {\delta }_{6}^{\left(1\right)}\right)+\left({r}_{7}\bullet {x}_{\mathrm{1,13}}\right)=0.448.$$
(18)
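The weighted-sum computations of Eqs. (17) and (18) can be sketched in a few lines of Python. The weights below are User B's first-level weights from Table 8; the attribute values are illustrative placeholders, not the paper's normalised data.

```python
def product_value(weights, deltas):
    """Weighted sum of (aggregated) attribute values for one product,
    as in Eq. (18): rho = sum_i r_i * delta_i."""
    if len(weights) != len(deltas):
        raise ValueError("one weight per attribute is required")
    return sum(r * d for r, d in zip(weights, deltas))

# First-level weights r_1..r_7 for User B (Table 8).
weights_b = [0.184, 0.115, 0.148, 0.181, 0.148, 0.041, 0.184]
# Placeholder attribute values delta_1..delta_7 (illustrative only).
deltas = [0.45, 0.69, 0.34, 0.56, 0.33, 0.55, 0.29]

rho = product_value(weights_b, deltas)
```

The same function reproduces the inner aggregation of Eq. (17): `product_value([0.500, 0.500], [0.250, 0.429])` yields 0.3395, which rounds to the 0.340 reported there.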

3.6 Generating top-N list

The top-N list for laptops is produced using Algorithm 1. According to User A's preferences for the laptop attributes, a top-10 list of laptops is provided in Table 7. After the user has completed the CCR survey, the information and corresponding web links of the laptops in the top-10 list can be recommended to User A in descending order of product value.
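Algorithm 1 itself is not reproduced in this section, but its effect, ranking the products by value and keeping the first N, can be sketched as follows using User A's product values from Table 6. The resulting order matches Table 7.

```python
# Product values rho^(alpha) for User A (Table 6).
product_values = {
    1: 0.448, 2: 0.553, 3: 0.465, 4: 0.403, 5: 0.447, 6: 0.459,
    7: 0.507, 8: 0.660, 9: 0.478, 10: 0.476, 11: 0.662, 12: 0.475,
    13: 0.427, 14: 0.419, 15: 0.603, 16: 0.431, 17: 0.535, 18: 0.380,
    19: 0.378, 20: 0.431, 21: 0.488, 22: 0.445, 23: 0.493, 24: 0.457,
    25: 0.643, 26: 0.462, 27: 0.676,
}

def top_n(values, n=10):
    """Return the IDs of the n highest-valued products, best first."""
    return [pid for pid, _ in
            sorted(values.items(), key=lambda kv: kv[1], reverse=True)[:n]]

top10 = top_n(product_values)  # [27, 11, 8, 25, 15, 2, 17, 7, 23, 21]
```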
Table 7
The top-10 laptops for User A

| Rank | ID (α) | Product value ρ^(α) |
|---|---|---|
| 1 | 27 | 0.676 |
| 2 | 11 | 0.662 |
| 3 | 8 | 0.660 |
| 4 | 25 | 0.643 |
| 5 | 15 | 0.603 |
| 6 | 2 | 0.553 |
| 7 | 17 | 0.535 |
| 8 | 7 | 0.507 |
| 9 | 23 | 0.493 |
| 10 | 21 | 0.488 |

3.7 Clustering products

The details of the HC method are described in Sect. 2.7. The HC method clusters similar laptop products into groups by measuring the dissimilarities between the product values calculated using Eq. (11). After the two closest clusters are merged, the dissimilarities are updated using Eq. (12). The dendrogram produced by HC for User A is displayed in Fig. 4a. Cutting the dendrogram at a height of 0.05 yields six clusters: {4, 18, 19}, {14, 13, 16, 20, 1, 5, 22, 3, 24, 6, 26}, {25, 27, 8, 11}, {7, 9, 10, 12, 21, 23}, {15} and {2, 17}. The clustering results are used for product recommendations. For example, if User A browses the webpage of Laptop 4, then Laptops 18 and 19, which are in the same cluster as Laptop 4, are recommended to the user. Similarly, if User A browses Laptop 2, Laptop 17 is recommended.
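Since Eqs. (11) and (12) are not reproduced in this section, the merge-until-threshold loop can be sketched with a plain average-linkage agglomerative procedure over the one-dimensional product values. This is an assumption for illustration; the linkage actually used by the paper's HC method may differ.

```python
def agglomerative_clusters(values, cut):
    """Average-linkage agglomerative clustering of 1-D product values.

    Starts with one cluster per product, repeatedly merges the two
    closest clusters, and stops once the smallest inter-cluster
    dissimilarity exceeds `cut` (the dendrogram cut height).
    """
    clusters = [[pid] for pid in values]

    def dist(a, b):  # average pairwise absolute difference
        return sum(abs(values[i] - values[j])
                   for i in a for j in b) / (len(a) * len(b))

    while len(clusters) > 1:
        d, x, y = min((dist(a, b), x, y)
                      for x, a in enumerate(clusters)
                      for y, b in enumerate(clusters) if x < y)
        if d > cut:
            break
        clusters[x] = clusters[x] + clusters[y]
        del clusters[y]
    return clusters
```

For example, with values {1: 0.10, 2: 0.11, 3: 0.50} and a cut of 0.05, products 1 and 2 merge while product 3 stays a singleton.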

4 Discussions

Comparisons and discussions are presented in this section to demonstrate the advantages of the proposed RS. To demonstrate the advantage of personalised recommendations, the recommendations for User B are presented in Sect. 4.1. To demonstrate the differences between CCR and AHP, the results produced by the AHP-enhanced method are presented in Sect. 4.2. The limitations of the proposed method are discussed in Sect. 4.3.

4.1 Personalization

User B completes the same questionnaire; the rating scores are presented in Tables 8, 9 and 10, whereas those of User A are given in Tables 3, 4 and 5. The product values for Users A and B are listed in Table 6. The system produces personalised top-10 laptop lists and laptop clusters with respect to the two users' preferences. Table 7 presents the top-10 laptops recommended for User A; Table 11 lists those recommended for User B. The two dendrograms in Fig. 4 show the laptop clustering results for Users A and B.
Table 8
Comparison matrices for 1st level laptop attributes (User B)

| B0 | δ1 | δ2 | δ3 | δ4 | δ5 | δ6 | δ7 | r_i |
|---|---|---|---|---|---|---|---|---|
| δ1 | 0 | 4 | 2 | 0 | 2 | 8 | 0 | 0.184 |
| δ2 | −4 | 0 | −2 | −3 | −2 | 4 | −4 | 0.115 |
| δ3 | −2 | 2 | 0 | −2 | 0 | 6 | −2 | 0.148 |
| δ4 | 0 | 3 | 2 | 0 | 2 | 8 | 0 | 0.181 |
| δ5 | −2 | 2 | 0 | −2 | 0 | 6 | −2 | 0.148 |
| δ6 | −8 | −4 | −6 | −8 | −6 | 0 | −8 | 0.041 |
| δ7 | 0 | 4 | 2 | 0 | 2 | 8 | 0 | 0.184 |

AI = 0.024 < 0.1
Table 9
Comparison matrices for 2nd and 3rd levels laptop attributes (User B)

| B3 | δ3,1 | δ3,2 | r_{3,i} |
|---|---|---|---|
| δ3,1 | 0 | 2 | 0.5625 |
| δ3,2 | −2 | 0 | 0.4375 |

AI = 0

| B3,2 | δ3,2,1 | δ3,2,2 | r_{3,2,i} |
|---|---|---|---|
| δ3,2,1 | 0 | 8 | 0.75 |
| δ3,2,2 | −8 | 0 | 0.25 |

AI = 0

| B4 | δ4,1 | δ4,2 | r_{4,i} |
|---|---|---|---|
| δ4,1 | 0 | 6 | 0.6875 |
| δ4,2 | −6 | 0 | 0.3125 |

AI = 0

| B5 | δ5,1 | δ5,2 | r_{5,i} |
|---|---|---|---|
| δ5,1 | 0 | −8 | 0.25 |
| δ5,2 | 8 | 0 | 0.75 |

AI = 0

| B5,1 | δ5,1,1 | δ5,1,2 | r_{5,1,i} |
|---|---|---|---|
| δ5,1,1 | 0 | −4 | 0.375 |
| δ5,1,2 | 4 | 0 | 0.625 |

AI = 0

| B6 | δ6,1 | δ6,2 | r_{6,i} |
|---|---|---|---|
| δ6,1 | 0 | −2 | 0.4375 |
| δ6,2 | 2 | 0 | 0.5625 |

AI = 0
Table 10
Comparison matrices for the nominal attributes of L2, L5 and L7 (User B)

| B2 | Linux | OS X | Windows 7 | Windows 8 | Preference value |
|---|---|---|---|---|---|
| Linux | 0 | −8 | −1 | −5 | 0.141 |
| OS X | 8 | 0 | 7 | 3 | 0.391 |
| Windows 7 | 1 | −7 | 0 | −4 | 0.172 |
| Windows 8 | 5 | −3 | 4 | 0 | 0.297 |

AI = 0

| B6 | Alienware | Apple | Dell | Microsoft | HP | Preference value |
|---|---|---|---|---|---|---|
| Alienware | 0 | 0 | 4 | 8 | 6 | 0.290 |
| Apple | 0 | 0 | 4 | 7 | 6 | 0.285 |
| Dell | −4 | −4 | 0 | 4 | 3 | 0.195 |
| Microsoft | −8 | −7 | −4 | 0 | −2 | 0.095 |
| HP | −6 | −6 | −3 | 2 | 0 | 0.135 |

AI = 0.059

| B7 | Acer | ASUS | Lenovo | Samsung | Preference value |
|---|---|---|---|---|---|
| Acer | 0 | −6 | −1 | 0 | 0.195 |
| ASUS | 6 | 0 | 5 | 6 | 0.383 |
| Lenovo | 1 | −5 | 0 | 1 | 0.227 |
| Samsung | 0 | −6 | −1 | 0 | 0.195 |

AI = 0
Table 11
The top-10 laptops for User B

| Rank | ID (α) | Product value ρ^(α) |
|---|---|---|
| 1 | 27 | 0.680 |
| 2 | 8 | 0.661 |
| 3 | 25 | 0.658 |
| 4 | 11 | 0.609 |
| 5 | 15 | 0.548 |
| 6 | 26 | 0.512 |
| 7 | 9 | 0.507 |
| 8 | 7 | 0.503 |
| 9 | 2 | 0.479 |
| 10 | 5 | 0.443 |
Comparing the preferences indicated in Tables 3 and 8, both users require a laptop with a high-speed CPU, large storage, and acceptable graphics. Three differences between the preferences of the two users can be summarised by comparing Tables 3, 4, 5 with Tables 8, 9, 10. Firstly, User A is not very price sensitive, whereas for User B, the price is considerably more important. Secondly, User A requires a portable laptop that is light and has a long battery life; User B hardly considers portability. Thirdly, User A has no strong preference for the brand, whereas User B strongly prefers laptops produced by US companies, especially Apple and Alienware.
From the laptop configurations presented in Table 15 and the two top-10 lists generated for Users A and B in Tables 7 and 11, respectively, it can be concluded that the laptops in both top-10 lists meet the common requirements (a high-speed CPU, large storage and acceptable graphics) of the two users. Three observations can be made on how the users' preferences shape the top-10 lists. Firstly, the best laptop for Users A and B is the same: Laptop 27. This laptop has almost all of the best configurations, yet is the most expensive; its product value for User B is lower than for User A mainly because User B is more price sensitive. Secondly, two portable laptops, Laptops 21 and 23, are recommended to User A even though their other configurations are not attractive, mainly because User A prefers portable laptops. Thirdly, as User B is loyal to the brands Apple and Alienware, all the laptops of these two brands appear in User B's top-10 list. The laptop recommendations provided for the two users match their requirements, and it can be concluded that the proposed CCEHC method can provide personalised recommendations with respect to user preferences.

4.2 Comparisons between CCR and AHP

CCR is based on the cognitive network process (CNP) (Yuen 2009, 2014a). The CNP was proposed as an alternative to AHP to resolve the rating scale problem of AHP: the numerical definition of AHP's paired ratio scale does not appropriately represent human intuitive judgement of paired differences, so CNP uses a paired interval scale instead of a paired ratio scale. Detailed comparisons between CNP and AHP can be found in Yuen (2009, 2014a).
This study uses the original version of AHP proposed in Saaty (1980) for comparison. To produce the AHP results, the CCR rating scales are transformed into AHP scales; the resulting method is called AHP Enhanced Hierarchical Clustering (AHPEHC). The transformation of the rating scale between AHP and CCR is given in Yuen (2009, 2014a). Table 12 presents the transformed rating matrix and weights corresponding to the ratings listed in Table 3. The product values and clustering results of the laptops computed using AHP are shown in Table 13 and Fig. 5, respectively.
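Given a transformed AHP ratio matrix such as the one in Table 12, priority weights can be derived with the row geometric-mean method, sketched below. This is one standard AHP prioritisation method, assumed here for illustration; the paper does not state which method it uses, and the principal-eigenvector method would give slightly different values. For the Table 12 matrix, the geometric-mean weights agree with the tabulated weights to within about 0.01.

```python
import math

def ahp_weights(matrix):
    """Normalised row geometric means of a positive reciprocal matrix."""
    gm = [math.prod(row) ** (1.0 / len(row)) for row in matrix]
    total = sum(gm)
    return [g / total for g in gm]

# User A's first-level AHP comparison matrix B0 (Table 12).
B0 = [
    [1,   2,   1/2, 8, 1/2, 2,   4],
    [1/2, 1,   1/4, 6, 1/3, 1,   3],
    [2,   4,   1,   8, 1,   4,   6],
    [1/8, 1/6, 1/8, 1, 1/8, 1/6, 1/4],
    [2,   3,   1,   8, 1,   3,   5],
    [1/2, 1,   1/4, 6, 1/3, 1,   3],
    [1/4, 1/3, 1/6, 4, 1/5, 1/3, 1],
]
weights = ahp_weights(B0)
```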
Table 12
AHP comparison matrices for 1st level laptop attributes (User A)

| B0 | δ1 | δ2 | δ3 | δ4 | δ5 | δ6 | δ7 | Weight |
|---|---|---|---|---|---|---|---|---|
| δ1 | 1 | 2 | 1/2 | 8 | 1/2 | 2 | 4 | 0.168 |
| δ2 | 1/2 | 1 | 1/4 | 6 | 1/3 | 1 | 3 | 0.100 |
| δ3 | 2 | 4 | 1 | 8 | 1 | 4 | 6 | 0.298 |
| δ4 | 1/8 | 1/6 | 1/8 | 1 | 1/8 | 1/6 | 1/4 | 0.022 |
| δ5 | 2 | 3 | 1 | 8 | 1 | 3 | 5 | 0.264 |
| δ6 | 1/2 | 1 | 1/4 | 6 | 1/3 | 1 | 3 | 0.100 |
| δ7 | 1/4 | 1/3 | 1/6 | 4 | 1/5 | 1/3 | 1 | 0.048 |
Table 13
Laptop product values by AHP (User A)

| ID (α) | Product value ρ^(α) |
|---|---|
| 1 | 0.328 |
| 2 | 0.511 |
| 3 | 0.373 |
| 4 | 0.268 |
| 5 | 0.378 |
| 6 | 0.391 |
| 7 | 0.379 |
| 8 | 0.597 |
| 9 | 0.340 |
| 10 | 0.412 |
| 11 | 0.734 |
| 12 | 0.336 |
| 13 | 0.342 |
| 14 | 0.265 |
| 15 | 0.605 |
| 16 | 0.376 |
| 17 | 0.517 |
| 18 | 0.240 |
| 19 | 0.226 |
| 20 | 0.282 |
| 21 | 0.366 |
| 22 | 0.280 |
| 23 | 0.384 |
| 24 | 0.296 |
| 25 | 0.682 |
| 26 | 0.398 |
| 27 | 0.758 |
The 27 laptop product values are displayed in Fig. 6. A notable difference is that the product values computed by CCR lie considerably closer together than those computed by AHP. The CCR results reflect that choosing among the products is a genuinely difficult decision, whereas the AHP results make the problem appear trivial. The reason for this difference is that the paired ratio scales applied in AHP typically exaggerate the perceived pairwise differences. It can be concluded that CCR outperforms AHP in reflecting the preferences of both the expert and the users.
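This difference in spread can be verified directly from the numbers in Tables 6 and 13: for User A, the standard deviation and range of the AHP product values are noticeably larger than those of the CCR values.

```python
from statistics import pstdev

# Product values for User A computed by CCR (Table 6) and AHP (Table 13).
ccr = [0.448, 0.553, 0.465, 0.403, 0.447, 0.459, 0.507, 0.660, 0.478,
       0.476, 0.662, 0.475, 0.427, 0.419, 0.603, 0.431, 0.535, 0.380,
       0.378, 0.431, 0.488, 0.445, 0.493, 0.457, 0.643, 0.462, 0.676]
ahp = [0.328, 0.511, 0.373, 0.268, 0.378, 0.391, 0.379, 0.597, 0.340,
       0.412, 0.734, 0.336, 0.342, 0.265, 0.605, 0.376, 0.517, 0.240,
       0.226, 0.282, 0.366, 0.280, 0.384, 0.296, 0.682, 0.398, 0.758]

spread_ccr = pstdev(ccr)  # population standard deviation of CCR values
spread_ahp = pstdev(ahp)  # the AHP values are spread more widely
```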

4.3 Limitations

Regarding the limitations, as the proposed approach is designed for recommending newly launched products, the datasets contain only the latest products (assuming that a consumer is unlikely to buy an obsolete product). Because obsolete products are excluded, the datasets should not be excessively large. The proposed method is not designed for processing large-scale data, so its capability for large datasets is limited; however, this is rarely a problem, as large numbers of new products are uncommon. The scope of the proposed RS is not to address the problems solved by content-based and collaborative filtering RSs; conversely, content-based and collaborative filtering RSs are not designed to address the research problem solved by the proposed approach. The clustering validity of the proposed method is not discussed, as no ground-truth class labels are available to verify the results. Internal clustering criteria, such as the Davies–Bouldin index (Davies and Bouldin 1979) and the Silhouette coefficient (Rousseeuw 1987), are normally used as references, although they do not necessarily reflect real validity.
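As an illustration of how such an internal criterion works, the silhouette coefficient for one-dimensional product values can be computed without any library support. This is a generic sketch of the criterion, not the paper's evaluation code.

```python
def silhouette(values, clusters):
    """Mean silhouette coefficient for 1-D product values.

    values:   dict mapping product ID -> product value
    clusters: list of clusters, each a list of product IDs
    """
    def mean_dist(i, ids):
        others = [j for j in ids if j != i]
        return sum(abs(values[i] - values[j]) for j in others) / len(others)

    scores = []
    for c in clusters:
        for i in c:
            if len(c) == 1:
                scores.append(0.0)  # common convention for singletons
                continue
            a = mean_dist(i, c)                           # cohesion
            b = min(mean_dist(i, o) for o in clusters     # separation
                    if o is not c)
            scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)
```

Well-separated clusters score close to 1; for example, two tight groups {0.0, 0.1} and {1.0, 1.1} give a mean silhouette of about 0.90.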

5 Workstation recommendation with open dataset

The proposed CCEHC method can be applied to different kinds of RSs. A workstation RS is developed to demonstrate the usability of CCEHC. As a special type of laptop, a workstation is designed for technical, scientific, and other professional purposes. In general, workstations are more expensive than ordinary laptops; thus, users typically spend more time selecting a suitable one. An RS built with CCEHC using expert opinions and user preferences could therefore be helpful for workstation recommendation.
An open dataset of the characteristics and prices of laptop models (Kaggle 2018) is used for the workstation RS. The dataset contains 29 items related to workstations. The characteristics and prices of the workstations obtained from the original dataset can be summarised as 13 attributes organised in a two-level attribute tree, as displayed in Fig. 7. The POMs are presented in Table 16 in the Appendix. The weights of the seven first-level attributes (B0) and of the three second-level attributes, Screen (B2), Processor (B3), and Memory (B4), are evaluated by CCR. In addition, the nominal attributes Company (L1), Screen Type (L3), Hard Disk Type (L8) and OS (L10) are also evaluated by CCR.
To demonstrate the usability of the workstation RS, two users, Users C and D, use the CCEHC system to determine which workstation fits their purposes. The comparison matrices of Users C and D are presented in Table 16 in the Appendix, and the resulting weights are indicated in the attribute trees in Fig. 7. To compare the results of CCR and AHP, both CCEHC and AHPEHC are used to build workstation RSs. The recommendations produced by the CCEHC and AHPEHC RSs for both users are displayed as dendrograms with clusters in Fig. 8, and the top-10 lists are presented in Table 14.
Table 14
The top-10 workstations for each user using CCEHC and AHPEHC

| Rank | User C, CCEHC: ID (α) | PV (ρ^(α)) | User C, AHPEHC: ID (α) | PV (ρ^(α)) | User D, CCEHC: ID (α) | PV (ρ^(α)) | User D, AHPEHC: ID (α) | PV (ρ^(α)) |
|---|---|---|---|---|---|---|---|---|
| 1 | 1 | 0.860 | 6 | 0.896 | 1 | 0.852 | 10 | 0.874 |
| 2 | 6 | 0.850 | 13 | 0.895 | 6 | 0.837 | 2 | 0.874 |
| 3 | 13 | 0.829 | 10 | 0.892 | 10 | 0.818 | 6 | 0.873 |
| 4 | 10 | 0.829 | 12 | 0.871 | 13 | 0.812 | 1 | 0.872 |
| 5 | 9 | 0.819 | 5 | 0.863 | 9 | 0.810 | 13 | 0.870 |
| 6 | 12 | 0.815 | 15 | 0.862 | 15 | 0.805 | 15 | 0.862 |
| 7 | 15 | 0.813 | 2 | 0.854 | 12 | 0.795 | 12 | 0.856 |
| 8 | 23 | 0.810 | 1 | 0.850 | 5 | 0.794 | 5 | 0.854 |
| 9 | 5 | 0.807 | 19 | 0.815 | 19 | 0.786 | 7 | 0.854 |
| 10 | 19 | 0.797 | 8 | 0.804 | 7 | 0.784 | 18 | 0.826 |
Reading the preferences of the two users in Table 16 and Fig. 7 shows that the preferences of Users C and D are not significantly different. For example, both users feel that the memory and OS are important, and that the price and weight are less important. Their preferences for the second-level Memory attributes are also similar: a larger RAM and a better type of hard disk (such as a solid-state disk) are more important, whereas a large hard disk capacity is less essential. Their preferences for the company and processor differ: User C has specific preferences regarding the workstation's manufacturer and feels the CPU and GPU are equally important, whereas User D is not overly concerned with the manufacturer and feels the CPU is more important than the GPU. Comparing the four top-10 lists produced by the two RSs, the RS applying CCEHC produces similar recommendations for the two users, whereas the RS applying AHPEHC does not; moreover, the recommendations produced by the two RSs differ for each user.
For the two users with similar workstation preferences, the RS applying CCEHC provides similar recommendations, whereas the RS applying AHPEHC provides considerably different results. This demonstrates that CCEHC reflects user preferences better than AHPEHC. The reason for the different results of CCR and AHP lies in their different mathematical representations of human opinions. As mentioned in Sect. 4.2, the paired ratio scales applied in AHP typically exaggerate the perceived pairwise differences; hence, a marginal difference in user preferences can lead to considerably different results. The workstation RS application demonstrates that the CCEHC method can produce reasonable personalised recommendations for users.

6 Conclusions

RSs help consumers make choices among numerous products. To address the limitations of current AHC methods applied to RSs, this paper proposes the CCEHC approach for providing personalised product recommendations. CCEHC consists of two major parts: CCR and hierarchical clustering. CCR is used to elicit user preferences, which are then used to weigh the multi-level product attributes and quantify the nominal attribute values. The product values are calculated from the attribute weights and the normalised numerical attribute values. Hierarchical clustering groups similar products according to their product values, and recommendations are produced from the product values and clustering results. The applications of a laptop RS, whose dataset was collected by this research, and a workstation RS with an open dataset demonstrate the validity and applicability of the proposed method. In RS applications, CCEHC can provide customers with a top-10 product list and similar-product recommendations based on the preferences they provide.
The CCEHC method can be considered as an expert system that serves the recommendation function. As CCR can be used for expert judgments and user preferences, product data with human input can be processed by the clustering method and recommendations can be generated. The experimental results demonstrated that the proposed CCEHC method can provide personalised recommendations based on different user preferences. CCR outperformed AHP in reflecting the preferences of both expert(s) and users.
There are several possible paths for future work based on this research. Firstly, other clustering methods can be considered. Secondly, the interfaces for user input and recommendation output could be improved for a better user experience. Thirdly, approaches to handling missing values, in both the user input data and the product data, could be further investigated. Fourthly, the proposed method could be improved to process large-scale data. Finally, to extend the application areas, the proposed CCEHC method could be applied to numerous other product recommendation applications such as movies, music, books, cars, and smartphones.

Acknowledgements

The research work reported in this paper is partially supported by Research Grants from Shanghai Municipal Science and Technology Major Project (Project Number 2021SHZDZX0103) and National Natural Science Foundation of China (Project Number 61503306).

Declarations

Conflict of interest

The authors declare they have no conflicts of interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendix

See Tables 15, 16 and 17.
Table 15
Raw matrix D of 27 laptops

| ID | Laptop model | L1 | L2 | L3 | L4 | L5 | L6 | L7 | L8 | L9 | L10 | L11 | L12 | L13 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | Lenovo Yoga3 14-IFI | 3367 | Win8 | 4 | 1 | 256 | 0 | Lenovo | 14 | 2,073,600 | 2385 | 1.6 | 6 | 7 |
| 2 | Lenovo Y430p AT-ISE | 6830 | Win8 | 8 | 0 | 1000 | 0 | Lenovo | 14 | 1,049,088 | 4385 | 2.5 | 5 | 6 |
| 3 | Lenovo ThinkPad E540 20C60019CD | 3882 | Linux | 4 | 0 | 1000 | 0 | Lenovo | 15.6 | 1,049,088 | 1848 | 2.44 | 6 | 4 |
| 4 | Dell XPS 11 (XPS11D-1508T) | 2039 | Win8 | 4 | 1 | 256 | Dell | 0 | 11.6 | 3,686,400 | 638 | 1.13 | 6 | 8 |
| 5 | Dell Inspiron 15 (INS15UD-1748S) | 3807 | Win8 | 8 | 0 | 1000 | Dell | 0 | 15.6 | 1,049,088 | 1705 | 2.3 | 4 | 5 |
| 6 | Dell Inspiron 15 7000 (Ins15BD-1748) | 3807 | Win8 | 8 | 0 | 1000 | Dell | 0 | 15.6 | 1,049,088 | 1857 | 2.11 | 7 | 7 |
| 7 | MacBook 256 GB | 2589 | OS | 8 | 1 | 256 | Apple | 0 | 12 | 3,317,760 | 658 | 0.92 | 9 | 9 |
| 8 | MacBook Pro 15’ | 6990 | OS | 16 | 1 | 512 | Apple | 0 | 15.4 | 5,184,000 | 2543 | 2.02 | 9 | 17 |
| 9 | MacBook Air (MJVE2CH/A) | 3393 | OS | 4 | 1 | 128 | Apple | 0 | 13.3 | 1,296,000 | 1333 | 1.35 | 9 | 7 |
| 10 | ASUS A550JK4200 | 4361 | Win8 | 4 | 0 | 1000 | 0 | ASUS | 15.6 | 2,073,600 | 4385 | 2.35 | 4 | 5 |
| 11 | ASUS GFX71JY4710 | 6980 | Win8 | 16 | 0.5 | 1256 | 0 | ASUS | 17.3 | 2,073,600 | 12,632 | 4.8 | 3 | 19 |
| 12 | ASUS U305FA5Y71 (8 GB/256 GB) | 2503 | Win8 | 8 | 4 | 256 | 0 | ASUS | 13.3 | 2,073,600 | 658 | 1.2 | 10 | 6 |
| 13 | Acer VN7-591G-56BD | 3367 | Win8 | 4 | 0 | 500 | 0 | Acer | 15.6 | 2,073,600 | 4385 | 2.4 | 4 | 5 |
| 14 | Acer E1-470G-33212G50Dnkk | 2229 | Linux | 2 | 0 | 500 | 0 | Acer | 14 | 1,049,088 | 1213 | 2.1 | 4 | 2 |
| 15 | Acer VN7-791G-78KL | 7060 | Win8 | 8 | 0.5 | 1064 | 0 | Acer | 17.3 | 2,073,600 | 9840 | 3 | 3 | 8 |
| 16 | HP Envy 15-k222tx | 3367 | Win8 | 4 | 0 | 1000 | HP | 0 | 15.6 | 1,049,088 | 4385 | 2.34 | 4 | 5 |
| 17 | HP ProBook 440 G2 (J7W06PA) | 3420 | Win7 | 8 | 0 | 1500 | HP | 0 | 14 | 1,440,000 | 1784 | 1.83 | 9 | 6 |
| 18 | HP Pavilion 11-h112tu × 2 (G0A07PA) | 2071 | Win8 | 4 | 1 | 128 | HP | 0 | 11.6 | 1,049,088 | 638 | 1.49 | 6 | 5 |
| 19 | Samsung 910S3G-K04 | 1375 | Win8 | 4 | 1 | 128 | 0 | Samsung | 13.3 | 1,049,088 | 638 | 1.44 | 5 | 4 |
| 20 | Samsung 930X2K-K01 | 2492 | Win8 | 4 | 1 | 128 | 0 | Samsung | 12.2 | 4,096,000 | 658 | 0.95 | 7 | 8 |
| 21 | Samsung 900X3K-K01 | 3807 | Win8 | 8 | 1 | 256 | 0 | Samsung | 13.3 | 5,760,000 | 968 | 1.07 | 6 | 10 |
| 22 | Surface Pro 3 (i3/64 GB) | 1675 | Win8 | 4 | 1 | 64 | Microsoft | 0 | 12 | 3,110,400 | 638 | 0.8 | 9 | 4 |
| 23 | Surface Pro 3 (i7/512 GB/Profession) | 3249 | Win8 | 8 | 1 | 512 | Microsoft | 0 | 12 | 3,110,400 | 1033 | 0.8 | 9 | 12 |
| 24 | Surface 3 (4 GB/128 GB) | 2320 | Win8 | 4 | 1 | 128 | Microsoft | 0 | 10.8 | 2,457,600 | 638 | 0.887 | 10 | 4 |
| 25 | Alienware 15 (ALW15ED-1718) | 6980 | Win8 | 16 | 0.5 | 1128 | Alienware | 0 | 15.6 | 2,073,600 | 9809 | 3.207 | 4 | 15 |
| 26 | Alienware 13 (ALW13ED-2708) | 3807 | Win8 | 8 | 1 | 384 | Alienware | 0 | 13.3 | 2,073,600 | 5249 | 2.058 | 3 | 13 |
| 27 | Alienware 17 (ALW17ED-2728) | 7060 | Win8 | 16 | 0.5 | 1512 | Alienware | 0 | 17.3 | 2,073,600 | 12,632 | 3.78 | 3 | 21 |
Table 16
Comparison matrices of User C and User D (for attribute weights and nominal attribute values)

User C:

| B0 | δ1 | δ2 | δ3 | δ4 | δ5 | δ6 | δ7 |
|---|---|---|---|---|---|---|---|
| δ1 | 0 | 3 | 2 | −1 | −2 | 1 | 4 |
| δ2 | −3 | 0 | −1 | −3 | −4 | −2 | 1 |
| δ3 | −2 | 1 | 0 | −3 | −3 | −1 | 2 |
| δ4 | 1 | 3 | 3 | 0 | 1 | 2 | 5 |
| δ5 | 2 | 4 | 3 | −1 | 0 | 2 | 5 |
| δ6 | −1 | 2 | 1 | −2 | −2 | 0 | 3 |
| δ7 | −4 | −1 | −2 | −5 | −5 | −3 | 0 |

AI = 0.057

| B2 | δ2,1 | δ2,2 | δ2,3 |
|---|---|---|---|
| δ2,1 | 0 | 1 | −3 |
| δ2,2 | −1 | 0 | −3 |
| δ2,3 | 3 | 3 | 0 |

AI = 0.048

| B4 | δ4,1 | δ4,2 | δ4,3 |
|---|---|---|---|
| δ4,1 | 0 | −2 | 1 |
| δ4,2 | 2 | 0 | 3 |
| δ4,3 | −1 | −3 | 0 |

AI = 0

| L3 | v1 | v2 | v3 | v4 |
|---|---|---|---|---|
| v1 | 0 | 2 | 1 | −1 |
| v2 | −2 | 0 | −1 | −2 |
| v3 | −1 | 1 | 0 | −3 |
| v4 | 1 | 2 | 3 | 0 |

AI = 0.077

| L8 | v1 | v2 | v3 | v4 |
|---|---|---|---|---|
| v1 | 0 | −1 | 2 | 1 |
| v2 | 1 | 0 | 3 | 2 |
| v3 | −2 | −3 | 0 | −1 |
| v4 | −1 | −2 | 1 | 0 |

AI = 0

| L1 | v1 | v2 | v3 |
|---|---|---|---|
| v1 | 0 | 2 | 1 |
| v2 | −2 | 0 | 1 |
| v3 | −1 | −1 | 0 |

AI = 0.096

| B3 | δ3,1 | δ3,2 |
|---|---|---|
| δ3,1 | 0 | 0 |
| δ3,2 | 0 | 0 |

AI = 0

| L10 | v1 | v2 |
|---|---|---|
| v1 | 0 | 4 |
| v2 | −4 | 0 |

AI = 0

User D:

| B0 | δ1 | δ2 | δ3 | δ4 | δ5 | δ6 | δ7 |
|---|---|---|---|---|---|---|---|
| δ1 | 0 | −1 | −4 | −2 | −2 | 1 | 0 |
| δ2 | 1 | 0 | −3 | −1 | −2 | 2 | 1 |
| δ3 | 4 | 3 | 0 | 2 | 2 | 4 | 3 |
| δ4 | 2 | 1 | −2 | 0 | 0 | 3 | 2 |
| δ5 | 2 | 2 | −2 | 0 | 0 | 3 | 2 |
| δ6 | −1 | −2 | −4 | −3 | −3 | 0 | −1 |
| δ7 | 0 | −1 | −3 | −2 | −2 | 1 | 0 |

AI = 0.050

| B2 | δ2,1 | δ2,2 | δ2,3 |
|---|---|---|---|
| δ2,1 | 0 | 3 | −1 |
| δ2,2 | −3 | 0 | −4 |
| δ2,3 | 1 | 4 | 0 |

AI = 0

| B4 | δ4,1 | δ4,2 | δ4,3 |
|---|---|---|---|
| δ4,1 | 0 | −3 | 3 |
| δ4,2 | 3 | 0 | 5 |
| δ4,3 | −3 | −5 | 0 |

AI = 0.048

| L3 | v1 | v2 | v3 | v4 |
|---|---|---|---|---|
| v1 | 0 | 3 | 2 | −2 |
| v2 | 2 | 0 | −1 | −5 |
| v3 | 1 | −1 | 0 | −4 |
| v4 | −1 | −3 | −2 | 0 |

AI = 0

| L8 | v1 | v2 | v3 | v4 |
|---|---|---|---|---|
| v1 | 0 | −1 | 4 | 1 |
| v2 | 1 | 0 | 5 | 2 |
| v3 | −4 | −5 | 0 | −4 |
| v4 | −1 | −2 | 4 | 0 |

AI = 0.042

| L1 | v1 | v2 | v3 |
|---|---|---|---|
| v1 | 0 | 0 | 2 |
| v2 | 0 | 0 | 2 |
| v3 | −2 | −2 | 0 |

AI = 0

| B3 | δ3,1 | δ3,2 |
|---|---|---|
| δ3,1 | 0 | 5 |
| δ3,2 | −5 | 0 |

AI = 0

| L10 | v1 | v2 |
|---|---|---|
| v1 | 0 | 0 |
| v2 | 0 | 0 |

AI = 0
Table 17
Normalised data matrix D′ of 27 laptops for User A

| ID | L1 | L2 | L3 | L4 | L5 | L6 | L7 | L8 | L9 | L10 | L11 | L12 | L13 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 0.477 | 0.692 | 0.250 | 1.000 | 0.169 | 0.000 | 1.000 | 0.809 | 0.360 | 0.189 | 0.500 | 0.600 | 0.286 |
| 2 | 0.967 | 0.692 | 0.500 | 0.000 | 0.661 | 0.000 | 1.000 | 0.809 | 0.182 | 0.347 | 0.320 | 0.500 | 0.333 |
| 3 | 0.550 | 0.692 | 0.250 | 0.000 | 0.661 | 0.000 | 1.000 | 0.902 | 0.182 | 0.146 | 0.328 | 0.600 | 0.500 |
| 4 | 0.289 | 0.692 | 0.250 | 1.000 | 0.169 | 0.672 | 0.000 | 0.671 | 0.640 | 0.051 | 0.708 | 0.600 | 0.250 |
| 5 | 0.539 | 0.692 | 0.500 | 0.000 | 0.661 | 0.672 | 0.000 | 0.902 | 0.182 | 0.135 | 0.348 | 0.400 | 0.400 |
| 6 | 0.539 | 0.692 | 0.500 | 0.000 | 0.661 | 0.672 | 0.000 | 0.902 | 0.182 | 0.147 | 0.379 | 0.700 | 0.286 |
| 7 | 0.367 | 0.897 | 0.500 | 1.000 | 0.169 | 0.983 | 0.000 | 0.694 | 0.576 | 0.052 | 0.870 | 0.900 | 0.222 |
| 8 | 0.990 | 0.897 | 1.000 | 1.000 | 0.339 | 0.983 | 0.000 | 0.890 | 0.900 | 0.201 | 0.396 | 0.900 | 0.118 |
| 9 | 0.481 | 0.897 | 0.250 | 1.000 | 0.085 | 0.983 | 0.000 | 0.769 | 0.225 | 0.106 | 0.593 | 0.900 | 0.286 |
| 10 | 0.618 | 0.692 | 0.250 | 0.000 | 0.661 | 0.000 | 0.829 | 0.902 | 0.360 | 0.347 | 0.340 | 0.400 | 0.400 |
| 11 | 0.989 | 0.692 | 1.000 | 0.500 | 0.831 | 0.000 | 0.829 | 1.000 | 0.360 | 1.000 | 0.167 | 0.300 | 0.105 |
| 12 | 0.355 | 0.692 | 0.500 | 1.000 | 0.169 | 0.000 | 0.829 | 0.769 | 0.360 | 0.052 | 0.667 | 1.000 | 0.333 |
| 13 | 0.477 | 0.692 | 0.250 | 0.000 | 0.331 | 0.000 | 0.732 | 0.902 | 0.360 | 0.347 | 0.333 | 0.400 | 0.400 |
| 14 | 0.316 | 0.692 | 0.125 | 0.000 | 0.331 | 0.000 | 0.732 | 0.809 | 0.182 | 0.096 | 0.381 | 0.400 | 1.000 |
| 15 | 1.000 | 0.692 | 0.500 | 0.500 | 0.704 | 0.000 | 0.732 | 1.000 | 0.360 | 0.779 | 0.267 | 0.300 | 0.250 |
| 16 | 0.477 | 0.692 | 0.250 | 0.000 | 0.661 | 0.466 | 0.000 | 0.902 | 0.182 | 0.347 | 0.342 | 0.400 | 0.400 |
| 17 | 0.484 | 1.000 | 0.500 | 0.000 | 0.992 | 0.466 | 0.000 | 0.809 | 0.250 | 0.141 | 0.437 | 0.900 | 0.333 |
| 18 | 0.293 | 0.692 | 0.250 | 1.000 | 0.085 | 0.466 | 0.000 | 0.671 | 0.182 | 0.051 | 0.537 | 0.600 | 0.400 |
| 19 | 0.195 | 0.692 | 0.250 | 1.000 | 0.085 | 0.000 | 0.561 | 0.769 | 0.182 | 0.051 | 0.556 | 0.500 | 0.500 |
| 20 | 0.353 | 0.692 | 0.250 | 1.000 | 0.085 | 0.000 | 0.561 | 0.705 | 0.711 | 0.052 | 0.842 | 0.700 | 0.250 |
| 21 | 0.539 | 0.692 | 0.500 | 1.000 | 0.169 | 0.000 | 0.561 | 0.769 | 1.000 | 0.077 | 0.748 | 0.600 | 0.200 |
| 22 | 0.237 | 0.692 | 0.250 | 1.000 | 0.042 | 0.328 | 0.000 | 0.694 | 0.540 | 0.051 | 1.000 | 0.900 | 0.500 |
| 23 | 0.460 | 0.692 | 0.500 | 1.000 | 0.339 | 0.328 | 0.000 | 0.694 | 0.540 | 0.082 | 1.000 | 0.900 | 0.167 |
| 24 | 0.329 | 0.692 | 0.250 | 1.000 | 0.085 | 0.328 | 0.000 | 0.624 | 0.427 | 0.051 | 0.902 | 1.000 | 0.500 |
| 25 | 0.989 | 0.692 | 1.000 | 0.500 | 0.746 | 1.000 | 0.000 | 0.902 | 0.360 | 0.777 | 0.249 | 0.400 | 0.133 |
| 26 | 0.539 | 0.692 | 0.500 | 1.000 | 0.254 | 1.000 | 0.000 | 0.769 | 0.360 | 0.416 | 0.389 | 0.300 | 0.154 |
| 27 | 1.000 | 0.692 | 1.000 | 0.500 | 1.000 | 1.000 | 0.000 | 1.000 | 0.360 | 1.000 | 0.212 | 0.300 | 0.095 |
Literature
go back to reference Adomavicius G, Manouselis N, Kwon Y (2011) Multi-Criteria Recommender Systems. In: Ricci F, Rokach L, Shapira B, Kantor P (eds) Recommender Systems Handbook. Springer, Boston, MA, pp.769–803 CrossRef Adomavicius G, Manouselis N, Kwon Y (2011) Multi-Criteria Recommender Systems. In: Ricci F, Rokach L, Shapira B, Kantor P (eds) Recommender Systems Handbook. Springer, Boston, MA, pp.769–803 CrossRef
go back to reference Aggarwal CC (2016) Recommender systems. Springer International Publishing, Berlin CrossRef Aggarwal CC (2016) Recommender systems. Springer International Publishing, Berlin CrossRef
go back to reference Davies DL, Bouldin DW (1979) A cluster separation measure. IEEE Trans Pattern Anal Mach Intell 2:224–227 CrossRef Davies DL, Bouldin DW (1979) A cluster separation measure. IEEE Trans Pattern Anal Mach Intell 2:224–227 CrossRef
go back to reference de Aguiar Neto FS, da Costa AF, Manzato MG, Campello RJ (2020) Pre-processing approaches for collaborative filtering based on hierarchical clustering. Inf Sci 534:172–191 CrossRef de Aguiar Neto FS, da Costa AF, Manzato MG, Campello RJ (2020) Pre-processing approaches for collaborative filtering based on hierarchical clustering. Inf Sci 534:172–191 CrossRef
go back to reference Frémal S, Lecron F (2017) Weighting strategies for a recommender system using item clustering based on genres. Expert Syst Appl 77:105–113 CrossRef Frémal S, Lecron F (2017) Weighting strategies for a recommender system using item clustering based on genres. Expert Syst Appl 77:105–113 CrossRef
go back to reference Guan C (2018) Evolutionary and swarm algorithm optimized density-based clustering and classification for data analytics. Ph.D. thesis, University of Liverpool Guan C (2018) Evolutionary and swarm algorithm optimized density-based clustering and classification for data analytics. Ph.D. thesis, University of Liverpool
go back to reference Guan C, Yuen KK (2015) Towards a hybrid approach of primitive cognitive network process and agglomerative hierarchical clustering for music recommendation. In: Heterogeneous networking for quality, reliability, security and robustness (QSHINE), 2015 11th international conference on. IEEE, pp 206–209 Guan C, Yuen KK (2015) Towards a hybrid approach of primitive cognitive network process and agglomerative hierarchical clustering for music recommendation. In: Heterogeneous networking for quality, reliability, security and robustness (QSHINE), 2015 11th international conference on. IEEE, pp 206–209
go back to reference Guan C, Yuen KK, Coenen F (2018) Particle swarm optimized density-based clustering and classification: supervised and unsupervised learning approaches. Swarm Evol Comput 44:876–896 CrossRef Guan C, Yuen KK, Coenen F (2018) Particle swarm optimized density-based clustering and classification: supervised and unsupervised learning approaches. Swarm Evol Comput 44:876–896 CrossRef
go back to reference Gupta U, Patil N (2015) Recommender system based on hierarchical clustering algorithm chameleon. In: 2015 IEEE international advance computing conference (IACC) Gupta U, Patil N (2015) Recommender system based on hierarchical clustering algorithm chameleon. In: 2015 IEEE international advance computing conference (IACC)
go back to reference Han J, Pei J, Kamber M (2011) Data mining: concepts and techniques. Elsevier, Amsterdam MATH Han J, Pei J, Kamber M (2011) Data mining: concepts and techniques. Elsevier, Amsterdam MATH
Haruna K, Akmar Ismail M, Suhendroyono S, Damiasih D, Pierewan AC, Chiroma H, Herawan T (2017) Context-aware recommender system: a review of recent developmental process and future research direction. Appl Sci 46:1211
Hinduja A, Pandey M (2018) An intuitionistic fuzzy AHP based multi criteria recommender system for life insurance products. Int J Adv Stud Comput Sci Eng 38(5):1–8
Karthikeyan R, Michael G, Kumaravel A (2017) A housing selection method for design, implementation & evaluation for web based recommended systems. Int J Pure Appl Math 42(3):23–28
Katarya R, Verma OP (2017) An effective web page recommender system with fuzzy c-mean clustering. Multim Tools Appl 34(2):21481–21496
Kotkov D, Wang S, Veijalainen J (2016) A survey of serendipity in recommender systems. Knowl Based Syst 111:180–192
Kunaver M, Požrl T (2017) Diversity in recommender systems—a survey. Knowl-Based Syst 12(4):154–162
Lika B, Kolomvatsos K, Hadjiefthymiades S (2014) Facing the cold start problem in recommender systems. Expert Syst Appl 41(4):2065–2073
Ma YY, Zhang HR, Xu YY, Gao L (2018) Three-way recommendation integrating global and local information. J Eng 16:1397–1401
Murtagh F (1983) A survey of recent advances in hierarchical clustering algorithms. Comput J 26(4):354–359
Nilashi MB (2017) A recommender system for tourism industry using cluster ensemble and prediction machine learning techniques. Comput Ind Eng 109:357–368
Pamučar D, Stević Ž, Zavadskas EK (2018) Integration of interval rough AHP and interval rough MABAC methods for evaluating university web pages. Appl Soft Comput 42(3):141–163
Rousseeuw PJ (1987) Silhouettes: a graphical aid to the interpretation and validation of cluster analysis. J Comput Appl Math 20:53–65
Saaty T (1980) Analytic hierarchy process: planning, priority setting, resource allocation. McGraw-Hill, New York
Selvi C, Sivasankar E (2019) A novel optimization algorithm for recommender system using modified fuzzy c-means clustering approach. Soft Comput 23:1901–1916
Song WS (2018) An environmentally conscious PSS recommendation method based on users’ vague ratings: a rough multi-criteria approach. J Clean Prod 26(2):1592–1606
van Capelleveen G, Amrit C, Yazan DM, Zijm H (2019) The recommender canvas: a model for developing and documenting recommender system design. Expert Syst Appl 22(3):97–117
Volkovs M, Yu G, Poutanen T (2017) Dropoutnet: addressing cold start in recommender systems. Adv Neural Inf Process Syst 40(3):4957–4966
Wang Y, Tseng MM (2013) Customized products recommendation based on probabilistic relevance model. J Intell Manuf 24(5):951–960
Yuen KK (2009) Cognitive network process with fuzzy soft computing technique in collective decision aiding. The Hong Kong Polytechnic University
Yuen KK (2012) Pairwise opposite matrix and its cognitive prioritization operators: comparisons with pairwise reciprocal matrix and analytic prioritization operators. J Oper Res Soc 63(3):322–338
Yuen KK (2014a) Fuzzy cognitive network process: comparisons with fuzzy analytic hierarchy process in new product development strategy. IEEE Trans Fuzzy Syst 22(3):597–610
Yuen KK (2014b) The primitive cognitive network process in healthcare and medical decision making: comparisons with the analytic hierarchy process. Appl Soft Comput 14:109–119
Zheng L, Li L, Hong W, Li T (2013) PENETRATE: personalized news recommendation using ensemble hierarchical clustering. Expert Syst Appl 6(3):2127–2136
Metadata
Title: The cognitive comparison enhanced hierarchical clustering
Authors: Chun Guan, Kevin Kam Fung Yuen
Publication date: 28-10-2021
Publisher: Springer International Publishing
Published in: Granular Computing, Issue 3/2022
Print ISSN: 2364-4966
Electronic ISSN: 2364-4974
DOI: https://doi.org/10.1007/s41066-021-00287-x