Introduction
-
The k-means method is an effective clustering approach; this paper extends it to the R-numbers environment for clustering decision information.
-
Simple weight-determination methods based on the majority principle do not account for the degree of uncertainty in the information, so this paper proposes a new method of determining sub-group weights that combines the entropy weight method with the majority principle.
-
An optimized minimum cost consensus model is proposed, using the HM operator to determine the mean value information.
-
HM operators offer greater flexibility in information aggregation, so this paper proposes HM operators with R-numbers for aggregating expert information.
-
The LOPCOW and CRADIS methods are simple and convenient; in this paper they are applied in the R-numbers environment to determine the attribute weights and the ranking of alternatives, respectively.
-
The proposed model is applied to the logistics path selection of batteries in hydrogen battery automobile companies.
Acronym | Description |
---|---|
ARAS | Additive ratio assessment |
CCSD | Coefficient of correlation and standard deviation |
CRADIS | Compromise ranking of alternatives from distance to ideal solution |
COPRAS | Complex proportional assessment |
DMs | Decision makers |
FUCOM | Full consistency method |
HM | Hamy mean |
LSMAGDM | Large-scale multi-attribute group decision making |
LOPCOW | Logarithmic percentage change-driven objective weight |
LOT | Level of togetherness |
MAGDM | Multi-attribute group decision-making |
MARCOS | Measurement of alternatives and ranking according to compromise solution |
MCC | Minimum cost consensus |
MEREC | Method based on the removal effects of criteria |
R-LSMAGDM | R-numbers large-scale multi-attribute group decision making |
RNWA | R-numbers weighted average |
RNWHM | R-numbers weighted Hamy mean |
TFN | Triangular fuzzy number |
TOPSIS | Technique for order of preference by similarity to ideal solution |
VIKOR | VlseKriterijumska Optimizacija I Kompromisno Resenje |
SCC | Spearman correlation coefficient |
Related work
R-numbers
Hamy mean operator
LOPCOW method
CRADIS method
Preliminaries
Proposed R-LSMAGDM model
Description of the R-LSMAGDM problems
-
The pool of experts is large, containing 20 or more experts.
-
Experts provide decision-making information in the form of R-numbers.
-
The attribute weights are unknown.
\({C_{1}}\) | \({C_{2}}\) | ... | \({C_{n}}\) | |
---|---|---|---|---|
\({A_1}\) | \({{\tilde{x}}}_{11}^f\) | \({{\tilde{x}}}_{12}^f\) | ... | \({{\tilde{x}}}_{1n}^f\) |
\({A_2}\) | \({{\tilde{x}}}_{21}^f\) | \({{\tilde{x}}}_{22}^f\) | ... | \({{\tilde{x}}}_{2n}^f\) |
\({A_3}\) | \({{\tilde{x}}}_{31}^f\) | \({{\tilde{x}}}_{32}^f\) | ... | \({{\tilde{x}}}_{3n}^f\) |
\(\vdots \) | \(\vdots \) | \(\vdots \) | \(\vdots \) | \(\vdots \) |
\({A_m}\) | \({{\tilde{x}}}_{m1}^f\) | \({{\tilde{x}}}_{m2}^f\) | ... | \({{\tilde{x}}}_{mn}^f\) |
Framework of the R-LSMAGDM model
Procedure of the proposed R-LSMAGDM
Construct the initial matrix
Sub-group management
Sub-group detection
-
Step 1. Draw a scatterplot of the distribution of expert information and determine the number of clusters k from that distribution. Here each expert is represented by the sample point \({e_f} = \frac{1}{{mn}}\sum \nolimits _{j = 1}^n \sum \nolimits _{i = 1}^m {COA\left( {{{\hat{x}}}_{ij}^f} \right) } \).
-
Step 2. Randomly select k samples from the expert information matrices as the initial clustering centers \(C{C^t} = \left( {CC_1^t,CC_2^t,\ldots ,CC_k^t} \right) \), where \(CC_l^t = {\left( {x_{ij}^{l,t}} \right) _{m \times n}}~\left( {l = 1,2,\ldots ,k,\;t = 0} \right) \).
-
Step 3. Calculate the distance of each expert to every cluster center by
$$\begin{aligned} d\left( {{{{{\hat{X}}}}^f},CC_l^t} \right) = \frac{1}{{mn}}\sum \limits _{j = 1}^n {\sum \limits _{i = 1}^m {d\left( {{{\hat{x}}}_{ij}^f,cc_{ij}^{l,t}} \right) } }. \end{aligned}$$
(17)
-
Step 4. The expert matrix \({{{\hat{X}}}^f}\) is assigned to the nearest cluster \(CC_p^t\), i.e. the one at minimum distance, where \(p = \arg \left( {{{\min }_{l \in \left( {1,2,\ldots ,k} \right) }}d\left( {{{{{\hat{X}}}}^f},CC_l^t} \right) } \right) \).
-
Step 5. Update the clustering centers based on the expert matrices in each cluster:
$$\begin{aligned} cc_{ij}^{l,t + 1} = { \oplus _{{{{{\hat{X}}}}^f} \in CC_l^t}}\frac{{{{\hat{x}}}_{ij}^f}}{{\left\| {CC_l^t} \right\| }}, \end{aligned}$$
(18)
where \(\left\| {CC_l^t} \right\| \) is the number of matrices in cluster \(CC_l^t\).
-
Step 6. Let \(\varepsilon \) be a predefined convergence threshold. If
$$\begin{aligned} \frac{1}{{rk}}\sum \limits _{f = 1}^r {\sum \limits _{l = 1}^k {\left| {d\left( {{{{{\hat{X}}}}^f},CC_l^{t + 1}} \right) - d\left( {{{{{\hat{X}}}}^f},CC_l^t} \right) } \right| } } \le \varepsilon , \end{aligned}$$
(19)
go to the next step; otherwise, let \(t = t + 1\) and go back to Step 3.
-
Step 7. Output the clusters.
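The sub-group detection steps above can be sketched in Python. This is a minimal crisp sketch under two simplifying assumptions: each expert is represented by a matrix of defuzzified (COA) values, the element distance in Eq. (17) is taken as the absolute difference, and the \(\oplus\)-based center update of Eq. (18) is simplified to an ordinary arithmetic mean; the function name `cluster_experts` is illustrative.

```python
import numpy as np

def cluster_experts(X, k, eps=1e-6, max_iter=100, seed=0):
    """Sketch of Steps 2-7 of the sub-group detection procedure.

    X: array of shape (r, m, n) -- r expert matrices of defuzzified
       (COA) values.  The distance of Eq. (17) is approximated by the
       mean element-wise absolute difference.
    Returns each expert's cluster label and the k cluster centers.
    """
    rng = np.random.default_rng(seed)
    r = X.shape[0]
    # Step 2: pick k expert matrices at random as initial centers.
    centers = X[rng.choice(r, size=k, replace=False)].copy()
    labels = np.zeros(r, dtype=int)
    for _ in range(max_iter):
        # Step 3: distance of every expert to every center (Eq. (17)).
        d = np.array([[np.mean(np.abs(x - c)) for c in centers] for x in X])
        # Step 4: assign each expert to the nearest center.
        labels = d.argmin(axis=1)
        # Step 5: recompute each center as the mean matrix of its cluster
        # (a crisp stand-in for the R-number aggregation of Eq. (18)).
        new_centers = np.array([
            X[labels == l].mean(axis=0) if np.any(labels == l) else centers[l]
            for l in range(k)
        ])
        # Step 6: stop when the centers barely move (Eq. (19) in spirit).
        if np.mean(np.abs(new_centers - centers)) <= eps:
            centers = new_centers
            break
        centers = new_centers
    # Step 7: output the clusters.
    return labels, centers
```

In the full model the crisp absolute difference would be replaced by the R-numbers distance measure and the mean by the RNWA-style aggregation.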
Sub-group aggregation
Sub-group weighting
Consensus reaching process
Aggregation process
Selection process
Attribute weighting
-
Step 1. Convert the elements of the aggregated expert decision matrix into clear values using the R-numbers defuzzification formula, Eq. (33):
$$\begin{aligned} {z_{ij}} = COA\left( {{x_{ij}}} \right) = \frac{1}{9}\left( \begin{array}{l} {x_{ij}}_{11} + {x_{ij}}_{12} + {x_{ij}}_{13} + {x_{ij}}_{21} + \\ {x_{ij}}_{22} + {x_{ij}}_{23} + {x_{ij}}_{31} + {x_{ij}}_{32} + {x_{ij}}_{33} \end{array} \right) \end{aligned}$$
(33)
-
Step 2. Normalize the matrix obtained in Step 1:
$$\begin{aligned} {z_{ij}} = \frac{{{z_{ij}} - {z_{\min }}}}{{{z_{\max }} - {z_{\min }}}},\quad j \in Benefit; \end{aligned}$$
(34)
$$\begin{aligned} {z_{ij}} = \frac{{{z_{\max }} - {z_{ij}}}}{{{z_{\max }} - {z_{\min }}}},\quad j \in Cost. \end{aligned}$$
(35)
-
Step 3. Calculate the percentage value corresponding to each attribute by Eq. (36):
$$\begin{aligned} P{V_j} = \left| {\ln \left( {\frac{{\sqrt{\frac{{\sum \nolimits _{i = 1}^m {z_{ij}^2} }}{m}} }}{{{\sigma _j}}}} \right) \cdot 100} \right| , \end{aligned}$$
(36)
where \({\sigma _j}\) and m denote the standard deviation of attribute j and the number of alternatives, respectively.
-
Step 4. Calculate the weight of each attribute by Eq. (37):
$$\begin{aligned} {\psi _j} = \frac{{P{V_j}}}{{\sum \nolimits _{j = 1}^n {P{V_j}} }}. \end{aligned}$$
(37)
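Steps 2–4 can be sketched as follows. The function name `lopcow_weights` is illustrative, and the sample standard deviation (`ddof=1`) is an assumption, since the text does not state which deviation convention Eq. (36) uses.

```python
import numpy as np

def lopcow_weights(Z, benefit):
    """LOPCOW attribute weights (Steps 2-4) from a defuzzified
    decision matrix Z of shape (m, n).  benefit[j] is True for a
    benefit attribute and False for a cost attribute."""
    Z = np.asarray(Z, dtype=float)
    m, n = Z.shape
    zmin, zmax = Z.min(axis=0), Z.max(axis=0)
    # Step 2: min-max normalization, Eqs. (34)-(35).
    N = np.where(benefit, (Z - zmin) / (zmax - zmin),
                          (zmax - Z) / (zmax - zmin))
    # Step 3: percentage values, Eq. (36); sigma is the sample
    # standard deviation here (an assumption).
    rms = np.sqrt((N ** 2).sum(axis=0) / m)
    sigma = N.std(axis=0, ddof=1)
    pv = np.abs(np.log(rms / sigma) * 100)
    # Step 4: normalize the percentage values to weights, Eq. (37).
    return pv / pv.sum()
```

The returned vector sums to one; larger percentage values (more information content per Eq. (36)) translate into larger attribute weights.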
CRADIS approach for ranking
-
Step 1. Calculate the weighting matrix by multiplying the aggregated decision data from the “Aggregation process” section element-wise by the attribute weights from the “Attribute weighting” section.
-
Step 5. Identify the ranked values of the alternatives by applying Eq. (44) to the utility values from Step 4:
$$\begin{aligned} R{A_i} = \frac{{U_i^ - + U_i^ + }}{2}. \end{aligned}$$
(44)
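The intermediate Steps 2–4 are not reproduced in this excerpt; the sketch below follows the standard CRADIS formulation (deviations from the ideal and anti-ideal values, then utility degrees relative to the optimal alternative), which may differ in detail from the paper's R-numbers variant. The function name `cradis_rank` is illustrative.

```python
import numpy as np

def cradis_rank(N, w):
    """Standard CRADIS sketch.  N: normalized decision matrix (m x n),
    w: attribute weight vector (n,).  Returns the ranked values RA_i."""
    V = N * w                        # Step 1: weighted matrix
    t_pos, t_neg = V.max(), V.min()  # ideal / anti-ideal values
    s_pos = (t_pos - V).sum(axis=1)  # deviations from the ideal
    s_neg = (V - t_neg).sum(axis=1)  # deviations from the anti-ideal
    # Deviations of the optimal alternative (column-wise best values).
    s0_pos = (t_pos - V.max(axis=0)).sum()
    s0_neg = (V.max(axis=0) - t_neg).sum()
    U_pos = s0_pos / s_pos           # utility w.r.t. the ideal
    U_neg = s_neg / s0_neg           # utility w.r.t. the anti-ideal
    return (U_pos + U_neg) / 2       # Eq. (44)
```

Whether a larger or smaller \(RA_i\) is preferred depends on how the deviations are defined; in the case-study tables of this paper, rank 1 corresponds to the smallest \(RA_i\).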
Case study
Linguistic terms | Triangular fuzzy numbers |
---|---|
Very low (VL) | (0, 0, 0.3) |
Low (L) | (0.1, 0.3, 0.5) |
Medium (M) | (0.3, 0.5, 0.7) |
High (H) | (0.5, 0.7, 0.9) |
Very high (VH) | (0.7, 0.99, 0.99) |
Linguistic terms | Triangular fuzzy numbers |
---|---|
Optimistic expert | |
Very low (VL) | (0, 0, 0.3) |
Low (L) | (0.1, 0.3, 0.5) |
Medium (M) | (0.3, 0.5, 0.7) |
High (H) | (0.5, 0.7, 0.9) |
Very high (VH) | (0.7, 0.99, 0.99) |
Pessimistic expert | |
Very low (VL) | (\(-0.3\), 0, 0) |
Low (L) | (\(-0.5\), \(-0.3\), \(-0.1\)) |
Medium (M) | (\(-0.7\), \(-0.5\), \(-0.3\)) |
High (H) | (\(-0.9\), \(-0.7\), \(-0.5\)) |
Very high (VH) | (\(-0.99\), \(-0.99\), \(-0.7\)) |
Application of the proposed method in the selection of hydrogen fuel cell logistics paths
Sub-group | Members |
---|---|
Cluster 1 | \(\left\{ {{e_3},{e_9},{e_{12}},{e_{15}},{e_{18}}} \right\} \) |
Cluster 2 | \(\left\{ {{e_1},{e_4},{e_7},{e_{10}},{e_{13}},{e_{16}},{e_{19}}} \right\} \) |
Cluster 3 | \(\left\{ {{e_2},{e_5},{e_6},{e_8},{e_{11}},{e_{14}},{e_{17}},{e_{20}}} \right\} \) |
-
Step 1. Use Eq. (33) to convert the elements of the expert decision aggregation matrix to clear values; the results are shown in Table 6.

Table 6 Clear value matrix

\({C_1}\) | \({C_2}\) | \({C_3}\) | \({C_4}\) | |
---|---|---|---|---|
\({A_1}\) | 0.436 | 0.389 | 0.389 | 0.4541 |
\({A_2}\) | 0.4531 | 0.3992 | 0.3992 | 0.4246 |
\({A_3}\) | 0.3757 | 0.4286 | 0.4286 | 0.4568 |
\({A_4}\) | 0.404 | 0.4025 | 0.4025 | 0.4258 |
\({A_5}\) | 0.3826 | 0.4342 | 0.4342 | 0.4554 |
-
Step 2. Use Eqs. (34)–(35) to normalize the clear value matrix; the results are shown in Table 7.

Table 7 Normalized clear value matrix

\({C_1}\) | \({C_2}\) | \({C_3}\) | \({C_4}\) | |
---|---|---|---|---|
\({A_1}\) | 0.7791 | 1 | 1 | 0.0839 |
\({A_2}\) | 1 | 0.7743 | 0.0894 | 1 |
\({A_3}\) | 0 | 0.1239 | 0 | 0 |
\({A_4}\) | 0.3656 | 0.7013 | 0.9771 | 0.9627 |
\({A_5}\) | 0.0891 | 0 | 0.2017 | 0.0435 |
-
Step 3. Use Eq. (36) to calculate the percentage value corresponding to each attribute, obtaining
$$\begin{aligned} P{V_1}=30.8215,\quad P{V_2}=12.3087,\quad P{V_3}=41.8336,\quad P{V_4}=44.9247. \end{aligned}$$
-
Step 4. Use Eq. (37) to compute the weight of each attribute, obtaining
$$\begin{aligned} {\psi _1}=0.2373,\quad {\psi _2}=0.0948,\quad {\psi _3}=0.3221,\quad {\psi _4}=0.3459. \end{aligned}$$
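The Step 4 weights follow directly from the Step 3 percentage values; a one-line check of Eq. (37):

```python
# Eq. (37): normalize the percentage values of Step 3 to weights.
pv = [30.8215, 12.3087, 41.8336, 44.9247]
weights = [v / sum(pv) for v in pv]
print([round(w, 4) for w in weights])
# → [0.2373, 0.0948, 0.3221, 0.3459]
```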
-
Step 1. Calculate the weighting matrix as shown in Table 21.
Deviation | Utility measures | Ranked values | Ranking | |
---|---|---|---|---|
Ideal (I) | | | |
\({A_1}\) | 0.0443 | 0.987 | 0.9886 | 4 |
\({A_2}\) | 0.0452 | 0.9692 | 0.9728 | 3 |
\({A_3}\) | 0.0788 | 0.5554 | 0.4789 | 1 |
\({A_4}\) | 0.0438 | 1 | 1 | 5 |
\({A_5}\) | 0.0741 | 0.5906 | 0.5735 | 2 |
Anti-ideal (AI) | | | |
\({A_1}\) | 0.0581 | 0.9902 | ||
\({A_2}\) | 0.0572 | 0.9763 | ||
\({A_3}\) | 0.0236 | 0.4025 | ||
\({A_4}\) | 0.0586 | 1 | ||
\({A_5}\) | 0.0326 | 0.5564 |
Methods | Sub-group weighting | Ranking | Optimal solution | ||
---|---|---|---|---|---|
\({\lambda _1}\) | \({\lambda _2}\) | \({\lambda _3}\) | |||
Proposed | 0.2319 | 0.3861 | 0.3820 | \({A_3} \succ {A_5} \succ {A_2} \succ {A_1} \succ {A_4}\) | \({A_3}\) |
Literature 1 [71] | 0.25 | 0.35 | 0.4 | \({A_5} \succ {A_4} \succ {A_1} \succ {A_3} \succ {A_2}\) | \({A_5}\) |
Literature 2 [34] | 0.2520 | 0.4094 | 0.3385 | \({A_5} \succ {A_4} \succ {A_1} \succ {A_3} \succ {A_2}\) | \({A_5}\) |
Comparative analysis
Comparative analysis of sub-group weighting
Comparative analysis of operators
By RNWHM operator | By RNWM operator | |||
---|---|---|---|---|
RA | Ranking | RA | Ranking | |
\({A_1}\) | 0.9886 | 4 | 0.7973 | 4 |
\({A_2}\) | 0.9728 | 3 | 1 | 5 |
\({A_3}\) | 0.4789 | 1 | 0.5245 | 1 |
\({A_4}\) | 1 | 5 | 0.6397 | 3 |
\({A_5}\) | 0.5735 | 2 | 0.5948 | 2 |
Comparative analysis of attribute weighting
Average weighting | Ranking | LOPCOW method | Ranking | |
---|---|---|---|---|
\({A_1}\) | 1 | 5 | 0.9886 | 4 |
\({A_2}\) | 0.8302 | 4 | 0.9728 | 3 |
\({A_3}\) | 0.5939 | 1 | 0.4789 | 1 |
\({A_4}\) | 0.8237 | 3 | 1 | 5 |
\({A_5}\) | 0.6778 | 2 | 0.5735 | 2 |
Comparative analysis of methods
-
Different fuzzy simple additive weighting (FSAW) methods were developed earlier [11] for multi-attribute decision-making problems, and Seiti and Hafezalkotob extended the FSAW method by proposing the R-SAW method [61]. The ranking results in Table 12 are obtained for the case study using the R-SAW method and the proposed method, respectively. As can be seen in Fig. 8, there is a strong correlation between R-SAW and the proposed method.
-
Seiti and Hafezalkotob [60] proposed the R-TOPSIS method. From Table 12 and Fig. 7, it can be seen that the results of the proposed method and the R-TOPSIS method are highly consistent. This is because both methods share the same ranking idea, i.e. closeness to the positive ideal solution and distance from the negative ideal solution. Since TOPSIS is one of the most classical MADM methods, this consistency also supports the reliability of the proposed method.
-
Mousavi et al. [43] proposed the R-VIKOR method. The results of applying the R-VIKOR method to the case study are shown in Table 12, and Fig. 7 shows that the proposed method and the R-VIKOR method have negative correlation coefficients. This is because R-VIKOR relies on maximizing group utility and minimizing individual regret, ideas entirely different from those of the proposed method, so different ranking results are obtained.
Methods | Ranking | Optimal solution |
---|---|---|
Proposed method | \({A_3} \succ {A_5} \succ {A_2} \succ {A_1} \succ {A_4}\) | \({A_3}\) |
R-SAW method [61] | \({A_2} \succ {A_3} \succ {A_5} \succ {A_1} \succ {A_4}\) | \({A_2}\) |
R-TOPSIS method [60] | \({A_3} \succ {A_5} \succ {A_4} \succ {A_2} \succ {A_1}\) | \({A_3}\) |
R-VIKOR method [43] | \({A_4} \succ {A_2} \succ {A_1} \succ {A_5} \succ {A_3}\) | \({A_4}\) |
R-MARICA method [10] | \({A_4} \succ {A_1} \succ {A_3} \succ {A_5} \succ {A_2}\) | \({A_4}\) |
Sensitivity analysis
-
When \(p=1\), the elements are combined one at a time, so the operator degenerates to a special case and the result reflects that state.
-
When \(p=3\), \(C_3^3 = 1\) and the RNWHM operator degenerates to a geometric weighted average operator, which is again a special case.
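The two boundary cases can be checked numerically with the crisp, unweighted Hamy mean; the RNWHM operator additionally carries R-number arithmetic and weights, which this sketch omits, and the function name `hamy_mean` is illustrative.

```python
from itertools import combinations
from math import comb, prod

def hamy_mean(xs, p):
    """Crisp, unweighted Hamy mean of the values xs with parameter p:
    average over all p-element combinations of the p-th root of the
    product of each combination."""
    n = len(xs)
    return sum(prod(c) ** (1 / p) for c in combinations(xs, p)) / comb(n, p)

xs = [0.2, 0.5, 0.8]
# p = 1: one-element combinations -> arithmetic mean (~0.5).
print(hamy_mean(xs, 1))
# p = n = 3: C(3,3) = 1 single combination -> geometric mean,
# (0.2 * 0.5 * 0.8) ** (1/3).
print(hamy_mean(xs, 3))
```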
\(p=1\) | \(p=2\) | \(p=3\) | ||||
---|---|---|---|---|---|---|
RA | Ranking | RA | Ranking | RA | Ranking | |
\({A_1}\) | 0.5726 | 3 | 0.9886 | 4 | 0.6493 | 4 |
\({A_2}\) | 1 | 5 | 0.9728 | 3 | 1 | 5 |
\({A_3}\) | 0.6128 | 4 | 0.4789 | 1 | 0.6182 | 3 |
\({A_4}\) | 0.5379 | 2 | 1 | 5 | 0.5143 | 1 |
\({A_5}\) | 0.4955 | 1 | 0.5735 | 2 | 0.549 | 2 |