
Open Access 22.04.2024 | Original Article

A large-scale multi-attribute group decision-making method with R-numbers and its application to hydrogen fuel cell logistics path selection

Authors: Rui Cheng, Jianping Fan, Meiqin Wu, Hamidreza Seiti

Published in: Complex & Intelligent Systems


Abstract

The large-scale multi-attribute group decision-making (LSMAGDM) problem has become a hot research topic in the field of decision science. In this paper, an R-numbers large-scale multi-attribute group decision-making (R-LSMAGDM) model is constructed based on the advantages of R-numbers in capturing risk. First, the most commonly used clustering method, k-means, is introduced to determine the sub-groups. Then, a new sub-group weight determination model is constructed by considering sub-group size and sub-group entropy. Next, an optimized consensus-reaching model is built by improving the calculation of the mean value. Then, the R-numbers weighted Hamy mean (RNWHM) operator is proposed to aggregate the sub-group information. In addition, the logarithmic percentage change-driven objective weighting (LOPCOW) method and the compromise ranking of alternatives from distance to ideal solution (CRADIS) method are used for attribute weight calculation and alternative ranking, respectively. Finally, the effectiveness of the model is verified by an application example of hydrogen fuel cell logistics path selection.
Notes

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Introduction

In light of the prevailing global energy challenges and the imperative of environmental conservation, nations are increasingly prioritizing the advancement of energy transition and the mitigation of greenhouse gas emissions. The pressing demand for sustainable energy has arisen from climate change and environmental issues caused by the excessive use of fossil fuels. Consequently, the pursuit of alternative energy sources with lower carbon emissions than conventional fossil fuels has become a widely shared global objective. Conventional fossil fuels, namely coal, oil, and natural gas, play a significant role in contemporary civilization [3]. However, they also pose notable environmental and climatic challenges. The combustion of fossil fuels emits substantial quantities of greenhouse gases, notably carbon dioxide, which accelerates global climate change and adversely affects ecosystems and human well-being. Consequently, there is a pressing imperative to transition towards alternative energy sources, with particular emphasis on low-carbon-emission solutions. Hydrogen fuel cell technology has garnered significant interest in this context [65, 76]. Hydrogen fuel cells electrochemically convert hydrogen and oxygen into electrical energy. Notably, the sole by-product of this process is water vapor, distinguishing hydrogen fuel cells as an environmentally friendly, zero-emission energy source. This implies that hydrogen fuel cells possess significant potential as a viable clean-energy substitute, aiding in the reduction of greenhouse gas emissions and the mitigation of climate change.
Moreover, as compared to traditional fuel cells, hydrogen fuel cells demonstrate enhanced energy efficiency and storage capacities, positioning them as a prominent component of future energy systems. Hydrogen fuel cell vehicles have gained significant attention in the market due to their utilization of hydrogen fuel cells [22, 39]. The transportation logistics pertaining to batteries is a critical concern for producers of vehicles. The matter of transportation logistics pertaining to hydrogen fuel cells encompasses the conveyance of these cells from their manufacturing location to the intended destination for utilization.
However, transportation logistics path selection for hydrogen fuel cells is a difficult task in which several factors, such as safety, economy, and environmental sustainability, need to be considered. Multi-attribute group decision-making (MAGDM) [79, 80] is a well-established technique and a commonly used management tool. In view of the complexity of decision-making problems, more and more decision makers (DMs) are called to participate in the decision-making process, and MAGDM [37, 66] applications have emerged. In recent years, various large-scale multi-attribute group decision-making (LSMAGDM) methods have been studied, forming a systematic body of LSMAGDM research that helps DMs make sound choices. Traditional LSMAGDM methods have been gradually extended to fuzzy environments with a view to capturing the ambiguity and uncertainty of decision-making information, such as type-2 fuzzy sets [40, 41], Z-numbers [78], picture fuzzy sets [12], and spherical fuzzy sets [56]. However, these information representation models are subject to uncertainty and error when faced with uncertain information sources or future events, leading to biased results in decision-making frameworks. Therefore, it is necessary to consider the impact of these factors, and Seiti et al. [61] proposed the concept of R-numbers, which can readily capture the risk associated with fuzzy numbers or risks involving future events. In the risk analysis literature, concepts such as risk preference (degree of risk acceptability) and risk perception are used to better construct the R-numbers model. In the definition of R-numbers, referring to the definition of pessimistic-optimistic risk and using fuzzy numbers to describe the risk when the value at risk is inexact, the pessimistic and optimistic intervals of fuzzy numbers are established using type-2 fuzzy sets.
R-numbers can be regarded as type-2 fuzzy distributions of classical fuzzy numbers in the face of risk and uncertainty; they incorporate fuzzy pessimistic and optimistic risks, acceptable pessimistic and optimistic risks, and risk perception parameters, which improve the accuracy of the final decision results. Therefore, R-numbers have more advantages in representing risk information.
In LSMAGDM, a number of clustering algorithms have been developed to simplify the decision-making process due to the large scale of DMs [55, 57, 70]. The k-means clustering method [20, 72] is the most classical clustering method. Once the clustering of DMs has been obtained, a crucial challenge is determining the weights of the sub-groups. Li et al. [34] proposed a sub-group weighting determination method combining the level of togetherness and majority principle. Wu and Xu [71] assumed that all decision makers are assigned the same importance weight. Different methods of weight determination produce different aggregation results and thus different decision results.
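As an illustration of the clustering step, each expert can be encoded as a vector (for example, a flattened, defuzzified decision matrix) and grouped with a minimal k-means routine. This is a hedged sketch of the general technique, not the paper's exact implementation:

```python
import numpy as np

def kmeans(points, k, iters=100, seed=0):
    """Minimal k-means: assign each expert vector to one of k sub-groups.

    Each row of `points` represents one expert, e.g. a flattened,
    defuzzified decision matrix (an illustrative encoding)."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # distance of every expert to every centre, then nearest-centre label
        labels = np.argmin(
            np.linalg.norm(points[:, None] - centers[None], axis=2), axis=1)
        new = np.array([points[labels == c].mean(axis=0)
                        if np.any(labels == c) else centers[c]
                        for c in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels
```

Experts sharing a label form one sub-group, whose members' matrices are then aggregated internally.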
In LSMAGDM, it is more difficult and complex to reach a consensus because of the large number of DMs with different backgrounds and knowledge, which can easily cause conflicts. The models and methods of consensus reaching have been widely studied in the literature, mainly comprising three types: consensus models based on feedback mechanisms [67, 73, 81], consensus models based on minimum cost [6, 33], and consensus models based on social networks [23, 35, 62]. Among these, the most commonly used is the minimum cost consensus (MCC) model.
After the sub-groups have reached a consensus, an important step is the aggregation of information from the sub-groups. There is a large body of research on aggregation operators, such as the arithmetic mean operator [68], geometric mean operator [74], ordered weighted mean operator [17], Choquet aggregation operator [26, 77], Bonferroni operator [9], and Dombi operator [13, 75]. These operators can be divided into two categories according to whether associations between attributes are considered: operators that treat attributes as mutually independent [17, 68, 74], and operators that capture interrelations between attributes [9, 75, 77]. The Hamy mean (HM) operator was proposed by Hara et al. [21]. Once the relevant attributes are determined, the geometric average of each group of attribute values is calculated first, and then the arithmetic average of these geometric averages is taken. This aggregation method thus reflects the correlation between attributes.
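For concreteness, the crisp HM of order k averages the geometric means of all k-element subsets of the inputs. The sketch below handles crisp values only; the paper's RNWHM operator applies the same structure with R-number operations:

```python
from itertools import combinations
from math import comb, prod

def hamy_mean(values, k):
    """Hamy mean of order k over positive reals:
    HM^(k) = (1 / C(n, k)) * sum over all k-subsets S of (prod(S))^(1/k)."""
    n = len(values)
    # geometric mean of every k-element subset ...
    geo_means = [prod(s) ** (1.0 / k) for s in combinations(values, k)]
    # ... then the arithmetic mean of those geometric means
    return sum(geo_means) / comb(n, k)
```

Note that k = 1 recovers the arithmetic mean and k = n the geometric mean, which is why the operator interpolates between independence and full interaction among attributes.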
In LSMAGDM, after obtaining the aggregated information, the important problem faced by DMs is the selection of a solution. The first step is to determine the attribute weights. There are many methods for determining attribute weights, such as the Coefficient of Correlation and Standard Deviation (CCSD) [7], the Full Consistency Method (FUCOM) [19, 48], and the Method based on the Removal Effects of Criteria (MEREC) [30]. The logarithmic percentage change-driven objective weighting (LOPCOW) technique was proposed by Ecer and Pamucar [16]; by adopting logarithmic operators, it greatly reduces the effect of extreme values. After obtaining the attribute weights, it is necessary to rank the alternatives to obtain the solution preferences; commonly used methods are the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) [69], VlseKriterijumska Optimizacija I Kompromisno Resenje (VIKOR) [47, 49], and COmplex PRoportional ASsessment (COPRAS) [29]. The compromise ranking of alternatives from distance to ideal solution (CRADIS) [53] is a method that determines the optimal solution based on each alternative's deviations from the ideal and anti-ideal solutions. CRADIS was developed by combining features of the Additive Ratio ASsessment (ARAS), Measurement of Alternatives and Ranking according to COmpromise Solution (MARCOS), and TOPSIS methodologies, and is a more direct, practical, and effective way of assessing and prioritizing alternatives.
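To illustrate the ranking logic, the following is a hedged sketch of the crisp CRADIS steps as commonly presented (linear normalization, a weighted matrix, and deviations from the ideal and anti-ideal solutions). The paper's version operates on R-numbers, and the sketch assumes no alternative attains the ideal value on every criterion:

```python
import numpy as np

def cradis(matrix, weights, benefit):
    """Sketch of crisp CRADIS: rank alternatives by their deviations from
    the ideal and anti-ideal solutions. benefit[j] is True for a benefit
    criterion and False for a cost criterion. Higher output = better."""
    m = np.asarray(matrix, dtype=float)
    benefit = np.asarray(benefit)
    # linear normalization: x / max for benefit, min / x for cost criteria
    norm = np.where(benefit, m / m.max(axis=0), m.min(axis=0) / m)
    v = norm * np.asarray(weights, dtype=float)    # weighted matrix
    t_plus, t_minus = v.max(), v.min()             # ideal / anti-ideal values
    s_plus = (t_plus - v).sum(axis=1)              # deviation from ideal
    s_minus = (v - t_minus).sum(axis=1)            # deviation from anti-ideal
    s0_plus = (t_plus - v.max(axis=0)).sum()       # optimal alternative's deviations
    s0_minus = (v.max(axis=0) - t_minus).sum()
    return (s0_plus / s_plus + s_minus / s0_minus) / 2
```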
The motivation of this study is to propose a new LSMAGDM model and rank the alternatives based on the opinions of DMs. After multiple DMs' opinions are obtained, the DMs are clustered to obtain combined sub-group opinions. A feedback adjustment model is then applied so that the DMs' opinions reach a certain degree of consistency. After all the DMs' opinions are aggregated, the criterion weights are determined using a weight determination technique. With the multi-criteria decision-making tool, each alternative is evaluated against criteria specified through expert opinion and literature review.
The aim of this study is to propose an effective R-numbers-based LSMAGDM model (R-LSMAGDM) including LOPCOW and CRADIS methods for solving the battery logistics path selection problem of hydrogen battery car companies. The LOPCOW method is used to calculate the weight coefficients of the criteria. The CRADIS method is used to evaluate the alternatives in the R-numbers environment. The main contributions of this study are as follows:
  • The k-means clustering method is an effective clustering approach. In this paper, it is extended to the R-numbers environment for clustering decision information.
  • Simple weight determination methods based on the majority principle do not take into account the degree of uncertainty of the information. Therefore, this paper develops a new method for determining sub-group weights based on the entropy weight method and the majority principle.
  • An optimized minimum cost consensus model is proposed by using the HM operator to determine the mean value information.
  • HM operators have better flexibility in information aggregation. In this paper, we propose HM operators with R-numbers for aggregating expert information.
  • The LOPCOW and CRADIS methods are simple and convenient. In this paper, they are applied to determine the attribute weights and the alternative ranking in the R-numbers environment, respectively.
  • The proposed model is applied to the logistics path selection of batteries in hydrogen battery automobile companies.
The rest of the paper is organized as follows. In the next section, we review the concepts related to R-numbers. In the subsequent section, we describe the LSMAGDM problem in the R-numbers environment and propose a solution framework based on R-numbers; we then describe the decision-making process in the R-LSMAGDM framework. First, an initial matrix is constructed, fuzzy numbers are converted to R-numbers, and they are normalized. Second, the clustering of experts is performed. Then, consensus building is performed. Next, the RNWHM operator is developed. Finally, attribute weights are determined and alternatives are ranked using the LOPCOW and CRADIS methods, respectively. In the penultimate section, we validate the practicality and effectiveness of the method by taking the risk assessment of the logistics path of batteries in hydrogen battery automobile companies as an example. The superiority of the method is verified through comparative analysis and sensitivity analysis. The structure of this paper is shown in Fig. 1. Table 1 summarizes all the acronyms used in this paper. The final section summarizes the whole paper and points out the shortcomings and future research directions of this study.
Table 1
Description of acronyms used in this paper
Acronym
Description
ARAS
Additive ratio assessment
CCSD
Coefficient of correlation and standard deviation
CRADIS
Compromise ranking of alternatives from distance to ideal solution
COPRAS
Complex proportional assessment
DMs
Decision makers
FUCOM
Full consistency method
HM
Hamy mean
LSMAGDM
Large-scale multi-attribute group decision making
LOPCOW
Logarithmic percentage change-driven objective weight
LOT
Level of togetherness
MAGDM
Multi-attribute group decision-making
MARCOS
Measurement of alternatives and ranking according to compromise solution
MCC
Minimum cost consensus
MEREC
Method based on the removal effects of criteria
R-LSMAGDM
R-numbers large-scale multi-attribute group decision making
RNWA
R-numbers weighted average
RNWHM
R-numbers weighted Hamy mean
TFN
Triangular fuzzy number
TOPSIS
Technique for order of preference by similarity to ideal solution
VIKOR
VlseKriterijumska Optimizacija I Kompromisno Resenje
SCC
Spearman correlation coefficient
Below, we review four areas of related work: R-numbers, the Hamy mean operator, the LOPCOW method, and the CRADIS method.

R-numbers

The available body of research on R-numbers is currently limited in scope [61]. The MAIRCA approach was proposed by Cheng et al. [10] in the context of R-numbers environments and was subsequently utilized to conduct risk assessments on 5G base station construction projects. Zhao et al. [82] proposed an innovative decision framework that integrates R-numbers and preference models to tackle the challenges of multi-criteria decision-making. The R-VIKOR technique was created by Mousavi et al. [43]. The approach of modified R-numbers was introduced by Seiti et al. [59]; this methodology was specifically devised for risk-based fuzzy information fusion and its subsequent use in the examination of failure modes, effects, and system resilience, commonly referred to as FMESRA. In their study, Mou et al. [42] introduced a Petri net model that incorporates the R-numbers Maclaurin symmetric mean operator. The R-BWM model was introduced by Liu et al. [36] as a method for integrating R-numbers into the R&D project selection process. This body of work has not yet presented a systematic analysis of R-numbers, necessitating a more thorough investigation and elucidation of the concept.

Hamy mean operator

The Hamy operator was proposed by Hara et al. [21] to aggregate information. Subsequently, it has been extended by many scholars. Akram et al. [1] extended the Hamy operator to 2-tuple linguistic complex q-rung orthopair fuzzy environments. Garg et al. [18] defined the complex Pythagorean fuzzy Hamy operator. Hussain et al. [24] introduced the complex intuitionistic fuzzy Hamy mean operators. Akram et al. [2] extended the Hamy operator to the Fermatean fuzzy environment. Ali et al. [4] defined complex interval-valued q-rung orthopair fuzzy Hamy mean operators. Rong et al. [58] introduced hesitant fuzzy linguistic Hamy mean aggregation operators. The Hamy operator serves as an effective aggregation operator, so it is natural to define an R-numbers Hamy aggregation operator.

LOPCOW method

The LOPCOW method was proposed by Ecer and Pamucar [16] to determine weights. Nila and Roy [45] developed a new hybrid MCDM framework based on LOPCOW method for third-party logistics provider selection under sustainability perspectives. Simic et al. [64] proposed the neutrosophic LOPCOW-ARAS model for prioritizing industry 4.0-based material handling technologies in smart and sustainable warehouse management systems. Ecer et al. [14] analyzed the sustainability performance of micro-mobility solutions in urban transportation using the novel IVFNN-Delphi-LOPCOW-CoCoSo framework. Ecer et al. [15] constructed the q-rung fuzzy LOPCOW-VIKOR model to assess the role of unmanned aerial vehicles in achieving precision agriculture in the Agri-Food industry. Niu et al. [46] proposed a novel hybrid group decision-making method based on EDAS, LOPCOW and regret theory in Fermatean cubic fuzzy environments. The LOPCOW method adopts a logarithmic operator that greatly reduces the influence of extreme values, so the weights obtained by this method are more accurate. Therefore, it is necessary to extend its application environment.

CRADIS method

The CRADIS method was proposed by Puska et al. [53] for ranking alternatives. Puska et al. [50] proposed an extension of the MEREC-CRADIS method using a double normalization approach and applied it to a case study of electric vehicle selection. Q-rung fuzzy CRADIS with unknown weights was used by Krishankumar and Ecer [32] to select sustainable transport service providers. Puska et al. [51] selected green suppliers in an uncertain agricultural environment using a hybrid MCDM model: the Z-numbers-fuzzy LMAW-fuzzy CRADIS model. Puska et al. [52] applied the fuzzy CRADIS and CRITIC methods to the market assessment of pear varieties in Serbia. Puska et al. [54] presented extended sustainability criteria and multi-criteria analysis methods for assessing and selecting healthcare waste incinerators. The CRADIS method is a more direct, practical, and efficient method for multi-criteria decision-making. Therefore, it is necessary to extend the CRADIS method to the R-numbers environment.

Preliminaries

In this section, related concepts and properties on R-numbers are introduced.
Definition 1
[8] A fuzzy number \({{\tilde{A}}}\) on R can be a triangular fuzzy number (TFN) if its membership function \({\mu _{{{\tilde{A}}}}}\left( x \right) :R \rightarrow \left[ {0,1} \right] \) is defined as
$$\begin{aligned} {\mu _{{{\tilde{A}}}}}\left( x \right) = \left\{ \begin{array}{ll} {{\left( {x - {a_1}} \right) } /{\left( {{a_2} - {a_1}} \right) }},&{}\quad {a_1} \le x < {a_2}\\ {{\left( {{a_3} - x} \right) } /{\left( {{a_3} - {a_2}} \right) }},&{}\quad {a_2} \le x \le {a_3}\\ 0,&{}\quad {\text {otherwise}} \end{array} \right. \end{aligned}$$
(1)
Here, \({a_1}\) and \({a_3}\) are the lower and upper bounds of the fuzzy number \({{\tilde{A}}}\) , respectively, and \({a_2}\) is the modal value. The TFN can also be defined as \({{\tilde{A}}} = \left( {{a_1},{a_2},{a_3}} \right) \).
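Eq. (1) translates directly into code; a minimal sketch:

```python
def tfn_membership(x, a1, a2, a3):
    """Membership grade of x in the triangular fuzzy number (a1, a2, a3),
    following the piecewise definition of Eq. (1)."""
    if a1 <= x < a2:
        return (x - a1) / (a2 - a1)   # rising edge
    if a2 <= x <= a3:
        return (a3 - x) / (a3 - a2)   # falling edge
    return 0.0                        # outside the support
```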
Definition 2
[40] A Type-2 fuzzy set \({{\tilde{A}}}\) can be defined by \({\mu _{{{\tilde{A}}}}}:X \times U \rightarrow U\) , in which x is the primary variable of \({{\tilde{A}}}\) and X is the universe of discourse of x. The three-dimensional membership function of \({{\tilde{A}}}\left( {{\mu _{{{\tilde{A}}}}}\left( {x,u} \right) } \right) \) can be described by the following equation:
$$\begin{aligned} {{\tilde{A}}} = \left\{ {\left( {\left( {x,u} \right) ,{\mu _{{{\tilde{A}}}}}\left( {x,u} \right) } \right) |x \in X,u \in \left[ {0,1} \right] } \right\} , \end{aligned}$$
(2)
where \(u \in \left[ {0,1} \right] \) is the secondary variable and \({\mu _{{{\tilde{A}}}}}\left( {x,u} \right) \) is recognized as the secondary membership grade of x; the primary membership of x is defined as \({J_x} = \left\{ {\left( {x,u} \right) |u \in \left[ {0,1} \right] ,{\mu _{{{\tilde{A}}}}}\left( {x,u} \right) > 0} \right\} \).
Definition 3
[5] The Type-2 triangular fuzzy number \(\tilde{{{\tilde{A}}}}\) can be defined using a triangular fuzzy number with fuzzy elements as follows:
$$\begin{aligned} {\tilde{{{\tilde{A}}}}} = \left( {{{{{\tilde{A}}}}_l},{{{{\tilde{A}}}}_m},{{{{\tilde{A}}}}_u}} \right) \end{aligned}$$
(3)
where \({{{\tilde{A}}}_l}\) and \({{{\tilde{A}}}_u}\) are the fuzzy lower and upper bounds of \({\tilde{{{\tilde{A}}}}}\), respectively, and \({{{\tilde{A}}}_m}\) is the fuzzy modal value.
The R-number is a newly established pessimistic-optimistic type-2 triangular fuzzy number form by considering a variety of parameters, including fuzzy positive and negative risks, fuzzy positive and negative acceptable risks, and fuzzy risk perception.
Definition 4
[61] The R-numbers for arbitrary fuzzy number \({{\tilde{B}}}\) with lower limit \({l_{{{\tilde{B}}}}}\) and upper limit \({u_{{{\tilde{B}}}}}\) in beneficial and non-beneficial modes are denoted by \({R_b}\left( {{{\tilde{B}}}} \right) \) and \({R_c}\left( {{{\tilde{B}}}} \right) \) , respectively, and can be described as follows:
$$\begin{aligned} {R_b}\left( {{{\tilde{B}}}} \right) = \left( {{R_{1b}}\left( {{{\tilde{B}}}} \right) ,{R_{2b}}\left( {{{\tilde{B}}}} \right) ,{R_{3b}}\left( {{{\tilde{B}}}} \right) } \right) , \end{aligned}$$
(4)
where
$$\begin{aligned} \left\{ \begin{array}{lllll} {R_{1b}}\left( {{{\tilde{B}}}} \right) = \max \left( {{{\tilde{B}}} \otimes \left( \begin{array}{l} 1 \ominus \min \left( {\frac{{{{{{\tilde{r}}}}^ - }}}{{1 \ominus {{{\widetilde{RP}}}^ - }}},\tau } \right) \\ \otimes \left( {1 \ominus {{{\widetilde{AR}}}^ - }} \right) \end{array} \right) ,{l_{{{\tilde{B}}}}}} \right) ,\\ {R_{2b}}\left( {{{\tilde{B}}}} \right) = {{\tilde{B}}},\\ {R_{3b}}\left( {{{\tilde{B}}}} \right) = \min \left( {{{\tilde{B}}} \otimes \left( {1 \oplus \frac{{{{{{\tilde{r}}}}^ + }}}{{1 \ominus {{{\widetilde{RP}}}^ + }}} \otimes \left( {1 \ominus {{{\widetilde{AR}}}^ + }} \right) } \right) ,{u_{{{\tilde{B}}}}}} \right) ,\\ 0< {{{{\tilde{r}}}}^ - } < 1,{{{{\tilde{r}}}}^ + } > 0. \end{array} \right. \end{aligned}$$
(5)
and
$$\begin{aligned} {R_c}\left( {{{\tilde{B}}}} \right) = \left( {{R_{1c}}\left( {{{\tilde{B}}}} \right) ,{R_{2c}}\left( {{{\tilde{B}}}} \right) ,{R_{3c}}\left( {{{\tilde{B}}}} \right) } \right) , \end{aligned}$$
(6)
where
$$\begin{aligned} \left\{ \begin{array}{l} {R_{1c}}\left( {{{\tilde{B}}}} \right) = \max \left( {{{\tilde{B}}} \otimes \left( \begin{array}{l} 1 \ominus \min \left( {\frac{{{{{{\tilde{r}}}}^ + }}}{{1 \ominus {{{\widetilde{RP}}}^ + }}},\tau } \right) \\ \otimes \left( {1 \ominus {{{\widetilde{AR}}}^ + }} \right) \end{array} \right) ,{l_{{{\tilde{B}}}}}} \right) ,\\ {R_{2c}}\left( {{{\tilde{B}}}} \right) = {{\tilde{B}}},\\ {R_{3c}}\left( {{{\tilde{B}}}} \right) = \min \left( {{{\tilde{B}}} \otimes \left( {1 \oplus \frac{{{{{{\tilde{r}}}}^ - }}}{{1 \ominus {{{\widetilde{RP}}}^ - }}} \otimes \left( {1 \ominus {{{\widetilde{AR}}}^ - }} \right) } \right) ,{u_{{{\tilde{B}}}}}} \right) ,\\ {{{{\tilde{r}}}}^ - } > 1,0< {{{{\tilde{r}}}}^ + } < 1. \end{array} \right. \end{aligned}$$
(7)
in which \(\tau \) is a number infinitely close to one, and \({{{\tilde{r}}}^ - }, {{{\tilde{r}}}^ + }\) are the fuzzy negative and positive risks, defined as the fuzzy error and risk that may lead the fuzzy number to become worse or better in the future. The fuzzy amounts of negative and positive risk that can be tolerated by decision-makers are denoted by \({{\widetilde{AR}}^ - }\) and \({{\widetilde{AR}}^ + }\) , respectively, and \({{\widetilde{RP}}^ - }, {{\widetilde{RP}}^ + }\) are experts' fuzzy risk perceptions related to negative and positive risks. In these relations, \({{\widetilde{AR}}^ - }\) and \({{\widetilde{AR}}^ + }\) values are always between zero and one, but the possible range of \({{\widetilde{RP}}^ - }\) and \({{\widetilde{RP}}^ + }\) is defined as follows:
$$\begin{aligned} \left\{ \begin{array}{l} {\text {Optimistic experts }}0< {{\widetilde{RP}}^ - },{{\widetilde{RP}}^ + }< 1;\\ {\text {Neutral experts }}{{\widetilde{RP}}^ - },{{\widetilde{RP}}^ + } = \mathrm{{0}};\\ {\text {Pessimistic experts }}- \infty \mathrm{{< }}{{\widetilde{RP}}^ - },{{\widetilde{RP}}^ + } < 0. \end{array} \right. \nonumber \\ \end{aligned}$$
In this paper, we assume that \({{\tilde{B}}}\) is a triangular fuzzy number; then \(R\left( {{{\tilde{B}}}} \right) \) is a Type-2 triangular fuzzy number, defined as \(R\left( {{{\tilde{B}}}} \right) = \big ( \left( {{\mu _{11}},{\mu _{12}},{\mu _{13}}} \right) ,\big ( {\mu _{21}},{\mu _{22}},{\mu _{23}} \big ),\big ( {{\mu _{31}},{\mu _{32}},{\mu _{33}}} \big ) \big )\).
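For intuition, Eq. (5) can be evaluated with crisp (defuzzified) inputs. The sketch below treats \({{\tilde{B}}}\) and all risk parameters as scalars, whereas the paper works with fuzzy quantities throughout:

```python
def r_number_beneficial(b, l_b, u_b, r_neg, r_pos, ar_neg, ar_pos,
                        rp_neg, rp_pos, tau=0.999999):
    """Crisp sketch of Eq. (5): R_b(B) = (R1b, R2b, R3b) for a beneficial
    attribute. r_neg/r_pos are negative/positive risks, ar_* acceptable
    risks, rp_* risk perceptions, and [l_b, u_b] bounds the result."""
    # pessimistic component, clipped below by the lower limit l_b
    r1 = max(b * (1 - min(r_neg / (1 - rp_neg), tau) * (1 - ar_neg)), l_b)
    r2 = b  # the middle component is the original value
    # optimistic component, clipped above by the upper limit u_b
    r3 = min(b * (1 + r_pos / (1 - rp_pos) * (1 - ar_pos)), u_b)
    return (r1, r2, r3)
```

With neutral experts (rp = 0), no acceptable risk (ar = 0), and symmetric risks of 0.2, a value of 0.5 spreads to roughly (0.4, 0.5, 0.6), illustrating how the pessimistic and optimistic intervals widen around the original fuzzy number.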
Definition 5
[61] Let the R-numbers \(R\left( {{{\tilde{B}}}} \right) \) and \(R\left( {{{\tilde{C}}}} \right) \) be Type-2 triangular fuzzy numbers; then the operations between them are defined as
$$\begin{aligned}{} & {} R\left( {{{\tilde{B}}}} \right) \oplus R\left( {{{\tilde{C}}}} \right) \nonumber \\{} & {} \quad = \left( \begin{array}{l} \left( {{\mu _{11{{\tilde{B}}}}} + {\mu _{11{{\tilde{C}}}}},{\mu _{12{{\tilde{B}}}}} + {\mu _{12{{\tilde{C}}}}},{\mu _{13{{\tilde{B}}}}} + {\mu _{13{{\tilde{C}}}}}} \right) , \\ \left( {{\mu _{21{{\tilde{B}}}}} + {\mu _{21{{\tilde{C}}}}},{\mu _{22{{\tilde{B}}}}} + {\mu _{22{{\tilde{C}}}}},{\mu _{23{{\tilde{B}}}}} + {\mu _{23{{\tilde{C}}}}}} \right) ,\\ \left( {{\mu _{31{{\tilde{B}}}}} + {\mu _{31{{\tilde{C}}}}},{\mu _{32{{\tilde{B}}}}} + {\mu _{32{{\tilde{C}}}}},{\mu _{33{{\tilde{B}}}}} + {\mu _{33{{\tilde{C}}}}}} \right) \end{array} \right) ; \end{aligned}$$
(8)
$$\begin{aligned}{} & {} R\left( {{{\tilde{B}}}} \right) \otimes R\left( {{{\tilde{C}}}} \right) \nonumber \\{} & {} \quad = \left( \begin{array}{l} \left( {{\mu _{11{{\tilde{B}}}}}{\mu _{11{{\tilde{C}}}}},{\mu _{12{{\tilde{B}}}}}{\mu _{12{{\tilde{C}}}}},{\mu _{13{{\tilde{B}}}}}{\mu _{13{{\tilde{C}}}}}} \right) ,\\ \left( {{\mu _{21{{\tilde{B}}}}}{\mu _{21{{\tilde{C}}}}},{\mu _{22{{\tilde{B}}}}}{\mu _{22{{\tilde{C}}}}},{\mu _{23{{\tilde{B}}}}}{\mu _{23{{\tilde{C}}}}}} \right) ,\\ \left( {{\mu _{31{{\tilde{B}}}}}{\mu _{31{{\tilde{C}}}}},{\mu _{32{{\tilde{B}}}}}{\mu _{32{{\tilde{C}}}}},{\mu _{33{{\tilde{B}}}}}{\mu _{33{{\tilde{C}}}}}} \right) \end{array} \right) ; \end{aligned}$$
(9)
$$\begin{aligned}{} & {} \lambda R\left( {{{\tilde{B}}}} \right) = \left( \begin{array}{l} \left( {\lambda {\mu _{11{{\tilde{B}}}}},\lambda {\mu _{12{{\tilde{B}}}}},\lambda {\mu _{13{{\tilde{B}}}}}} \right) ,\\ \left( {\lambda {\mu _{21{{\tilde{B}}}}},\lambda {\mu _{22{{\tilde{B}}}}},\lambda {\mu _{23{{\tilde{B}}}}}} \right) ,\\ \left( {\lambda {\mu _{31{{\tilde{B}}}}},\lambda {\mu _{32{{\tilde{B}}}}},\lambda {\mu _{33{{\tilde{B}}}}}} \right) \end{array} \right) ; \end{aligned}$$
(10)
$$\begin{aligned}{} & {} {\left( {R\left( {{{\tilde{B}}}} \right) } \right) ^\lambda } = \left( \begin{array}{l} \left( {{{\left( {{\mu _{11{{\tilde{B}}}}}} \right) }^\lambda },{{\left( {{\mu _{12{{\tilde{B}}}}}} \right) }^\lambda },{{\left( {{\mu _{13{{\tilde{B}}}}}} \right) }^\lambda }} \right) ,\\ \left( {{{\left( {{\mu _{21{{\tilde{B}}}}}} \right) }^\lambda },{{\left( {{\mu _{22{{\tilde{B}}}}}} \right) }^\lambda },{{\left( {{\mu _{23{{\tilde{B}}}}}} \right) }^\lambda }} \right) ,\\ \left( {{{\left( {{\mu _{31{{\tilde{B}}}}}} \right) }^\lambda },{{\left( {{\mu _{32{{\tilde{B}}}}}} \right) }^\lambda },{{\left( {{\mu _{33{{\tilde{B}}}}}} \right) }^\lambda }} \right) \end{array} \right) . \end{aligned}$$
(11)
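Because the operations in Eqs. (8)–(11) act componentwise, they are straightforward to implement once an R-number is stored as a 3×3 array of its components \({\mu _{11}},\ldots ,{\mu _{33}}\) (a representation assumed here for illustration):

```python
import numpy as np

def r_add(rb, rc):
    """Eq. (8): componentwise addition of two R-numbers (3x3 arrays)."""
    return np.asarray(rb, float) + np.asarray(rc, float)

def r_mul(rb, rc):
    """Eq. (9): componentwise (Hadamard) product."""
    return np.asarray(rb, float) * np.asarray(rc, float)

def r_scale(lam, rb):
    """Eq. (10): scalar multiple."""
    return lam * np.asarray(rb, float)

def r_power(rb, lam):
    """Eq. (11): componentwise power."""
    return np.asarray(rb, float) ** lam
```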
Definition 6
[61] The defuzzification operation of the R-number \(R\left( {{{\tilde{B}}}} \right) \) is defined as:
$$\begin{aligned} COA\left( {R\left( {{{\tilde{B}}}} \right) } \right) = \frac{1}{9}\left( \begin{array}{l} {\mu _{11}} + {\mu _{12}} + {\mu _{13}} + {\mu _{21}} + \\ {\mu _{22}} + {\mu _{23}} + {\mu _{31}} + {\mu _{32}} + {\mu _{33}} \end{array} \right) .\nonumber \\ \end{aligned}$$
(12)
The COA method is a simple and effective defuzzification method with a wide range of applications. In this paper, the COA method is used to defuzzify the R-numbers, which simplifies the calculation process and reduces the complexity of the decision-making process in some calculations.
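A minimal implementation of Eq. (12), storing the nine components as a 3×3 array (an assumed representation for illustration):

```python
import numpy as np

def coa(r_number):
    """Eq. (12): centre-of-area defuzzification of an R-number, i.e. the
    arithmetic mean of its nine components mu_11 ... mu_33."""
    return float(np.mean(np.asarray(r_number, dtype=float)))
```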

Proposed R-LSMAGDM model

In the field of decision-making, most of the research mainly focuses on MAGDM, but it is worth noting that the number of experts is quite large in some realistic MAGDM. At the same time, considering the advantages of R-numbers in capturing risks, it is necessary to build an R-LSMAGDM model. The methodology consists of five stages. In the first stage, the initial decision matrix is constructed by determining the decision objective, attribute set, and alternative set. Subsequently, the initial decision matrix is converted to R-numbers form. In the second stage, expert sub-group management is performed to obtain sub-group groupings. In the third stage, the sub-group information is adjusted according to the MCC model. In the fourth stage, the RNWHM operator is studied and all sub-group information is aggregated using the RNWHM operator. In the fifth stage, the alternatives are ranked in a multi-attribute environment and the best solution is determined. Figure 1 shows the decision-making framework of R-LSMAGDM.

Description of the R-LSMAGDM problems

There are several aspects of the decision-making framework for R-LSMAGDM that need attention:
  • The number of experts in the pool of experts is large, equal to or greater than 20.
  • Experts provide decision-making information in the form of R-numbers.
  • The attribute weights are unknown.
The notations used in the decision framework for R-LSMAGDM are shown below:
(1)
\(E=\big \{ e_{1},e_{2},\dots ,e_{r}\big \}\): the set of experts, where \(e_{f}\big ( f=1,2,\dots ,r \big )\) indicates the fth expert.
 
(2)
\(A=\left\{ A_{1},A_{2},\dots ,A_{m}\right\} \): the set of alternatives, where \(A_{i}\left( i=1,2,\dots ,m \right) \) is the ith alternative, m is the number of alternatives.
 
(3)
\(C=\left\{ C_{1},C_{2},\dots ,C_{n}\right\} \): the set of attributes, where \(C_{j}\left( j=1,2,\dots ,n \right) \) is the jth attribute, n is the number of attributes.
 
(4)
\(\psi = {\left( {\psi _1^{},\psi _2^{},\ldots ,\psi _n^{}} \right) ^T}\): the attribute weight vector, where \(\psi _j^{}\left( {j = 1,2,\ldots ,n} \right) \) is the weight of the jth attribute, \(0 \le \psi _j^{} \le 1\), and \(\sum \nolimits _{j = 1}^n {\psi _j^{}} = 1\).
 
(5)
\({{{\tilde{X}}}^f} = {{{\left( {{{\tilde{x}}}_{ij}^f} \right) }_{m \times n}}}\): the initial decision matrix in the form of fuzzy numbers from fth expert, where \({{{\tilde{x}}}_{ij}^f}\) denotes the evaluation information about the ith alternative of jth attribute. The details of the matrix are shown in Table 2.
 
(6)
\({X^f} = {{{\left( {x_{ij}^f} \right) }_{m \times n}}}\): the R-numbers matrix converted from the initial decision matrix, where \( x_{ij}^f\) denotes the evaluation information about the ith alternative on the jth attribute in R-numbers form.
 
Table 2
Initial matrix

|            | \({C_{1}}\)              | \({C_{2}}\)              | ... | \({C_{n}}\)              |
|------------|--------------------------|--------------------------|-----|--------------------------|
| \({A_1}\)  | \({{\tilde{x}}}_{11}^f\) | \({{\tilde{x}}}_{12}^f\) | ... | \({{\tilde{x}}}_{1n}^f\) |
| \({A_2}\)  | \({{\tilde{x}}}_{21}^f\) | \({{\tilde{x}}}_{22}^f\) | ... | \({{\tilde{x}}}_{2n}^f\) |
| \({A_3}\)  | \({{\tilde{x}}}_{31}^f\) | \({{\tilde{x}}}_{32}^f\) | ... | \({{\tilde{x}}}_{3n}^f\) |
| \(\vdots\) | \(\vdots\)               | \(\vdots\)               |     | \(\vdots\)               |
| \({A_m}\)  | \({{\tilde{x}}}_{m1}^f\) | \({{\tilde{x}}}_{m2}^f\) | ... | \({{\tilde{x}}}_{mn}^f\) |

Framework of the R-LSMAGDM model

The framework of R-LSMAGDM consists of five stages as follows.
Stage 1. Construct the initial matrix.
For LSMAGDM, the first task is to determine the decision objective, attribute set, and alternative set. The initial decision matrices are then obtained from the experts' evaluation information. Since this paper studies LSMAGDM in the R-numbers environment, the initial decision matrices must be converted to R-numbers form at this stage.
Stage 2. Sub-group management.
Compared to the MAGDM problem, the LSMAGDM problem involves a larger number of experts, which means that more expert information matrices need to be processed. Simply aggregating all expert information would impose a huge workload, so the experts are grouped into sub-groups to simplify information processing. This stage contains three parts: sub-group detection, internal aggregation of sub-groups, and sub-group weighting. The k-means cluster analysis method is used to determine the sub-groups; the expert information within each sub-group is then aggregated; and the entropy weight method and the majority principle are combined to determine the sub-group weights.
Stage 3. Consensus reaching process.
It is much more difficult to obtain a common solution for the LSMAGDM problem than for the MAGDM problem, because the larger number of experts implies more conflicting opinions. In this stage, the MCC model is improved to adjust the internally aggregated sub-group information so as to reach consensus, yielding the adjusted sub-group information.
Stage 4. Aggregation process.
In LSMAGDM, information aggregation of sub-groups is a key element. In this phase, the RNWHM operator will be investigated and its properties explored. Subsequently, the adjusted sub-group information is aggregated using the RNWHM operator.
Stage 5. Selection process.
In this stage, the final ranking of the alternatives is determined. First, the attribute weights are obtained. The LOPCOW method is an objective weighting approach that does not rely on a priori information; it can effectively capture the experts' hesitation during the preference-sharing process, and its logarithmic operator significantly mitigates the impact of extreme values. The CRADIS approach is then employed to rank the alternatives. CRADIS combines elements of the ARAS, MARCOS, and TOPSIS methodologies and offers a direct, practical, and efficient way to assess and rank the alternative options.

Procedure of the proposed R-LSMAGDM

This section discusses the decision-making procedures of the proposed methodology, which are based on the decision-making framework described above for the R-LSMAGDM model.

Construct the initial matrix

Construct initial decision matrices for multiple experts based on their evaluation information:
$$\begin{aligned} {{{\tilde{X}}}^f} = {{{\left( {{{\tilde{x}}}_{ij}^f} \right) }_{m \times n}}}, \end{aligned}$$
(13)
where \(f = 1,2,\ldots ,r,i = 1,2,\ldots ,m,j = 1,2,\ldots ,n.\)
The initial matrix is transformed into an R-numbers matrix by the definition of R-numbers. In the R-LSMAGDM framework proposed in this paper, the values of the risk parameters are given by the experts, namely the pessimistic risk \({{{\tilde{r}}}^ - }\), optimistic risk \({{{\tilde{r}}}^ + }\), pessimistic acceptable risk \({{\widetilde{AR}}^ - }\), optimistic acceptable risk \({{\widetilde{AR}}^ + }\), pessimistic risk perception \({{\widetilde{RP}}^ - }\), and optimistic risk perception \({{\widetilde{RP}}^ + }\).
$$\begin{aligned} {X^f} = R\left( {{{\left( {{{\tilde{x}}}_{ij}^f} \right) }_{m \times n}}} \right) = {\left( {x_{ij}^f} \right) _{m \times n}}, \end{aligned}$$
(14)
where \(x_{ij}^f = R\left( {{{\tilde{x}}}_{ij}^f} \right) = \left( \begin{array}{l} \left( {x_{ij11}^f,x_{ij12}^f,x_{ij13}^f} \right) ,\\ \left( {x_{ij21}^f,x_{ij22}^f,x_{ij23}^f} \right) ,\\ \left( {x_{ij31}^f,x_{ij32}^f,x_{ij33}^f} \right) \end{array} \right) ,\) \(f = 1,2,\ldots ,r,i = 1,2,\ldots ,m,j = 1,2,\ldots ,n.\)
The beneficial and non-beneficial modes are as follows, respectively.
$$\begin{aligned} {R_b}\left( {{{\tilde{x}}}_{ij}^f} \right) = \left( {{R_{1b}}\left( {{{\tilde{x}}}_{ij}^f} \right) ,{R_{2b}}\left( {{{\tilde{x}}}_{ij}^f} \right) ,{R_{3b}}\left( {{{\tilde{x}}}_{ij}^f} \right) } \right) , \end{aligned}$$
where
$$\begin{aligned} \left\{ \begin{array}{l} {R_{1b}}\left( {{{\tilde{x}}}_{ij}^f} \right) \\ \quad =\max \left( {{{\tilde{x}}}_{ij}^f \otimes \left( \begin{array}{l} 1 \ominus \min \left( {\frac{{{{{{\tilde{r}}}}^ - }}}{{1 \ominus {{{\widetilde{RP}}}^ - }}},\tau } \right) \otimes \left( {1 \ominus {{{\widetilde{AR}}}^ - }} \right) \end{array} \right) ,{l_{{{\tilde{B}}}}}} \right) ,\\ {R_{2b}}\left( {{{\tilde{x}}}_{ij}^f} \right) = {{\tilde{x}}}_{ij}^f,\\ {R_{3b}}\left( {{{\tilde{x}}}_{ij}^f} \right) = \min \left( {{{\tilde{x}}}_{ij}^f \otimes \left( {1 \oplus \frac{{{{{{\tilde{r}}}}^ + }}}{{1 \ominus {{{\widetilde{RP}}}^ + }}} \otimes \left( {1 \ominus {{{\widetilde{AR}}}^ + }} \right) } \right) ,{u_{{{\tilde{B}}}}}} \right) ,\\ 0< {{{{\tilde{r}}}}^ - } < 1,{{{{\tilde{r}}}}^ + } > 0. \end{array} \right. \end{aligned}$$
and
$$\begin{aligned} \begin{aligned} {R_c}\left( {{{\tilde{x}}}_{ij}^f} \right) = \left( {{R_{1c}}\left( {{{\tilde{x}}}_{ij}^f} \right) ,{R_{2c}}\left( {{{\tilde{x}}}_{ij}^f} \right) ,{R_{3c}}\left( {{{\tilde{x}}}_{ij}^f} \right) } \right) , \end{aligned} \end{aligned}$$
where
$$\begin{aligned} \left\{ \begin{array}{l} {R_{1c}}\left( {{{\tilde{x}}}_{ij}^f} \right) \\ \quad =\max \left( {{{\tilde{x}}}_{ij}^f \otimes \left( \begin{array}{l} 1 \ominus \min \left( {\frac{{{{{{\tilde{r}}}}^ + }}}{{1 \ominus {{{\widetilde{RP}}}^ + }}},\tau } \right) \otimes \left( {1 \ominus {{{\widetilde{AR}}}^ + }} \right) \end{array} \right) ,{l_{{{\tilde{B}}}}}} \right) ,\\ {R_{2c}}\left( {{{\tilde{x}}}_{ij}^f} \right) = {{\tilde{x}}}_{ij}^f,\\ {R_{3c}}\left( {{{\tilde{x}}}_{ij}^f} \right) = \min \left( {{{\tilde{x}}}_{ij}^f \otimes \left( {1 \oplus \frac{{{{{{\tilde{r}}}}^ - }}}{{1 \ominus {{{\widetilde{RP}}}^ - }}} \otimes \left( {1 \ominus {{{\widetilde{AR}}}^ - }} \right) } \right) ,{u_{{{\tilde{B}}}}}} \right) ,\\ {{{{\tilde{r}}}}^ - } > 0,0< {{{{\tilde{r}}}}^ + } < 1. \end{array} \right. \end{aligned}$$
Normalize the decision matrix to obtain the normalized matrix \({{{\hat{X}}}^f} = {\left( {{{\hat{x}}}_{ij}^f} \right) _{m \times n}}\), the elements of which are calculated from Eqs. (15) and (16).
$$\begin{aligned} {{\hat{x}}}_{ij}^f= & {} \left( \begin{array}{l} \left( {\frac{{x_{ij11}^f - a_{}^ - }}{{c_{}^ + - a_{}^ - }},\frac{{x_{ij12}^f - a_{}^ - }}{{c_{}^ + - a_{}^ - }},\frac{{x_{ij13}^f - a_{}^ - }}{{c_{}^ + - a_{}^ - }}} \right) ,\\ \left( {\frac{{x_{ij21}^f - a_{}^ - }}{{c_{}^ + - a_{}^ - }},\frac{{x_{ij22}^f - a_{}^ - }}{{c_{}^ + - a_{}^ - }},\frac{{x_{ij23}^f - a_{}^ - }}{{c_{}^ + - a_{}^ - }}} \right) ,\\ \left( {\frac{{x_{ij31}^f - a_{}^ - }}{{{c^ + } - {a^ - }}},\frac{{x_{ij32}^f - a_{}^ - }}{{{c^ + } - {a^ - }}},\frac{{x_{ij33}^f - a_{}^ - }}{{{c^ + } - {a^ - }}}} \right) \end{array} \right) ,\quad j \in Benefit; \end{aligned}$$
(15)
$$\begin{aligned} {{\hat{x}}}_{ij}^f= & {} \left( \begin{array}{l} \left( {\frac{{c_{}^ + - x_{ij33}^f}}{{c_{}^ + - a_{}^ - }},\frac{{c_{}^ + - x_{ij32}^f}}{{c_{}^ + - a_{}^ - }},\frac{{c_{}^ + - x_{ij31}^f}}{{c_{}^ + - a_{}^ - }}} \right) ,\\ \left( {\frac{{c_{}^ + - x_{ij23}^f}}{{c_{}^ + - a_{}^ - }},\frac{{c_{}^ + - x_{ij22}^f}}{{c_{}^ + - a_{}^ - }},\frac{{c_{}^ + - x_{ij21}^f}}{{c_{}^ + - a_{}^ - }}} \right) ,\\ \left( {\frac{{c_{}^ + - x_{ij13}^f}}{{c_{}^ + - a_{}^ - }},\frac{{c_{}^ + - x_{ij12}^f}}{{c_{}^ + - a_{}^ - }},\frac{{c_{}^ + - x_{ij11}^f}}{{c_{}^ + - a_{}^ - }}} \right) \end{array} \right) ,\quad j \in Cost, \end{aligned}$$
(16)
where \( a_{}^ - = \mathop {\min }\limits _i x_{ij11}^f,c_{}^ + = \mathop {\max }\limits _i x_{ij33}^f\).
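A minimal Python sketch of the normalization in Eqs. (15) and (16), treating an R-number as a 3×3 nested tuple of crisp components; the function names are hypothetical and \(a^-\), \(c^+\) are passed in as the column extremes:

```python
def normalize_benefit(x, a_min, c_max):
    """Eq. (15): min-max normalize a benefit-type R-number.

    x is a 3x3 nested tuple ((x11,x12,x13),(x21,x22,x23),(x31,x32,x33)).
    """
    span = c_max - a_min
    return tuple(tuple((v - a_min) / span for v in row) for row in x)

def normalize_cost(x, a_min, c_max):
    """Eq. (16): complement against c_max; the component order is
    reversed both across the three triples and within each triple."""
    span = c_max - a_min
    return tuple(tuple((c_max - v) / span for v in reversed(row))
                 for row in reversed(x))
```

Both functions map the smallest component of the column onto 0 and the largest onto 1, so cost-type attributes end up on the same "larger is better" scale as benefit-type ones.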

Sub-group management

This stage is divided into three steps:
(1)
Sub-group detection: we use the k-means clustering method for sub-group determination.
 
(2)
Sub-group aggregation: the opinions of experts within the sub-group are aggregated using the aggregation operator.
 
(3)
Sub-group weighting determination: the weights of the sub-groups are calculated by considering sub-group size and sub-group information entropy.
 

Sub-group detection

In this section, an adapted fuzzy k-means clustering method [25] is developed to detect the preferences of experts represented by R-numbers.
  • Step 1. Draw a scatterplot of the distribution of expert information and determine the number of clusters k based on the distribution of the scatterplot. Here the expert sample points are represented as \({e_f} = \frac{1}{{mn}}\sum \nolimits _{j = 1}^n \sum \nolimits _{i = 1}^m {COA\left( {{{\hat{x}}}_{ij}^f} \right) } \).
  • Step 2. Randomly select k samples from the expert information matrices as the initial clustering centers \(C{C^t} = \left( {CC_1^t,CC_2^t,\ldots ,CC_k^t} \right) \), where \(CC_l^t = {\left( {cc_{ij}^{l,t}} \right) _{m \times n}}~\left( {l = 1,2,\ldots ,k,t = 0} \right) \).
  • Step 3. Calculate the distance of the expert to each cluster center by
    $$\begin{aligned} d\left( {{{{{\hat{X}}}}^f},CC_l^t} \right) = \frac{1}{{mn}}\sum \limits _{j = 1}^n {\sum \limits _{i = 1}^m {d\left( {{{\hat{x}}}_{ij}^f,cc_{ij}^{l,t}} \right) } }. \end{aligned}$$
    (17)
  • Step 4. The expert matrix \({{{\hat{X}}}^f}\) is classified into nearest clusters with minimum distance, i.e. \(CC_p^t\) , where \(p = \arg \left( {{{\min }_{l \in \left( {1,2,\ldots ,k} \right) }}d\left( {{{{{\hat{X}}}}^f},CC_l^t} \right) } \right) \).
  • Step 5. Update each clustering center based on the expert matrices assigned to its cluster.
    $$\begin{aligned} cc_{ij}^{l,t} = { \oplus _{{{\hat{X}}}_{}^f \in CC_l^t}}\frac{{{{\hat{x}}}_{ij}^f}}{{\left\| {CC_l^t} \right\| }}, \end{aligned}$$
    (18)
    where \(\left\| {CC_l^t} \right\| \) is the number of expert matrices in cluster \(CC_l^t\).
  • Step 6. Let \(\varepsilon \) be a predefined convergence threshold. If
    $$\begin{aligned} \frac{1}{{rk}}\sum \limits _{f = 1}^r {\sum \limits _{l = 1}^k {\left| {d\left( {{{{{\hat{X}}}}^f},CC_l^{t + 1}} \right) - d\left( {{{{{\hat{X}}}}^f},CC_l^t} \right) } \right| } } \le \varepsilon ,\nonumber \\ \end{aligned}$$
    (19)
    go to the next step; otherwise, let \(t = t + 1\) and go back to Step 3.
  • Step 7. Output the clusters.
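The clustering loop above can be sketched in Python. For brevity this sketch works on the scalar representatives \(e_f\) of Step 1 rather than on full expert matrices, uses absolute difference as the distance, and all names are hypothetical:

```python
import random

def kmeans_1d(points, k, eps=1e-6, max_iter=100, seed=0):
    """Lloyd-style k-means on the COA-defuzzified expert points e_f."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)          # Step 2: random initial centers
    for _ in range(max_iter):
        clusters = [[] for _ in range(k)]
        for p in points:                     # Steps 3-4: nearest-center assignment
            l = min(range(k), key=lambda c: abs(p - centers[c]))
            clusters[l].append(p)
        new = [sum(c) / len(c) if c else centers[l]
               for l, c in enumerate(clusters)]   # Step 5: recompute centers
        if max(abs(a - b) for a, b in zip(new, centers)) <= eps:
            return clusters                  # Step 6: converged, output clusters
        centers = new
    return clusters
```

With well-separated expert opinions, e.g. `kmeans_1d([0.1, 0.12, 0.9, 0.88], 2)`, the two sub-groups {0.1, 0.12} and {0.88, 0.9} are recovered after a few iterations.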

Sub-group aggregation

The expert matrices within a sub-group are aggregated using the simple arithmetic mean operator to obtain an aggregation matrix for each subgroup:
$$\begin{aligned} {G_l} = {\left( {g_{ij}^l} \right) _{m \times n}}, \end{aligned}$$
(20)
where \(g_{ij}^l = RNAM\left( {{{\hat{x}}}_{ij}^1,{{\hat{x}}}_{ij}^2,\ldots ,{{\hat{x}}}_{ij}^{\left\| {C{C_l}} \right\| }} \right) = \frac{{\mathop \oplus \limits _{f = 1}^{\left\| {C{C_l}} \right\| } {{\hat{x}}}_{ij}^f}}{{\left\| {C{C_l}} \right\| }}\).

Sub-group weighting

Different sub-groups may play different roles in reaching consensus, so it is crucial to calculate the sub-group weights. According to the majority principle, sub-groups with more members are assigned greater weights. At the same time, the information entropy of each sub-group is considered: the higher the information entropy of a sub-group, the more uniform and less discriminating the information provided by its experts, so it should be given a lower weight, and vice versa. The sub-group weights are therefore determined as follows.
The information entropy of each sub-group is calculated as follows:
$$\begin{aligned}{} & {} {\theta ^1}\left( {C{C_l}} \right) \nonumber \\{} & {} \quad = - \frac{{\sum \nolimits _{j = 1}^n {\sum \nolimits _{i = 1}^m {\left( {COA\left( {g_{ij}^l} \right) \times \ln \left( {COA\left( {g_{ij}^l} \right) } \right) } \right) } } }}{{\ln \left( {nm} \right) }}, \end{aligned}$$
(21)
where \(i=1,2,\ldots ,m,j=1,2,\ldots ,n,l=1,2,\ldots ,k.\)
The importance of the sub-groups is determined according to the majority principle as follows:
$$\begin{aligned} {\theta ^2}\left( {C{C_l}} \right) = \frac{{\left\| {C{C_l}} \right\| }}{{\sum \nolimits _{l = 1}^k {\left\| {C{C_l}} \right\| } }}, \end{aligned}$$
(22)
where \(\left\| {C{C_l}} \right\| \) denotes the number of expert matrices contained in the cluster \({C{C_l}}\) and \(\sum \nolimits _{l = 1}^k {\left\| {C{C_l}} \right\| }\) denotes the number of all expert matrices.
Determine the weights of the sub-groups based on the majority principle and the entropy of the sub-groups.
$$\begin{aligned} \theta \left( {C{C_l}} \right) = \frac{{1 - {\theta ^1}\left( {C{C_l}} \right) }}{{\sum \nolimits _{l = 1}^k {\left( {1 - {\theta ^1}\left( {C{C_l}} \right) } \right) } }} \cdot {\theta ^2}\left( {C{C_l}} \right) , \end{aligned}$$
(23)
where \(l=1,2,\ldots ,k.\)
These weights are normalized according to the following equation:
$$\begin{aligned} {\lambda _l} = \frac{{\theta \left( {C{C_l}} \right) }}{{\sum \nolimits _{l = 1}^k {\theta \left( {C{C_l}} \right) }}}, \end{aligned}$$
(24)
where \(l=1,2,\ldots ,k.\)
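A sketch of the weighting scheme of Eqs. (21)–(24), operating on already defuzzified sub-group matrices. It assumes the entropy in Eq. (21) carries the usual negative sign and that all COA values lie in (0, 1]; function and variable names are hypothetical:

```python
from math import log

def subgroup_weights(G_coa, sizes):
    """Eqs. (21)-(24): sub-group weights from information entropy
    combined with the majority principle.

    G_coa: per sub-group, the m x n matrix of COA-defuzzified opinions,
    with all values assumed to lie in (0, 1]; sizes: expert counts ||CC_l||.
    """
    one_minus_h = []
    for Gl in G_coa:
        cells = [v for row in Gl for v in row]
        h = -sum(v * log(v) for v in cells) / log(len(cells))  # Eq. (21)
        one_minus_h.append(1.0 - h)
    raw = [(d / sum(one_minus_h)) * (s / sum(sizes))           # Eqs. (22)-(23)
           for d, s in zip(one_minus_h, sizes)]
    return [r / sum(raw) for r in raw]                         # Eq. (24)
```

The returned weights sum to one; a larger sub-group with lower entropy (sharper opinions) receives a larger weight.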

Consensus reaching process

The MCC model is an automated consensus-reaching process that reformulates the group decision-making problem as a mathematical programming problem. These models, first proposed by Ben-Arieh and Easton [6], aim to minimize the cost of moving the preferences of decision makers so that each individual's modified opinion is sufficiently close to the group opinion. In this paper we use the MCC model to minimize the cost of moving the sub-groups' opinions so that the modified sub-group opinions are close enough to the group opinion (within a predefined threshold \(\varepsilon \in \left[ {0,1} \right] \)). Assuming that the initial sub-group opinions are \(G = \left( {{G_1},{G_2},\ldots ,{G_k}} \right) \) and the cost vector is \(\vartheta = \left( {{\vartheta _1},{\vartheta _2},\ldots ,{\vartheta _k}} \right) \), the minimum cost consensus model is defined as follows:
$$\begin{aligned} \begin{array}{l} \mathop {\min }\limits _{G'} \quad \sum \limits _{l = 1}^k {{\vartheta _l}\left| {{{G'}_l} - {G_l}} \right| } \\ s.t.\quad \left| {{{G'}_l} - {{\bar{G}}}'} \right| \le \varepsilon ,l = 1,2,\ldots ,k, \end{array} \end{aligned}$$
where \(\left( {{{G'}_1},{{G'}_2},\ldots ,{{G'}_k}} \right) \) are the adjusted sub-group opinions, \({{\bar{G}}}'\) denotes the mean of the adjusted opinions, and \(\varepsilon \in \left[ {0,1} \right] \) is the maximum admissible absolute deviation between each sub-group opinion and the collective opinion.
The MCC model is now improved. Unlike the simple arithmetic averaging used in the model above, the improved model aggregates the opinions with the RNWHM operator, which first computes the geometric mean of each p-tuple of values and then takes the arithmetic mean of these geometric means. This aggregation reflects the correlations among the aggregated values. The improved MCC-HM model is defined as follows:
$$\begin{aligned} \begin{array}{l} \mathop {\min }\limits _{G'} \quad \sum \limits _{l = 1}^k {{\vartheta _l}\left| {{{G'}_l} - {G_l}} \right| } \\ s.t.\quad \left\{ \begin{array}{l} {{\bar{G}}}' = RNWH{M^{\left( p \right) }}\left( {{{G'}_1},{{G'}_2},\ldots ,{{G'}_k}} \right) ,\\ \left| {{{G'}_l} - {{\bar{G}}}'} \right| \le \varepsilon ,l = 1,2,\ldots ,k, \end{array} \right. \end{array} \end{aligned}$$
where \(\left( {{{G'}_1},{{G'}_2},\ldots ,{{G'}_k}} \right) \) are the adjusted sub-group opinions, \({{\bar{G}}}'\) denotes the mean of the adjusted opinions, and \(\varepsilon \in \left[ {0,1} \right] \) is the maximum admissible absolute deviation between each sub-group opinion and the collective opinion.
The specific RNWHM operator is studied in detail in the next subsection.
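As an illustration of the basic MCC model, the following sketch (names hypothetical, SciPy assumed) solves the scalar case with arithmetic-mean aggregation; the absolute values are handled with auxiliary variables \(t_l \ge |G'_l - G_l|\), which turns the model into a linear program:

```python
import numpy as np
from scipy.optimize import linprog

def mcc_consensus(G, cost, eps):
    """Minimum-cost consensus for scalar opinions under arithmetic-mean
    aggregation, solved as a linear program.

    Decision variables x = [G'_1..G'_k, t_1..t_k] with t_l >= |G'_l - G_l|;
    minimize sum_l cost_l * t_l subject to |G'_l - mean(G')| <= eps.
    """
    k = len(G)
    c = np.concatenate([np.zeros(k), np.asarray(cost, float)])
    A_ub, b_ub = [], []
    for l in range(k):
        row = np.zeros(2 * k); row[l] = 1.0; row[k + l] = -1.0
        A_ub.append(row); b_ub.append(G[l])        # G'_l - t_l <= G_l
        row = np.zeros(2 * k); row[l] = -1.0; row[k + l] = -1.0
        A_ub.append(row); b_ub.append(-G[l])       # -G'_l - t_l <= -G_l
        row = np.zeros(2 * k); row[:k] = -1.0 / k; row[l] += 1.0
        A_ub.append(row); b_ub.append(eps)         # G'_l - mean(G') <= eps
        A_ub.append(-row); b_ub.append(eps)        # mean(G') - G'_l <= eps
    res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub,
                  bounds=[(None, None)] * k + [(0, None)] * k)
    return res.x[:k], res.fun   # adjusted opinions, total adjustment cost
```

The improved MCC-HM model replaces the arithmetic mean in the consensus constraint with the RNWHM aggregate, which makes the problem nonlinear and is not shown here.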

Aggregation process

The RNWHM operator is defined and its properties are established in this section.
Definition 7
[21] Let \({a_i}\left( {i = 1,2,\ldots ,m} \right) \) be a collection of crisp numbers, and \(p = 1,2,\ldots ,m\), if
$$\begin{aligned}{} & {} H{M^{\left( p \right) }}\left( {{a_1},{a_2},\ldots ,{a_m}} \right) \nonumber \\{} & {} \quad \quad = \frac{{\sum \nolimits _{1 \le {i_1}< \cdots < {i_p} \le m} {{{\left( {\prod \nolimits _{j = 1}^p {{a_{{i_j}}}} } \right) }^{{1 / p}}}} }}{{C_m^p}} \end{aligned}$$
(25)
then \(H{M^{\left( p \right) }}\) is called the Hamy mean (HM) operator, where \(\left( {{i_1},{i_2},\ldots ,{i_p}} \right) \) traverses all the p-tuple combinations of \(\left( {1,2,\ldots ,m} \right) \) and \(C_m^p\) is the binomial coefficient. It is clear that the HM operator satisfies the following properties:
(1)
\(H{M^{\left( p \right) }}\left( {0,0,\ldots ,0} \right) = 0.\)
 
(2)
\(H{M^{\left( p \right) }}\left( {a,a,\ldots ,a} \right) = a.\)
 
(3)
\(H{M^{\left( p \right) }}\left( {{a_1},{a_2},\ldots ,{a_m}} \right) \le H{M^{\left( p \right) }}\left( {{b_1},{b_2},\ldots ,{b_m}} \right) \), if \({a_i} \le {b_i}\) for all i.
 
(4)
\({\min _i}\left( {{a_i}} \right) \le H{M^{\left( p \right) }}\left( {{a_1},{a_2},\ldots ,{a_m}} \right) \le {\max _i}\left( {{a_i}} \right) .\)
 
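A direct transcription of Eq. (25) for crisp inputs, convenient for checking the properties above numerically (the function name is illustrative):

```python
from itertools import combinations
from math import comb, prod

def hamy_mean(a, p):
    """Eq. (25): average, over all p-element index subsets, of the
    geometric mean of the corresponding values (non-negative inputs)."""
    m = len(a)
    total = sum(prod(a[i] for i in idx) ** (1.0 / p)
                for idx in combinations(range(m), p))
    return total / comb(m, p)
```

For instance, `hamy_mean([4, 4, 4], 2)` returns 4.0, illustrating idempotency, and any result lies between the minimum and maximum input, illustrating boundedness.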
Definition 8
Let \(R\left( {{{{{\tilde{B}}}}_i}} \right) \left( {i = 1,2,\ldots ,m} \right) \) be a family of R-numbers. A mapping \(RNWHM:R{\left( {{{\tilde{B}}}} \right) ^m} \rightarrow R\left( {{{\tilde{B}}}} \right) \) is called the R-numbers weighted Hamy mean operator if
$$\begin{aligned} \begin{aligned}&RNWH{M^{\left( p \right) }}\left( {R\left( {{{{{\tilde{B}}}}_1}} \right) ,R\left( {{{{{\tilde{B}}}}_2}} \right) ,\ldots ,R\left( {{{{{\tilde{B}}}}_m}} \right) } \right) \\&\quad = \frac{{{ \oplus _{1 \le {i_1}< \cdots < {i_p} \le m}}{{\left( { \otimes _{j = 1}^p{{\left( {R\left( {{{{{\tilde{B}}}}_{{i_j}}}} \right) } \right) }^{{w_{{i_j}}}}}} \right) }^{{1 / p}}}}}{{C_m^p}} \end{aligned} \end{aligned}$$
(26)
where \(w = {\left( {{w_1},{w_2},\ldots ,{w_m}} \right) ^T}\) is the weight vector of \(R\left( {{{{{\tilde{B}}}}_i}} \right) \), with \({w_i} \in \left[ {0,1} \right] \) and \(\sum \nolimits _{i = 1}^m {{w_i}} = 1\), \(p = 1,2,\ldots ,m\). \(\left( {{i_1},{i_2},\ldots ,{i_p}} \right) \) traverses all the p-tuple combinations of \(\left( {1,2,\ldots ,m} \right) \) and \(C_m^p\) is the binomial coefficient.
Theorem 1
The RNWHM operator aggregates all the input values and yields an R-number given by
$$\begin{aligned} \begin{aligned}&RNWH{M^{\left( p \right) }}\left( {R\left( {{{{{\tilde{B}}}}_1}} \right) ,R\left( {{{{{\tilde{B}}}}_2}} \right) ,\ldots ,R\left( {{{{{\tilde{B}}}}_m}} \right) } \right) \\&\quad = \left( {\left( {{\mu _{11}},{\mu _{12}},{\mu _{13}}} \right) ,\left( {{\mu _{21}},{\mu _{22}},{\mu _{23}}} \right) ,\left( {{\mu _{31}},{\mu _{32}},{\mu _{33}}} \right) } \right) \end{aligned}\nonumber \\ \end{aligned}$$
(27)
where, for \(s = 1,2,3\) and \(q = 1,2,3\), each component is computed as
$$\begin{aligned} {\mu _{sq}} = \frac{{\sum \nolimits _{1 \le {i_1}< \cdots < {i_p} \le m} {{{\left( {\prod \nolimits _{j = 1}^p {\left( {{\mu _{sq}}} \right) _{{i_j}}^{{w_{{i_j}}}}} } \right) }^{{1 / p}}}} }}{{C_m^p}}, \end{aligned}$$
and \(w = {\left( {{w_1},{w_2},\ldots ,{w_m}} \right) ^T}\) is the weight vector of \(R\left( {{{{{\tilde{B}}}}_i}} \right) \), with \({w_i} \in \left[ {0,1} \right] \) and \(\sum \nolimits _{i = 1}^m {{w_i}} = 1\), \(p = 1,2,\ldots ,m\). \(\left( {{i_1},{i_2},\ldots ,{i_p}} \right) \) traverses all the p-tuple combinations of \(\left( {1,2,\ldots ,m} \right) \) and \(C_m^p\) is the binomial coefficient.
Example 1
Let \(R\left( {{{{{\tilde{B}}}}_1}} \right) = ( ( 0.1,0.2,0.3 ),( 0.2,0.3,0.4 ),( 0.3,0.4, 0.5 ) )\), \(R\left( {{{{{\tilde{B}}}}_2}} \right) = \big (\left( {0.2,0.3,0.4} \right) ,\left( {0.3,0.4,0.5} \right) ,\left( {0.4,0.5,0.6} \right) \big )\), \(R\left( {{{{{\tilde{B}}}}_3}} \right) = \big ( \left( {0.3,0.4,0.5} \right) ,\left( {0.4,0.5,0.6} \right) ,\left( {0.5,0.6,0.7} \right) \big )\) be three R-numbers, \(w = {\left( {{w_1},{w_2},{w_3}} \right) ^T} = \left( {0.2,0.3,0.5} \right) \), and suppose \(p = 2\). Then we have
$$\begin{aligned} \begin{aligned}&RNWH{M^{\left( 2 \right) }}\left( {R\left( {{{{{\tilde{B}}}}_1}} \right) ,R\left( {{{{{\tilde{B}}}}_2}} \right) ,R\left( {{{{{\tilde{B}}}}_3}} \right) } \right) \\&\quad = \left( {\left( {{\mu _{11}},{\mu _{12}},{\mu _{13}}} \right) ,\left( {{\mu _{21}},{\mu _{22}},{\mu _{23}}} \right) , \left( {{\mu _{31}},{\mu _{32}},{\mu _{33}}} \right) } \right) , \end{aligned} \end{aligned}$$
where
\({\mu _{11}} = \frac{1}{3} \times \left( {{{\left( {{{0.1}^{0.2}} \times {{0.2}^{0.3}}} \right) }^{{1 / 2}}} + {{\left( {{{0.1}^{0.2}} \times {{0.3}^{0.5}}} \right) }^{{1 / 2}}} + {{\left( {{{0.2}^{0.3}} \times {{0.3}^{0.5}}} \right) }^{{1 / 2}}}} \right) = 0.5977\), and computing the remaining components in the same way gives \(\left( {{\mu _{11}},{\mu _{12}},{\mu _{13}}} \right) = \left( {0.5977,0.6839,0.7504} \right) \), \(\left( {{\mu _{21}},{\mu _{22}},{\mu _{23}}} \right) = \left( {0.6839,0.7504,0.8062} \right) \), and \(\left( {{\mu _{31}},{\mu _{32}},{\mu _{33}}} \right) = \left( {0.7504,0.8062,0.8550} \right) \).
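Example 1 can be reproduced with a short component-wise implementation of Definition 8; the representation (each R-number as a 3×3 nested tuple) and the names are illustrative:

```python
from itertools import combinations
from math import comb, prod

def rnwhm(rnums, w, p=2):
    """Definition 8, applied component-wise: each R-number is a 3x3
    nested tuple of crisp values; weights enter as exponents within
    each p-tuple before the geometric mean is taken."""
    m = len(rnums)

    def agg(vals):  # one component position across all m R-numbers
        s = sum(prod(vals[i] ** w[i] for i in idx) ** (1.0 / p)
                for idx in combinations(range(m), p))
        return s / comb(m, p)

    return tuple(tuple(agg([r[s][q] for r in rnums]) for q in range(3))
                 for s in range(3))

B1 = ((0.1, 0.2, 0.3), (0.2, 0.3, 0.4), (0.3, 0.4, 0.5))
B2 = ((0.2, 0.3, 0.4), (0.3, 0.4, 0.5), (0.4, 0.5, 0.6))
B3 = ((0.3, 0.4, 0.5), (0.4, 0.5, 0.6), (0.5, 0.6, 0.7))
result = rnwhm([B1, B2, B3], (0.2, 0.3, 0.5), p=2)
# first triple rounds to (0.5977, 0.6839, 0.7504), as in Example 1
```

The remaining triples round to (0.6839, 0.7504, 0.8062) and (0.7504, 0.8062, 0.8550), matching the values computed by hand above.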
Theorem 2
Let \(R\left( {{{{{\tilde{B}}}}_i}} \right) \left( {i = 1,2,\ldots ,m} \right) \) be a family of R-numbers. Then:
(1)
(Idempotency) If \(R\left( {{{{{\tilde{B}}}}_i}} \right) \left( {i = 1,2,\ldots ,m} \right) = R\left( {{{\tilde{B}}}} \right) =\) \(\left( {\left( {{\mu _{11}},{\mu _{12}},{\mu _{13}}} \right) ,\left( {{\mu _{21}},{\mu _{22}},{\mu _{23}}} \right) ,\left( {{\mu _{31}},{\mu _{32}},{\mu _{33}}} \right) } \right) \), then
$$\begin{aligned}{} & {} RNWH{M^{\left( p \right) }}\left( {R\left( {{{{{\tilde{B}}}}_1}} \right) ,R\left( {{{{{\tilde{B}}}}_2}} \right) ,\ldots ,R\left( {{{{{\tilde{B}}}}_m}} \right) } \right) \nonumber \\{} & {} \quad = R\left( {{{\tilde{B}}}} \right) . \end{aligned}$$
(28)
 
(2)
(Boundedness)
If \(R{\left( {{{{{\tilde{B}}}}_i}} \right) ^ + } = \left( \begin{array}{l} \left( {\mathop {\max }\limits _i {\mu _{11i}},\mathop {\max }\limits _i {\mu _{12i}},\mathop {\max }\limits _i {\mu _{13i}}} \right) ,\\ \left( {\mathop {\max }\limits _i {\mu _{21i}},\mathop {\max }\limits _i {\mu _{22i}},\mathop {\max }\limits _i {\mu _{23i}}} \right) ,\\ \left( {\mathop {\max }\limits _i {\mu _{31i}},\mathop {\max }\limits _i {\mu _{32i}},\mathop {\max }\limits _i {\mu _{33i}}} \right) \end{array} \right) \),
\(R{\left( {{{{{\tilde{B}}}}_i}} \right) ^ - } = \left( \begin{array}{l} \left( {\mathop {\min }\limits _i {\mu _{11i}},\mathop {\min }\limits _i {\mu _{12i}},\mathop {\min }\limits _i {\mu _{13i}}} \right) ,\\ \left( {\mathop {\min }\limits _i {\mu _{21i}},\mathop {\min }\limits _i {\mu _{22i}},\mathop {\min }\limits _i {\mu _{23i}}} \right) ,\\ \left( {\mathop {\min }\limits _i {\mu _{31i}},\mathop {\min }\limits _i {\mu _{32i}},\mathop {\min }\limits _i {\mu _{33i}}} \right) \end{array} \right) \), then
$$\begin{aligned} \begin{aligned}&R{\left( {{{{{\tilde{B}}}}_i}} \right) ^ - } \le RNWH{M^{\left( p \right) }}\\&\quad \left( {R\left( {{{{{\tilde{B}}}}_1}} \right) ,R\left( {{{{{\tilde{B}}}}_2}} \right) ,\ldots ,R\left( {{{{{\tilde{B}}}}_m}} \right) } \right) \\&\quad \quad \le R{\left( {{{{{\tilde{B}}}}_i}} \right) ^ + }. \end{aligned}\nonumber \\ \end{aligned}$$
(29)
 
(3)
(Monotonicity) If \(R\left( {{{{{\tilde{B}}}}_i}^\prime } \right) \left( {i = 1,2,\ldots ,m} \right) \) be also a family of R-numbers, and \({\mu _{11i}} \le {\mu '_{11i}},{\mu _{12i}} \le {\mu '_{12i}},{\mu _{13i}} \le {\mu '_{13i}},{\mu _{21i}} \le {\mu '_{21i}},{\mu _{22i}} \le {\mu '_{22i}},{\mu _{23i}} \le {\mu '_{23i}},{\mu _{31i}} \le {\mu '_{31i}},{\mu _{32i}} \le {\mu '_{32i}},{\mu _{33i}} \le {\mu '_{33i}}\), then
$$\begin{aligned} \begin{aligned}&RNWH{M^{\left( p \right) }}\left( {R\left( {{{{{\tilde{B}}}}_1}} \right) ,R\left( {{{{{\tilde{B}}}}_2}} \right) ,\ldots ,R\left( {{{{{\tilde{B}}}}_m}} \right) } \right) \\&\quad \le RNWH{M^{\left( p \right) }}\left( {R\left( {{{{{\tilde{B}}}'}_1}} \right) ,R\left( {{{{{\tilde{B}}}'}_2}} \right) ,\ldots ,R\left( {{{{{\tilde{B}}}'}_m}} \right) } \right) . \end{aligned}\nonumber \\ \end{aligned}$$
(30)
 
(4)
(Commutativity) If \(R\left( {{{{{\tilde{B}}}}_i}^\prime } \right) \left( {i = 1,2,\ldots ,m} \right) \) be any permutation of \(R\left( {{{{{\tilde{B}}}}_i}} \right) \left( {i = 1,2,\ldots ,m} \right) \), then
$$\begin{aligned} \begin{aligned}&RNWH{M^{\left( p \right) }}\left( {R\left( {{{{{\tilde{B}}}}_1}} \right) ,R\left( {{{{{\tilde{B}}}}_2}} \right) ,\ldots ,R\left( {{{{{\tilde{B}}}}_m}} \right) } \right) \\&\quad =RNWH{M^{\left( p \right) }}\left( {R\left( {{{{{\tilde{B}}}'}_1}} \right) ,R\left( {{{{{\tilde{B}}}'}_2}} \right) ,\ldots ,R\left( {{{{{\tilde{B}}}'}_m}} \right) } \right) . \end{aligned} \end{aligned}$$
(31)
 
The proof of the above theorem can be found in “Appendix A”.
Based on the adjusted subgroup opinions and the weights of the subgroups, the aggregated opinions \(D = {\left( {{x_{ij}}} \right) _{m \times n}}\) of all experts are obtained by RNWHM operator, where
$$\begin{aligned} \begin{aligned} {x_{ij}}&= RNWH{M^{\left( p \right) }}\left( {g{{_{ij}^1}^\prime },g{{_{ij}^2}^\prime },\ldots ,g{{_{ij}^k}^\prime }} \right) \\&= \frac{{{ \oplus _{1 \le {l_1}< \cdots < {l_p} \le k}}{{\left( { \otimes _{o = 1}^p{{\left( {{{\left( {g{{_{ij}^l}^\prime }} \right) }_o}} \right) }^{{\lambda _{{l_o}}}}}} \right) }^{{1 / p}}}}}{{C_k^p}},p = 2. \end{aligned} \end{aligned}$$
(32)
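For intuition, the combination scheme of Eq. (32) can be sketched for crisp (scalar) inputs; the R-numbers version applies the same steps componentwise through the operational laws of R-numbers. The following Python function is an illustrative sketch, not the paper's implementation; the function name and the values used below are hypothetical.

```python
from itertools import combinations
from math import prod

def weighted_hamy_mean(values, weights, p):
    """Scalar sketch of the weighted Hamy mean structure in Eq. (32):
    for every p-element subset, take the p-th root of the product of the
    weighted elements, then average over all C(m, p) subsets."""
    m = len(values)
    subsets = list(combinations(range(m), p))
    total = sum(
        prod(values[i] ** weights[i] for i in s) ** (1.0 / p)
        for s in subsets
    )
    return total / len(subsets)
```

For unit weights, \(p = 1\) recovers the arithmetic mean and \(p = m\) the geometric mean, which is exactly the degeneracy discussed later in the sensitivity analysis.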

Selection process

Attribute weighting

The calculation of attribute weights is performed using the LOPCOW method [16]. The specific calculation steps are as follows.
  • Step 1. This step converts the elements of the aggregated expert decision matrix into clear values using the R-numbers defuzzification formula in Eq. (33).
    $$\begin{aligned} {z_{ij}}= & {} COA\left( {{x_{ij}}} \right) \nonumber \\= & {} \frac{1}{9}\left( \begin{array}{l} {x_{ij}}_{11} + {x_{ij}}_{12} + {x_{ij}}_{13} + {x_{ij}}_{21} + \\ {x_{ij}}_{22} + {x_{ij}}_{23} + {x_{ij}}_{31} + {x_{ij}}_{32} + {x_{ij}}_{33} \end{array} \right) \end{aligned}$$
    (33)
  • Step 2. Normalize the matrix obtained in Step 1.
    $$\begin{aligned} {z_{ij}}= & {} \frac{{{z_{ij}} - {z_{\min }}}}{{{z_{\max }} - {z_{\min }}}},j \in Benefit; \end{aligned}$$
    (34)
    $$\begin{aligned} {z_{ij}}= & {} \frac{{{z_{\max }} - {z_{ij}}}}{{{z_{\max }} - {z_{\min }}}},j \in Cost. \end{aligned}$$
    (35)
  • Step 3. The percentage value corresponding to each attribute is calculated by Eq. (36).
    $$\begin{aligned} P{V_j} = \left| {\ln \left( {\frac{{\sqrt{\frac{{\sum \nolimits _{i = 1}^m {z_{ij}^2} }}{m}} }}{{{\sigma _j}}}} \right) \cdot 100} \right| , \end{aligned}$$
    (36)
    where \({\sigma _j}\) and m denote the standard deviation of attribute j and the number of alternatives, respectively.
  • Step 4. Calculate attribute weights. The weight of each attribute is calculated by Eq. (37).
    $$\begin{aligned} {\psi _j} = \frac{{P{V_j}}}{{\sum \nolimits _{j = 1}^n {P{V_j}} }}. \end{aligned}$$
    (37)
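The four LOPCOW steps can be sketched as follows for a crisp decision matrix. Whether \({\sigma _j}\) in Eq. (36) is the sample or population standard deviation is an assumption here (the sketch uses the sample form), and the matrix used in testing is illustrative, not the paper's data.

```python
from math import log, sqrt

def lopcow_weights(matrix, benefit):
    """LOPCOW attribute weights (Eqs. 34-37) from a crisp m x n matrix.
    benefit[j] is True for benefit-type attributes, False for cost-type.
    Assumes no column is constant (Eqs. 34-35 would divide by zero)."""
    m, n = len(matrix), len(matrix[0])
    cols = [[row[j] for row in matrix] for j in range(n)]
    # Step 2: min-max normalization (Eqs. 34-35)
    norm = []
    for j in range(n):
        lo, hi = min(cols[j]), max(cols[j])
        if benefit[j]:
            norm.append([(v - lo) / (hi - lo) for v in cols[j]])
        else:
            norm.append([(hi - v) / (hi - lo) for v in cols[j]])
    # Step 3: percentage value PV_j = |ln(rms_j / sigma_j) * 100| (Eq. 36)
    pv = []
    for col in norm:
        mu = sum(col) / m
        sigma = sqrt(sum((v - mu) ** 2 for v in col) / (m - 1))  # sample std (assumed)
        rms = sqrt(sum(v * v for v in col) / m)
        pv.append(abs(log(rms / sigma) * 100))
    # Step 4: normalize PVs to weights (Eq. 37)
    total = sum(pv)
    return [p / total for p in pv]
```

By construction the returned weights are positive and sum to one.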

CRADIS approach for ranking

The CRADIS method [53] is used to rank the alternatives and obtain the optimal solution.
  • Step 1. Calculate the weighted matrix \(V = {\left( {{v_{ij}}} \right) _{m \times n}}\) by considering the data from “Aggregation process” section and the attribute weight vector from “Attribute weighting” section, where \({v_{ij}} = {\psi _j}{z_{ij}}\).
  • Step 2. Identify the ideal and anti-ideal solutions for each attribute by applying Eqs. (38)–(39).
    $$\begin{aligned} t_j^ += & {} \mathop {\max }\limits _{1 \le i \le m} {v_{ij}}, \end{aligned}$$
    (38)
    $$\begin{aligned} t_j^ -= & {} \mathop {\min }\limits _{1 \le i \le m} {v_{ij}}. \end{aligned}$$
    (39)
  • Step 3: Evaluate the deviation of the preference values from the ideal and anti-ideal points using Eqs. (40)–(41).
    $$\begin{aligned} d_{ij}^ += & {} t_j^ + - {v_{ij}}, \end{aligned}$$
    (40)
    $$\begin{aligned} d_{ij}^ -= & {} {v_{ij}} - t_j^ - , \end{aligned}$$
    (41)
    where \(VD_i^ + = \sum \nolimits _{j = 1}^n {d_{ij}^ + } \), \(VD_i^ - = \sum \nolimits _{j = 1}^n {d_{ij}^ - } \).
  • Step 4: Apply Eqs. (42)–(43) to compute the utility measures used to estimate the normalized values for each alternative.
    $$\begin{aligned} U_i^ += & {} \frac{{\min VD_i^ + }}{{VD_i^ + }},\end{aligned}$$
    (42)
    $$\begin{aligned} U_i^ -= & {} \frac{{VD_i^ - }}{{\max VD_i^ - }}. \end{aligned}$$
    (43)
  • Step 5: Identify the ranked values of the alternatives by applying Eq. (44) based on the values from Step 4.
    $$\begin{aligned} R{A_i} = \frac{{U_i^ - + U_i^ + }}{2}. \end{aligned}$$
    (44)
The best alternative is the one with the minimum value of \(R{A_i}\).
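The CRADIS selection steps can be sketched as follows. This is a crisp-valued illustration under two assumptions: the ideal and anti-ideal points of Eqs. (38)–(39) are read attribute-wise, and no alternative attains the ideal value on every attribute (otherwise Eq. (42) would divide by zero). The input data below are hypothetical.

```python
def cradis_rank(norm_matrix, weights):
    """Sketch of the CRADIS steps (Eqs. 38-44) on a crisp, already
    normalized m x n decision matrix with attribute weights."""
    m, n = len(norm_matrix), len(weights)
    # Step 1: weighted matrix v_ij = psi_j * z_ij
    v = [[norm_matrix[i][j] * weights[j] for j in range(n)] for i in range(m)]
    # Step 2: attribute-wise ideal / anti-ideal points (Eqs. 38-39, assumed reading)
    t_pos = [max(v[i][j] for i in range(m)) for j in range(n)]
    t_neg = [min(v[i][j] for i in range(m)) for j in range(n)]
    # Step 3: total deviations from the ideal and anti-ideal (Eqs. 40-41)
    vd_pos = [sum(t_pos[j] - v[i][j] for j in range(n)) for i in range(m)]
    vd_neg = [sum(v[i][j] - t_neg[j] for j in range(n)) for i in range(m)]
    # Step 4: utility measures (Eqs. 42-43)
    u_pos = [min(vd_pos) / d for d in vd_pos]
    u_neg = [d / max(vd_neg) for d in vd_neg]
    # Step 5: ranking values (Eq. 44)
    return [(a + b) / 2 for a, b in zip(u_pos, u_neg)]
```

The returned \(R{A_i}\) values are then ordered according to the paper's convention to identify the preferred alternative.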

Case study

In this section, the selection of hydrogen fuel cell logistics paths is used as a validation example of the proposed LSMAGDM method. In recent years, with the urgency of climate change and sustainable development, hydrogen energy has attracted much attention as a low-carbon and environmentally friendly form of energy. Its high energy density, storability, and combination with technologies such as fuel cells give hydrogen energy great application potential in the fields of electricity, transportation, and industry. Research on and application of hydrogen energy are accelerating worldwide, and more and more countries and institutions are investing resources to promote its development and commercial application. Hydrogen fuel cells use hydrogen to provide fuel and achieve energy conversion and utilization, and have been applied in many fields, such as transportation, energy storage, and mobile power. Hydrogen fuel cell vehicles have become an important direction in the field of electric vehicles, with advantages such as a long driving range, fast refueling, and zero emissions. Toyota Motor Corporation has been a leader in the field of hydrogen fuel cell vehicles and has introduced vehicle models based on hydrogen fuel cell technology. For Toyota’s hydrogen fuel cell technology, supply chain logistics routing plays a key role in ensuring efficient and reliable production and delivery of hydrogen fuel cell systems. Suppose Toyota needs to deliver a shipment of fuel cells and must therefore evaluate the risk of multiple logistics paths and select an appropriate transportation route. To this end, 20 experts were invited, and after discussion, five logistics routes were selected.
\({A_1}\). Direct transportation scheme: direct transportation from the production location to the target location, omitting transit links.
\({A_2}\). Multi-modal transportation solution: adopt a combination of various transportation modes (e.g., land, sea, air) and select the optimal transportation combination according to the specific situation.
\({A_3}\). Specific supplier cooperation program: establish close cooperation with one or more reliable suppliers to realize regular supply and priority transportation.
\({A_4}\). Regional logistics center program: establish regional logistics centers to centralize the storage and distribution of hydrogen fuel cells for more efficient transportation and distribution.
\({A_5}\). Third-party logistics solution: entrust a professional third-party logistics company with the transportation and distribution of goods to provide more reliable logistics services.
Four attributes are evaluated:
\({C_1}\). Reliability: evaluate the reliability of the supply chain for different logistics paths, including the on-time delivery performance of suppliers and the performance record of logistics and transportation service providers [27]. The reliability of the logistics supply chain has a direct impact on efficiency and on-time production and delivery. A reliable supply chain ensures the timely delivery of required fuel cells and reduces the risk of production delays and supply interruptions.
\({C_2}\). Cost-effectiveness: consider the transportation costs, warehousing costs, cross-border and other costs of logistics paths [28, 31, 44]. A cost-benefit assessment helps select cost-effective logistics paths to reduce operating costs and increase business profits. The most cost-effective logistics solution can be found by taking into account transportation costs, warehousing costs, and cross-border and other expenses.
\({C_3}\). Environmental impact: consider the environmental impacts of logistics paths, such as carbon emissions, air quality, and other environmental indicators [38, 63]. With the growing importance of sustainable development, it is important to consider the environmental impact of logistics paths. Choosing logistics methods with low carbon emissions and minimizing adverse environmental impacts helps companies comply with environmental protection regulations while enhancing their corporate image and sustainability credentials.
\({C_4}\). Security risks: assessing the security risks of different logistics routes is an important consideration in ensuring the safe transportation and delivery of goods [28, 31, 44]. Security risks include the loss, breakage, and theft of goods during logistics transportation. They can be mitigated by choosing a reputable third-party logistics company or a supplier with whom a close working relationship has been established. In addition, appropriate insurance measures and monitoring of transportation security are also important means of reducing security risks.
After the risk indicators and alternatives are identified, the experts perform linguistic evaluations of the selected risk indicators and alternatives on a five-point scale (see Table 3). Similarly, pessimistic risk \({{{\tilde{r}}}^ - }\), optimistic risk \({{{\tilde{r}}}^ + }\), pessimistic acceptable risk \({{\widetilde{AR}}^ - }\), and optimistic acceptable risk \({{\widetilde{AR}}^ + }\) are also obtained on the five-point scale (see Table 3). In the expert assessment, appropriate risk perceptions \({\widetilde{RP}}\) are considered (see Table 4), and positive and negative risk perceptions are taken to be equal, i.e., \({{\widetilde{RP}}^ + } = {{\widetilde{RP}}^ - }\).
Table 3
Linguistic terms for evaluating alternatives, optimistic risk, pessimistic risk, optimistic acceptable risk, and pessimistic acceptable risk

Linguistic terms    Triangular fuzzy numbers
Very low (VL)       (0, 0, 0.3)
Low (L)             (0.1, 0.3, 0.5)
Medium (M)          (0.3, 0.5, 0.7)
High (H)            (0.5, 0.7, 0.9)
Very high (VH)      (0.7, 0.99, 0.99)
Table 4
Linguistic terms for evaluating risk perception

Expert type          Linguistic terms    Triangular fuzzy numbers
Optimistic expert    Very low (VL)       (0, 0, 0.3)
                     Low (L)             (0.1, 0.3, 0.5)
                     Medium (M)          (0.3, 0.5, 0.7)
                     High (H)            (0.5, 0.7, 0.9)
                     Very high (VH)      (0.7, 0.99, 0.99)
Pessimistic expert   Very low (VL)       (\(-\) 0.3, 0, 0)
                     Low (L)             (\(-\) 0.5, \(-\) 0.3, \(-\) 0.1)
                     Medium (M)          (\(-\) 0.7, \(-\) 0.5, \(-\) 0.3)
                     High (H)            (\(-\) 0.9, \(-\) 0.7, \(-\) 0.5)
                     Very high (VH)      (\(-\) 0.99, \(-\) 0.99, \(-\) 0.7)
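A note on the structure of Table 4: each pessimistic triangular fuzzy number is the corresponding optimistic one mirrored about zero, \(\left( {a,b,c} \right) \mapsto \left( { - c, - b, - a} \right) \). A minimal sketch encoding the scale (dictionary and function names are illustrative):

```python
# Optimistic linguistic scale from Tables 3 and 4 (triangular fuzzy numbers).
OPTIMISTIC = {
    "VL": (0.0, 0.0, 0.3),
    "L":  (0.1, 0.3, 0.5),
    "M":  (0.3, 0.5, 0.7),
    "H":  (0.5, 0.7, 0.9),
    "VH": (0.7, 0.99, 0.99),
}

def pessimistic(tfn):
    """Mirror a triangular fuzzy number about zero: (a, b, c) -> (-c, -b, -a)."""
    a, b, c = tfn
    return (-c, -b, -a)

PESSIMISTIC = {term: pessimistic(t) for term, t in OPTIMISTIC.items()}
```

Applying the mirror map to the optimistic scale reproduces the pessimistic-expert rows of Table 4.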

Application of the proposed method in the selection of hydrogen fuel cell logistics paths

Stage 1. Construct the initial matrix.
The results of the experts’ assessments are provided in Table 14. Based on Tables 3 and 4, these evaluations are then converted into the comparison scales corresponding to triangular fuzzy numbers (see Table 14). Next, the R-numbers matrix is obtained according to the definition of R-numbers, as shown in Table 16, and the normalized matrix is shown in Table 17. The detailed decision-making steps are as follows.
Stage 2. Sub-group management.
(1)
Sub-group detection: we use the k-means clustering method for sub-group detection. According to the expert information distribution plot in Fig. 2, we determine the number of clusters \(k = 3\). The cluster assignments obtained from the calculations are shown in Table 5, and the clustering results are visualized in Fig. 3.
 
(2)
Sub-group aggregation: aggregate the opinions of the experts in the subgroups using the simple arithmetic mean operator, and the aggregation results are shown in Table 5.
 
(3)
Sub-group weighting determination: the weights of sub-groups are calculated by considering the sub-group dimensions and entropy.
 
Table 5
Cluster results

Sub-groups   Cluster label
Cluster 1    \(\left\{ {{e_3},{e_9},{e_{12}},{e_{15}},{e_{18}}} \right\} \)
Cluster 2    \(\left\{ {{e_1},{e_4},{e_7},{e_{10}},{e_{13}},{e_{16}},{e_{19}}} \right\} \)
Cluster 3    \(\left\{ {{e_2},{e_5},{e_6},{e_8},{e_{11}},{e_{14}},{e_{17}},{e_{20}}} \right\} \)
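As a rough illustration of the sub-group detection step, the following pure-Python sketch runs k-means on toy two-dimensional “expert opinion” points; in the study itself the inputs are the experts’ evaluation vectors and \(k = 3\) is chosen from Fig. 2. The deterministic seeding is a simplifying assumption (practical k-means uses random or k-means++ initialization).

```python
from math import dist
from statistics import mean

def kmeans(points, k, iters=20):
    """Plain k-means sketch: assign each expert's evaluation vector to the
    nearest centre, then move each centre to the mean of its members."""
    centers = list(points[:k])  # deterministic seeding (illustrative assumption)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: dist(p, centers[c]))
            clusters[j].append(p)
        # move each centre to the mean of its cluster; keep it if the cluster is empty
        centers = [tuple(mean(coord) for coord in zip(*cl)) if cl else centers[j]
                   for j, cl in enumerate(clusters)]
    return clusters, centers
```

On well-separated toy points this partitions the “experts” into the expected groups after a few iterations.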
According to Eq. (21), the information entropy of each sub-group is computed to obtain:
$$\begin{aligned}{} & {} {\theta ^1}\left( {C{C_1}} \right) =2.8631, {\theta ^1}\left( {C{C_2}} \right) =2.7503,\\{} & {} \quad {\theta ^1}\left( {C{C_3}} \right) =3.2365. \end{aligned}$$
The importance of the sub-group is determined according to the majority principle, and applying Eq. (22) leads to:
$$\begin{aligned} {\theta ^2}\left( {C{C_1}} \right) =0.25, {\theta ^2}\left( {C{C_2}} \right) =0.35, {\theta ^2}\left( {C{C_3}} \right) =0.4. \end{aligned}$$
According to the majority principle and the entropy values of the subgroups, use Eq. (23) to derive the weights of the sub-groups:
$$\begin{aligned} \theta \left( {C{C_1}} \right) =0.078, \theta \left( {C{C_2}} \right) =0.1280, \theta \left( {C{C_3}} \right) =0.1294. \end{aligned}$$
According to Eq. (24), the normalization is performed to obtain the final subgroup weights:
$$\begin{aligned} {\lambda _1}=0.2319, {\lambda _2}=0.3820,{\lambda _3}=0.3862. \end{aligned}$$
Stage 3. Consensus reaching process.
According to the improved MCC-HM model, the adjusted sub-group aggregation results are obtained, as shown in Table 19.
Stage 4. Aggregation process.
The adjusted subgroup aggregation matrices are aggregated using the RNWHM operator to obtain the final expert information aggregation results, as shown in Table 20.
Stage 5. Selection process.
A. Attribute weighting
Use LOPCOW to calculate attribute weights:
  • Step 1. Use Eq. (33) to convert the elements of the expert decision aggregation matrix to clear values, and the results of the conversion are shown in Table 6.
    Table 6
    Clear value matrix

                \({C_1}\)   \({C_2}\)   \({C_3}\)   \({C_4}\)
    \({A_1}\)   0.436      0.389      0.389      0.4541
    \({A_2}\)   0.4531     0.3992     0.3992     0.4246
    \({A_3}\)   0.3757     0.4286     0.4286     0.4568
    \({A_4}\)   0.404      0.4025     0.4025     0.4258
    \({A_5}\)   0.3826     0.4342     0.4342     0.4554
  • Step 2. Use Eqs. (34)–(35) to normalize the clear value matrix, and the results are shown in Table 7.
    Table 7
    Normalized clear value matrix

                \({C_1}\)   \({C_2}\)   \({C_3}\)   \({C_4}\)
    \({A_1}\)   0.7791     1          1          0.0839
    \({A_2}\)   1          0.7743     0.0894     1
    \({A_3}\)   0          0.1239     0          0
    \({A_4}\)   0.3656     0.7013     0.9771     0.9627
    \({A_5}\)   0.0891     0          0.2017     0.0435
  • Step 3. Use Eq. (36) to calculate the percentage value corresponding to each attribute to get
    $$\begin{aligned}{} & {} P{V_1}=30.8215,P{V_2}=12.3087,\\{} & {} P{V_3}=41.8336,P{V_4}=44.9247. \end{aligned}$$
  • Step 4. Use Eq. (37) to compute the weights of each attribute to get
    $$\begin{aligned} {\psi _1}=0.2373,{\psi _2}=0.0948,{\psi _3}=0.3221,{\psi _4}=0.3459. \end{aligned}$$
B. CRADIS approach for ranking.
Use CRADIS methodology to rank the alternatives:
  • Step 1. Calculate the weighting matrix as shown in Table 21.
  • Step 2. Apply the Eqs. (38)–(39) to determine the ideal and anti-ideal solutions for each attribute, as shown in Table 21.
  • Step 3: Use Eqs. (40)–(41) to evaluate the deviation of the preference values from the ideal and anti-ideal solutions, and the results are shown in Table 8.
  • Step 4: Apply Eqs. (42)–(43) to compute the utility measures used to estimate the normalized values for each alternative, and the results are shown in Table 8.
  • Step 5: Applying Eq. (44), the ranking values of the alternatives are determined and the results are shown in Table 8.
From Table 8, we can see that the best alternative is \({A_3}\) with the minimum value of RA.
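As a quick cross-check of Eq. (44), the RA values and the ranking reported in Table 8 can be recomputed directly from the utility measures listed there (up to small rounding differences):

```python
# Utility measures U+ (vs. ideal) and U- (vs. anti-ideal) as reported in Table 8.
u_pos = {"A1": 0.987,  "A2": 0.9692, "A3": 0.5554, "A4": 1.0, "A5": 0.5906}
u_neg = {"A1": 0.9902, "A2": 0.9763, "A3": 0.4025, "A4": 1.0, "A5": 0.5564}

# Eq. (44): RA_i = (U_i^+ + U_i^-) / 2; the minimum RA identifies the best path.
ra = {a: (u_pos[a] + u_neg[a]) / 2 for a in u_pos}
ranking = sorted(ra, key=ra.get)  # ascending RA, best alternative first
```

This reproduces the ranking \({A_3} \succ {A_5} \succ {A_2} \succ {A_1} \succ {A_4}\) reported for the proposed method.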
Table 8
Results

            Deviation (I)   Utility \(U_i^ + \)   Deviation (AI)   Utility \(U_i^ - \)   Ranked value \(R{A_i}\)   Ranking
\({A_1}\)   0.0443          0.987                 0.0581           0.9902                0.9886                    4
\({A_2}\)   0.0452          0.9692                0.0572           0.9763                0.9728                    3
\({A_3}\)   0.0788          0.5554                0.0236           0.4025                0.4789                    1
\({A_4}\)   0.0438          1                     0.0586           1                     1                         5
\({A_5}\)   0.0741          0.5906                0.0326           0.5564                0.5735                    2

I: with respect to the ideal solution; AI: with respect to the anti-ideal solution

Table 9
Comparative analysis of sub-group weighting

Methods             \({\lambda _1}\)   \({\lambda _2}\)   \({\lambda _3}\)   Ranking                                                      Optimal solution
Proposed            0.2319             0.3861             0.3820             \({A_3} \succ {A_5} \succ {A_2} \succ {A_1} \succ {A_4}\)   \({A_3}\)
Literature 1 [71]   0.25               0.35               0.4                \({A_5} \succ {A_4} \succ {A_1} \succ {A_3} \succ {A_2}\)   \({A_5}\)
Literature 2 [34]   0.2520             0.4094             0.3385             \({A_5} \succ {A_4} \succ {A_1} \succ {A_3} \succ {A_2}\)   \({A_5}\)

Comparative analysis

In the following, we compare and analyze from four perspectives: sub-group weighting, operators, attribute weighting, and methods.

Comparative analysis of sub-group weighting

In this part, we compare the proposed method for determining sub-group weights with those in Literature 1 [71] and Literature 2 [34]. Wu and Xu [71] assume that all decision-makers are assigned the same importance weight. Li et al. [34] assume that the deviation of a sub-group indicates the level of togetherness (LOT) between expert opinions in the sub-group: the smaller the deviation, the higher the LOT, indicating that the sub-group plays an important role and should be given a larger weight. The weights are also determined by combining the majority principle. Table 9 gives the sub-group weights and the final selection results calculated for the above example.
From the results in Table 9, it can be seen that the weights calculated by the three methods are different. Because the core ideas of the three methods differ, the calculation results obtained also differ. When selecting the method for calculating sub-group weights, the objective situation should therefore be fully taken into account and a suitable method chosen.

Comparative analysis of operators

In this paper, we use the RNWHM operator in the consensus-reaching process and the aggregation process. To illustrate the robustness of the results of this model, we compare it with the results obtained using the R-numbers weighted average (RNWA) operator [36]; the computational and ranking results are given in Table 10. A more intuitive presentation of the results is shown in Fig. 4. Spearman correlation coefficient (SCC) analysis is performed for the two methods, and the results are shown in Fig. 5. The SCC of the results obtained from these two aggregation methods is 0.8, which is highly correlated, and both yield the same optimal solution \({A_3}\). This demonstrates the feasibility of the proposed operator.
Table 10
Results of comparative analysis of operators

            By RNWHM operator        By RNWA operator
            RA        Ranking        RA        Ranking
\({A_1}\)   0.9886    4              0.7973    4
\({A_2}\)   0.9728    3              1         5
\({A_3}\)   0.4789    1              0.5245    1
\({A_4}\)   1         5              0.6397    3
\({A_5}\)   0.5735    2              0.5948    2

Comparative analysis of attribute weighting

Table 11 presents the results of ranking the alternatives using equal attribute weights and the attribute weights calculated by the LOPCOW method. Figure 6 illustrates the visual result of ranking all alternatives under equal weights. Specifically, equal weights are generated by assigning the same value to each of the four attributes, i.e., each attribute is given a weight of 0.25. The attribute weights calculated using the LOPCOW method are \({\psi _1}=0.2373 ,{\psi _2}=0.0948 ,{\psi _3}=0.3221 ,{\psi _4}=0.3459\). The comparison shown in Fig. 6 highlights the difference between the ranking of alternatives calculated using equal weights and that obtained with the attribute weights derived from the LOPCOW method. Therefore, the decision model needs to use the LOPCOW method to determine accurate attribute weights.
Table 11
Results of comparative analysis for attribute weighting

            Average weighting        LOPCOW method
            RA        Ranking        RA        Ranking
\({A_1}\)   1         5              0.9886    4
\({A_2}\)   0.8302    4              0.9728    3
\({A_3}\)   0.5939    1              0.4789    1
\({A_4}\)   0.8237    3              1         5
\({A_5}\)   0.6778    2              0.5735    2

Comparative analysis of methods

This section checks the robustness of the results of the present model by comparing them with other multi-criteria techniques from the literature. The initial results are compared with the results of the R-SAW method [61], the R-TOPSIS method [60], the R-VIKOR method [43], and the R-MARICA method [10]. These multi-criteria techniques are applied to the same case study data. Table 12 shows the ranking results obtained with the above-mentioned multi-criteria methods, and Fig. 7 presents them more intuitively.
On balance, the optimal choices derived using the different multi-attribute techniques are either \({A_3}\) or \({A_4}\), with the exception of the R-SAW method, which yields an optimal choice of \({A_2}\). A detailed analysis follows.
We analyze the correlation between the results of the different techniques and those of the proposed method by SCC; the results are shown in Fig. 8.
  • Different fuzzy simple additive weighting (FSAW) methods were developed earlier [11] for multi-attribute decision-making problems. Seiti and Hafezalkotob extended the FSAW method by proposing the R-SAW method [61]. In this paper, the ranking results in Table 12 are obtained for the case study using the R-SAW method and the proposed method, respectively. As can be seen in Fig. 8, there is a strong correlation between R-SAW and the proposed method.
  • Seiti and Hafezalkotob [60] proposed the R-TOPSIS method. From Table 12 and Fig. 7, it can be seen that the computational results of the proposed method and the R-TOPSIS method are highly consistent: both identify \({A_3}\) as the optimal and \({A_5}\) as the second-best alternative. This is because the two methods share the same ranking idea, i.e., closest to the positive ideal solution and farthest from the negative ideal solution. Since TOPSIS is among the most classical MADM methods, this also illustrates the reliability of the proposed method.
  • Mousavi et al. [43] proposed the R-VIKOR method. The results of applying the R-VIKOR method to the case study are shown in Table 12, and it can be seen from Fig. 8 that the proposed method and the R-VIKOR method have negative correlation coefficients. This is because the R-VIKOR method relies on group utility maximization and individual regret minimization, ideas that are completely different from those of the proposed method, so different ranking results are obtained.
  • The R-MARICA method [10] was proposed by Cheng et al. As can be seen from Table 12 and Fig. 7, its ranking results differ from those of the proposed method, because the core idea of its ranking is closeness to the mean.
Table 12
Results of comparative analysis

Methods                Ranking                                                      Optimal solution
Proposed method        \({A_3} \succ {A_5} \succ {A_2} \succ {A_1} \succ {A_4}\)   \({A_3}\)
R-SAW method [61]      \({A_2} \succ {A_3} \succ {A_5} \succ {A_1} \succ {A_4}\)   \({A_2}\)
R-TOPSIS method [60]   \({A_3} \succ {A_5} \succ {A_4} \succ {A_2} \succ {A_1}\)   \({A_3}\)
R-VIKOR method [43]    \({A_4} \succ {A_2} \succ {A_1} \succ {A_5} \succ {A_3}\)   \({A_4}\)
R-MARICA method [10]   \({A_4} \succ {A_1} \succ {A_3} \succ {A_5} \succ {A_2}\)   \({A_4}\)

Sensitivity analysis

It is possible that a change in the self-defined parameters of the model could lead to a change in outcome preferences. In the case study above, we take the parameter value \(p = 2\) in the RNWHM operator. The value of p is limited by the number of aggregated matrices: in the case study, the number of cluster labels, both in the consensus-reaching and in the aggregation process, is 3, so the maximum value of p is 3. Therefore, the effect of a change in the parameter p on the value of RA and on the ordering of the alternatives is analyzed next, considering \(p = 1, 2, 3\). The preference results for the alternatives are shown in Table 13.
As we can see from Fig. 9, the ordering of the alternatives is not exactly consistent as the parameter p is varied. This is because \(p=1\) and \(p=3\) are special cases:
  • When \(p=1\), each element enters the aggregation singly, so interrelationships among the elements are ignored and the result is a special state.
  • When \(p=3\), then \(C_3^3 = 1\), and the RNWHM operator reduces to a weighted geometric mean operator, again a special state.
Therefore, when using this model, the parameter p should be chosen in conjunction with the number of clustering labels, avoiding such special values, to achieve credible results.
Table 13
Results of sensitivity analysis

            \(p=1\)               \(p=2\)               \(p=3\)
            RA        Ranking     RA        Ranking     RA        Ranking
\({A_1}\)   0.5726    3           0.9886    4           0.6493    4
\({A_2}\)   1         5           0.9728    3           1         5
\({A_3}\)   0.6128    4           0.4789    1           0.6182    3
\({A_4}\)   0.5379    2           1         5           0.5143    1
\({A_5}\)   0.4955    1           0.5735    2           0.549     2

Discussion

In this paper, a new R-numbers-based LSMAGDM method is proposed. The method combines R-numbers theory, uniquely considers the risk psychology of decision-makers in the evaluation process, and validates its feasibility and effectiveness through the example of logistics path selection for Toyota’s hydrogen fuel cells. The comprehensive comparative values of the five paths in the case are then calculated. Based on the evaluation of the four attributes (reliability, cost-effectiveness, environmental impact, and safety risk), it is determined that option \({A_3}\), the supplier-specific cooperation program, is the best logistics path meeting the established criteria. A supplier-specific partnership program involves establishing a close working relationship with one or more reliable suppliers to ensure regular supply and priority transportation. This approach improves supply chain reliability by maintaining a consistent and efficient flow of goods. In terms of cost-effectiveness, supplier-specific partnership programs can offer advantages such as volume purchasing discounts and reduced transportation costs through priority shipping, all of which contribute to the cost-effectiveness of the logistics solution. Considering the environmental impact, focusing on specific suppliers allows for better control and monitoring of carbon emissions and other environmental indicators related to transportation, in line with the goal of achieving sustainable and low-carbon logistics operations. Security risks are also an important consideration in logistics: supplier-specific partnership programs allow for better control and tracking of shipments, reducing the likelihood of theft, damage, or other security breaches during transportation.
Taken together, the Supplier-Specific Partnership Program (Program \({A_3}\)) is the most appropriate logistics pathway, balancing reliability, cost-effectiveness, environmental considerations, and security risks.
In addition, a number of potentially important results arise from the analysis. First, the result of the case calculation, \({A_3}\), is, relatively speaking, the best choice in terms of reliability, cost-effectiveness, environmental impact, and safety risk, whereas the other paths underperform on certain criteria: \({A_1}\) performs relatively poorly in terms of cost-effectiveness and environmental impact, \({A_2}\) in terms of reliability and safety risk, \({A_4}\) in terms of cost-effectiveness and safety risk, and \({A_5}\) in terms of environmental impact. Integrating factors such as reliability, cost-effectiveness, environmental impact, and safety risk is key to selecting a hydrogen fuel cell logistics path; this process helps ensure that the logistics transportation of hydrogen fuel cells is reliable, low-cost, environmentally sound, and safe. As technology changes, reliability, cost-effectiveness, environmental impacts, and safety risks will continue to pose challenges for selecting and implementing the most feasible logistics path for hydrogen fuel cells. Finally, while the decision-making framework presented in this study was specifically designed for selecting a hydrogen fuel cell logistics path, its generalizability means that it can also be extended to other high-technology product logistics systems, such as electronic product logistics systems, new energy product logistics systems, and life science product logistics systems.

Conclusion

In this study, we presented a novel LSMAGDM model, termed R-LSMAGDM, to tackle decision-making challenges in risky environments. Our approach extends the scope of MAGDM methods by handling scenarios with a greater number of decision-makers, specifically equal to or exceeding 20. By utilizing R-numbers, our model effectively incorporates expert evaluation information that captures risk factors and enhances the accuracy of decision-making outcomes.
To determine sub-group weights, we proposed a new model based on entropy and majority principles, ensuring a more robust and representative decision-making process. Additionally, we optimized the MCC model by introducing the RNWHM operator, which outperforms the traditional arithmetic average aggregation method in aggregating R-numbers information. Through the construction and characterization of the RNWHM operator, we established an innovative approach to aggregating R-numbers information, contributing to more reliable decision outcomes. Attribute weights were determined using the LOPCOW method, and the CRADIS method prioritized relationships between alternatives, leading to the final preference results. As a case study, we applied the R-LSMAGDM model to the problem of hydrogen fuel cell logistics path selection, demonstrating its practical efficiency.
However, our study has some limitations. The evaluation matrix in the R-numbers form requires specialized expertise, highlighting the need for more resources in this regard. Furthermore, our focus on risk assessment in hydrogen fuel cell logistics paths calls for further extension across various application domains.
In future investigations, several prospective research pathways should be considered to improve the adaptability of the R-LSMAGDM technique and to extend its application to alternative models of information representation. First and foremost, the R-LSMAGDM algorithm can be further enhanced and optimized to improve its efficiency and accuracy. Second, the applicability of the R-LSMAGDM approach in dynamic contexts can be investigated, to effectively address decision-making scenarios that change over time. Third, the use of R-LSMAGDM can be expanded to sectors such as healthcare, environmental protection, and transport, broadening its practical applicability. Moreover, a promising area of research is the incorporation of machine learning methods into R-LSMAGDM models, with the aim of enhancing their predictive capacities and their ability to handle intricate decision-making scenarios. Further investigation and progress in these domains will contribute to the advancement and practical implementation of R-LSMAGDM methodologies.
Overall, the R-LSMAGDM model holds promise in addressing the challenges presented by large-scale decision-making problems in risky environments, and future advancements have the potential to enhance its effectiveness and broaden its applications.

Declarations

Conflict of interest

All the authors declare that they do not have any conflict of interest.

Ethical approval

This article does not contain any studies with human participants or animals performed by any of the authors.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Appendices

Appendix A

Proof
(1)
Clearly, we can get \(R{\left( {{{{{\tilde{B}}}}_i}} \right) ^ - } \le R\left( {{{{{\tilde{B}}}}_i}} \right) \le R{\left( {{{{{\tilde{B}}}}_i}} \right) ^ + }.\) Thus, based on Theorems 1 and 2, we have
$$\begin{aligned}{} & {} RNWH{M^{\left( p \right) }}\left( {R\left( {{{{{\tilde{B}}}}_1}} \right) ,R\left( {{{{{\tilde{B}}}}_2}} \right) ,\ldots ,R\left( {{{{{\tilde{B}}}}_m}} \right) } \right) \\{} & {} \quad = R\left( {{{{{\tilde{B}}}}_i}} \right) \\{} & {} \quad \ge RNWH{M^{\left( p \right) }}\left( {R{{\left( {{{{{\tilde{B}}}}_i}} \right) }^ - },R{{\left( {{{{{\tilde{B}}}}_i}} \right) }^ - },\ldots ,R{{\left( {{{{{\tilde{B}}}}_i}} \right) }^ - }} \right) \\{} & {} \quad = R{\left( {{{{{\tilde{B}}}}_i}} \right) ^ - },\\{} & {} RNWH{M^{\left( p \right) }}\left( {R\left( {{{{{\tilde{B}}}}_1}} \right) ,R\left( {{{{{\tilde{B}}}}_2}} \right) ,\ldots ,R\left( {{{{{\tilde{B}}}}_m}} \right) } \right) \\{} & {} \quad = R\left( {{{{{\tilde{B}}}}_i}} \right) \\{} & {} \quad \le RNWH{M^{\left( p \right) }}\left( {R{{\left( {{{{{\tilde{B}}}}_i}} \right) }^ + },R{{\left( {{{{{\tilde{B}}}}_i}} \right) }^ + },\ldots ,R{{\left( {{{{{\tilde{B}}}}_i}} \right) }^ + }} \right) \\{} & {} \quad = R{\left( {{{{{\tilde{B}}}}_i}} \right) ^ + }. \end{aligned}$$
 
(2)
This property follows directly.
 
(3)
This property follows directly.
 
(4)
Based on Definition 8, we have
$$\begin{aligned} \begin{aligned}&RNWH{M^{\left( p \right) }}\left( {R\left( {{{{{\tilde{B}}}'}_1}} \right) ,R\left( {{{{{\tilde{B}}}'}_2}} \right) ,\ldots ,R\left( {{{{{\tilde{B}}}'}_m}} \right) } \right) \\&\quad = \frac{{{ \oplus _{1 \le {i_1}< \cdots< {i_p} \le m}}{{\left( { \otimes _{j = 1}^p{{\left( {R{{\left( {{{{{\tilde{B}}}'}_i}} \right) }_j}} \right) }^{{w_{{i_j}}}}}} \right) }^{{1 / p}}}}}{{C_m^p}}\\&\quad = \frac{{{ \oplus _{1 \le {i_1}< \cdots < {i_p} \le m}}{{\left( { \otimes _{j = 1}^p{{\left( {R{{\left( {{{{{\tilde{B}}}}_i}} \right) }_j}} \right) }^{{w_{{i_j}}}}}} \right) }^{{1 / p}}}}}{{C_m^p}}\\&\quad = RNWH{M^{\left( p \right) }}\left( {R\left( {{{{{\tilde{B}}}}_1}} \right) ,R\left( {{{{{\tilde{B}}}}_2}} \right) ,\ldots ,R\left( {{{{{\tilde{B}}}}_m}} \right) } \right) . \end{aligned} \end{aligned}$$
 
\(\square \)
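The aggregation used throughout the proof can be illustrated numerically. Below is a minimal crisp-number sketch of the weighted Hamy mean underlying the RNWHM operator of Definition 8: for parameter \(p\), it averages, over all \(p\)-element index subsets, the \(p\)-th root of the product of the weighted (exponentiated) arguments. The paper's operator acts on R-numbers rather than crisp values; the function name and weights here are illustrative only.

```python
from itertools import combinations
from math import comb

def weighted_hamy_mean(values, weights, p):
    """Crisp-number sketch of the weighted Hamy mean:
    (1 / C(m, p)) * sum over 1 <= i_1 < ... < i_p <= m of
    (prod_j values[i_j] ** weights[i_j]) ** (1 / p)."""
    m = len(values)
    total = 0.0
    for idx in combinations(range(m), p):
        prod = 1.0
        for i in idx:
            prod *= values[i] ** weights[i]
        total += prod ** (1.0 / p)
    return total / comb(m, p)
```

With unit weights the operator is idempotent (all-equal inputs are returned unchanged) and bounded between the smallest and largest input, mirroring properties (1)–(3) of the proof above.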

Appendix B

See Tables 14, 15, 16, 17, 18, 19, 20, 21.
Table 14
Initial matrix of the case study. For each alternative, the three linguistic evaluations per criterion are grouped in the order \(C_1\) | \(C_2\) | \(C_3\) | \(C_4\)
\(Expert1,\; A{R^ + } = A{R^ - } = 0,R{P^ - } = R{P^ + } = Pessimist\left( VL \right) \)
\({A_1}\): H, M, L | H, M, VL | M, VH, VH | VL, VH, VH
\({A_2}\): VH, VH, H | VH, VH, M | VL, VH, H | VL, VH, H
\({A_3}\): L, H, VL | VL, VH, VH | L, H, M | VH, H, M
\({A_4}\): L, VH, H | M, M, L | VH, VH, VH | L, VH, VH
\({A_5}\): M, H, L | VL, VH, VH | M, VH, H | L, VH, H
 
\(Expert2,\;A{R^ + } = A{R^ - } = 0,R{P^ - } = R{P^ + } = Pessimist\left( M \right) \)
\({A_1}\): VH, VH, H | VH, VH, H | H, M, VL | VH, VH, H
\({A_2}\): H, M, L | M, M, L | H, VH, M | H, M, L
\({A_3}\): L, VH, H | H, VH, H | L, VH, VH | L, VH, H
\({A_4}\): VH, H, VL | VH, H, VL | H, M, L | VH, H, VL
\({A_5}\): L, VH, H | VH, VH, H | M, VH, VH | H, VH, H
 
\(Expert3,\; A{R^ + } = A{R^ - } = 0,R{P^ - } = R{P^ + } = Pessimist\left( M \right) \)
\({A_1}\): VL, M, VL | L, M, L | VL, M, L | VL, M, L
\({A_2}\): VL, VH, M | VH, VH, H | L, VH, H | H, VH, H
\({A_3}\): VH, VH, VH | M, H, VL | L, H, VL | H, H, VL
\({A_4}\): VH, M, L | VH, VH, H | H, VH, H | VH, VH, H
\({A_5}\): VL, VH, VH | L, H, L | VL, H, L | VL, H, L
 
\(Expert4,\; A{R^ + } = A{R^ - } = 0,R{P^ - } = R{P^ + } = Pessimist\left( L \right) \)
\({A_1}\): H, M, VL | H, M, VL | M, VH, H | VL, M, VL
\({A_2}\): VH, VH, M | VH, VH, M | M, M, L | M, VH, M
\({A_3}\): L, VH, VH | M, VH, VH | VL, VH, H | VH, VH, VH
\({A_4}\): L, M, L | M, M, L | VL, H, VL | L, M, L
\({A_5}\): M, VH, VH | VL, VH, VH | M, VH, H | VH, VH, VH
 
\(Expert5, \;A{R^ + } = A{R^ - } = 0,R{P^ - } = R{P^ + } = Pessimist\left( H \right) \)
\({A_1}\): L, VH, H | VH, L, M | H, L, M | M, L, M
\({A_2}\): L, M, L | VH, VH, M | H, VH, M | H, VH, M
\({A_3}\): L, VH, H | M, M, L | L, M, L | L, M, L
\({A_4}\): VL, H, VL | VH, VH, H | M, VH, H | VH, VH, H
\({A_5}\): M, VH, H | M, VH, H | VL, VH, H | H, VH, H
 
\(Expert6,\; A{R^ + } = A{R^ - } = 0,R{P^ - } = R{P^ + } = Pessimist\left( VH \right) \)
\({A_1}\): VH, VH, VH | VH, VH, VH | M, VH, VH | VH, VH, VH
\({A_2}\): H, VH, H | H, VH, H | VL, VH, H | H, VH, H
\({A_3}\): M, H, M | M, H, M | L, H, M | M, H, M
\({A_4}\): VH, VH, VH | VH, VH, VH | VH, VH, VH | VH, VH, VH
\({A_5}\): H, L, VL | L, L, VL | VL, L, VL | VL, L, VL
 
\(Expert7,\; A{R^ + } = A{R^ - } = 0,R{P^ - } = R{P^ + } = Pessimist\left( M \right) \)
\({A_1}\): L, VH, H | H, M, L | M, VH, H | VL, VH, H
\({A_2}\): L, VH, M | M, VH, H | VL, VH, M | M, VH, M
\({A_3}\): VH, L, M | VL, H, VL | VL, L, M | VL, L, M
\({A_4}\): H, VH, VH | M, VH, H | VL, VH, VH | VL, VH, VH
\({A_5}\): H, VH, H | VL, H, L | M, VH, H | L, VH, H
 
\(Expert8,\; A{R^ + } = A{R^ - } = 0,R{P^ - } = R{P^ + } = Pessimist\left( M \right) \)
\({A_1}\): M, VH, H | VH, M, L | H, VH, H | VH, VH, H
\({A_2}\): VH, VH, M | M, VH, H | L, VH, M | VL, VH, M
\({A_3}\): VH, M, L | M, H, VL | H, M, L | L, M, L
\({A_4}\): L, VH, H | VH, VH, H | VL, VH, H | M, VH, H
\({A_5}\): VH, H, M | VH, H, L | H, H, M | H, H, M
 
\(Expert9,\; A{R^ + } = A{R^ - } = 0,R{P^ - } = R{P^ + } = Pessimist\left( L \right) \)
\({A_1}\): M, M, VL | L, M, VL | VL, M, VL | VL, M, VL
\({A_2}\): L, VH, VL | M, VH, VL | L, VH, VL | H, VH, VL
\({A_3}\): VH, M, L | VL, M, L | VL, M, L | VL, M, L
\({A_4}\): L, VH, H | VH, VH, H | H, VH, H | VH, VH, H
\({A_5}\): L, M, VL | L, M, VL | VL, M, VL | VL, M, VL
 
\(Expert10,\; A{R^ + } = A{R^ - } = 0,R{P^ - } = R{P^ + } = Pessimist\left( L \right) \)
\({A_1}\): M, VH, L | H, M, VL | M, M, L | VL, H, M
\({A_2}\): VH, VH, M | VL, VH, VL | VL, VH, H | VL, VH, VL
\({A_3}\): VH, H, M | VL, M, L | M, H, VL | M, H, M
\({A_4}\): L, VH, M | VL, VH, H | VL, VH, H | VL, M, VL
\({A_5}\): VH, VH, H | VL, M, VL | M, H, L | L, H, M
 
\(Expert11,\; A{R^ + } = A{R^ - } = 0,R{P^ - } = R{P^ + } = Pessimist\left( M \right) \)
\({A_1}\): M, H, M | VH, VH, M | H, VH, M | M, VH, M
\({A_2}\): VH, VH, VL | M, VH, H | H, VH, H | H, VH, H
\({A_3}\): VL, VH, VL | H, H, M | M, H, M | L, H, M
\({A_4}\): VL, M, VL | VH, VH, VH | H, VH, VH | VH, VH, VH
\({A_5}\): M, VH, M | M, VH, H | M, VH, H | H, VH, H
 
\(Expert12, \;A{R^ + } = A{R^ - } = 0,R{P^ - } = R{P^ + } = Pessimist\left( H \right) \)
\({A_1}\): M, H, M | L, VH, L | VL, VH, L | VL, VH, L
\({A_2}\): VH, VH, VL | VL, VH, M | L, VH, M | H, VH, M
\({A_3}\): M, VH, VL | M, H, M | L, H, M | H, H, M
\({A_4}\): VL, M, VL | VH, VH, M | H, VH, M | VH, VH, M
\({A_5}\): M, VH, M | L, VH, H | VL, VH, H | VL, VH, H
 
\(Expert13,\; A{R^ + } = A{R^ - } = 0,R{P^ - } = R{P^ + } = Pessimist\left( M \right) \)
\({A_1}\): H, VH, VL | M, H, M | VL, VH, L | VL, VH, VL
\({A_2}\): H, VH, H | M, VH, VL | M, VH, M | M, VH, H
\({A_3}\): L, H, M | VL, VH, VL | L, H, M | VL, H, M
\({A_4}\): VL, H, M | M, M, VL | VL, VH, M | L, H, M
\({A_5}\): VL, VH, H | VL, VH, M | M, VH, H | L, VH, H
 
\(Expert14,\; A{R^ + } = A{R^ - } = 0,R{P^ - } = R{P^ + } = Pessimist\left( VL \right) \)
\({A_1}\): M, VH, VL | VH, M, L | H, VH, VH | VH, VH, VH
\({A_2}\): VH, VH, H | M, VH, H | H, VH, H | M, VH, H
\({A_3}\): VL, H, M | H, H, VL | L, H, M | L, H, M
\({A_4}\): L, H, M | VH, VH, H | M, VH, VH | VH, VH, VH
\({A_5}\): VL, VH, H | VH, H, L | M, VH, H | H, VH, H
 
\(Expert15,\; A{R^ + } = A{R^ - } = 0,R{P^ - } = R{P^ + } = Pessimist\left( L \right) \)
\({A_1}\): M, H, M | L, VH, VL | VL, VH, VL | VL, H, M
\({A_2}\): M, VH, L | H, VH, H | L, VH, H | H, VH, L
\({A_3}\): L, H, M | VL, H, M | M, H, M | M, H, M
\({A_4}\): L, H, L | M, H, M | H, H, M | VH, H, L
\({A_5}\): M, L, VL | L, VH, H | VL, VH, H | VL, L, VL
 
\(Expert16,\; A{R^ + } = A{R^ - } = 0,R{P^ - } = R{P^ + } = Pessimist\left( M \right) \)
\({A_1}\): VH, M, VL | M, H, M | M, VH, VL | VL, M, VL
\({A_2}\): VH, VH, VL | VL, VH, L | VL, VH, H | M, VH, VL
\({A_3}\): VL, VH, H | VL, H, M | VL, H, M | VL, VH, H
\({A_4}\): VL, VH, M | M, H, L | VL, H, M | L, VH, M
\({A_5}\): M, L, VL | VL, L, VL | M, VH, H | VL, L, VL
 
\(Expert17,\; A{R^ + } = A{R^ - } = 0,R{P^ - } = R{P^ + } = Pessimist\left( H \right) \)
\({A_1}\): VH, VH, VH | VH, VH, VL | VL, VH, VH | VH, VH, VH
\({A_2}\): VH, VH, H | M, VH, H | H, VH, H | H, VH, H
\({A_3}\): VL, H, M | VL, H, M | L, H, M | L, H, M
\({A_4}\): L, VH, L | VH, H, M | H, VH, L | VL, VH, L
\({A_5}\): L, VH, H | VH, VH, H | M, VH, H | H, VH, H
 
\(Expert18,\; A{R^ + } = A{R^ - } = 0,R{P^ - } = R{P^ + } = Pessimist\left( M \right) \)
\({A_1}\): H, M, VL | L, H, M | VL, H, M | VL, M, VL
\({A_2}\): H, L, VL | H, VH, VL | L, VH, VL | H, L, VL
\({A_3}\): L, VH, H | M, VH, VL | L, VH, VL | H, VH, H
\({A_4}\): M, H, M | VH, M, VL | VL, M, VL | VH, H, M
\({A_5}\): M, L, VL | L, VH, M | VL, VH, M | VL, L, VL
 
\(Expert19,\; A{R^ + } = A{R^ - } = 0,R{P^ - } = R{P^ + } = Pessimist\left( M \right) \)
\({A_1}\): H, H, M | M, H, M | M, VH, VL | VL, H, M
\({A_2}\): VH, VH, VL | M, VH, VL | VL, VH, H | VL, VH, VL
\({A_3}\): M, VH, VL | VL, VH, VL | VL, H, M | VL, VH, VL
\({A_4}\): L, M, VL | M, M, VL | VL, H, M | L, M, VL
\({A_5}\): M, VH, M | VL, VH, M | M, VH, H | L, VH, M
 
\(Expert20,\; A{R^ + } = A{R^ - } = 0,R{P^ - } = R{P^ + } = Pessimist\left( L \right) \)
\({A_1}\): L, VH, VL | H, H, M | M, M, VL | M, VH, H
\({A_2}\): VH, VH, H | M, VH, VL | VL, L, VL | H, H, M
\({A_3}\): M, H, M | H, VH, VL | L, VH, H | L, VH, H
\({A_4}\): M, H, M | VL, M, VL | H, H, M | VH, M, VL
\({A_5}\): M, VH, H | M, VH, M | M, L, VL | H, VH, M
Table 15
Triangular fuzzy number matrix. For each alternative, the three triangular fuzzy numbers per criterion are grouped in the order \(C_1\) | \(C_2\) | \(C_3\) | \(C_4\)
\(Expert1,\;{AR^ + }={AR^ - }=0, {RP^ - }=(-0.3, 0, 0), {RP^ + }=(0, 0, 0.3)\)
\({A_1}\): (0.5, 0.7, 0.9), (0.3, 0.5, 0.7), (0.1, 0.3, 0.5) | (0.5, 0.7, 0.9), (0.3, 0.5, 0.7), (0, 0, 0.3) | (0.3, 0.5, 0.7), (0.7, 0.99, 0.99), (0.7, 0.99, 0.99) | (0, 0, 0.3), (0.7, 0.99, 0.99), (0.7, 0.99, 0.99)
\({A_2}\): (0.7, 0.99, 0.99), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0.7, 0.99, 0.99), (0.7, 0.99, 0.99), (0.3, 0.5, 0.7) | (0, 0, 0.3), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0, 0, 0.3), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9)
\({A_3}\): (0.1, 0.3, 0.5), (0.5, 0.7, 0.9), (0, 0, 0.3) | (0, 0, 0.3), (0.7, 0.99, 0.99), (0.7, 0.99, 0.99) | (0.1, 0.3, 0.5), (0.5, 0.7, 0.9), (0.3, 0.5, 0.7) | (0.7, 0.99, 0.99), (0.5, 0.7, 0.9), (0.3, 0.5, 0.7)
\({A_4}\): (0.1, 0.3, 0.5), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0.3, 0.5, 0.7), (0.3, 0.5, 0.7), (0.1, 0.3, 0.5) | (0.7, 0.99, 0.99), (0.7, 0.99, 0.99), (0.7, 0.99, 0.99) | (0.1, 0.3, 0.5), (0.7, 0.99, 0.99), (0.7, 0.99, 0.99)
\({A_5}\): (0.3, 0.5, 0.7), (0.5, 0.7, 0.9), (0.1, 0.3, 0.5) | (0, 0, 0.3), (0.7, 0.99, 0.99), (0.7, 0.99, 0.99) | (0.3, 0.5, 0.7), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0.1, 0.3, 0.5), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9)
\(Expert2,\;{AR^ + }={AR^ - }=0, {RP^ - }=(-0.7, -0.5, -0.3), {RP^ + }=(0.3, 0.5, 0.7)\)
\({A_1}\): (0.7, 0.99, 0.99), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0.7, 0.99, 0.99), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0.5, 0.7, 0.9), (0.3, 0.5, 0.7), (0, 0, 0.3) | (0.7, 0.99, 0.99), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9)
\({A_2}\): (0.5, 0.7, 0.9), (0.3, 0.5, 0.7), (0.1, 0.3, 0.5) | (0.3, 0.5, 0.7), (0.3, 0.5, 0.7), (0.1, 0.3, 0.5) | (0.5, 0.7, 0.9), (0.7, 0.99, 0.99), (0.3, 0.5, 0.7) | (0.5, 0.7, 0.9), (0.3, 0.5, 0.7), (0.1, 0.3, 0.5)
\({A_3}\): (0.1, 0.3, 0.5), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0.5, 0.7, 0.9), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0.1, 0.3, 0.5), (0.7, 0.99, 0.99), (0.7, 0.99, 0.99) | (0.1, 0.3, 0.5), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9)
\({A_4}\): (0.7, 0.99, 0.99), (0.5, 0.7, 0.9), (0, 0, 0.3) | (0.7, 0.99, 0.99), (0.5, 0.7, 0.9), (0, 0, 0.3) | (0.5, 0.7, 0.9), (0.3, 0.5, 0.7), (0.1, 0.3, 0.5) | (0.7, 0.99, 0.99), (0.5, 0.7, 0.9), (0, 0, 0.3)
\({A_5}\): (0.1, 0.3, 0.5), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0.7, 0.99, 0.99), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0.3, 0.5, 0.7), (0.7, 0.99, 0.99), (0.7, 0.99, 0.99) | (0.5, 0.7, 0.9), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9)
\(Expert3,\;{AR^ + }={AR^ - }=0, {RP^ - }=(-0.7, -0.5, -0.3), {RP^ + }=(0.3, 0.5, 0.7)\)
\({A_1}\): (0.7, 0.99, 0.99), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0.7, 0.99, 0.99), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0.5, 0.7, 0.9), (0.3, 0.5, 0.7), (0, 0, 0.3) | (0.7, 0.99, 0.99), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9)
\({A_2}\): (0.5, 0.7, 0.9), (0.3, 0.5, 0.7), (0.1, 0.3, 0.5) | (0.3, 0.5, 0.7), (0.3, 0.5, 0.7), (0.1, 0.3, 0.5) | (0.5, 0.7, 0.9), (0.7, 0.99, 0.99), (0.3, 0.5, 0.7) | (0.5, 0.7, 0.9), (0.3, 0.5, 0.7), (0.1, 0.3, 0.5)
\({A_3}\): (0.1, 0.3, 0.5), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0.5, 0.7, 0.9), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0.1, 0.3, 0.5), (0.7, 0.99, 0.99), (0.7, 0.99, 0.99) | (0.1, 0.3, 0.5), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9)
\({A_4}\): (0.7, 0.99, 0.99), (0.5, 0.7, 0.9), (0, 0, 0.3) | (0.7, 0.99, 0.99), (0.5, 0.7, 0.9), (0, 0, 0.3) | (0.5, 0.7, 0.9), (0.3, 0.5, 0.7), (0.1, 0.3, 0.5) | (0.7, 0.99, 0.99), (0.5, 0.7, 0.9), (0, 0, 0.3)
\({A_5}\): (0.1, 0.3, 0.5), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0.7, 0.99, 0.99), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0.3, 0.5, 0.7), (0.7, 0.99, 0.99), (0.7, 0.99, 0.99) | (0.5, 0.7, 0.9), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9)
\(Expert4,\;{AR^ + }={AR^ - }=0, {RP^ - }=(-0.5, -0.3, -0.1), {RP^ + }=(0.1, 0.3, 0.5)\)
\({A_1}\): (0.5, 0.7, 0.9), (0.3, 0.5, 0.7), (0, 0, 0.3) | (0.5, 0.7, 0.9), (0.3, 0.5, 0.7), (0, 0, 0.3) | (0.3, 0.5, 0.7), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0, 0, 0.3), (0.3, 0.5, 0.7), (0, 0, 0.3)
\({A_2}\): (0.7, 0.99, 0.99), (0.7, 0.99, 0.99), (0.3, 0.5, 0.7) | (0.7, 0.99, 0.99), (0.7, 0.99, 0.99), (0.3, 0.5, 0.7) | (0.3, 0.5, 0.7), (0.3, 0.5, 0.7), (0.1, 0.3, 0.5) | (0.3, 0.5, 0.7), (0.7, 0.99, 0.99), (0.3, 0.5, 0.7)
\({A_3}\): (0.1, 0.3, 0.5), (0.7, 0.99, 0.99), (0.7, 0.99, 0.99) | (0.3, 0.5, 0.7), (0.7, 0.99, 0.99), (0.7, 0.99, 0.99) | (0, 0, 0.3), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0.7, 0.99, 0.99), (0.7, 0.99, 0.99), (0.7, 0.99, 0.99)
\({A_4}\): (0.1, 0.3, 0.5), (0.3, 0.5, 0.7), (0.1, 0.3, 0.5) | (0.3, 0.5, 0.7), (0.3, 0.5, 0.7), (0.1, 0.3, 0.5) | (0, 0, 0.3), (0.5, 0.7, 0.9), (0, 0, 0.3) | (0.1, 0.3, 0.5), (0.3, 0.5, 0.7), (0.1, 0.3, 0.5)
\({A_5}\): (0.3, 0.5, 0.7), (0.7, 0.99, 0.99), (0.7, 0.99, 0.99) | (0, 0, 0.3), (0.7, 0.99, 0.99), (0.7, 0.99, 0.99) | (0.3, 0.5, 0.7), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0.7, 0.99, 0.99), (0.7, 0.99, 0.99), (0.7, 0.99, 0.99)
\(Expert5,\; {AR^ + }={AR^ - }=0, {RP^ - }=(-0.9, -0.7, -0.5), {RP^ + }=(0.5, 0.7, 0.9)\)
\({A_1}\): (0.5, 0.7, 0.9), (0.3, 0.5, 0.7), (0, 0, 0.3) | (0.5, 0.7, 0.9), (0.3, 0.5, 0.7), (0, 0, 0.3) | (0.3, 0.5, 0.7), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0, 0, 0.3), (0.3, 0.5, 0.7), (0, 0, 0.3)
\({A_2}\): (0.7, 0.99, 0.99), (0.7, 0.99, 0.99), (0.3, 0.5, 0.7) | (0.7, 0.99, 0.99), (0.7, 0.99, 0.99), (0.3, 0.5, 0.7) | (0.3, 0.5, 0.7), (0.3, 0.5, 0.7), (0.1, 0.3, 0.5) | (0.3, 0.5, 0.7), (0.7, 0.99, 0.99), (0.3, 0.5, 0.7)
\({A_3}\): (0.1, 0.3, 0.5), (0.7, 0.99, 0.99), (0.7, 0.99, 0.99) | (0.3, 0.5, 0.7), (0.7, 0.99, 0.99), (0.7, 0.99, 0.99) | (0, 0, 0.3), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0.7, 0.99, 0.99), (0.7, 0.99, 0.99), (0.7, 0.99, 0.99)
\({A_4}\): (0.1, 0.3, 0.5), (0.3, 0.5, 0.7), (0.1, 0.3, 0.5) | (0.3, 0.5, 0.7), (0.3, 0.5, 0.7), (0.1, 0.3, 0.5) | (0, 0, 0.3), (0.5, 0.7, 0.9), (0, 0, 0.3) | (0.1, 0.3, 0.5), (0.3, 0.5, 0.7), (0.1, 0.3, 0.5)
\({A_5}\): (0.3, 0.5, 0.7), (0.7, 0.99, 0.99), (0.7, 0.99, 0.99) | (0, 0, 0.3), (0.7, 0.99, 0.99), (0.7, 0.99, 0.99) | (0.3, 0.5, 0.7), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0.7, 0.99, 0.99), (0.7, 0.99, 0.99), (0.7, 0.99, 0.99)
\(Expert6,\; {AR^ + }={AR^ - }=0, {RP^ - }=(-0.99, -0.99, -0.7), {RP^ + }=(0.7, 0.99, 0.99)\)
\({A_1}\): (0.7, 0.99, 0.99), (0.7, 0.99, 0.99), (0.7, 0.99, 0.99) | (0.7, 0.99, 0.99), (0.7, 0.99, 0.99), (0.7, 0.99, 0.99) | (0.3, 0.5, 0.7), (0.7, 0.99, 0.99), (0.7, 0.99, 0.99) | (0.7, 0.99, 0.99), (0.7, 0.99, 0.99), (0.7, 0.99, 0.99)
\({A_2}\): (0.5, 0.7, 0.9), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0.5, 0.7, 0.9), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0, 0, 0.3), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0.5, 0.7, 0.9), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9)
\({A_3}\): (0.3, 0.5, 0.7), (0.5, 0.7, 0.9), (0.3, 0.5, 0.7) | (0.3, 0.5, 0.7), (0.5, 0.7, 0.9), (0.3, 0.5, 0.7) | (0.1, 0.3, 0.5), (0.5, 0.7, 0.9), (0.3, 0.5, 0.7) | (0.3, 0.5, 0.7), (0.5, 0.7, 0.9), (0.3, 0.5, 0.7)
\({A_4}\): (0.7, 0.99, 0.99), (0.7, 0.99, 0.99), (0.7, 0.99, 0.99) | (0.7, 0.99, 0.99), (0.7, 0.99, 0.99), (0.7, 0.99, 0.99) | (0.7, 0.99, 0.99), (0.7, 0.99, 0.99), (0.7, 0.99, 0.99) | (0.7, 0.99, 0.99), (0.7, 0.99, 0.99), (0.7, 0.99, 0.99)
\({A_5}\): (0.5, 0.7, 0.9), (0.1, 0.3, 0.5), (0, 0, 0.3) | (0.1, 0.3, 0.5), (0.1, 0.3, 0.5), (0, 0, 0.3) | (0, 0, 0.3), (0.1, 0.3, 0.5), (0, 0, 0.3) | (0, 0, 0.3), (0.1, 0.3, 0.5), (0, 0, 0.3)
\(Expert7,\;{AR^ + }={AR^ - }=0, {RP^ - }=(-0.7, -0.5, -0.3), {RP^ + }=(0.3, 0.5, 0.7)\)
\({A_1}\): (0.1, 0.3, 0.5), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0.5, 0.7, 0.9), (0.3, 0.5, 0.7), (0.1, 0.3, 0.5) | (0.3, 0.5, 0.7), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0, 0, 0.3), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9)
\({A_2}\): (0.1, 0.3, 0.5), (0.7, 0.99, 0.99), (0.3, 0.5, 0.7) | (0.3, 0.5, 0.7), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0, 0, 0.3), (0.7, 0.99, 0.99), (0.3, 0.5, 0.7) | (0.3, 0.5, 0.7), (0.7, 0.99, 0.99), (0.3, 0.5, 0.7)
\({A_3}\): (0.7, 0.99, 0.99), (0.1, 0.3, 0.5), (0.3, 0.5, 0.7) | (0, 0, 0.3), (0.5, 0.7, 0.9), (0, 0, 0.3) | (0, 0, 0.3), (0.1, 0.3, 0.5), (0.3, 0.5, 0.7) | (0, 0, 0.3), (0.1, 0.3, 0.5), (0.3, 0.5, 0.7)
\({A_4}\): (0.5, 0.7, 0.9), (0.7, 0.99, 0.99), (0.7, 0.99, 0.99) | (0.3, 0.5, 0.7), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0, 0, 0.3), (0.7, 0.99, 0.99), (0.7, 0.99, 0.99) | (0, 0, 0.3), (0.7, 0.99, 0.99), (0.7, 0.99, 0.99)
\({A_5}\): (0.5, 0.7, 0.9), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0, 0, 0.3), (0.5, 0.7, 0.9), (0.1, 0.3, 0.5) | (0.3, 0.5, 0.7), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0.1, 0.3, 0.5), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9)
\(Expert8,\;{AR^ + }={AR^ - }=0, {RP^ - }=(-0.7, -0.5, -0.3), {RP^ + }=(0.3, 0.5, 0.7)\)
\({A_1}\): (0.3, 0.5, 0.7), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0.7, 0.99, 0.99), (0.3, 0.5, 0.7), (0.1, 0.3, 0.5) | (0.5, 0.7, 0.9), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0.7, 0.99, 0.99), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9)
\({A_2}\): (0.7, 0.99, 0.99), (0.7, 0.99, 0.99), (0.3, 0.5, 0.7) | (0.3, 0.5, 0.7), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0.1, 0.3, 0.5), (0.7, 0.99, 0.99), (0.3, 0.5, 0.7) | (0, 0, 0.3), (0.7, 0.99, 0.99), (0.3, 0.5, 0.7)
\({A_3}\): (0.7, 0.99, 0.99), (0.3, 0.5, 0.7), (0.1, 0.3, 0.5) | (0.3, 0.5, 0.7), (0.5, 0.7, 0.9), (0, 0, 0.3) | (0.5, 0.7, 0.9), (0.3, 0.5, 0.7), (0.1, 0.3, 0.5) | (0.1, 0.3, 0.5), (0.3, 0.5, 0.7), (0.1, 0.3, 0.5)
\({A_4}\): (0.1, 0.3, 0.5), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0.7, 0.99, 0.99), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0, 0, 0.3), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0.3, 0.5, 0.7), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9)
\({A_5}\): (0.7, 0.99, 0.99), (0.5, 0.7, 0.9), (0.3, 0.5, 0.7) | (0.7, 0.99, 0.99), (0.5, 0.7, 0.9), (0.1, 0.3, 0.5) | (0.5, 0.7, 0.9), (0.5, 0.7, 0.9), (0.3, 0.5, 0.7) | (0.5, 0.7, 0.9), (0.5, 0.7, 0.9), (0.3, 0.5, 0.7)
\(Expert9,\; {AR^ + }={AR^ - }=0, {RP^ - }=(-0.5, -0.3, -0.1), {RP^ + }=(0.1, 0.3, 0.5)\)
\({A_1}\): (0.3, 0.5, 0.7), (0.3, 0.5, 0.7), (0, 0, 0.3) | (0.1, 0.3, 0.5), (0.3, 0.5, 0.7), (0, 0, 0.3) | (0, 0, 0.3), (0.3, 0.5, 0.7), (0, 0, 0.3) | (0, 0, 0.3), (0.3, 0.5, 0.7), (0, 0, 0.3)
\({A_2}\): (0.1, 0.3, 0.5), (0.7, 0.99, 0.99), (0, 0, 0.3) | (0.3, 0.5, 0.7), (0.7, 0.99, 0.99), (0, 0, 0.3) | (0.1, 0.3, 0.5), (0.7, 0.99, 0.99), (0, 0, 0.3) | (0.5, 0.7, 0.9), (0.7, 0.99, 0.99), (0, 0, 0.3)
\({A_3}\): (0.7, 0.99, 0.99), (0.3, 0.5, 0.7), (0.1, 0.3, 0.5) | (0, 0, 0.3), (0.3, 0.5, 0.7), (0.1, 0.3, 0.5) | (0, 0, 0.3), (0.3, 0.5, 0.7), (0.1, 0.3, 0.5) | (0, 0, 0.3), (0.3, 0.5, 0.7), (0.1, 0.3, 0.5)
\({A_4}\): (0.1, 0.3, 0.5), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0.7, 0.99, 0.99), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0.5, 0.7, 0.9), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0.7, 0.99, 0.99), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9)
\({A_5}\): (0.1, 0.3, 0.5), (0.3, 0.5, 0.7), (0, 0, 0.3) | (0.1, 0.3, 0.5), (0.3, 0.5, 0.7), (0, 0, 0.3) | (0, 0, 0.3), (0.3, 0.5, 0.7), (0, 0, 0.3) | (0, 0, 0.3), (0.3, 0.5, 0.7), (0, 0, 0.3)
\(Expert10,\;{AR^ + }={AR^ - }=0, {RP^ - }=(-0.5, -0.3, -0.1), {RP^ + }=(0.1, 0.3, 0.5)\)
\({A_1}\): (0.3, 0.5, 0.7), (0.7, 0.99, 0.99), (0.1, 0.3, 0.5) | (0.5, 0.7, 0.9), (0.3, 0.5, 0.7), (0, 0, 0.3) | (0.3, 0.5, 0.7), (0.3, 0.5, 0.7), (0.1, 0.3, 0.5) | (0, 0, 0.3), (0.5, 0.7, 0.9), (0.3, 0.5, 0.7)
\({A_2}\): (0.7, 0.99, 0.99), (0.7, 0.99, 0.99), (0.3, 0.5, 0.7) | (0, 0, 0.3), (0.7, 0.99, 0.99), (0, 0, 0.3) | (0, 0, 0.3), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0, 0, 0.3), (0.7, 0.99, 0.99), (0, 0, 0.3)
\({A_3}\): (0.7, 0.99, 0.99), (0.5, 0.7, 0.9), (0.3, 0.5, 0.7) | (0, 0, 0.3), (0.3, 0.5, 0.7), (0.1, 0.3, 0.5) | (0.3, 0.5, 0.7), (0.5, 0.7, 0.9), (0, 0, 0.3) | (0.3, 0.5, 0.7), (0.5, 0.7, 0.9), (0.3, 0.5, 0.7)
\({A_4}\): (0.1, 0.3, 0.5), (0.7, 0.99, 0.99), (0.3, 0.5, 0.7) | (0, 0, 0.3), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0, 0, 0.3), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0, 0, 0.3), (0.3, 0.5, 0.7), (0, 0, 0.3)
\({A_5}\): (0.7, 0.99, 0.99), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0, 0, 0.3), (0.3, 0.5, 0.7), (0, 0, 0.3) | (0.3, 0.5, 0.7), (0.5, 0.7, 0.9), (0.1, 0.3, 0.5) | (0.1, 0.3, 0.5), (0.5, 0.7, 0.9), (0.3, 0.5, 0.7)
\(Expert11,\; {AR^ + }={AR^ - }=0, {RP^ - }=(-0.7, -0.5, -0.3), {RP^ + }=(0.3, 0.5, 0.7)\)
\({A_1}\): (0.3, 0.5, 0.7), (0.5, 0.7, 0.9), (0.3, 0.5, 0.7) | (0.7, 0.99, 0.99), (0.7, 0.99, 0.99), (0.3, 0.5, 0.7) | (0.5, 0.7, 0.9), (0.7, 0.99, 0.99), (0.3, 0.5, 0.7) | (0.3, 0.5, 0.7), (0.7, 0.99, 0.99), (0.3, 0.5, 0.7)
\({A_2}\): (0.7, 0.99, 0.99), (0.7, 0.99, 0.99), (0, 0, 0.3) | (0.3, 0.5, 0.7), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0.5, 0.7, 0.9), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0.5, 0.7, 0.9), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9)
\({A_3}\): (0, 0, 0.3), (0.7, 0.99, 0.99), (0, 0, 0.3) | (0.5, 0.7, 0.9), (0.5, 0.7, 0.9), (0.3, 0.5, 0.7) | (0.3, 0.5, 0.7), (0.5, 0.7, 0.9), (0.3, 0.5, 0.7) | (0.1, 0.3, 0.5), (0.5, 0.7, 0.9), (0.3, 0.5, 0.7)
\({A_4}\): (0, 0, 0.3), (0.3, 0.5, 0.7), (0, 0, 0.3) | (0.7, 0.99, 0.99), (0.7, 0.99, 0.99), (0.7, 0.99, 0.99) | (0.5, 0.7, 0.9), (0.7, 0.99, 0.99), (0.7, 0.99, 0.99) | (0.7, 0.99, 0.99), (0.7, 0.99, 0.99), (0.7, 0.99, 0.99)
\({A_5}\): (0.3, 0.5, 0.7), (0.7, 0.99, 0.99), (0.3, 0.5, 0.7) | (0.3, 0.5, 0.7), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0.3, 0.5, 0.7), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0.5, 0.7, 0.9), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9)
\(Expert12,\; {AR^ + }={AR^ - }=0, {RP^ - }=(-0.9, -0.7, -0.5), {RP^ + }=(0.5, 0.7, 0.9)\)
\({A_1}\): (0.3, 0.5, 0.7), (0.5, 0.7, 0.9), (0.3, 0.5, 0.7) | (0.1, 0.3, 0.5), (0.7, 0.99, 0.99), (0.1, 0.3, 0.5) | (0, 0, 0.3), (0.7, 0.99, 0.99), (0.1, 0.3, 0.5) | (0, 0, 0.3), (0.7, 0.99, 0.99), (0.1, 0.3, 0.5)
\({A_2}\): (0.7, 0.99, 0.99), (0.7, 0.99, 0.99), (0, 0, 0.3) | (0, 0, 0.3), (0.7, 0.99, 0.99), (0.3, 0.5, 0.7) | (0.1, 0.3, 0.5), (0.7, 0.99, 0.99), (0.3, 0.5, 0.7) | (0.5, 0.7, 0.9), (0.7, 0.99, 0.99), (0.3, 0.5, 0.7)
\({A_3}\): (0.3, 0.5, 0.7), (0.7, 0.99, 0.99), (0, 0, 0.3) | (0.3, 0.5, 0.7), (0.5, 0.7, 0.9), (0.3, 0.5, 0.7) | (0.1, 0.3, 0.5), (0.5, 0.7, 0.9), (0.3, 0.5, 0.7) | (0.5, 0.7, 0.9), (0.5, 0.7, 0.9), (0.3, 0.5, 0.7)
\({A_4}\): (0, 0, 0.3), (0.3, 0.5, 0.7), (0, 0, 0.3) | (0.7, 0.99, 0.99), (0.7, 0.99, 0.99), (0.3, 0.5, 0.7) | (0.5, 0.7, 0.9), (0.7, 0.99, 0.99), (0.3, 0.5, 0.7) | (0.7, 0.99, 0.99), (0.7, 0.99, 0.99), (0.3, 0.5, 0.7)
\({A_5}\): (0.3, 0.5, 0.7), (0.7, 0.99, 0.99), (0.3, 0.5, 0.7) | (0.1, 0.3, 0.5), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0, 0, 0.3), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0, 0, 0.3), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9)
\(Expert13,\; {AR^ + }={AR^ - }=0, {RP^ - }=(-0.7, -0.5, -0.3), {RP^ + }=(0.3, 0.5, 0.7)\)
\({A_1}\): (0.5, 0.7, 0.9), (0.7, 0.99, 0.99), (0, 0, 0.3) | (0.3, 0.5, 0.7), (0.5, 0.7, 0.9), (0.3, 0.5, 0.7) | (0, 0, 0.3), (0.7, 0.99, 0.99), (0.1, 0.3, 0.5) | (0, 0, 0.3), (0.7, 0.99, 0.99), (0, 0, 0.3)
\({A_2}\): (0.5, 0.7, 0.9), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0.3, 0.5, 0.7), (0.7, 0.99, 0.99), (0, 0, 0.3) | (0.3, 0.5, 0.7), (0.7, 0.99, 0.99), (0.3, 0.5, 0.7) | (0.3, 0.5, 0.7), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9)
\({A_3}\): (0.1, 0.3, 0.5), (0.5, 0.7, 0.9), (0.3, 0.5, 0.7) | (0, 0, 0.3), (0.7, 0.99, 0.99), (0, 0, 0.3) | (0.1, 0.3, 0.5), (0.5, 0.7, 0.9), (0.3, 0.5, 0.7) | (0, 0, 0.3), (0.5, 0.7, 0.9), (0.3, 0.5, 0.7)
\({A_4}\): (0, 0, 0.3), (0.5, 0.7, 0.9), (0.3, 0.5, 0.7) | (0.3, 0.5, 0.7), (0.3, 0.5, 0.7), (0, 0, 0.3) | (0, 0, 0.3), (0.7, 0.99, 0.99), (0.3, 0.5, 0.7) | (0.1, 0.3, 0.5), (0.5, 0.7, 0.9), (0.3, 0.5, 0.7)
\({A_5}\): (0, 0, 0.3), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0, 0, 0.3), (0.7, 0.99, 0.99), (0.3, 0.5, 0.7) | (0.3, 0.5, 0.7), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0.1, 0.3, 0.5), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9)
\(Expert14,\; {AR^ + }={AR^ - }=0, {RP^ - }=(-0.3, 0, 0), {RP^ + }=(0, 0, 0.3)\)
\({A_1}\): (0.3, 0.5, 0.7), (0.7, 0.99, 0.99), (0, 0, 0.3) | (0.7, 0.99, 0.99), (0.3, 0.5, 0.7), (0.1, 0.3, 0.5) | (0.5, 0.7, 0.9), (0.7, 0.99, 0.99), (0.7, 0.99, 0.99) | (0.7, 0.99, 0.99), (0.7, 0.99, 0.99), (0.7, 0.99, 0.99)
\({A_2}\): (0.7, 0.99, 0.99), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0.3, 0.5, 0.7), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0.5, 0.7, 0.9), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0.3, 0.5, 0.7), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9)
\({A_3}\): (0, 0, 0.3), (0.5, 0.7, 0.9), (0.3, 0.5, 0.7) | (0.5, 0.7, 0.9), (0.5, 0.7, 0.9), (0, 0, 0.3) | (0.1, 0.3, 0.5), (0.5, 0.7, 0.9), (0.3, 0.5, 0.7) | (0.1, 0.3, 0.5), (0.5, 0.7, 0.9), (0.3, 0.5, 0.7)
\({A_4}\): (0.1, 0.3, 0.5), (0.5, 0.7, 0.9), (0.3, 0.5, 0.7) | (0.7, 0.99, 0.99), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0.3, 0.5, 0.7), (0.7, 0.99, 0.99), (0.7, 0.99, 0.99) | (0.7, 0.99, 0.99), (0.7, 0.99, 0.99), (0.7, 0.99, 0.99)
\({A_5}\): (0, 0, 0.3), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0.7, 0.99, 0.99), (0.5, 0.7, 0.9), (0.1, 0.3, 0.5) | (0.3, 0.5, 0.7), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0.5, 0.7, 0.9), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9)
\(Expert15,\; {AR^ + }={AR^ - }=0, {RP^ - }=(-0.5, -0.3, -0.1), {RP^ + }=(0.1, 0.3, 0.5)\)
\({A_1}\): (0.3, 0.5, 0.7), (0.5, 0.7, 0.9), (0.3, 0.5, 0.7) | (0.1, 0.3, 0.5), (0.7, 0.99, 0.99), (0, 0, 0.3) | (0, 0, 0.3), (0.7, 0.99, 0.99), (0, 0, 0.3) | (0, 0, 0.3), (0.5, 0.7, 0.9), (0.3, 0.5, 0.7)
\({A_2}\): (0.3, 0.5, 0.7), (0.7, 0.99, 0.99), (0.1, 0.3, 0.5) | (0.5, 0.7, 0.9), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0.1, 0.3, 0.5), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0.5, 0.7, 0.9), (0.7, 0.99, 0.99), (0.1, 0.3, 0.5)
\({A_3}\): (0.1, 0.3, 0.5), (0.5, 0.7, 0.9), (0.3, 0.5, 0.7) | (0, 0, 0.3), (0.5, 0.7, 0.9), (0.3, 0.5, 0.7) | (0.3, 0.5, 0.7), (0.5, 0.7, 0.9), (0.3, 0.5, 0.7) | (0.3, 0.5, 0.7), (0.5, 0.7, 0.9), (0.3, 0.5, 0.7)
\({A_4}\): (0.1, 0.3, 0.5), (0.5, 0.7, 0.9), (0.1, 0.3, 0.5) | (0.3, 0.5, 0.7), (0.5, 0.7, 0.9), (0.3, 0.5, 0.7) | (0.5, 0.7, 0.9), (0.5, 0.7, 0.9), (0.3, 0.5, 0.7) | (0.7, 0.99, 0.99), (0.5, 0.7, 0.9), (0.1, 0.3, 0.5)
\({A_5}\): (0.3, 0.5, 0.7), (0.1, 0.3, 0.5), (0, 0, 0.3) | (0.1, 0.3, 0.5), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0, 0, 0.3), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0, 0, 0.3), (0.1, 0.3, 0.5), (0, 0, 0.3)
\(Expert16,\; {AR^ + }={AR^ - }=0, {RP^ - }=(-0.7, -0.5, -0.3), {RP^ + }=(0.3, 0.5, 0.7)\)
\({A_1}\): (0.7, 0.99, 0.99), (0.3, 0.5, 0.7), (0, 0, 0.3) | (0.3, 0.5, 0.7), (0.5, 0.7, 0.9), (0.3, 0.5, 0.7) | (0.3, 0.5, 0.7), (0.7, 0.99, 0.99), (0, 0, 0.3) | (0, 0, 0.3), (0.3, 0.5, 0.7), (0, 0, 0.3)
\({A_2}\): (0.7, 0.99, 0.99), (0.7, 0.99, 0.99), (0, 0, 0.3) | (0, 0, 0.3), (0.7, 0.99, 0.99), (0.1, 0.3, 0.5) | (0, 0, 0.3), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0.3, 0.5, 0.7), (0.7, 0.99, 0.99), (0, 0, 0.3)
\({A_3}\): (0, 0, 0.3), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0, 0, 0.3), (0.5, 0.7, 0.9), (0.3, 0.5, 0.7) | (0, 0, 0.3), (0.5, 0.7, 0.9), (0.3, 0.5, 0.7) | (0, 0, 0.3), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9)
\({A_4}\): (0, 0, 0.3), (0.7, 0.99, 0.99), (0.3, 0.5, 0.7) | (0.3, 0.5, 0.7), (0.5, 0.7, 0.9), (0.1, 0.3, 0.5) | (0, 0, 0.3), (0.5, 0.7, 0.9), (0.3, 0.5, 0.7) | (0.1, 0.3, 0.5), (0.7, 0.99, 0.99), (0.3, 0.5, 0.7)
\({A_5}\): (0.3, 0.5, 0.7), (0.1, 0.3, 0.5), (0, 0, 0.3) | (0, 0, 0.3), (0.1, 0.3, 0.5), (0, 0, 0.3) | (0.3, 0.5, 0.7), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0, 0, 0.3), (0.1, 0.3, 0.5), (0, 0, 0.3)
\(Expert17,\; {AR^ + }={AR^ - }=0, {RP^ - }=(-0.9, -0.7, -0.5), {RP^ + }=(0.5, 0.7, 0.9)\)
\({A_1}\): (0.7, 0.99, 0.99), (0.7, 0.99, 0.99), (0.7, 0.99, 0.99) | (0.7, 0.99, 0.99), (0.7, 0.99, 0.99), (0, 0, 0.3) | (0, 0, 0.3), (0.7, 0.99, 0.99), (0.7, 0.99, 0.99) | (0.7, 0.99, 0.99), (0.7, 0.99, 0.99), (0.7, 0.99, 0.99)
\({A_2}\): (0.7, 0.99, 0.99), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0.3, 0.5, 0.7), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0.5, 0.7, 0.9), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0.5, 0.7, 0.9), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9)
\({A_3}\): (0, 0, 0.3), (0.5, 0.7, 0.9), (0.3, 0.5, 0.7) | (0, 0, 0.3), (0.5, 0.7, 0.9), (0.3, 0.5, 0.7) | (0.1, 0.3, 0.5), (0.5, 0.7, 0.9), (0.3, 0.5, 0.7) | (0.1, 0.3, 0.5), (0.5, 0.7, 0.9), (0.3, 0.5, 0.7)
\({A_4}\): (0.1, 0.3, 0.5), (0.7, 0.99, 0.99), (0.1, 0.3, 0.5) | (0.7, 0.99, 0.99), (0.5, 0.7, 0.9), (0.3, 0.5, 0.7) | (0.5, 0.7, 0.9), (0.7, 0.99, 0.99), (0.1, 0.3, 0.5) | (0, 0, 0.3), (0.7, 0.99, 0.99), (0.1, 0.3, 0.5)
\({A_5}\): (0.1, 0.3, 0.5), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0.7, 0.99, 0.99), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0.3, 0.5, 0.7), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0.5, 0.7, 0.9), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9)
\(Expert18,\; {AR^ + }={AR^ - }=0, {RP^ - }=(-0.7, -0.5, -0.3), {RP^ + }=(0.3, 0.5, 0.7)\)
\({A_1}\): (0.5, 0.7, 0.9), (0.3, 0.5, 0.7), (0, 0, 0.3) | (0.1, 0.3, 0.5), (0.5, 0.7, 0.9), (0.3, 0.5, 0.7) | (0, 0, 0.3), (0.5, 0.7, 0.9), (0.3, 0.5, 0.7) | (0, 0, 0.3), (0.3, 0.5, 0.7), (0, 0, 0.3)
\({A_2}\): (0.5, 0.7, 0.9), (0.1, 0.3, 0.5), (0, 0, 0.3) | (0.5, 0.7, 0.9), (0.7, 0.99, 0.99), (0, 0, 0.3) | (0.1, 0.3, 0.5), (0.7, 0.99, 0.99), (0, 0, 0.3) | (0.5, 0.7, 0.9), (0.1, 0.3, 0.5), (0, 0, 0.3)
\({A_3}\): (0.1, 0.3, 0.5), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0.3, 0.5, 0.7), (0.7, 0.99, 0.99), (0, 0, 0.3) | (0.1, 0.3, 0.5), (0.7, 0.99, 0.99), (0, 0, 0.3) | (0.5, 0.7, 0.9), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9)
\({A_4}\): (0.3, 0.5, 0.7), (0.5, 0.7, 0.9), (0.3, 0.5, 0.7) | (0.7, 0.99, 0.99), (0.3, 0.5, 0.7), (0, 0, 0.3) | (0, 0, 0.3), (0.3, 0.5, 0.7), (0, 0, 0.3) | (0.7, 0.99, 0.99), (0.5, 0.7, 0.9), (0.3, 0.5, 0.7)
\({A_5}\): (0.3, 0.5, 0.7), (0.1, 0.3, 0.5), (0, 0, 0.3) | (0.1, 0.3, 0.5), (0.7, 0.99, 0.99), (0.3, 0.5, 0.7) | (0, 0, 0.3), (0.7, 0.99, 0.99), (0.3, 0.5, 0.7) | (0, 0, 0.3), (0.1, 0.3, 0.5), (0, 0, 0.3)
\(Expert19,\; {AR^ + }={AR^ - }=0, {RP^ - }=(-0.7, -0.5, -0.3), {RP^ + }=(0.3, 0.5, 0.7)\)
\({A_1}\): (0.5, 0.7, 0.9), (0.5, 0.7, 0.9), (0.3, 0.5, 0.7) | (0.3, 0.5, 0.7), (0.5, 0.7, 0.9), (0.3, 0.5, 0.7) | (0.3, 0.5, 0.7), (0.7, 0.99, 0.99), (0, 0, 0.3) | (0, 0, 0.3), (0.5, 0.7, 0.9), (0.3, 0.5, 0.7)
\({A_2}\): (0.7, 0.99, 0.99), (0.7, 0.99, 0.99), (0, 0, 0.3) | (0.3, 0.5, 0.7), (0.7, 0.99, 0.99), (0, 0, 0.3) | (0, 0, 0.3), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0, 0, 0.3), (0.7, 0.99, 0.99), (0, 0, 0.3)
\({A_3}\): (0.3, 0.5, 0.7), (0.7, 0.99, 0.99), (0, 0, 0.3) | (0, 0, 0.3), (0.7, 0.99, 0.99), (0, 0, 0.3) | (0, 0, 0.3), (0.5, 0.7, 0.9), (0.3, 0.5, 0.7) | (0, 0, 0.3), (0.7, 0.99, 0.99), (0, 0, 0.3)
\({A_4}\): (0.1, 0.3, 0.5), (0.3, 0.5, 0.7), (0, 0, 0.3) | (0.3, 0.5, 0.7), (0.3, 0.5, 0.7), (0, 0, 0.3) | (0, 0, 0.3), (0.5, 0.7, 0.9), (0.3, 0.5, 0.7) | (0.1, 0.3, 0.5), (0.3, 0.5, 0.7), (0, 0, 0.3)
\({A_5}\): (0.3, 0.5, 0.7), (0.7, 0.99, 0.99), (0.3, 0.5, 0.7) | (0, 0, 0.3), (0.7, 0.99, 0.99), (0.3, 0.5, 0.7) | (0.3, 0.5, 0.7), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0.1, 0.3, 0.5), (0.7, 0.99, 0.99), (0.3, 0.5, 0.7)
\(Expert20,\;{AR^ + }={AR^ - }=0, {RP^ - }=(-0.5, -0.3, -0.1), {RP^ + }=(0.1, 0.3, 0.5)\)
\({A_1}\): (0.5, 0.7, 0.9), (0.5, 0.7, 0.9), (0.3, 0.5, 0.7) | (0.3, 0.5, 0.7), (0.5, 0.7, 0.9), (0.3, 0.5, 0.7) | (0.3, 0.5, 0.7), (0.7, 0.99, 0.99), (0, 0, 0.3) | (0, 0, 0.3), (0.5, 0.7, 0.9), (0.3, 0.5, 0.7)
\({A_2}\): (0.7, 0.99, 0.99), (0.7, 0.99, 0.99), (0, 0, 0.3) | (0.3, 0.5, 0.7), (0.7, 0.99, 0.99), (0, 0, 0.3) | (0, 0, 0.3), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0, 0, 0.3), (0.7, 0.99, 0.99), (0, 0, 0.3)
\({A_3}\): (0.3, 0.5, 0.7), (0.7, 0.99, 0.99), (0, 0, 0.3) | (0, 0, 0.3), (0.7, 0.99, 0.99), (0, 0, 0.3) | (0, 0, 0.3), (0.5, 0.7, 0.9), (0.3, 0.5, 0.7) | (0, 0, 0.3), (0.7, 0.99, 0.99), (0, 0, 0.3)
\({A_4}\): (0.1, 0.3, 0.5), (0.3, 0.5, 0.7), (0, 0, 0.3) | (0.3, 0.5, 0.7), (0.3, 0.5, 0.7), (0, 0, 0.3) | (0, 0, 0.3), (0.5, 0.7, 0.9), (0.3, 0.5, 0.7) | (0.1, 0.3, 0.5), (0.3, 0.5, 0.7), (0, 0, 0.3)
\({A_5}\): (0.3, 0.5, 0.7), (0.7, 0.99, 0.99), (0.3, 0.5, 0.7) | (0, 0, 0.3), (0.7, 0.99, 0.99), (0.3, 0.5, 0.7) | (0.3, 0.5, 0.7), (0.7, 0.99, 0.99), (0.5, 0.7, 0.9) | (0.1, 0.3, 0.5), (0.7, 0.99, 0.99), (0.3, 0.5, 0.7)
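Reading Tables 14 and 15 side by side, the linguistic-to-triangular-fuzzy-number correspondence and the pessimist risk-perception pairs in the expert headers can be read off cell by cell. The sketch below records these mappings for reference; the names `TFN_SCALE` and `risk_perception` are illustrative, not from the paper.

```python
# Linguistic scale, recovered by matching Table 14 cells against Table 15:
TFN_SCALE = {
    "VL": (0.0, 0.0, 0.3),
    "L":  (0.1, 0.3, 0.5),
    "M":  (0.3, 0.5, 0.7),
    "H":  (0.5, 0.7, 0.9),
    "VH": (0.7, 0.99, 0.99),
}

def risk_perception(term):
    """Pessimist risk-perception pair (RP-, RP+) for a linguistic grade,
    matching the expert headers of Table 15: RP+ is the grade's triangular
    fuzzy number (a, b, c) and RP- is its negated mirror (-c, -b, -a)."""
    a, b, c = TFN_SCALE[term]
    return (-c, -b, -a), (a, b, c)
```

For example, `Pessimist(M)` in Table 14 corresponds to \({RP^-}=(-0.7, -0.5, -0.3)\) and \({RP^+}=(0.3, 0.5, 0.7)\) in Table 15, as the mapping reproduces.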
Table 16
R-numbers matrix. Each cell is the R-number for one criterion, in the order \({C_1}\) | \({C_2}\) | \({C_3}\) | \({C_4}\)
Expert1
\({A_1}\): ((0.15, 0.35, 0.692), (0.5, 0.7, 0.9), (0.55, 0.91, 0.99)) | ((0.286, 0.7, 0.9), (0.5, 0.7, 0.9), (0.615, 0.99, 0.99)) | ((0.003, 0.005, 0.21), (0.3, 0.5, 0.7), (0.462, 0.85, 0.99)) | ((0, 0, 0.09), (0, 0, 0.3), (0, 0, 0.597))
\({A_2}\): ((0.007, 0.01, 0.457), (0.7, 0.99, 0.99), (0.99, 0.99, 0.99)) | ((0.007, 0.495, 0.693), (0.7, 0.99, 0.99), (0.99, 0.99, 0.99)) | ((0, 0, 0.15), (0, 0, 0.3), (0, 0, 0.51)) | ((0, 0, 0.15), (0, 0, 0.3), (0, 0, 0.597))
\({A_3}\): ((0.01, 0.09, 0.308), (0.1, 0.3, 0.5), (0.1, 0.3, 0.714)) | ((0, 0, 0.09), (0, 0, 0.3), (0, 0, 0.597)) | ((0.001, 0.15, 0.35), (0.1, 0.3, 0.5), (0.1, 0.39, 0.75)) | ((0.007, 0.495, 0.693), (0.7, 0.99, 0.99), (0.969, 0.99, 0.99))
\({A_4}\): ((0.001, 0.003, 0.231), (0.1, 0.3, 0.5), (0.15, 0.51, 0.99)) | ((0.086, 0.35, 0.63), (0.3, 0.5, 0.7), (0.369, 0.75, 0.99)) | ((0.007, 0.01, 0.297), (0.7, 0.99, 0.99), (0.7, 0.99, 0.99)) | ((0.001, 0.003, 0.15), (0.1, 0.3, 0.5), (0.154, 0.597, 0.99))
\({A_5}\): ((0.03, 0.15, 0.431), (0.3, 0.5, 0.7), (0.33, 0.65, 0.99)) | ((0, 0, 0.09), (0, 0, 0.3), (0, 0, 0.597)) | ((0.003, 0.15, 0.35), (0.3, 0.5, 0.7), (0.3, 0.75, 0.99)) | ((0.001, 0.09, 0.25), (0.1, 0.3, 0.5), (0.154, 0.597, 0.99))
Expert2
\({A_1}\): ((0.167, 0.337, 0.582), (0.7, 0.99, 0.99), (0.99, 0.99, 0.99)) | ((0.007, 0.01, 0.283), (0.7, 0.99, 0.99), (0.988, 0.99, 0.99)) | ((0.005, 0.7, 0.9), (0.5, 0.7, 0.9), (0.588, 0.7, 0.9)) | ((0.007, 0.01, 0.283), (0.7, 0.99, 0.99), (0.988, 0.99, 0.99))
\({A_2}\): ((0.231, 0.467, 0.741), (0.5, 0.7, 0.9), (0.571, 0.99, 0.99)) | ((0.003, 0.2, 0.6), (0.3, 0.5, 0.7), (0.353, 0.667, 0.99)) | ((0.005, 0.007, 0.514), (0.5, 0.7, 0.9), (0.5, 0.84, 0.99)) | ((0.005, 0.28, 0.771), (0.5, 0.7, 0.9), (0.588, 0.933, 0.99))
\({A_3}\): ((0.024, 0.102, 0.294), (0.1, 0.3, 0.5), (0.171, 0.597, 0.99)) | ((0.005, 0.007, 0.257), (0.5, 0.7, 0.9), (0.706, 0.99, 0.99)) | ((0.001, 0.003, 0.005), (0.1, 0.3, 0.5), (0.1, 0.44, 0.881)) | ((0.001, 0.003, 0.143), (0.1, 0.3, 0.5), (0.141, 0.498, 0.881))
\({A_4}\): ((0.215, 0.528, 0.699), (0.7, 0.99, 0.99), (0.7, 0.99, 0.99)) | ((0.007, 0.99, 0.99), (0.7, 0.99, 0.99), (0.906, 0.99, 0.99)) | ((0.005, 0.28, 0.771), (0.5, 0.7, 0.9), (0.5, 0.747, 0.99)) | ((0.007, 0.99, 0.99), (0.7, 0.99, 0.99), (0.906, 0.99, 0.99))
\({A_5}\): ((0.024, 0.102, 0.294), (0.1, 0.3, 0.5), (0.171, 0.597, 0.99)) | ((0.007, 0.01, 0.283), (0.7, 0.99, 0.99), (0.988, 0.99, 0.99)) | ((0.003, 0.005, 0.007), (0.3, 0.5, 0.7), (0.3, 0.733, 0.99)) | ((0.005, 0.007, 0.257), (0.5, 0.7, 0.9), (0.706, 0.99, 0.99))
Expert3
\({A_1}\)
((0, 0, 0.247), (0, 0, 0.3), (0, 0, 0.597))
((0.001, 0.12, 0.429), (0.1, 0.3, 0.5), (0.118, 0.4, 0.769))
((0, 0, 0.257), (0, 0, 0.3), (0, 0, 0.369))
((0, 0, 0.257), (0, 0, 0.3), (0, 0, 0.462))
\({A_2}\)
((0, 0, 0.176), (0, 0, 0.3), (0, 0, 0.597))
((0.007, 0.01, 0.283), (0.7, 0.99, 0.99), (0.988, 0.99, 0.99))
((0.001, 0.003, 0.143), (0.1, 0.3, 0.5), (0.1, 0.4, 0.769))
((0.005, 0.007, 0.257), (0.5, 0.7, 0.9), (0.706, 0.99, 0.99))
\({A_3}\)
((0.167, 0.337, 0.582), (0.7, 0.99, 0.99), (0.99, 0.99, 0.99))
((0.003, 0.5, 0.7), (0.3, 0.5, 0.7), (0.388, 0.733, 0.99))
((0.001, 0.3, 0.5), (0.1, 0.3, 0.5), (0.1, 0.3, 0.5))
((0.005, 0.7, 0.9), (0.5, 0.7, 0.9), (0.647, 0.99, 0.99))
\({A_4}\)
((0.323, 0.66, 0.815), (0.7, 0.99, 0.99), (0.8, 0.99, 0.99))
((0.007, 0.01, 0.283), (0.7, 0.99, 0.99), (0.988, 0.99, 0.99))
((0.005, 0.007, 0.257), (0.5, 0.7, 0.9), (0.5, 0.933, 0.99))
((0.007, 0.01, 0.283), (0.7, 0.99, 0.99), (0.988, 0.99, 0.99))
\({A_5}\)
((0, 0, 0.176), (0, 0, 0.3), (0, 0, 0.597))
((0.001, 0.12, 0.429), (0.1, 0.3, 0.5), (0.129, 0.44, 0.846))
((0, 0, 0.257), (0, 0, 0.3), (0, 0, 0.369))
((0, 0, 0.257), (0, 0, 0.3), (0, 0, 0.508))
Expert4
\({A_1}\)
((0.182, 0.431, 0.72), (0.5, 0.7, 0.9), (0.5, 0.7, 0.99))
((0.2, 0.7, 0.9), (0.5, 0.7, 0.9), (0.6, 0.969, 0.99))
((0.003, 0.005, 0.311), (0.3, 0.5, 0.7), (0.44, 0.692, 0.99))
((0, 0, 0.3), (0, 0, 0.3), (0, 0, 0.491))
\({A_2}\)
((0.07, 0.236, 0.528), (0.7, 0.99, 0.99), (0.933, 0.99, 0.99))
((0.007, 0.283, 0.66), (0.7, 0.99, 0.99), (0.99, 0.99, 0.99))
((0.003, 0.286, 0.622), (0.3, 0.5, 0.7), (0.3, 0.538, 0.891))
((0.003, 0.143, 0.467), (0.3, 0.5, 0.7), (0.44, 0.881, 0.99))
\({A_3}\)
((0.01, 0.072, 0.267), (0.1, 0.3, 0.5), (0.178, 0.597, 0.99))
((0.003, 0.005, 0.156), (0.3, 0.5, 0.7), (0.44, 0.881, 0.99))
((0, 0, 0.133), (0, 0, 0.3), (0, 0, 0.491))
((0.007, 0.01, 0.22), (0.7, 0.99, 0.99), (0.99, 0.99, 0.99))
\({A_4}\)
((0.036, 0.185, 0.4), (0.1, 0.3, 0.5), (0.111, 0.429, 0.99))
((0.003, 0.286, 0.622), (0.3, 0.5, 0.7), (0.36, 0.692, 0.99))
((0, 0, 0.3), (0, 0, 0.3), (0, 0, 0.3))
((0.001, 0.171, 0.444), (0.1, 0.3, 0.5), (0.12, 0.415, 0.818))
\({A_5}\)
((0.03, 0.119, 0.373), (0.3, 0.5, 0.7), (0.533, 0.99, 0.99))
((0, 0, 0.067), (0, 0, 0.3), (0, 0, 0.57))
((0.003, 0.005, 0.311), (0.3, 0.5, 0.7), (0.3, 0.692, 0.99))
((0.007, 0.01, 0.22), (0.7, 0.99, 0.99), (0.99, 0.99, 0.99))
Expert5
\({A_1}\)
((0.048, 0.125, 0.267), (0.1, 0.3, 0.5), (0.199, 0.597, 0.99))
((0.007, 0.01, 0.396), (0.7, 0.99, 0.99), (0.747, 0.99, 0.99))
((0.005, 0.007, 0.36), (0.5, 0.7, 0.9), (0.533, 0.824, 0.99))
((0.003, 0.005, 0.28), (0.3, 0.5, 0.7), (0.32, 0.588, 0.884))
\({A_2}\)
((0.063, 0.212, 0.4), (0.1, 0.3, 0.5), (0.12, 0.597, 0.99))
((0.007, 0.01, 0.396), (0.7, 0.99, 0.99), (0.99, 0.99, 0.99))
((0.005, 0.007, 0.36), (0.5, 0.7, 0.9), (0.5, 0.824, 0.99))
((0.005, 0.007, 0.36), (0.5, 0.7, 0.9), (0.733, 0.99, 0.99))
\({A_3}\)
((0.048, 0.125, 0.267), (0.1, 0.3, 0.5), (0.199, 0.597, 0.99))
((0.003, 0.005, 0.56), (0.3, 0.5, 0.7), (0.36, 0.647, 0.958))
((0.001, 0.003, 0.4), (0.1, 0.3, 0.5), (0.1, 0.318, 0.579))
((0.001, 0.003, 0.4), (0.1, 0.3, 0.5), (0.12, 0.388, 0.684))
\({A_4}\)
((0, 0, 0.2), (0, 0, 0.3), (0, 0, 0.597))
((0.007, 0.01, 0.01), (0.7, 0.99, 0.99), (0.99, 0.99, 0.99))
((0.003, 0.005, 0.007), (0.3, 0.5, 0.7), (0.3, 0.647, 0.958))
((0.007, 0.01, 0.01), (0.7, 0.99, 0.99), (0.99, 0.99, 0.99))
\({A_5}\)
((0.144, 0.209, 0.373), (0.3, 0.5, 0.7), (0.597, 0.99, 0.99))
((0.003, 0.005, 0.007), (0.3, 0.5, 0.7), (0.44, 0.791, 0.99))
((0, 0, 0.003), (0, 0, 0.3), (0, 0, 0.411))
((0.005, 0.007, 0.009), (0.5, 0.7, 0.9), (0.733, 0.99, 0.99))
Expert6
\({A_1}\)
((0.292, 0.497, 0.642), (0.7, 0.99, 0.99), (0.99, 0.99, 0.99))
((0.007, 0.01, 0.01), (0.7, 0.99, 0.99), (0.946, 0.99, 0.99))
((0.003, 0.005, 0.007), (0.3, 0.5, 0.7), (0.406, 0.676, 0.99))
((0.007, 0.01, 0.01), (0.7, 0.99, 0.99), (0.946, 0.99, 0.99))
\({A_2}\)
((0.209, 0.352, 0.583), (0.5, 0.7, 0.9), (0.99, 0.99, 0.99))
((0.005, 0.007, 0.009), (0.5, 0.7, 0.9), (0.676, 0.99, 0.99))
((0, 0, 0.003), (0, 0, 0.3), (0, 0, 0.424))
((0.005, 0.007, 0.009), (0.5, 0.7, 0.9), (0.676, 0.99, 0.99))
\({A_3}\)
((0.141, 0.324, 0.524), (0.3, 0.5, 0.7), (0.597, 0.99, 0.99))
((0.003, 0.005, 0.007), (0.3, 0.5, 0.7), (0.375, 0.676, 0.99))
((0.001, 0.003, 0.005), (0.1, 0.3, 0.5), (0.1, 0.345, 0.647))
((0.003, 0.005, 0.007), (0.3, 0.5, 0.7), (0.375, 0.676, 0.99))
\({A_4}\)
((0.292, 0.497, 0.642), (0.7, 0.99, 0.99), (0.99, 0.99, 0.99))
((0.007, 0.01, 0.01), (0.7, 0.99, 0.99), (0.946, 0.99, 0.99))
((0.007, 0.01, 0.01), (0.7, 0.99, 0.99), (0.7, 0.99, 0.99))
((0.007, 0.01, 0.01), (0.7, 0.99, 0.99), (0.946, 0.99, 0.99))
\({A_5}\)
((0.353, 0.594, 0.855), (0.5, 0.7, 0.9), (0.5, 0.7, 0.99))
((0.001, 0.3, 0.5), (0.1, 0.3, 0.5), (0.105, 0.345, 0.647))
((0, 0, 0.3), (0, 0, 0.3), (0, 0, 0.3))
((0, 0, 0.3), (0, 0, 0.3), (0, 0, 0.388))
Expert7
\({A_1}\)
((0.024, 0.102, 0.294), (0.1, 0.3, 0.5), (0.171, 0.597, 0.99))
((0.005, 0.28, 0.771), (0.5, 0.7, 0.9), (0.588, 0.933, 0.99))
((0.003, 0.005, 0.2), (0.3, 0.5, 0.7), (0.424, 0.667, 0.99))
((0, 0, 0.086), (0, 0, 0.3), (0, 0, 0.528))
\({A_2}\)
((0.024, 0.102, 0.294), (0.1, 0.3, 0.5), (0.143, 0.597, 0.99))
((0.003, 0.005, 0.2), (0.3, 0.5, 0.7), (0.424, 0.83, 0.99))
((0, 0, 0.171), (0, 0, 0.3), (0, 0, 0.415))
((0.003, 0.005, 0.4), (0.3, 0.5, 0.7), (0.424, 0.83, 0.99))
\({A_3}\)
((0.431, 0.792, 0.932), (0.7, 0.99, 0.99), (0.99, 0.99, 0.99))
((0, 0, 0.3), (0, 0, 0.3), (0, 0, 0.508))
((0, 0, 0.171), (0, 0, 0.3), (0, 0, 0.415))
((0, 0, 0.171), (0, 0, 0.3), (0, 0, 0.415))
\({A_4}\)
((0.119, 0.238, 0.529), (0.5, 0.7, 0.9), (0.99, 0.99, 0.99))
((0.003, 0.005, 0.2), (0.3, 0.5, 0.7), (0.424, 0.83, 0.99))
((0, 0, 0.003), (0, 0, 0.3), (0, 0, 0.528))
((0, 0, 0.003), (0, 0, 0.3), (0, 0, 0.528))
\({A_5}\)
((0.119, 0.238, 0.529), (0.5, 0.7, 0.9), (0.857, 0.99, 0.99))
((0, 0, 0.257), (0, 0, 0.3), (0, 0, 0.508))
((0.003, 0.005, 0.2), (0.3, 0.5, 0.7), (0.3, 0.667, 0.99))
((0.001, 0.003, 0.143), (0.1, 0.3, 0.5), (0.141, 0.498, 0.881))
Expert8
\({A_1}\)
((0.072, 0.17, 0.412), (0.3, 0.5, 0.7), (0.514, 0.99, 0.99))
((0.007, 0.396, 0.849), (0.7, 0.99, 0.99), (0.824, 0.99, 0.99))
((0.005, 0.007, 0.257), (0.5, 0.7, 0.9), (0.706, 0.933, 0.99))
((0.007, 0.01, 0.283), (0.7, 0.99, 0.99), (0.988, 0.99, 0.99))
\({A_2}\)
((0.167, 0.337, 0.582), (0.7, 0.99, 0.99), (0.99, 0.99, 0.99))
((0.003, 0.005, 0.2), (0.3, 0.5, 0.7), (0.424, 0.83, 0.99))
((0.001, 0.003, 0.286), (0.1, 0.3, 0.5), (0.1, 0.36, 0.692))
((0, 0, 0.171), (0, 0, 0.3), (0, 0, 0.528))
\({A_3}\)
((0.323, 0.66, 0.815), (0.7, 0.99, 0.99), (0.8, 0.99, 0.99))
((0.003, 0.5, 0.7), (0.3, 0.5, 0.7), (0.388, 0.733, 0.99))
((0.005, 0.28, 0.771), (0.5, 0.7, 0.9), (0.5, 0.747, 0.99))
((0.001, 0.12, 0.429), (0.1, 0.3, 0.5), (0.118, 0.4, 0.769))
\({A_4}\)
((0.024, 0.102, 0.294), (0.1, 0.3, 0.5), (0.171, 0.597, 0.99))
((0.007, 0.01, 0.283), (0.7, 0.99, 0.99), (0.988, 0.99, 0.99))
((0, 0, 0.086), (0, 0, 0.3), (0, 0, 0.462))
((0.003, 0.005, 0.2), (0.3, 0.5, 0.7), (0.424, 0.83, 0.99))
\({A_5}\)
((0.215, 0.528, 0.699), (0.7, 0.99, 0.99), (0.99, 0.99, 0.99))
((0.007, 0.396, 0.849), (0.7, 0.99, 0.99), (0.906, 0.99, 0.99))
((0.005, 0.007, 0.514), (0.5, 0.7, 0.9), (0.5, 0.84, 0.99))
((0.005, 0.007, 0.514), (0.5, 0.7, 0.9), (0.647, 0.99, 0.99))
Expert9
\({A_1}\)
((0.109, 0.308, 0.56), (0.3, 0.5, 0.7), (0.3, 0.5, 0.99))
((0.04, 0.3, 0.5), (0.1, 0.3, 0.5), (0.12, 0.415, 0.818))
((0, 0, 0.3), (0, 0, 0.3), (0, 0, 0.3))
((0, 0, 0.3), (0, 0, 0.3), (0, 0, 0.491))
\({A_2}\)
((0.01, 0.072, 0.267), (0.1, 0.3, 0.5), (0.1, 0.3, 0.8))
((0.12, 0.5, 0.7), (0.3, 0.5, 0.7), (0.44, 0.881, 0.99))
((0.04, 0.3, 0.5), (0.1, 0.3, 0.5), (0.1, 0.3, 0.5))
((0.2, 0.7, 0.9), (0.5, 0.7, 0.9), (0.733, 0.99, 0.99))
\({A_3}\)
((0.255, 0.609, 0.792), (0.7, 0.99, 0.99), (0.778, 0.99, 0.99))
((0, 0, 0.267), (0, 0, 0.3), (0, 0, 0.491))
((0, 0, 0.267), (0, 0, 0.3), (0, 0, 0.382))
((0, 0, 0.267), (0, 0, 0.3), (0, 0, 0.491))
\({A_4}\)
((0.01, 0.072, 0.267), (0.1, 0.3, 0.5), (0.156, 0.597, 0.99))
((0.007, 0.01, 0.44), (0.7, 0.99, 0.99), (0.99, 0.99, 0.99))
((0.005, 0.007, 0.4), (0.5, 0.7, 0.9), (0.5, 0.969, 0.99))
((0.007, 0.01, 0.44), (0.7, 0.99, 0.99), (0.99, 0.99, 0.99))
\({A_5}\)
((0.036, 0.185, 0.4), (0.1, 0.3, 0.5), (0.1, 0.3, 0.8))
((0.04, 0.3, 0.5), (0.1, 0.3, 0.5), (0.12, 0.415, 0.818))
((0, 0, 0.3), (0, 0, 0.3), (0, 0, 0.3))
((0, 0, 0.3), (0, 0, 0.3), (0, 0, 0.491))
Expert10
\({A_1}\)
((0.03, 0.119, 0.373), (0.3, 0.5, 0.7), (0.333, 0.714, 0.99))
((0.2, 0.7, 0.9), (0.5, 0.7, 0.9), (0.6, 0.969, 0.99))
((0.003, 0.286, 0.622), (0.3, 0.5, 0.7), (0.36, 0.538, 0.891))
((0, 0, 0.2), (0, 0, 0.3), (0, 0, 0.545))
\({A_2}\)
((0.07, 0.236, 0.528), (0.7, 0.99, 0.99), (0.933, 0.99, 0.99))
((0, 0, 0.3), (0, 0, 0.3), (0, 0, 0.57))
((0, 0, 0.133), (0, 0, 0.3), (0, 0, 0.491))
((0, 0, 0.3), (0, 0, 0.3), (0, 0, 0.57))
\({A_3}\)
((0.127, 0.457, 0.66), (0.7, 0.99, 0.99), (0.933, 0.99, 0.99))
((0, 0, 0.267), (0, 0, 0.3), (0, 0, 0.491))
((0.12, 0.5, 0.7), (0.3, 0.5, 0.7), (0.3, 0.5, 0.7))
((0.003, 0.143, 0.467), (0.3, 0.5, 0.7), (0.4, 0.769, 0.99))
\({A_4}\)
((0.01, 0.072, 0.267), (0.1, 0.3, 0.5), (0.133, 0.514, 0.99))
((0, 0, 0.133), (0, 0, 0.3), (0, 0, 0.57))
((0, 0, 0.133), (0, 0, 0.3), (0, 0, 0.491))
((0, 0, 0.3), (0, 0, 0.3), (0, 0, 0.491))
\({A_5}\)
((0.07, 0.236, 0.528), (0.7, 0.99, 0.99), (0.99, 0.99, 0.99))
((0, 0, 0.3), (0, 0, 0.3), (0, 0, 0.491))
((0.003, 0.286, 0.622), (0.3, 0.5, 0.7), (0.3, 0.538, 0.891))
((0.001, 0.086, 0.333), (0.1, 0.3, 0.5), (0.133, 0.462, 0.909))
Expert11
\({A_1}\)
((0.092, 0.267, 0.494), (0.3, 0.5, 0.7), (0.429, 0.99, 0.99))
((0.007, 0.01, 0.566), (0.7, 0.99, 0.99), (0.988, 0.99, 0.99))
((0.005, 0.007, 0.514), (0.5, 0.7, 0.9), (0.706, 0.84, 0.99))
((0.003, 0.005, 0.4), (0.3, 0.5, 0.7), (0.424, 0.83, 0.99))
\({A_2}\)
((0.167, 0.337, 0.582), (0.7, 0.99, 0.99), (0.7, 0.99, 0.99))
((0.003, 0.005, 0.2), (0.3, 0.5, 0.7), (0.424, 0.83, 0.99))
((0.005, 0.007, 0.257), (0.5, 0.7, 0.9), (0.5, 0.933, 0.99))
((0.005, 0.007, 0.257), (0.5, 0.7, 0.9), (0.706, 0.99, 0.99))
\({A_3}\)
((0, 0, 0.176), (0, 0, 0.3), (0, 0, 0.597))
((0.005, 0.007, 0.514), (0.5, 0.7, 0.9), (0.647, 0.99, 0.99))
((0.003, 0.005, 0.4), (0.3, 0.5, 0.7), (0.3, 0.6, 0.969))
((0.001, 0.003, 0.286), (0.1, 0.3, 0.5), (0.129, 0.44, 0.846))
\({A_4}\)
((0, 0, 0.247), (0, 0, 0.3), (0, 0, 0.597))
((0.007, 0.01, 0.01), (0.7, 0.99, 0.99), (0.988, 0.99, 0.99))
((0.005, 0.007, 0.009), (0.5, 0.7, 0.9), (0.5, 0.99, 0.99))
((0.007, 0.01, 0.01), (0.7, 0.99, 0.99), (0.988, 0.99, 0.99))
\({A_5}\)
((0.072, 0.17, 0.412), (0.3, 0.5, 0.7), (0.429, 0.99, 0.99))
((0.003, 0.005, 0.2), (0.3, 0.5, 0.7), (0.424, 0.83, 0.99))
((0.003, 0.005, 0.2), (0.3, 0.5, 0.7), (0.3, 0.667, 0.99))
((0.005, 0.007, 0.257), (0.5, 0.7, 0.9), (0.706, 0.99, 0.99))
Expert12
\({A_1}\)
((0.158, 0.294, 0.467), (0.3, 0.5, 0.7), (0.48, 0.99, 0.99))
((0.001, 0.003, 0.4), (0.1, 0.3, 0.5), (0.147, 0.475, 0.761))
((0, 0, 0.24), (0, 0, 0.3), (0, 0, 0.347))
((0, 0, 0.24), (0, 0, 0.3), (0, 0, 0.456))
\({A_2}\)
((0.335, 0.413, 0.528), (0.7, 0.99, 0.99), (0.7, 0.99, 0.99))
((0, 0, 0.12), (0, 0, 0.3), (0, 0, 0.456))
((0.001, 0.003, 0.2), (0.1, 0.3, 0.5), (0.1, 0.353, 0.632))
((0.005, 0.007, 0.36), (0.5, 0.7, 0.9), (0.733, 0.99, 0.99))
\({A_3}\)
((0.144, 0.209, 0.373), (0.3, 0.5, 0.7), (0.3, 0.5, 0.99))
((0.003, 0.005, 0.28), (0.3, 0.5, 0.7), (0.4, 0.706, 0.99))
((0.001, 0.003, 0.2), (0.1, 0.3, 0.5), (0.1, 0.353, 0.632))
((0.005, 0.007, 0.36), (0.5, 0.7, 0.9), (0.667, 0.988, 0.99))
\({A_4}\)
((0, 0, 0.24), (0, 0, 0.3), (0, 0, 0.597))
((0.007, 0.01, 0.396), (0.7, 0.99, 0.99), (0.99, 0.99, 0.99))
((0.005, 0.007, 0.36), (0.5, 0.7, 0.9), (0.5, 0.824, 0.99))
((0.007, 0.01, 0.396), (0.7, 0.99, 0.99), (0.99, 0.99, 0.99))
\({A_5}\)
((0.144, 0.209, 0.373), (0.3, 0.5, 0.7), (0.48, 0.99, 0.99))
((0.001, 0.003, 0.005), (0.1, 0.3, 0.5), (0.147, 0.475, 0.761))
((0, 0, 0.003), (0, 0, 0.3), (0, 0, 0.411))
((0, 0, 0.003), (0, 0, 0.3), (0, 0, 0.456))
Expert13
\({A_1}\)
((0.119, 0.238, 0.529), (0.5, 0.7, 0.9), (0.5, 0.7, 0.99))
((0.003, 0.005, 0.4), (0.3, 0.5, 0.7), (0.388, 0.733, 0.99))
((0, 0, 0.257), (0, 0, 0.3), (0, 0, 0.369))
((0, 0, 0.3), (0, 0, 0.3), (0, 0, 0.528))
\({A_2}\)
((0.119, 0.238, 0.529), (0.5, 0.7, 0.9), (0.857, 0.99, 0.99))
((0.003, 0.5, 0.7), (0.3, 0.5, 0.7), (0.424, 0.83, 0.99))
((0.003, 0.005, 0.4), (0.3, 0.5, 0.7), (0.3, 0.6, 0.969))
((0.003, 0.005, 0.2), (0.3, 0.5, 0.7), (0.424, 0.83, 0.99))
\({A_3}\)
((0.031, 0.16, 0.353), (0.1, 0.3, 0.5), (0.143, 0.597, 0.99))
((0, 0, 0.3), (0, 0, 0.3), (0, 0, 0.528))
((0.001, 0.003, 0.286), (0.1, 0.3, 0.5), (0.1, 0.36, 0.692))
((0, 0, 0.171), (0, 0, 0.3), (0, 0, 0.508))
\({A_4}\)
((0, 0, 0.212), (0, 0, 0.3), (0, 0, 0.597))
((0.003, 0.5, 0.7), (0.3, 0.5, 0.7), (0.353, 0.667, 0.99))
((0, 0, 0.171), (0, 0, 0.3), (0, 0, 0.415))
((0.001, 0.003, 0.286), (0.1, 0.3, 0.5), (0.129, 0.44, 0.846))
\({A_5}\)
((0, 0, 0.176), (0, 0, 0.3), (0, 0, 0.597))
((0, 0, 0.171), (0, 0, 0.3), (0, 0, 0.528))
((0.003, 0.005, 0.2), (0.3, 0.5, 0.7), (0.3, 0.667, 0.99))
((0.001, 0.003, 0.143), (0.1, 0.3, 0.5), (0.141, 0.498, 0.881))
Expert14
\({A_1}\)
((0.003, 0.005, 0.323), (0.3, 0.5, 0.7), (0.3, 0.5, 0.99))
((0.2, 0.693, 0.891), (0.7, 0.99, 0.99), (0.862, 0.99, 0.99))
((0.005, 0.007, 0.27), (0.5, 0.7, 0.9), (0.769, 0.99, 0.99))
((0.007, 0.01, 0.297), (0.7, 0.99, 0.99), (0.99, 0.99, 0.99))
\({A_2}\)
((0.007, 0.01, 0.457), (0.7, 0.99, 0.99), (0.99, 0.99, 0.99))
((0.003, 0.15, 0.35), (0.3, 0.5, 0.7), (0.462, 0.99, 0.99))
((0.005, 0.21, 0.45), (0.5, 0.7, 0.9), (0.5, 0.99, 0.99))
((0.003, 0.15, 0.35), (0.3, 0.5, 0.7), (0.462, 0.99, 0.99))
\({A_3}\)
((0, 0, 0.185), (0, 0, 0.3), (0, 0, 0.597))
((0.286, 0.7, 0.9), (0.5, 0.7, 0.9), (0.692, 0.99, 0.99))
((0.001, 0.15, 0.35), (0.1, 0.3, 0.5), (0.1, 0.39, 0.75))
((0.001, 0.15, 0.35), (0.1, 0.3, 0.5), (0.138, 0.51, 0.95))
\({A_4}\)
((0.01, 0.09, 0.308), (0.1, 0.3, 0.5), (0.13, 0.45, 0.99))
((0.007, 0.297, 0.495), (0.7, 0.99, 0.99), (0.99, 0.99, 0.99))
((0.003, 0.005, 0.21), (0.3, 0.5, 0.7), (0.3, 0.85, 0.99))
((0.007, 0.01, 0.297), (0.7, 0.99, 0.99), (0.99, 0.99, 0.99))
\({A_5}\)
((0, 0, 0.138), (0, 0, 0.3), (0, 0, 0.597))
((0.2, 0.693, 0.891), (0.7, 0.99, 0.99), (0.969, 0.99, 0.99))
((0.003, 0.15, 0.35), (0.3, 0.5, 0.7), (0.3, 0.75, 0.99))
((0.005, 0.21, 0.45), (0.5, 0.7, 0.9), (0.769, 0.99, 0.99))
Expert15
\({A_1}\)
((0.055, 0.231, 0.467), (0.3, 0.5, 0.7), (0.4, 0.857, 0.99))
((0.04, 0.3, 0.5), (0.1, 0.3, 0.5), (0.147, 0.528, 0.95))
((0, 0, 0.3), (0, 0, 0.3), (0, 0, 0.3))
((0, 0, 0.2), (0, 0, 0.3), (0, 0, 0.545))
\({A_2}\)
((0.03, 0.119, 0.373), (0.3, 0.5, 0.7), (0.333, 0.714, 0.99))
((0.005, 0.007, 0.4), (0.5, 0.7, 0.9), (0.733, 0.99, 0.99))
((0.001, 0.003, 0.222), (0.1, 0.3, 0.5), (0.1, 0.415, 0.818))
((0.005, 0.4, 0.8), (0.5, 0.7, 0.9), (0.733, 0.99, 0.99))
\({A_3}\)
((0.018, 0.138, 0.333), (0.1, 0.3, 0.5), (0.133, 0.514, 0.99))
((0, 0, 0.2), (0, 0, 0.3), (0, 0, 0.545))
((0.003, 0.143, 0.467), (0.3, 0.5, 0.7), (0.3, 0.615, 0.99))
((0.003, 0.143, 0.467), (0.3, 0.5, 0.7), (0.4, 0.769, 0.99))
\({A_4}\)
((0.018, 0.138, 0.333), (0.1, 0.3, 0.5), (0.111, 0.429, 0.99))
((0.003, 0.143, 0.467), (0.3, 0.5, 0.7), (0.4, 0.769, 0.99))
((0.005, 0.2, 0.6), (0.5, 0.7, 0.9), (0.5, 0.862, 0.99))
((0.007, 0.566, 0.88), (0.7, 0.99, 0.99), (0.933, 0.99, 0.99))
\({A_5}\)
((0.164, 0.385, 0.653), (0.3, 0.5, 0.7), (0.3, 0.5, 0.99))
((0.001, 0.003, 0.222), (0.1, 0.3, 0.5), (0.147, 0.528, 0.95))
((0, 0, 0.133), (0, 0, 0.3), (0, 0, 0.491))
((0, 0, 0.3), (0, 0, 0.3), (0, 0, 0.436))
Expert16
\({A_1}\)
((0.323, 0.66, 0.815), (0.7, 0.99, 0.99), (0.7, 0.99, 0.99))
((0.003, 0.005, 0.4), (0.3, 0.5, 0.7), (0.388, 0.733, 0.99))
((0.003, 0.5, 0.7), (0.3, 0.5, 0.7), (0.424, 0.5, 0.7))
((0, 0, 0.3), (0, 0, 0.3), (0, 0, 0.462))
\({A_2}\)
((0.167, 0.337, 0.582), (0.7, 0.99, 0.99), (0.7, 0.99, 0.99))
((0, 0, 0.257), (0, 0, 0.3), (0, 0, 0.528))
((0, 0, 0.086), (0, 0, 0.3), (0, 0, 0.462))
((0.003, 0.5, 0.7), (0.3, 0.5, 0.7), (0.424, 0.83, 0.99))
\({A_3}\)
((0, 0, 0.176), (0, 0, 0.3), (0, 0, 0.597))
((0, 0, 0.171), (0, 0, 0.3), (0, 0, 0.508))
((0, 0, 0.171), (0, 0, 0.3), (0, 0, 0.415))
((0, 0, 0.086), (0, 0, 0.3), (0, 0, 0.528))
\({A_4}\)
((0, 0, 0.176), (0, 0, 0.3), (0, 0, 0.597))
((0.003, 0.2, 0.6), (0.3, 0.5, 0.7), (0.388, 0.733, 0.99))
((0, 0, 0.171), (0, 0, 0.3), (0, 0, 0.415))
((0.001, 0.003, 0.286), (0.1, 0.3, 0.5), (0.141, 0.498, 0.881))
\({A_5}\)
((0.185, 0.4, 0.659), (0.3, 0.5, 0.7), (0.3, 0.5, 0.99))
((0, 0, 0.3), (0, 0, 0.3), (0, 0, 0.415))
((0.003, 0.005, 0.2), (0.3, 0.5, 0.7), (0.3, 0.667, 0.99))
((0, 0, 0.3), (0, 0, 0.3), (0, 0, 0.415))
Expert17
\({A_1}\)
((0.335, 0.413, 0.528), (0.7, 0.99, 0.99), (0.99, 0.99, 0.99))
((0.007, 0.99, 0.99), (0.7, 0.99, 0.99), (0.99, 0.99, 0.99))
((0, 0, 0.003), (0, 0, 0.3), (0, 0, 0.456))
((0.007, 0.01, 0.01), (0.7, 0.99, 0.99), (0.99, 0.99, 0.99))
\({A_2}\)
((0.335, 0.413, 0.528), (0.7, 0.99, 0.99), (0.99, 0.99, 0.99))
((0.003, 0.005, 0.007), (0.3, 0.5, 0.7), (0.44, 0.791, 0.99))
((0.005, 0.007, 0.009), (0.5, 0.7, 0.9), (0.5, 0.906, 0.99))
((0.005, 0.007, 0.009), (0.5, 0.7, 0.9), (0.733, 0.99, 0.99))
\({A_3}\)
((0, 0, 0.2), (0, 0, 0.3), (0, 0, 0.597))
((0, 0, 0.12), (0, 0, 0.3), (0, 0, 0.442))
((0.001, 0.003, 0.2), (0.1, 0.3, 0.5), (0.1, 0.353, 0.632))
((0.001, 0.003, 0.2), (0.1, 0.3, 0.5), (0.133, 0.424, 0.737))
\({A_4}\)
((0.048, 0.125, 0.267), (0.1, 0.3, 0.5), (0.12, 0.597, 0.99))
((0.007, 0.01, 0.396), (0.7, 0.99, 0.99), (0.933, 0.99, 0.99))
((0.005, 0.007, 0.72), (0.5, 0.7, 0.9), (0.5, 0.741, 0.99))
((0, 0, 0.24), (0, 0, 0.3), (0, 0, 0.456))
\({A_5}\)
((0.048, 0.125, 0.267), (0.1, 0.3, 0.5), (0.199, 0.597, 0.99))
((0.007, 0.01, 0.01), (0.7, 0.99, 0.99), (0.99, 0.99, 0.99))
((0.003, 0.005, 0.007), (0.3, 0.5, 0.7), (0.3, 0.647, 0.958))
((0.005, 0.007, 0.009), (0.5, 0.7, 0.9), (0.733, 0.99, 0.99))
Expert18
\({A_1}\)
((0.231, 0.467, 0.741), (0.5, 0.7, 0.9), (0.5, 0.7, 0.99))
((0.001, 0.003, 0.286), (0.1, 0.3, 0.5), (0.129, 0.44, 0.846))
((0, 0, 0.171), (0, 0, 0.3), (0, 0, 0.415))
((0, 0, 0.3), (0, 0, 0.3), (0, 0, 0.462))
\({A_2}\)
((0.308, 0.56, 0.847), (0.5, 0.7, 0.9), (0.5, 0.7, 0.99))
((0.005, 0.7, 0.9), (0.5, 0.7, 0.9), (0.706, 0.99, 0.99))
((0.001, 0.3, 0.5), (0.1, 0.3, 0.5), (0.1, 0.3, 0.5))
((0.005, 0.7, 0.9), (0.5, 0.7, 0.9), (0.529, 0.84, 0.99))
\({A_3}\)
((0.024, 0.102, 0.294), (0.1, 0.3, 0.5), (0.171, 0.597, 0.99))
((0.003, 0.5, 0.7), (0.3, 0.5, 0.7), (0.424, 0.83, 0.99))
((0.001, 0.3, 0.5), (0.1, 0.3, 0.5), (0.1, 0.3, 0.5))
((0.005, 0.007, 0.257), (0.5, 0.7, 0.9), (0.706, 0.99, 0.99))
\({A_4}\)
((0.092, 0.267, 0.494), (0.3, 0.5, 0.7), (0.429, 0.99, 0.99))
((0.007, 0.99, 0.99), (0.7, 0.99, 0.99), (0.824, 0.99, 0.99))
((0, 0, 0.3), (0, 0, 0.3), (0, 0, 0.3))
((0.007, 0.01, 0.566), (0.7, 0.99, 0.99), (0.906, 0.99, 0.99))
\({A_5}\)
((0.185, 0.4, 0.659), (0.3, 0.5, 0.7), (0.3, 0.5, 0.99))
((0.001, 0.003, 0.286), (0.1, 0.3, 0.5), (0.141, 0.498, 0.881))
((0, 0, 0.171), (0, 0, 0.3), (0, 0, 0.415))
((0, 0, 0.3), (0, 0, 0.3), (0, 0, 0.415))
Expert19
\({A_1}\)
((0.154, 0.373, 0.635), (0.5, 0.7, 0.9), (0.714, 0.99, 0.99))
((0.003, 0.005, 0.4), (0.3, 0.5, 0.7), (0.388, 0.733, 0.99))
((0.003, 0.5, 0.7), (0.3, 0.5, 0.7), (0.424, 0.5, 0.7))
((0, 0, 0.171), (0, 0, 0.3), (0, 0, 0.508))
\({A_2}\)
((0.167, 0.337, 0.582), (0.7, 0.99, 0.99), (0.7, 0.99, 0.99))
((0.003, 0.5, 0.7), (0.3, 0.5, 0.7), (0.424, 0.83, 0.99))
((0, 0, 0.086), (0, 0, 0.3), (0, 0, 0.462))
((0, 0, 0.3), (0, 0, 0.3), (0, 0, 0.528))
\({A_3}\)
((0.072, 0.17, 0.412), (0.3, 0.5, 0.7), (0.3, 0.5, 0.99))
((0, 0, 0.3), (0, 0, 0.3), (0, 0, 0.528))
((0, 0, 0.171), (0, 0, 0.3), (0, 0, 0.415))
((0, 0, 0.3), (0, 0, 0.3), (0, 0, 0.528))
\({A_4}\)
((0.046, 0.2, 0.412), (0.1, 0.3, 0.5), (0.1, 0.3, 0.99))
((0.003, 0.5, 0.7), (0.3, 0.5, 0.7), (0.353, 0.667, 0.99))
((0, 0, 0.171), (0, 0, 0.3), (0, 0, 0.415))
((0.001, 0.3, 0.5), (0.1, 0.3, 0.5), (0.118, 0.4, 0.769))
\({A_5}\)
((0.072, 0.17, 0.412), (0.3, 0.5, 0.7), (0.429, 0.99, 0.99))
((0, 0, 0.171), (0, 0, 0.3), (0, 0, 0.528))
((0.003, 0.005, 0.2), (0.3, 0.5, 0.7), (0.3, 0.667, 0.99))
((0.001, 0.003, 0.286), (0.1, 0.3, 0.5), (0.141, 0.498, 0.881))
Expert20
\({A_1}\)
((0.01, 0.072, 0.267), (0.1, 0.3, 0.5), (0.1, 0.3, 0.8))
((0.005, 0.2, 0.6), (0.5, 0.7, 0.9), (0.667, 0.99, 0.99))
((0.12, 0.5, 0.7), (0.3, 0.5, 0.7), (0.36, 0.5, 0.7))
((0.003, 0.005, 0.311), (0.3, 0.5, 0.7), (0.44, 0.881, 0.99))
\({A_2}\)
((0.07, 0.236, 0.528), (0.7, 0.99, 0.99), (0.99, 0.99, 0.99))
((0.12, 0.5, 0.7), (0.3, 0.5, 0.7), (0.44, 0.881, 0.99))
((0, 0, 0.3), (0, 0, 0.3), (0, 0, 0.3))
((0.005, 0.2, 0.6), (0.5, 0.7, 0.9), (0.667, 0.99, 0.99))
\({A_3}\)
((0.055, 0.231, 0.467), (0.3, 0.5, 0.7), (0.4, 0.857, 0.99))
((0.2, 0.7, 0.9), (0.5, 0.7, 0.9), (0.733, 0.99, 0.99))
((0.001, 0.003, 0.222), (0.1, 0.3, 0.5), (0.1, 0.415, 0.818))
((0.001, 0.003, 0.222), (0.1, 0.3, 0.5), (0.147, 0.528, 0.95))
\({A_4}\)
((0.055, 0.231, 0.467), (0.3, 0.5, 0.7), (0.4, 0.857, 0.99))
((0, 0, 0.3), (0, 0, 0.3), (0, 0, 0.491))
((0.005, 0.2, 0.6), (0.5, 0.7, 0.9), (0.5, 0.862, 0.99))
((0.28, 0.99, 0.99), (0.7, 0.99, 0.99), (0.84, 0.99, 0.99))
\({A_5}\)
((0.03, 0.119, 0.373), (0.3, 0.5, 0.7), (0.467, 0.99, 0.99))
((0.003, 0.143, 0.467), (0.3, 0.5, 0.7), (0.44, 0.881, 0.99))
((0.12, 0.5, 0.7), (0.3, 0.5, 0.7), (0.3, 0.5, 0.7))
((0.005, 0.2, 0.6), (0.5, 0.7, 0.9), (0.733, 0.99, 0.99))
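Table 17 below gives the normalized counterpart of Table 16. Comparing corresponding cells suggests that benefit-type cells are scaled by dividing every component by the column's maximum upper bound (0.99 in this data), while cost-type cells are additionally reversed in orientation. Both rules are inferences from the two tables, not the paper's stated formulas (the exact R-number normalization is defined in the methodology section); the sketch below is illustrative only, with each cell held as three triangular fuzzy numbers (l, m, u):

```python
# Illustrative sketch (inferred, not the paper's exact formulas):
# linear scale normalization of one R-number cell, where a cell is
# three triangular fuzzy numbers (TFNs) written as (l, m, u) triples.

def norm_benefit(tfn, col_max):
    """Benefit attribute: divide each component by the column-wide max upper bound."""
    l, m, u = tfn
    return (round(l / col_max, 2), round(m / col_max, 2), round(u / col_max, 2))

def norm_cost(tfn, col_max):
    """Cost attribute (assumed rule): reverse orientation, (max-u, max-m, max-l)/max."""
    l, m, u = tfn
    return (round((col_max - u) / col_max, 2),
            round((col_max - m) / col_max, 2),
            round((col_max - l) / col_max, 2))

# Expert1, A1, C1 from Table 16; the column's maximum upper bound is 0.99.
cell = [(0.15, 0.35, 0.692), (0.5, 0.7, 0.9), (0.55, 0.91, 0.99)]
normalized = [norm_benefit(tfn, 0.99) for tfn in cell]
```

Applying `norm_benefit` to the third TFN of that cell, (0.55, 0.91, 0.99), gives (0.56, 0.92, 1.0), matching the corresponding entry in Table 17; small rounding differences elsewhere suggest the paper rounds at a different stage.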
Table 17
The normalized matrix
 
\({C_1}\)
\({C_2}\)
\({C_3}\)
\({C_4}\)
Expert1
\({A_1}\)
((0.15, 0.35, 0.7), (0.5, 0.71, 0.91), (0.56, 0.92, 1))
((0, 0, 0.38), (0.09, 0.29, 0.49), (0.09, 0.29, 0.71))
((0, 0.14, 0.53), (0.29, 0.49, 0.7), (0.79, 0.99, 1))
((0.4, 1, 1), (0.7, 1, 1), (0.91, 1, 1))
\({A_2}\)
((0.01, 0.01, 0.46), (0.71, 1, 1), (1, 1, 1))
((0, 0, 0), (0, 0, 0.29), (0.3, 0.5, 0.99))
((0.48, 1, 1), (0.7, 1, 1), (0.85, 1, 1))
((0.4, 1, 1), (0.7, 1, 1), (0.85, 1, 1))
\({A_3}\)
((0.01, 0.09, 0.31), (0.1, 0.3, 0.5), (0.1, 0.3, 0.72))
((0.4, 1, 1), (0.7, 1, 1), (0.91, 1, 1))
((0.24, 0.61, 0.9), (0.49, 0.7, 0.9), (0.65, 0.85, 1))
((0, 0, 0.02), (0, 0, 0.29), (0.3, 0.5, 0.99))
\({A_4}\)
((0, 0, 0.23), (0.1, 0.3, 0.5), (0.15, 0.51, 1))
((0, 0.24, 0.63), (0.29, 0.49, 0.7), (0.36, 0.65, 0.91))
((0, 0, 0.29), (0, 0, 0.29), (0.7, 0.99, 0.99))
((0, 0.4, 0.84), (0.49, 0.7, 0.9), (0.85, 1, 1))
\({A_5}\)
((0.03, 0.15, 0.43), (0.3, 0.5, 0.71), (0.33, 0.66, 1))
((0.4, 1, 1), (0.7, 1, 1), (0.91, 1, 1))
((0, 0.24, 0.7), (0.29, 0.49, 0.7), (0.65, 0.85, 1))
((0, 0.4, 0.84), (0.49, 0.7, 0.9), (0.75, 0.91, 1))
Expert2
\({A_1}\)
((0.15, 0.32, 0.58), (0.7, 1, 1), (1, 1, 1))
((0, 0, 0), (0, 0, 0.29), (0.72, 0.99, 1))
((0.09, 0.29, 0.41), (0.09, 0.29, 0.5), (0.09, 0.29, 1))
((0, 0, 0), (0, 0, 0.29), (0.71, 0.99, 0.99))
\({A_2}\)
((0.21, 0.46, 0.74), (0.49, 0.7, 0.91), (0.57, 1, 1))
((0, 0.33, 0.65), (0.29, 0.5, 0.7), (0.4, 0.8, 1))
((0, 0.15, 0.5), (0.09, 0.29, 0.5), (0.48, 0.99, 1))
((0, 0.06, 0.41), (0.09, 0.29, 0.5), (0.22, 0.72, 1))
\({A_3}\)
((0, 0.08, 0.28), (0.08, 0.29, 0.49), (0.15, 0.59, 1))
((0, 0, 0.29), (0.09, 0.29, 0.5), (0.74, 1, 1))
((0.11, 0.56, 0.9), (0.5, 0.7, 0.9), (1, 1, 1))
((0.11, 0.5, 0.86), (0.5, 0.7, 0.9), (0.86, 1, 1))
\({A_4}\)
((0.2, 0.52, 0.7), (0.7, 1, 1), (0.7, 1, 1))
((0, 0, 0.09), (0, 0, 0.29), (0, 0, 1))
((0, 0.25, 0.5), (0.09, 0.29, 0.5), (0.22, 0.72, 1))
((0, 0, 0.08), (0, 0, 0.29), (0, 0, 0.99))
\({A_5}\)
((0, 0.08, 0.28), (0.08, 0.29, 0.49), (0.15, 0.59, 1))
((0, 0, 0), (0, 0, 0.29), (0.72, 0.99, 1))
((0, 0.26, 0.7), (0.29, 0.5, 0.7), (0.99, 1, 1))
((0, 0, 0.29), (0.09, 0.29, 0.5), (0.74, 0.99, 1))
Expert3
\({A_1}\)
((0, 0, 0.25), (0, 0, 0.3), (0, 0, 0.6))
((0.22, 0.6, 0.88), (0.5, 0.7, 0.9), (0.57, 0.88, 1))
((0.63, 1, 1), (0.7, 1, 1), (0.74, 1, 1))
((0.53, 1, 1), (0.7, 1, 1), (0.74, 1, 1))
\({A_2}\)
((0, 0, 0.18), (0, 0, 0.3), (0, 0, 0.6))
((0, 0, 0), (0, 0, 0.29), (0.71, 0.99, 0.99))
((0.22, 0.6, 0.9), (0.49, 0.7, 0.9), (0.86, 1, 1))
((0, 0, 0.29), (0.09, 0.29, 0.49), (0.74, 0.99, 0.99))
\({A_3}\)
((0.17, 0.34, 0.59), (0.71, 1, 1), (1, 1, 1))
((0, 0.26, 0.61), (0.29, 0.5, 0.7), (0.29, 0.5, 1))
((0.49, 0.7, 0.9), (0.49, 0.7, 0.9), (0.49, 0.7, 1))
((0, 0, 0.35), (0.09, 0.29, 0.49), (0.09, 0.29, 0.99))
\({A_4}\)
((0.33, 0.67, 0.82), (0.71, 1, 1), (0.81, 1, 1))
((0, 0, 0), (0, 0, 0.29), (0.71, 0.99, 0.99))
((0, 0.06, 0.49), (0.09, 0.29, 0.49), (0.74, 0.99, 0.99))
((0, 0, 0), (0, 0, 0.29), (0.71, 0.99, 0.99))
\({A_5}\)
((0, 0, 0.18), (0, 0, 0.3), (0, 0, 0.6))
((0.15, 0.56, 0.87), (0.5, 0.7, 0.9), (0.57, 0.88, 1))
((0.63, 1, 1), (0.7, 1, 1), (0.74, 1, 1))
((0.49, 1, 1), (0.7, 1, 1), (0.74, 1, 1))
Expert4
\({A_1}\)
((0.18, 0.43, 0.72), (0.5, 0.7, 0.91), (0.5, 0.7, 1))
((0, 0.02, 0.39), (0.09, 0.29, 0.49), (0.09, 0.29, 0.8))
((0, 0.3, 0.56), (0.29, 0.49, 0.7), (0.69, 0.99, 1))
((0.5, 1, 1), (0.7, 1, 1), (0.7, 1, 1))
\({A_2}\)
((0.06, 0.23, 0.53), (0.7, 1, 1), (0.94, 1, 1))
((0, 0, 0), (0, 0, 0.29), (0.33, 0.71, 0.99))
((0.1, 0.46, 0.7), (0.29, 0.49, 0.7), (0.37, 0.71, 1))
((0, 0.11, 0.56), (0.29, 0.49, 0.7), (0.53, 0.86, 1))
\({A_3}\)
((0, 0.06, 0.26), (0.09, 0.3, 0.5), (0.17, 0.6, 1))
((0, 0.11, 0.56), (0.29, 0.49, 0.7), (0.84, 0.99, 1))
((0.5, 1, 1), (0.7, 1, 1), (0.87, 1, 1))
((0, 0, 0), (0, 0, 0.29), (0.78, 0.99, 0.99))
\({A_4}\)
((0.03, 0.18, 0.4), (0.09, 0.3, 0.5), (0.1, 0.43, 1))
((0, 0.3, 0.64), (0.29, 0.49, 0.7), (0.37, 0.71, 1))
((0.7, 1, 1), (0.7, 1, 1), (0.7, 1, 1))
((0.17, 0.58, 0.88), (0.49, 0.7, 0.9), (0.55, 0.83, 1))
\({A_5}\)
((0.02, 0.11, 0.37), (0.3, 0.5, 0.7), (0.53, 1, 1))
((0.42, 1, 1), (0.7, 1, 1), (0.93, 1, 1))
((0, 0.3, 0.7), (0.29, 0.49, 0.7), (0.69, 0.99, 1))
((0, 0, 0), (0, 0, 0.29), (0.78, 0.99, 0.99))
Expert5
\({A_1}\)
((0.05, 0.13, 0.27), (0.1, 0.3, 0.51), (0.2, 0.6, 1))
((0, 0, 0.25), (0, 0, 0.29), (0.6, 0.99, 1))
((0, 0.17, 0.46), (0.09, 0.29, 0.49), (0.64, 0.99, 0.99))
((0.11, 0.41, 0.68), (0.29, 0.5, 0.7), (0.72, 1, 1))
\({A_2}\)
((0.06, 0.21, 0.4), (0.1, 0.3, 0.51), (0.12, 0.6, 1))
((0, 0, 0), (0, 0, 0.29), (0.6, 0.99, 1))
((0, 0.17, 0.49), (0.09, 0.29, 0.49), (0.64, 0.99, 0.99))
((0, 0, 0.26), (0.09, 0.29, 0.5), (0.64, 0.99, 1))
\({A_3}\)
((0.05, 0.13, 0.27), (0.1, 0.3, 0.51), (0.2, 0.6, 1))
((0.03, 0.35, 0.64), (0.29, 0.5, 0.7), (0.44, 1, 1))
((0.42, 0.68, 0.9), (0.49, 0.7, 0.9), (0.6, 1, 1))
((0.31, 0.61, 0.88), (0.5, 0.7, 0.9), (0.6, 1, 1))
\({A_4}\)
((0, 0, 0.2), (0, 0, 0.3), (0, 0, 0.6))
((0, 0, 0), (0, 0, 0.29), (0.99, 0.99, 1))
((0.03, 0.35, 0.7), (0.29, 0.49, 0.7), (0.99, 0.99, 1))
((0, 0, 0), (0, 0, 0.29), (0.99, 0.99, 0.99))
\({A_5}\)
((0.15, 0.21, 0.38), (0.3, 0.51, 0.71), (0.6, 1, 1))
((0, 0.2, 0.56), (0.29, 0.5, 0.7), (1, 1, 1))
((0.58, 1, 1), (0.7, 1, 1), (1, 1, 1))
((0, 0, 0.26), (0.09, 0.29, 0.5), (0.99, 0.99, 1))
Expert6
\({A_1}\)
((0.18, 0.42, 0.59), (0.66, 1, 1), (1, 1, 1))
((0, 0, 0.04), (0, 0, 0.29), (0.99, 0.99, 0.99))
((0, 0.32, 0.59), (0.29, 0.49, 0.7), (0.99, 0.99, 1))
((0, 0, 0.04), (0, 0, 0.29), (0.99, 0.99, 0.99))
\({A_2}\)
((0.08, 0.25, 0.52), (0.42, 0.66, 0.89), (1, 1, 1))
((0, 0, 0.32), (0.09, 0.29, 0.5), (0.99, 0.99, 1))
((0.57, 1, 1), (0.7, 1, 1), (1, 1, 1))
((0, 0, 0.32), (0.09, 0.29, 0.49), (0.99, 0.99, 0.99))
\({A_3}\)
((0, 0.22, 0.45), (0.19, 0.42, 0.66), (0.54, 1, 1))
((0, 0.32, 0.62), (0.29, 0.5, 0.7), (0.99, 1, 1))
((0.35, 0.65, 0.9), (0.49, 0.7, 0.9), (0.99, 1, 1))
((0, 0.32, 0.62), (0.29, 0.49, 0.7), (0.99, 0.99, 1))
\({A_4}\)
((0.18, 0.42, 0.59), (0.66, 1, 1), (1, 1, 1))
((0, 0, 0.04), (0, 0, 0.29), (0.99, 0.99, 0.99))
((0, 0, 0.29), (0, 0, 0.29), (0.99, 0.99, 0.99))
((0, 0, 0.04), (0, 0, 0.29), (0.99, 0.99, 0.99))
\({A_5}\)
((0.25, 0.53, 0.84), (0.42, 0.66, 0.89), (0.42, 0.66, 1))
((0.35, 0.65, 0.89), (0.5, 0.7, 0.9), (0.5, 0.7, 1))
((0.7, 1, 1), (0.7, 1, 1), (0.7, 1, 1))
((0.61, 1, 1), (0.7, 1, 1), (0.7, 1, 1))
Expert7
\({A_1}\)
((0, 0.08, 0.28), (0.08, 0.29, 0.49), (0.15, 0.59, 1))
((0, 0.06, 0.41), (0.09, 0.29, 0.49), (0.22, 0.72, 0.99))
((0, 0.33, 0.57), (0.29, 0.49, 0.7), (0.8, 0.99, 1))
((0.47, 1, 1), (0.7, 1, 1), (0.91, 1, 1))
\({A_2}\)
((0, 0.08, 0.28), (0.08, 0.29, 0.49), (0.12, 0.59, 1))
((0, 0.16, 0.57), (0.29, 0.49, 0.7), (0.8, 0.99, 1))
((0.58, 1, 1), (0.7, 1, 1), (0.83, 1, 1))
((0, 0.16, 0.57), (0.29, 0.49, 0.7), (0.6, 0.99, 1))
\({A_3}\)
((0.42, 0.8, 0.94), (0.7, 1, 1), (1, 1, 1))
((0.49, 1, 1), (0.7, 1, 1), (0.7, 1, 1))
((0.58, 1, 1), (0.7, 1, 1), (0.83, 1, 1))
((0.58, 1, 1), (0.7, 1, 1), (0.83, 1, 1))
\({A_4}\)
((0.1, 0.22, 0.52), (0.49, 0.7, 0.91), (1, 1, 1))
((0, 0.16, 0.57), (0.29, 0.49, 0.7), (0.8, 0.99, 1))
((0.47, 1, 1), (0.7, 1, 1), (1, 1, 1))
((0.47, 1, 1), (0.7, 1, 1), (1, 1, 1))
\({A_5}\)
((0.1, 0.22, 0.52), (0.49, 0.7, 0.91), (0.86, 1, 1))
((0.49, 1, 1), (0.7, 1, 1), (0.74, 1, 1))
((0, 0.33, 0.7), (0.29, 0.49, 0.7), (0.8, 0.99, 1))
((0.11, 0.5, 0.86), (0.49, 0.7, 0.9), (0.86, 1, 1))
Expert8
\({A_1}\)
((0.05, 0.15, 0.4), (0.29, 0.49, 0.7), (0.51, 1, 1))
((0, 0, 0.17), (0, 0, 0.29), (0.14, 0.6, 1))
((0, 0.06, 0.29), (0.09, 0.29, 0.49), (0.74, 0.99, 0.99))
((0, 0, 0), (0, 0, 0.29), (0.71, 0.99, 0.99))
\({A_2}\)
((0.15, 0.32, 0.58), (0.7, 1, 1), (1, 1, 1))
((0, 0.16, 0.57), (0.29, 0.5, 0.7), (0.8, 1, 1))
((0.3, 0.64, 0.9), (0.49, 0.7, 0.9), (0.71, 1, 1))
((0.47, 1, 1), (0.7, 1, 1), (0.83, 1, 1))
\({A_3}\)
((0.31, 0.66, 0.82), (0.7, 1, 1), (0.8, 1, 1))
((0, 0.26, 0.61), (0.29, 0.5, 0.7), (0.29, 0.5, 1))
((0, 0.25, 0.49), (0.09, 0.29, 0.49), (0.22, 0.72, 0.99))
((0.22, 0.6, 0.88), (0.49, 0.7, 0.9), (0.57, 0.88, 1))
\({A_4}\)
((0, 0.08, 0.28), (0.08, 0.29, 0.49), (0.15, 0.59, 1))
((0, 0, 0), (0, 0, 0.29), (0.72, 0.99, 1))
((0.53, 1, 1), (0.7, 1, 1), (0.91, 1, 1))
((0, 0.16, 0.57), (0.29, 0.49, 0.7), (0.8, 0.99, 1))
\({A_5}\)
((0.2, 0.52, 0.7), (0.7, 1, 1), (1, 1, 1))
((0, 0, 0.09), (0, 0, 0.29), (0.14, 0.6, 1))
((0, 0.15, 0.49), (0.09, 0.29, 0.49), (0.48, 0.99, 0.99))
((0, 0, 0.35), (0.09, 0.29, 0.49), (0.48, 0.99, 0.99))
Expert9
\({A_1}\)
((0.1, 0.3, 0.56), (0.3, 0.5, 0.7), (0.3, 0.5, 1))
((0.17, 0.58, 0.88), (0.49, 0.7, 0.9), (0.49, 0.7, 0.96))
((0.7, 1, 1), (0.7, 1, 1), (0.7, 1, 1))
((0.5, 1, 1), (0.7, 1, 1), (0.7, 1, 1))
\({A_2}\)
((0, 0.06, 0.26), (0.09, 0.3, 0.5), (0.09, 0.3, 0.81))
((0, 0.11, 0.56), (0.29, 0.49, 0.7), (0.29, 0.49, 0.88))
((0.49, 0.7, 0.9), (0.49, 0.7, 0.9), (0.49, 0.7, 0.96))
((0, 0, 0.26), (0.09, 0.29, 0.49), (0.09, 0.29, 0.8))
\({A_3}\)
((0.25, 0.61, 0.8), (0.7, 1, 1), (0.78, 1, 1))
((0.5, 1, 1), (0.7, 1, 1), (0.73, 1, 1))
((0.61, 1, 1), (0.7, 1, 1), (0.73, 1, 1))
((0.5, 1, 1), (0.7, 1, 1), (0.73, 1, 1))
\({A_4}\)
((0, 0.06, 0.26), (0.09, 0.3, 0.5), (0.15, 0.6, 1))
((0, 0, 0), (0, 0, 0.29), (0.56, 0.99, 0.99))
((0, 0.02, 0.49), (0.09, 0.29, 0.49), (0.6, 0.99, 0.99))
((0, 0, 0), (0, 0, 0.29), (0.56, 0.99, 0.99))
\({A_5}\)
((0.03, 0.18, 0.4), (0.09, 0.3, 0.5), (0.09, 0.3, 0.81))
((0.17, 0.58, 0.88), (0.49, 0.7, 0.9), (0.49, 0.7, 0.96))
((0.7, 1, 1), (0.7, 1, 1), (0.7, 1, 1))
((0.5, 1, 1), (0.7, 1, 1), (0.7, 1, 1))
Expert10
\({A_1}\)
((0.02, 0.11, 0.37), (0.3, 0.5, 0.7), (0.33, 0.72, 1))
((0, 0.02, 0.39), (0.09, 0.29, 0.49), (0.09, 0.29, 0.8))
((0.1, 0.46, 0.64), (0.29, 0.49, 0.7), (0.37, 0.71, 1))
((0.45, 1, 1), (0.7, 1, 1), (0.8, 1, 1))
\({A_2}\)
((0.06, 0.23, 0.53), (0.7, 1, 1), (0.94, 1, 1))
((0.42, 1, 1), (0.7, 1, 1), (0.7, 1, 1))
((0.5, 1, 1), (0.7, 1, 1), (0.87, 1, 1))
((0.42, 1, 1), (0.7, 1, 1), (0.7, 1, 1))
\({A_3}\)
((0.12, 0.46, 0.66), (0.7, 1, 1), (0.94, 1, 1))
((0.5, 1, 1), (0.7, 1, 1), (0.73, 1, 1))
((0.29, 0.49, 0.7), (0.29, 0.49, 0.7), (0.29, 0.49, 0.88))
((0, 0.22, 0.6), (0.29, 0.49, 0.7), (0.53, 0.86, 1))
\({A_4}\)
((0, 0.06, 0.26), (0.09, 0.3, 0.5), (0.13, 0.51, 1))
((0.42, 1, 1), (0.7, 1, 1), (0.87, 1, 1))
((0.5, 1, 1), (0.7, 1, 1), (0.87, 1, 1))
((0.5, 1, 1), (0.7, 1, 1), (0.7, 1, 1))
\({A_5}\)
((0.06, 0.23, 0.53), (0.7, 1, 1), (1, 1, 1))
((0.5, 1, 1), (0.7, 1, 1), (0.7, 1, 1))
((0.1, 0.46, 0.7), (0.29, 0.49, 0.7), (0.37, 0.71, 1))
((0.08, 0.53, 0.87), (0.49, 0.7, 0.9), (0.66, 0.91, 1))
Expert11
\({A_1}\)
((0.09, 0.27, 0.5), (0.3, 0.51, 0.71), (0.43, 1, 1))
((0, 0, 0), (0, 0, 0.29), (0.43, 0.99, 1))
((0, 0.15, 0.29), (0.09, 0.29, 0.5), (0.48, 1, 1))
((0, 0.16, 0.57), (0.29, 0.5, 0.7), (0.6, 1, 1))
\({A_2}\)
((0.17, 0.34, 0.59), (0.71, 1, 1), (0.71, 1, 1))
((0, 0.16, 0.57), (0.29, 0.5, 0.7), (0.8, 1, 1))
((0, 0.06, 0.5), (0.09, 0.29, 0.5), (0.74, 1, 1))
((0, 0, 0.29), (0.09, 0.29, 0.5), (0.74, 0.99, 1))
\({A_3}\)
((0, 0, 0.18), (0, 0, 0.3), (0, 0, 0.6))
((0, 0, 0.35), (0.09, 0.29, 0.5), (0.48, 1, 1))
((0.02, 0.4, 0.7), (0.29, 0.5, 0.7), (0.6, 1, 1))
((0.15, 0.56, 0.87), (0.5, 0.7, 0.9), (0.71, 1, 1))
\({A_4}\)
((0, 0, 0.25), (0, 0, 0.3), (0, 0, 0.6))
((0, 0, 0), (0, 0, 0.29), (0.99, 0.99, 1))
((0, 0, 0.5), (0.09, 0.29, 0.5), (0.99, 1, 1))
((0, 0, 0), (0, 0, 0.29), (0.99, 0.99, 0.99))
\({A_5}\)
((0.07, 0.17, 0.42), (0.3, 0.51, 0.71), (0.43, 1, 1))
((0, 0.16, 0.57), (0.29, 0.5, 0.7), (0.8, 1, 1))
((0, 0.33, 0.7), (0.29, 0.5, 0.7), (0.8, 1, 1))
((0, 0, 0.29), (0.09, 0.29, 0.5), (0.74, 0.99, 1))
Expert12
\({A_1}\)
((0.16, 0.3, 0.47), (0.3, 0.51, 0.71), (0.48, 1, 1))
((0.23, 0.52, 0.85), (0.49, 0.7, 0.9), (0.6, 1, 1))
((0.65, 1, 1), (0.7, 1, 1), (0.76, 1, 1))
((0.54, 1, 1), (0.7, 1, 1), (0.76, 1, 1))
\({A_2}\)
((0.34, 0.42, 0.53), (0.71, 1, 1), (0.71, 1, 1))
((0.54, 1, 1), (0.7, 1, 1), (0.88, 1, 1))
((0.36, 0.64, 0.9), (0.49, 0.7, 0.9), (0.8, 1, 1))
((0, 0, 0.26), (0.09, 0.29, 0.49), (0.64, 0.99, 0.99))
\({A_3}\)
((0.15, 0.21, 0.38), (0.3, 0.51, 0.71), (0.3, 0.51, 1))
((0, 0.29, 0.6), (0.29, 0.49, 0.7), (0.72, 0.99, 1))
((0.36, 0.64, 0.9), (0.49, 0.7, 0.9), (0.8, 1, 1))
((0, 0, 0.33), (0.09, 0.29, 0.49), (0.64, 0.99, 0.99))
\({A_4}\)
((0, 0, 0.24), (0, 0, 0.3), (0, 0, 0.6))
((0, 0, 0), (0, 0, 0.29), (0.6, 0.99, 0.99))
((0, 0.17, 0.49), (0.09, 0.29, 0.49), (0.64, 0.99, 0.99))
((0, 0, 0), (0, 0, 0.29), (0.6, 0.99, 0.99))
\({A_5}\)
((0.15, 0.21, 0.38), (0.3, 0.51, 0.71), (0.48, 1, 1))
((0.23, 0.52, 0.85), (0.49, 0.7, 0.9), (0.99, 1, 1))
((0.58, 1, 1), (0.7, 1, 1), (1, 1, 1))
((0.54, 1, 1), (0.7, 1, 1), (1, 1, 1))
Expert13
\({A_1}\)
((0.12, 0.24, 0.53), (0.51, 0.71, 0.91), (0.51, 0.71, 1))
((0, 0.26, 0.61), (0.29, 0.49, 0.7), (0.6, 0.99, 1))
((0.63, 1, 1), (0.7, 1, 1), (0.74, 1, 1))
((0.47, 1, 1), (0.7, 1, 1), (0.7, 1, 1))
\({A_2}\)
((0.12, 0.24, 0.53), (0.51, 0.71, 0.91), (0.87, 1, 1))
((0, 0.16, 0.57), (0.29, 0.49, 0.7), (0.29, 0.49, 1))
((0.02, 0.39, 0.7), (0.29, 0.49, 0.7), (0.6, 0.99, 1))
((0, 0.16, 0.57), (0.29, 0.49, 0.7), (0.8, 0.99, 1))
\({A_3}\)
((0.03, 0.16, 0.36), (0.1, 0.3, 0.51), (0.14, 0.6, 1))
((0.47, 1, 1), (0.7, 1, 1), (0.7, 1, 1))
((0.3, 0.64, 0.9), (0.49, 0.7, 0.9), (0.71, 1, 1))
((0.49, 1, 1), (0.7, 1, 1), (0.83, 1, 1))
\({A_4}\)
((0, 0, 0.21), (0, 0, 0.3), (0, 0, 0.6))
((0, 0.33, 0.64), (0.29, 0.49, 0.7), (0.29, 0.49, 1))
((0.58, 1, 1), (0.7, 1, 1), (0.83, 1, 1))
((0.15, 0.56, 0.87), (0.49, 0.7, 0.9), (0.71, 1, 1))
\({A_5}\)
((0, 0, 0.18), (0, 0, 0.3), (0, 0, 0.6))
((0.47, 1, 1), (0.7, 1, 1), (0.83, 1, 1))
((0, 0.33, 0.7), (0.29, 0.49, 0.7), (0.8, 0.99, 1))
((0.11, 0.5, 0.86), (0.49, 0.7, 0.9), (0.86, 1, 1))
Expert14
\({A_1}\)
((0, 0.01, 0.33), (0.3, 0.51, 0.71), (0.3, 0.51, 1))
((0, 0, 0.13), (0, 0, 0.29), (0.1, 0.3, 0.8))
((0, 0, 0.22), (0.09, 0.29, 0.5), (0.73, 0.99, 1))
((0, 0, 0), (0, 0, 0.29), (0.7, 0.99, 0.99))
\({A_2}\)
((0.01, 0.01, 0.46), (0.71, 1, 1), (1, 1, 1))
((0, 0, 0.53), (0.29, 0.5, 0.7), (0.65, 0.85, 1))
((0, 0, 0.5), (0.09, 0.29, 0.5), (0.55, 0.79, 1))
((0, 0, 0.53), (0.29, 0.5, 0.7), (0.65, 0.85, 1))
\({A_3}\)
((0, 0, 0.19), (0, 0, 0.3), (0, 0, 0.6))
((0, 0, 0.3), (0.09, 0.29, 0.5), (0.09, 0.29, 0.71))
((0.24, 0.61, 0.9), (0.5, 0.7, 0.9), (0.65, 0.85, 1))
((0.04, 0.49, 0.86), (0.5, 0.7, 0.9), (0.65, 0.85, 1))
\({A_4}\)
((0.01, 0.09, 0.31), (0.1, 0.3, 0.51), (0.13, 0.45, 1))
((0, 0, 0), (0, 0, 0.29), (0.5, 0.7, 1))
((0, 0.14, 0.7), (0.29, 0.5, 0.7), (0.79, 1, 1))
((0, 0, 0), (0, 0, 0.29), (0.7, 0.99, 0.99))
\({A_5}\)
((0, 0, 0.14), (0, 0, 0.3), (0, 0, 0.6))
((0, 0, 0.02), (0, 0, 0.29), (0.1, 0.3, 0.8))
((0, 0.24, 0.7), (0.29, 0.5, 0.7), (0.65, 0.85, 1))
((0, 0, 0.22), (0.09, 0.29, 0.5), (0.55, 0.79, 1))
Expert15
\({A_1}\)
((0.04, 0.22, 0.46), (0.29, 0.5, 0.7), (0.39, 0.86, 1))
((0.04, 0.47, 0.85), (0.49, 0.7, 0.9), (0.49, 0.7, 0.96))
((0.7, 1, 1), (0.7, 1, 1), (0.7, 1, 1))
((0.45, 1, 1), (0.7, 1, 1), (0.8, 1, 1))
\({A_2}\)
((0.01, 0.1, 0.37), (0.29, 0.5, 0.7), (0.32, 0.72, 1))
((0, 0, 0.26), (0.09, 0.29, 0.49), (0.6, 0.99, 0.99))
((0.17, 0.58, 0.9), (0.49, 0.7, 0.9), (0.78, 1, 1))
((0, 0, 0.26), (0.09, 0.29, 0.49), (0.19, 0.6, 0.99))
\({A_3}\)
((0, 0.12, 0.32), (0.08, 0.29, 0.5), (0.12, 0.51, 1))
((0.45, 1, 1), (0.7, 1, 1), (0.8, 1, 1))
((0, 0.38, 0.7), (0.29, 0.49, 0.7), (0.53, 0.86, 1))
((0, 0.22, 0.6), (0.29, 0.49, 0.7), (0.53, 0.86, 1))
\({A_4}\)
((0, 0.12, 0.32), (0.08, 0.29, 0.5), (0.1, 0.42, 1))
((0, 0.22, 0.6), (0.29, 0.49, 0.7), (0.53, 0.86, 1))
((0, 0.13, 0.49), (0.09, 0.29, 0.49), (0.39, 0.8, 0.99))
((0, 0, 0.06), (0, 0, 0.29), (0.11, 0.43, 0.99))
\({A_5}\)
((0.15, 0.38, 0.65), (0.29, 0.5, 0.7), (0.29, 0.5, 1))
((0.04, 0.47, 0.85), (0.49, 0.7, 0.9), (0.78, 1, 1))
((0.5, 1, 1), (0.7, 1, 1), (0.87, 1, 1))
((0.56, 1, 1), (0.7, 1, 1), (0.7, 1, 1))
Expert16
\({A_1}\)
((0.33, 0.67, 0.82), (0.71, 1, 1), (0.71, 1, 1))
((0, 0.26, 0.61), (0.29, 0.49, 0.7), (0.6, 0.99, 1))
((0.29, 0.49, 0.57), (0.29, 0.49, 0.7), (0.29, 0.49, 1))
((0.53, 1, 1), (0.7, 1, 1), (0.7, 1, 1))
\({A_2}\)
((0.17, 0.34, 0.59), (0.71, 1, 1), (0.71, 1, 1))
((0.47, 1, 1), (0.7, 1, 1), (0.74, 1, 1))
((0.53, 1, 1), (0.7, 1, 1), (0.91, 1, 1))
((0, 0.16, 0.57), (0.29, 0.49, 0.7), (0.29, 0.49, 1))
\({A_3}\)
((0, 0, 0.18), (0, 0, 0.3), (0, 0, 0.6))
((0.49, 1, 1), (0.7, 1, 1), (0.83, 1, 1))
((0.58, 1, 1), (0.7, 1, 1), (0.83, 1, 1))
((0.47, 1, 1), (0.7, 1, 1), (0.91, 1, 1))
\({A_4}\)
((0, 0, 0.18), (0, 0, 0.3), (0, 0, 0.6))
((0, 0.26, 0.61), (0.29, 0.49, 0.7), (0.39, 0.8, 1))
((0.58, 1, 1), (0.7, 1, 1), (0.83, 1, 1))
((0.11, 0.5, 0.86), (0.49, 0.7, 0.9), (0.71, 1, 1))
\({A_5}\)
((0.19, 0.4, 0.67), (0.3, 0.51, 0.71), (0.3, 0.51, 1))
((0.58, 1, 1), (0.7, 1, 1), (0.7, 1, 1))
((0, 0.33, 0.7), (0.29, 0.49, 0.7), (0.8, 0.99, 1))
((0.58, 1, 1), (0.7, 1, 1), (0.7, 1, 1))
Expert17
\({A_1}\)
((0.34, 0.42, 0.53), (0.71, 1, 1), (1, 1, 1))
((0, 0, 0), (0, 0, 0.29), (0, 0, 0.99))
((0.54, 1, 1), (0.7, 1, 1), (1, 1, 1))
((0, 0, 0), (0, 0, 0.29), (0.99, 0.99, 0.99))
\({A_2}\)
((0.34, 0.42, 0.53), (0.71, 1, 1), (1, 1, 1))
((0, 0.2, 0.56), (0.29, 0.49, 0.7), (0.99, 0.99, 1))
((0, 0.08, 0.49), (0.09, 0.29, 0.49), (0.99, 0.99, 0.99))
((0, 0, 0.26), (0.09, 0.29, 0.49), (0.99, 0.99, 0.99))
\({A_3}\)
((0, 0, 0.2), (0, 0, 0.3), (0, 0, 0.6))
((0.55, 1, 1), (0.7, 1, 1), (0.88, 1, 1))
((0.36, 0.64, 0.9), (0.49, 0.7, 0.9), (0.8, 1, 1))
((0.26, 0.57, 0.87), (0.49, 0.7, 0.9), (0.8, 1, 1))
\({A_4}\)
((0.05, 0.13, 0.27), (0.1, 0.3, 0.51), (0.12, 0.6, 1))
((0, 0, 0.06), (0, 0, 0.29), (0.6, 0.99, 0.99))
((0, 0.25, 0.49), (0.09, 0.29, 0.49), (0.27, 0.99, 0.99))
((0.54, 1, 1), (0.7, 1, 1), (0.76, 1, 1))
\({A_5}\)
((0.05, 0.13, 0.27), (0.1, 0.3, 0.51), (0.2, 0.6, 1))
((0, 0, 0), (0, 0, 0.29), (0.99, 0.99, 0.99))
((0.03, 0.35, 0.7), (0.29, 0.49, 0.7), (0.99, 0.99, 1))
((0, 0, 0.26), (0.09, 0.29, 0.49), (0.99, 0.99, 0.99))
Expert18
\({A_1}\)
((0.21, 0.46, 0.74), (0.49, 0.7, 0.91), (0.49, 0.7, 1))
((0.15, 0.56, 0.87), (0.5, 0.7, 0.9), (0.71, 1, 1))
((0.58, 1, 1), (0.7, 1, 1), (0.83, 1, 1))
((0.53, 1, 1), (0.7, 1, 1), (0.7, 1, 1))
\({A_2}\)
((0.29, 0.55, 0.85), (0.49, 0.7, 0.91), (0.49, 0.7, 1))
((0, 0, 0.29), (0.09, 0.29, 0.5), (0.09, 0.29, 1))
((0.49, 0.7, 0.9), (0.49, 0.7, 0.9), (0.49, 0.7, 1))
((0, 0.15, 0.47), (0.09, 0.29, 0.49), (0.09, 0.29, 0.99))
\({A_3}\)
((0, 0.08, 0.28), (0.08, 0.29, 0.49), (0.15, 0.59, 1))
((0, 0.16, 0.57), (0.29, 0.5, 0.7), (0.29, 0.5, 1))
((0.49, 0.7, 0.9), (0.49, 0.7, 0.9), (0.49, 0.7, 1))
((0, 0, 0.29), (0.09, 0.29, 0.49), (0.74, 0.99, 0.99))
\({A_4}\)
((0.07, 0.25, 0.49), (0.29, 0.49, 0.7), (0.42, 1, 1))
((0, 0, 0.17), (0, 0, 0.29), (0, 0, 0.99))
((0.7, 1, 1), (0.7, 1, 1), (0.7, 1, 1))
((0, 0, 0.08), (0, 0, 0.29), (0.43, 0.99, 0.99))
\({A_5}\)
((0.17, 0.39, 0.66), (0.29, 0.49, 0.7), (0.29, 0.49, 1))
((0.11, 0.5, 0.86), (0.5, 0.7, 0.9), (0.71, 1, 1))
((0.58, 1, 1), (0.7, 1, 1), (0.83, 1, 1))
((0.58, 1, 1), (0.7, 1, 1), (0.7, 1, 1))
Expert19
\({A_1}\)
((0.11, 0.35, 0.62), (0.48, 0.69, 0.9), (0.71, 1, 1))
((0, 0.26, 0.61), (0.29, 0.49, 0.7), (0.6, 0.99, 1))
((0.29, 0.49, 0.57), (0.29, 0.49, 0.7), (0.29, 0.49, 1))
((0.49, 1, 1), (0.7, 1, 1), (0.83, 1, 1))
\({A_2}\)
((0.13, 0.31, 0.57), (0.69, 1, 1), (0.69, 1, 1))
((0, 0.16, 0.57), (0.29, 0.49, 0.7), (0.29, 0.49, 1))
((0.53, 1, 1), (0.7, 1, 1), (0.91, 1, 1))
((0.47, 1, 1), (0.7, 1, 1), (0.7, 1, 1))
\({A_3}\)
((0.03, 0.13, 0.39), (0.27, 0.48, 0.69), (0.27, 0.48, 1))
((0.47, 1, 1), (0.7, 1, 1), (0.7, 1, 1))
((0.58, 1, 1), (0.7, 1, 1), (0.83, 1, 1))
((0.47, 1, 1), (0.7, 1, 1), (0.7, 1, 1))
\({A_4}\)
((0, 0.16, 0.39), (0.06, 0.27, 0.48), (0.06, 0.27, 1))
((0, 0.33, 0.64), (0.29, 0.49, 0.7), (0.29, 0.49, 1))
((0.58, 1, 1), (0.7, 1, 1), (0.83, 1, 1))
((0.22, 0.6, 0.88), (0.49, 0.7, 0.9), (0.49, 0.7, 1))
\({A_5}\)
((0.03, 0.13, 0.39), (0.27, 0.48, 0.69), (0.41, 1, 1))
((0.47, 1, 1), (0.7, 1, 1), (0.83, 1, 1))
((0, 0.33, 0.7), (0.29, 0.49, 0.7), (0.8, 0.99, 1))
((0.11, 0.5, 0.86), (0.49, 0.7, 0.9), (0.71, 1, 1))
Expert20
\({A_1}\)
((0, 0.06, 0.26), (0.09, 0.3, 0.5), (0.09, 0.3, 0.81))
((0, 0, 0.33), (0.09, 0.29, 0.49), (0.39, 0.8, 0.99))
((0.29, 0.49, 0.64), (0.29, 0.49, 0.7), (0.29, 0.49, 0.88))
((0, 0.11, 0.56), (0.29, 0.5, 0.7), (0.69, 1, 1))
\({A_2}\)
((0.06, 0.23, 0.53), (0.7, 1, 1), (1, 1, 1))
((0, 0.11, 0.56), (0.29, 0.49, 0.7), (0.29, 0.49, 0.88))
((0.7, 1, 1), (0.7, 1, 1), (0.7, 1, 1))
((0, 0, 0.33), (0.09, 0.29, 0.5), (0.39, 0.8, 1))
\({A_3}\)
((0.05, 0.23, 0.47), (0.3, 0.5, 0.7), (0.4, 0.86, 1))
((0, 0, 0.26), (0.09, 0.29, 0.49), (0.09, 0.29, 0.8))
((0.17, 0.58, 0.9), (0.49, 0.7, 0.9), (0.78, 1, 1))
((0.04, 0.47, 0.85), (0.5, 0.7, 0.9), (0.78, 1, 1))
\({A_4}\)
((0.05, 0.23, 0.47), (0.3, 0.5, 0.7), (0.4, 0.86, 1))
((0.5, 1, 1), (0.7, 1, 1), (0.7, 1, 1))
((0, 0.13, 0.49), (0.09, 0.29, 0.49), (0.39, 0.8, 0.99))
((0, 0, 0.15), (0, 0, 0.29), (0, 0, 0.72))
\({A_5}\)
((0.02, 0.11, 0.37), (0.3, 0.5, 0.7), (0.47, 1, 1))
((0, 0.11, 0.56), (0.29, 0.49, 0.7), (0.53, 0.86, 1))
((0.29, 0.49, 0.7), (0.29, 0.49, 0.7), (0.29, 0.49, 0.88))
((0, 0, 0.26), (0.09, 0.29, 0.5), (0.39, 0.8, 1))
Table 18
Sub-group aggregation (rows \({A_1}\)–\({A_5}\); one entry per criterion \({C_1}\)–\({C_4}\))

Sub-group 1:
\({A_1}\): \({C_1}\) ((0.102, 0.256, 0.496), (0.276, 0.442, 0.664), (0.332, 0.612, 0.92)); \({C_2}\) ((0.162, 0.546, 0.866), (0.494, 0.7, 0.9), (0.572, 0.856, 0.984)); \({C_3}\) ((0.652, 1, 1), (0.7, 1, 1), (0.746, 1, 1)); \({C_4}\) ((0.51, 1, 1), (0.7, 1, 1), (0.74, 1, 1))
\({A_2}\): \({C_1}\) ((0.128, 0.226, 0.438), (0.316, 0.5, 0.682), (0.322, 0.544, 0.882)); \({C_2}\) ((0.108, 0.222, 0.422), (0.234, 0.414, 0.596), (0.514, 0.752, 0.972)); \({C_3}\) ((0.346, 0.644, 0.9), (0.49, 0.7, 0.9), (0.684, 0.88, 0.992)); \({C_4}\) ((0, 0.03, 0.308), (0.09, 0.29, 0.49), (0.35, 0.632, 0.952))
\({A_3}\): \({C_1}\) ((0.114, 0.272, 0.474), (0.374, 0.618, 0.74), (0.47, 0.722, 1)); \({C_2}\) ((0.19, 0.542, 0.756), (0.454, 0.698, 0.82), (0.566, 0.798, 1)); \({C_3}\) ((0.39, 0.684, 0.88), (0.492, 0.718, 0.88), (0.608, 0.852, 1)); \({C_4}\) ((0.1, 0.244, 0.514), (0.252, 0.472, 0.634), (0.546, 0.826, 0.994))
\({A_4}\): \({C_1}\) ((0.08, 0.22, 0.426), (0.234, 0.416, 0.6), (0.296, 0.604, 0.92)); \({C_2}\) ((0, 0.044, 0.154), (0.058, 0.098, 0.372), (0.48, 0.766, 0.992)); \({C_3}\) ((0.14, 0.276, 0.592), (0.212, 0.432, 0.592), (0.614, 0.954, 0.992)); \({C_4}\) ((0, 0, 0.028), (0, 0, 0.29), (0.482, 0.878, 0.99))
\({A_5}\): \({C_1}\) ((0.1, 0.232, 0.454), (0.194, 0.36, 0.582), (0.23, 0.458, 0.882)); \({C_2}\) ((0.14, 0.526, 0.862), (0.494, 0.7, 0.9), (0.708, 0.916, 0.992)); \({C_3}\) ((0.598, 1, 1), (0.7, 1, 1), (0.828, 1, 1)); \({C_4}\) ((0.534, 1, 1), (0.7, 1, 1), (0.768, 1, 1))

Sub-group 2:
\({A_1}\): \({C_1}\) ((0.13, 0.319, 0.577), (0.44, 0.657, 0.831), (0.496, 0.806, 1)); \({C_2}\) ((0, 0.126, 0.486), (0.176, 0.376, 0.58), (0.327, 0.651, 0.9)); \({C_3}\) ((0.187, 0.459, 0.634), (0.349, 0.563, 0.743), (0.567, 0.809, 1)); \({C_4}\) ((0.473, 1, 1), (0.7, 1, 1), (0.793, 1, 1))
\({A_2}\): \({C_1}\) ((0.079, 0.206, 0.499), (0.586, 0.857, 0.914), (0.753, 0.941, 1)); \({C_2}\) ((0.127, 0.354, 0.53), (0.324, 0.496, 0.669), (0.493, 0.74, 0.997)); \({C_3}\) ((0.391, 0.836, 0.914), (0.583, 0.854, 0.914), (0.763, 0.957, 1)); \({C_4}\) ((0.184, 0.513, 0.753), (0.466, 0.709, 0.829), (0.639, 0.904, 1))
\({A_3}\): \({C_1}\) ((0.087, 0.243, 0.443), (0.28, 0.483, 0.643), (0.374, 0.569, 0.903)); \({C_2}\) ((0.403, 0.873, 0.937), (0.641, 0.927, 0.957), (0.773, 0.999, 1)); \({C_3}\) ((0.439, 0.82, 0.929), (0.581, 0.841, 0.929), (0.716, 0.906, 0.983)); \({C_4}\) ((0.287, 0.603, 0.66), (0.441, 0.641, 0.754), (0.697, 0.907, 0.997))
\({A_4}\): \({C_1}\) ((0.019, 0.089, 0.313), (0.119, 0.267, 0.499), (0.206, 0.389, 0.886)); \({C_2}\) ((0.06, 0.374, 0.676), (0.349, 0.563, 0.743), (0.481, 0.733, 0.987)); \({C_3}\) ((0.487, 0.857, 0.899), (0.6, 0.857, 0.899), (0.823, 0.999, 0.999)); \({C_4}\) ((0.231, 0.663, 0.904), (0.55, 0.786, 0.929), (0.716, 0.933, 1))
\({A_5}\): \({C_1}\) ((0.061, 0.177, 0.441), (0.337, 0.527, 0.717), (0.49, 0.739, 0.943)); \({C_2}\) ((0.476, 1, 1), (0.7, 1, 1), (0.806, 1, 1)); \({C_3}\) ((0.014, 0.331, 0.7), (0.29, 0.49, 0.7), (0.701, 0.93, 1)); \({C_4}\) ((0.141, 0.49, 0.756), (0.45, 0.643, 0.827), (0.76, 0.973, 0.999))

Sub-group 3:
\({A_1}\): \({C_1}\) ((0.108, 0.223, 0.433), (0.394, 0.639, 0.766), (0.566, 0.801, 0.976)); \({C_2}\) ((0, 0, 0.115), (0.011, 0.036, 0.315), (0.421, 0.708, 0.971)); \({C_3}\) ((0.115, 0.31, 0.488), (0.216, 0.429, 0.61), (0.62, 0.843, 0.983)); \({C_4}\) ((0.014, 0.085, 0.231), (0.109, 0.188, 0.444), (0.764, 0.994, 0.994))
\({A_2}\): \({C_1}\) ((0.135, 0.28, 0.544), (0.568, 0.833, 0.914), (0.8, 0.95, 1)); \({C_2}\) ((0, 0.12, 0.47), (0.229, 0.409, 0.624), (0.69, 0.889, 0.985)); \({C_3}\) ((0.196, 0.388, 0.673), (0.293, 0.519, 0.673), (0.726, 0.97, 0.998)); \({C_4}\) ((0.059, 0.133, 0.425), (0.191, 0.405, 0.585), (0.681, 0.916, 0.998))
\({A_3}\): \({C_1}\) ((0.051, 0.165, 0.358), (0.171, 0.314, 0.533), (0.261, 0.506, 0.85)); \({C_2}\) ((0.073, 0.241, 0.509), (0.241, 0.458, 0.636), (0.5, 0.76, 0.939)); \({C_3}\) ((0.209, 0.546, 0.824), (0.418, 0.624, 0.824), (0.705, 0.946, 0.999)); \({C_4}\) ((0.141, 0.515, 0.836), (0.471, 0.674, 0.875), (0.745, 0.965, 1))
\({A_4}\): \({C_1}\) ((0.061, 0.184, 0.384), (0.243, 0.424, 0.601), (0.313, 0.563, 0.9)); \({C_2}\) ((0.063, 0.125, 0.149), (0.088, 0.125, 0.379), (0.686, 0.831, 0.998)); \({C_3}\) ((0.07, 0.265, 0.584), (0.205, 0.394, 0.584), (0.694, 0.936, 0.996)); \({C_4}\) ((0.068, 0.145, 0.23), (0.124, 0.186, 0.43), (0.654, 0.744, 0.959))
\({A_5}\): \({C_1}\) ((0.093, 0.219, 0.425), (0.275, 0.471, 0.664), (0.409, 0.731, 0.95)); \({C_2}\) ((0.044, 0.14, 0.336), (0.171, 0.274, 0.52), (0.598, 0.805, 0.974)); \({C_3}\) ((0.2, 0.478, 0.749), (0.368, 0.596, 0.749), (0.738, 0.915, 0.984)); \({C_4}\) ((0.076, 0.125, 0.366), (0.166, 0.379, 0.56), (0.698, 0.943, 0.998))
Table 19
Adjusted sub-group aggregation (rows \({A_1}\)–\({A_5}\); one entry per criterion \({C_1}\)–\({C_4}\))

Sub-group 1:
\({A_1}\): \({C_1}\) ((0.02, 0.011, 0.059), (0.015, 0.048, 0.093), (0.026, 0.082, 0.144)); \({C_2}\) ((0.008, 0.069, 0.133), (0.059, 0.1, 0.14), (0.074, 0.131, 0.157)); \({C_3}\) ((0.09, 0.16, 0.16), (0.1, 0.16, 0.16), (0.109, 0.16, 0.16)); \({C_4}\) ((0.062, 0.16, 0.16), (0.1, 0.16, 0.16), (0.108, 0.16, 0.16))
\({A_2}\): \({C_1}\) ((0.014, 0.005, 0.048), (0.023, 0.06, 0.096), (0.024, 0.069, 0.136)); \({C_2}\) ((0.018, 0.004, 0.044), (0.007, 0.043, 0.079), (0.063, 0.11, 0.154)); \({C_3}\) ((0.029, 0.089, 0.14), (0.058, 0.1, 0.14), (0.097, 0.136, 0.158)); \({C_4}\) ((0.04, 0.034, 0.022), (0.022, 0.018, 0.058), (0.03, 0.086, 0.15))
\({A_3}\): \({C_1}\) ((0.017, 0.014, 0.055), (0.035, 0.084, 0.108), (0.054, 0.104, 0.16)); \({C_2}\) ((0.002, 0.068, 0.111), (0.051, 0.1, 0.124), (0.073, 0.12, 0.16)); \({C_3}\) ((0.038, 0.097, 0.136), (0.058, 0.104, 0.136), (0.082, 0.13, 0.16)); \({C_4}\) ((0.02, 0.009, 0.063), (0.01, 0.054, 0.087), (0.069, 0.125, 0.159))
\({A_4}\): \({C_1}\) ((0.024, 0.004, 0.045), (0.007, 0.043, 0.08), (0.019, 0.081, 0.144)); \({C_2}\) ((0.04, 0.031, 0.009), (0.028, 0.02, 0.034), (0.056, 0.113, 0.158)); \({C_3}\) ((0.012, 0.015, 0.078), (0.002, 0.046, 0.078), (0.083, 0.151, 0.158)); \({C_4}\) ((0.04, 0.04, 0.034), (0.04, 0.04, 0.018), (0.056, 0.136, 0.158))
\({A_5}\): \({C_1}\) ((0.02, 0.006, 0.051), (0.001, 0.032, 0.076), (0.006, 0.052, 0.136)); \({C_2}\) ((0.012, 0.065, 0.132), (0.059, 0.1, 0.14), (0.102, 0.143, 0.158)); \({C_3}\) ((0.08, 0.16, 0.16), (0.1, 0.16, 0.16), (0.126, 0.16, 0.16)); \({C_4}\) ((0.067, 0.16, 0.16), (0.1, 0.16, 0.16), (0.114, 0.16, 0.16))

Sub-group 2:
\({A_1}\): \({C_1}\) ((0.051, 0.006, 0.083), (0.042, 0.107, 0.159), (0.059, 0.152, 0.21)); \({C_2}\) ((0.09, 0.052, 0.056), (0.037, 0.023, 0.084), (0.008, 0.105, 0.18)); \({C_3}\) ((0.034, 0.048, 0.1), (0.015, 0.079, 0.133), (0.08, 0.153, 0.21)); \({C_4}\) ((0.052, 0.21, 0.21), (0.12, 0.21, 0.21), (0.148, 0.21, 0.21))
\({A_2}\): \({C_1}\) ((0.066, 0.028, 0.06), (0.086, 0.167, 0.184), (0.136, 0.192, 0.21)); \({C_2}\) ((0.052, 0.016, 0.069), (0.007, 0.059, 0.111), (0.058, 0.132, 0.209)); \({C_3}\) ((0.027, 0.161, 0.184), (0.085, 0.166, 0.184), (0.139, 0.197, 0.21)); \({C_4}\) ((0.035, 0.064, 0.136), (0.05, 0.123, 0.159), (0.102, 0.181, 0.21))
\({A_3}\): \({C_1}\) ((0.064, 0.017, 0.043), (0.006, 0.055, 0.103), (0.022, 0.081, 0.181)); \({C_2}\) ((0.031, 0.172, 0.191), (0.102, 0.188, 0.197), (0.142, 0.21, 0.21)); \({C_3}\) ((0.042, 0.156, 0.189), (0.084, 0.162, 0.189), (0.125, 0.182, 0.205)); \({C_4}\) ((0.004, 0.091, 0.108), (0.042, 0.102, 0.136), (0.119, 0.182, 0.209))
\({A_4}\): \({C_1}\) ((0.084, 0.063, 0.004), (0.054, 0.01, 0.06), (0.028, 0.027, 0.176)); \({C_2}\) ((0.072, 0.022, 0.113), (0.015, 0.079, 0.133), (0.054, 0.13, 0.206)); \({C_3}\) ((0.056, 0.167, 0.18), (0.09, 0.167, 0.18), (0.157, 0.21, 0.21)); \({C_4}\) ((0.021, 0.109, 0.181), (0.075, 0.146, 0.189), (0.125, 0.19, 0.21))
\({A_5}\): \({C_1}\) ((0.072, 0.037, 0.042), (0.011, 0.068, 0.125), (0.057, 0.132, 0.193)); \({C_2}\) ((0.053, 0.21, 0.21), (0.12, 0.21, 0.21), (0.152, 0.21, 0.21)); \({C_3}\) ((0.086, 0.009, 0.12), (0.003, 0.057, 0.12), (0.12, 0.189, 0.21)); \({C_4}\) ((0.048, 0.057, 0.137), (0.045, 0.103, 0.158), (0.138, 0.202, 0.21))

Sub-group 3:
\({A_1}\): \({C_1}\) ((0.196, 0.139, 0.034), (0.053, 0.069, 0.133), (0.033, 0.151, 0.238)); \({C_2}\) ((0.25, 0.25, 0.193), (0.244, 0.232, 0.093), (0.039, 0.104, 0.236)); \({C_3}\) ((0.193, 0.095, 0.006), (0.142, 0.036, 0.055), (0.06, 0.171, 0.241)); \({C_4}\) ((0.243, 0.208, 0.134), (0.196, 0.156, 0.028), (0.132, 0.247, 0.247))
\({A_2}\): \({C_1}\) ((0.183, 0.11, 0.022), (0.034, 0.166, 0.207), (0.15, 0.225, 0.25)); \({C_2}\) ((0.25, 0.19, 0.015), (0.136, 0.046, 0.062), (0.095, 0.194, 0.243)); \({C_3}\) ((0.152, 0.056, 0.086), (0.104, 0.009, 0.086), (0.113, 0.235, 0.249)); \({C_4}\) ((0.221, 0.184, 0.038), (0.154, 0.048, 0.043), (0.091, 0.208, 0.249))
\({A_3}\): \({C_1}\) ((0.224, 0.168, 0.071), (0.164, 0.093, 0.016), (0.119, 0.003, 0.175)); \({C_2}\) ((0.214, 0.129, 0.004), (0.129, 0.021, 0.068), (0, 0.13, 0.219)); \({C_3}\) ((0.146, 0.023, 0.162), (0.041, 0.062, 0.162), (0.103, 0.223, 0.249)); \({C_4}\) ((0.179, 0.008, 0.168), (0.014, 0.087, 0.188), (0.123, 0.233, 0.25))
\({A_4}\): \({C_1}\) ((0.219, 0.158, 0.058), (0.129, 0.038, 0.051), (0.094, 0.031, 0.2)); \({C_2}\) ((0.219, 0.188, 0.176), (0.206, 0.188, 0.061), (0.093, 0.166, 0.249)); \({C_3}\) ((0.215, 0.118, 0.042), (0.148, 0.053, 0.042), (0.097, 0.218, 0.248)); \({C_4}\) ((0.216, 0.178, 0.135), (0.188, 0.157, 0.035), (0.077, 0.122, 0.229))
\({A_5}\): \({C_1}\) ((0.204, 0.141, 0.038), (0.113, 0.014, 0.082), (0.046, 0.116, 0.225)); \({C_2}\) ((0.228, 0.18, 0.082), (0.164, 0.113, 0.01), (0.049, 0.153, 0.237)); \({C_3}\) ((0.15, 0.011, 0.124), (0.066, 0.048, 0.124), (0.119, 0.208, 0.242)); \({C_4}\) ((0.212, 0.188, 0.067), (0.167, 0.061, 0.03), (0.099, 0.221, 0.249))
Table 20
Aggregation matrix (rows \({A_1}\)–\({A_5}\); one entry per criterion \({C_1}\)–\({C_4}\))

\({A_1}\): \({C_1}\) ((0.349, 0.32, 0.371), (0.405, 0.454, 0.501), (0.455, 0.492, 0.578)); \({C_2}\) ((0.439, 0.347, 0.309), (0.361, 0.338, 0.368), (0.253, 0.51, 0.576)); \({C_3}\) ((0.314, 0.355, 0.457), (0.186, 0.421, 0.497), (0.447, 0.532, 0.586)); \({C_4}\) ((0.235, 0.442, 0.457), (0.37, 0.457, 0.442), (0.513, 0.586, 0.586))
\({A_2}\): \({C_1}\) ((0.365, 0.34, 0.406), (0.439, 0.44, 0.519), (0.508, 0.485, 0.576)); \({C_2}\) ((0.427, 0.287, 0.364), (0.212, 0.348, 0.436), (0.409, 0.526, 0.583)); \({C_3}\) ((0.275, 0.439, 0.526), (0.304, 0.473, 0.526), (0.49, 0.57, 0.586)); \({C_4}\) ((0.413, 0.364, 0.401), (0.336, 0.379, 0.386), (0.44, 0.521, 0.582))
\({A_3}\): \({C_1}\) ((0.425, 0.316, 0.292), (0.293, 0.317, 0.448), (0.284, 0.44, 0.567)); \({C_2}\) ((0.344, 0.332, 0.465), (0.203, 0.441, 0.505), (0.439, 0.547, 0.582)); \({C_3}\) ((0.199, 0.467, 0.55), (0.341, 0.491, 0.55), (0.481, 0.562, 0.586)); \({C_4}\) ((0.361, 0.392, 0.459), (0.373, 0.412, 0.503), (0.464, 0.561, 0.586))
\({A_4}\): \({C_1}\) ((0.334, 0.425, 0.279), (0.444, 0.358, 0.432), (0.417, 0.384, 0.563)); \({C_2}\) ((0.434, 0.304, 0.37), (0.362, 0.362, 0.349), (0.372, 0.485, 0.585)); \({C_3}\) ((0.284, 0.351, 0.434), (0.36, 0.316, 0.434), (0.472, 0.569, 0.586)); \({C_4}\) ((0.409, 0.377, 0.415), (0.347, 0.412, 0.374), (0.411, 0.504, 0.582))
\({A_5}\): \({C_1}\) ((0.387, 0.354, 0.332), (0.211, 0.374, 0.457), (0.338, 0.423, 0.568)); \({C_2}\) ((0.284, 0.364, 0.46), (0.269, 0.417, 0.504), (0.471, 0.554, 0.584)); \({C_3}\) ((0.263, 0.404, 0.528), (0.272, 0.474, 0.528), (0.51, 0.575, 0.586)); \({C_4}\) ((0.324, 0.375, 0.465), (0.309, 0.449, 0.512), (0.497, 0.58, 0.587))
Table 21
Weighted matrix (rows \({A_1}\)–\({A_5}\), plus ideal row I and anti-ideal row AI; one entry per criterion \({C_1}\)–\({C_4}\))

\({A_1}\): \({C_1}\) ((0.083, 0.076, 0.088), (0.096, 0.108, 0.119), (0.108, 0.117, 0.137)); \({C_2}\) ((0.042, 0.033, 0.029), (0.034, 0.032, 0.035), (0.024, 0.048, 0.055)); \({C_3}\) ((0.101, 0.114, 0.147), (0.06, 0.135, 0.16), (0.144, 0.171, 0.189)); \({C_4}\) ((0.081, 0.153, 0.158), (0.128, 0.158, 0.153), (0.177, 0.203, 0.203))
\({A_2}\): \({C_1}\) ((0.087, 0.081, 0.096), (0.104, 0.104, 0.123), (0.121, 0.115, 0.137)); \({C_2}\) ((0.04, 0.027, 0.035), (0.02, 0.033, 0.041), (0.039, 0.05, 0.055)); \({C_3}\) ((0.089, 0.141, 0.169), (0.098, 0.152, 0.169), (0.158, 0.184, 0.189)); \({C_4}\) ((0.041, 0.029, 0.035), (0.034, 0.034, 0.033), (0.035, 0.046, 0.055))
\({A_3}\): \({C_1}\) ((0.101, 0.075, 0.069), (0.069, 0.075, 0.106), (0.067, 0.104, 0.134)); \({C_2}\) ((0.033, 0.031, 0.044), (0.019, 0.042, 0.048), (0.042, 0.052, 0.055)); \({C_3}\) ((0.064, 0.15, 0.177), (0.11, 0.158, 0.177), (0.155, 0.181, 0.189)); \({C_4}\) ((0.125, 0.136, 0.159), (0.129, 0.143, 0.174), (0.161, 0.194, 0.203))
\({A_4}\): \({C_1}\) ((0.079, 0.101, 0.066), (0.105, 0.085, 0.102), (0.099, 0.091, 0.134)); \({C_2}\) ((0.041, 0.029, 0.035), (0.034, 0.034, 0.033), (0.035, 0.046, 0.055)); \({C_3}\) ((0.091, 0.113, 0.14), (0.116, 0.102, 0.14), (0.152, 0.183, 0.189)); \({C_4}\) ((0.141, 0.13, 0.144), (0.12, 0.143, 0.129), (0.142, 0.174, 0.201))
\({A_5}\): \({C_1}\) ((0.092, 0.084, 0.079), (0.05, 0.089, 0.108), (0.08, 0.1, 0.135)); \({C_2}\) ((0.027, 0.034, 0.044), (0.025, 0.04, 0.048), (0.045, 0.052, 0.055)); \({C_3}\) ((0.085, 0.13, 0.17), (0.087, 0.153, 0.17), (0.164, 0.185, 0.189)); \({C_4}\) ((0.112, 0.13, 0.161), (0.107, 0.155, 0.177), (0.172, 0.201, 0.203))
I: \({C_1}\) ((0.101, 0.101, 0.096), (0.105, 0.108, 0.123), (0.121, 0.117, 0.137)); \({C_2}\) ((0.027, 0.027, 0.029), (0.019, 0.032, 0.033), (0.024, 0.046, 0.055)); \({C_3}\) ((0.064, 0.113, 0.14), (0.06, 0.102, 0.14), (0.144, 0.171, 0.189)); \({C_4}\) ((0.081, 0.126, 0.139), (0.107, 0.131, 0.129), (0.142, 0.174, 0.201))
AI: \({C_1}\) ((0.079, 0.075, 0.066), (0.069, 0.075, 0.102), (0.067, 0.091, 0.134)); \({C_2}\) ((0.042, 0.034, 0.044), (0.034, 0.042, 0.048), (0.045, 0.052, 0.055)); \({C_3}\) ((0.101, 0.15, 0.177), (0.116, 0.158, 0.177), (0.164, 0.185, 0.189)); \({C_4}\) ((0.143, 0.153, 0.161), (0.129, 0.158, 0.177), (0.177, 0.203, 0.203))
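The weighted matrix above, together with its ideal (I) and anti-ideal (AI) rows, is the input to the CRADIS ranking step. As a rough illustration of that step only, here is a minimal crisp-number sketch: the function name, the scalar simplification, and the toy matrix are ours, not the paper's (the paper operates on R-number triples rather than scalars and derives utility from deviations to both solutions).

```python
def cradis_rank(V):
    """CRADIS-style utility for a crisp weighted matrix V
    (rows = alternatives, columns = criteria). Higher is better.
    Assumes no alternative attains the ideal in every criterion."""
    ideal_row = [max(col) for col in zip(*V)]   # column-wise ideal solution
    anti_row = [min(col) for col in zip(*V)]    # column-wise anti-ideal solution
    t_id = max(ideal_row)                       # single ideal value
    t_aid = min(anti_row)                       # single anti-ideal value
    # total deviations of each alternative from the ideal / anti-ideal value
    s_plus = [sum(t_id - v for v in row) for row in V]
    s_minus = [sum(v - t_aid for v in row) for row in V]
    # deviations of the optimal (ideal / anti-ideal) alternatives
    s0_plus = sum(t_id - v for v in ideal_row)
    s0_minus = sum(v - t_aid for v in anti_row)
    # average of the two utility degrees
    return [(s0_plus / sp + sm / s0_minus) / 2
            for sp, sm in zip(s_plus, s_minus)]

scores = cradis_rank([[0.2, 0.4], [0.1, 0.3]])  # ≈ [1.5, 0.75]
```

In this toy example the first alternative dominates the second in both criteria, so its utility is higher and it ranks first.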
References
12. Cuong BC, Kreinovich V (2013) Picture fuzzy sets—a new concept for computational intelligence problems. In: Third World Congress on Information and Communication Technologies (WICT), Hanoi, Vietnam, Dec 15–18, 2013, pp 1–6
21. Hara Y, Uchiyama M, Takahasi S (1998) A refinement of various mean inequalities. J Inequalities Appl 2:387–395
24.
31. Khouri S, Rosova A, Straka M, Behun M (2018) Logistics performance and corporate logistic costs, their interconnections and consequences. Transform Bus Econ 17:426–446
44. Nie J, Zeng WY, Jing HL (2016) Research of logistics cost based on saving algorithm: a case of a certain logistics company's logistics cost. In: Kao J, Sung W (eds) 2016 International Conference on Mechatronics, Manufacturing and Materials Engineering (MMME 2016), Hong Kong, Peoples R China, Jun 11–12, 2016. https://doi.org/10.1051/matecconf/20166304029
47. Opricovic S (1998) Multicriteria optimization of civil engineering systems. Expert Syst Appl 2:5–21
69. Wang Z, Yoon KP, Hwang CL (1997) Multiple attribute decision making: an introduction. Interfaces 27:163–164
76.
77. Yu D (2013) Intuitionistic fuzzy Choquet aggregation operator based on Einstein operation laws. Sci Iran 20:2109–2122
Metadata
Title: A large-scale multi-attribute group decision-making method with R-numbers and its application to hydrogen fuel cell logistics path selection
Authors: Rui Cheng, Jianping Fan, Meiqin Wu, Hamidreza Seiti
Publication date: 22.04.2024
Publisher: Springer International Publishing
Published in: Complex & Intelligent Systems
Print ISSN: 2199-4536
Electronic ISSN: 2198-6053
DOI: https://doi.org/10.1007/s40747-024-01437-9