Article

How to Influence the Results of MCDM?—Evidence of the Impact of Cognitive Biases

by Gerda Ana Melnik-Leroy * and Gintautas Dzemyda
Institute of Data Science and Digital Technologies, Vilnius University, Akademijos str. 4, LT-08412 Vilnius, Lithuania
* Author to whom correspondence should be addressed.
Mathematics 2021, 9(2), 121; https://doi.org/10.3390/math9020121
Submission received: 30 November 2020 / Revised: 22 December 2020 / Accepted: 31 December 2020 / Published: 7 January 2021
(This article belongs to the Special Issue Multiple Criteria Decision Making)

Abstract

Multi-criteria decision-making (MCDM) methods aim at dealing with certain limitations of human information processing. However, cognitive biases, which are discrepancies of human behavior from the behavior of perfectly rational agents, might persist even when MCDM methods are used. In this article, we focus on two of the most common biases—framing and loss aversion. We test whether these cognitive biases can influence in a predictable way both the criteria weights elicited using the Analytic Hierarchy Process (AHP) and the final ranking of alternatives obtained with the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS). In a controlled experiment we presented two groups of participants with a multi-criteria problem and found that people make different decisions when presented with different but objectively equivalent descriptions (i.e., frames) of the same criteria. Specifically, the results show that framing and loss aversion influenced the responses of decision makers during pairwise comparisons, which in turn caused the rank reversal of criteria weights across groups and resulted in the choice of a different best alternative. We discuss our findings in light of Prospect Theory and show that the particular framing of criteria can influence the outcomes of MCDM in a predictable way. We outline implications for MCDM methodology and highlight possible debiasing techniques.

1. Introduction

1.1. Literature Review

Multi-criteria decision-making methods have been developed and widely used over the past decades to deal with limitations of human decision making that arise in complex decision environments. Specifically, when people have to solve decision problems involving multiple conflicting criteria, the processing load required to evaluate the information and make a decision becomes very high. This often results in a tendency to simplify the problem by using intuitive or heuristic approaches instead of rational or analytic ones, and can cause subjective judgments and losses of important information [1,2]. Importantly, even experts have been found to have difficulty in assessing complex trade-offs and to resort to simplified decision making [3]. Thus, in such situations better solutions can be achieved by applying MCDM techniques, which can deal with large amounts of information and calculations [4].
Despite the popularity of MCDM methods, certain limitations of human decision making might persist even when decision support aids, such as MCDM methods, are used. Specifically, as long as human judgment is used as a basis for subsequent calculations and model building, the influence of the processes underlying human judgment should be studied and accounted for. This would help avoid distortions that might occur even in the most mathematically sophisticated models. It is also important to remember that MCDM is a helpful tool that can support decisions; however, the responsibility for taking the final decision is always in the hands of the decision maker [5]. In the past decade, there has been a resurgence of papers in the Operational Research (OR) literature highlighting the importance of studying “various behavioral effects [that] can be embedded in and affect OR processes” [6]. The authors discuss in detail the importance and high potential of behavioral OR for bringing improvement into OR methods and propose nine topics for a respective research agenda. One of these topics is the study of cognitive aspects, such as cognitive biases. Cognitive biases are systematic discrepancies of human behavior from the behavior of perfectly rational agents. In other words, instead of behaving according to principles of probability and logic, humans make systematic errors in judgment and perception of the problem, which leads to erroneous decision making. Cognitive biases as a major behavioral effect have been studied extensively in other fields, such as cognitive psychology, behavioral economics and medical science [7,8,9], but often overlooked in MCDM and OR [10,11]. As noted by Borrero et al. [12], there is a lack of empirical evidence on how cognitive factors influence the effectiveness of MCDM methods. Moreover, it is not clear if these methods contribute to diminishing cognitive biases, or if, on the contrary, cognitive biases diminish the effectiveness of these tools. Crucially, cognitive biases are systematic and inherent to the human mind. This means that cognitive biases, unlike individual differences and motivational biases, are much more universal, culturally independent and predictable [13]. Therefore, identifying their possible influence on particular stages of MCDM could help prevent frequently occurring problems. Moreover, debiasing techniques, which are effective ways of limiting the occurrence of cognitive biases, could be applied to prevent their influence.
Arnott [14] identified a taxonomy of 37 cognitive biases in decision support systems (DSS), such as the framing bias, the loss aversion bias, the reference dependence bias, the anchoring bias and others, and proposed potential improvements for the design of DSS. Similarly, Montibeller & von Winterfeldt [10,15] provide a review of cognitive biases in decision and risk analysis and MCDM. The authors highlight a number of cognitive biases that occur at different stages of MCDM and propose debiasing solutions. Importantly, Montibeller & von Winterfeldt [10] point to the fact that although the issue of cognitive biases in OR has often been identified as a problem, very few studies have tested their influence on OR experimentally. One example of such research is the study by George [16], which tested a decision support system designed to mitigate the effects of the anchoring bias in the context of house appraisals. Due to the anchoring bias, individuals are disproportionately influenced by initial information presented to them (considered to be the “anchor”) and their subsequent judgments during decision making are accordingly biased towards this initial information. The results of the study showed that the anchoring bias remained robust even when automated decision support was used. Another study, conducted by Ahn & Vazquez Novoa [17], investigated the impact of the decoy effect on relative performance evaluation and a possible debiasing capacity of Data Envelopment Analysis (DEA). The decoy effect is a cognitive bias implying that the inclusion of a dominated alternative can influence the preference between non-dominated alternatives (for example, consumers change their preference between two options when presented with a third option). The authors showed that although the utility comparison of two alternatives was biased by the decoy effect, the addition of supplementary information about DEA results (efficiency scores and the mention of existing slacks) helped in debiasing the evaluation.
Despite these first attempts to study experimentally the impact of cognitive biases on different OR techniques, even fewer studies have addressed the issue of cognitive biases in MCDM specifically. To our knowledge, the only study on this topic was done by Ferretti et al. [18]. In that paper the authors tested several methods to reduce the overconfidence bias when eliciting continuous probability distributions in the context of multicriteria decision analysis. Overconfidence describes the tendency of people to erroneously assess probabilities by underestimating variability and overestimating the tails of the distribution. The results revealed that participants were subject to the overconfidence bias, and the debiasing techniques had a positive, though limited, debiasing effect.

1.2. Current Study

In the current paper we present the first experimental study of the influence of two major cognitive biases, i.e., framing and loss aversion, on MCDM. A framing bias is said to occur when people make different decisions if presented with different but objectively equivalent descriptions (i.e., frames) of the situations or outcomes. For instance, describing a surgery as yielding a 90% survival rate (positive frame) versus a 10% mortality rate (negative frame) presents objectively equivalent information framed in two different ways [19]. Patients are more likely to choose to undergo a surgery described in the former way than in the latter. Crucially, the direction of the opinion change occurring when different frames are presented is predicted by the loss aversion bias. This cognitive bias captures the extreme sensitivity of humans towards losses compared to gains, i.e., the same amount of losses is perceived as being larger than the same amount of gains. Framing and loss aversion are strongly interlinked and will therefore be studied together in the current paper. The existence of these cognitive biases has been demonstrated by numerous laboratory experiments and field studies in the literature (for reviews and meta-analyses, see [20,21,22,23]). These phenomena were extensively described in Prospect theory, developed by Kahneman and Tversky [1,7,13,24,25,26]. In 2002, Kahneman received the Nobel Memorial Prize in Economic Sciences for his work developing this theory. An overview of Prospect theory will be presented in the following subsection of this paper.
Turning to the susceptibility of MCDM to biases, it is important to note that cognitive biases might impact those steps in MCDM methods which involve judgments by decision makers. According to Montibeller & von Winterfeldt [10], these steps include the generation of alternatives and objectives, the development of criteria for the objectives, the elicitation of utility or value functions over criteria levels and the elicitation of weights for criteria. In this paper we will focus on the last step, i.e., the weighting of criteria. Ample earlier research has already demonstrated that using different weight elicitation methods yields systematic and persistent differences in the numbers which decision makers assign to the weights of the criteria [27]. Here we will focus on a single weight elicitation method and test whether the mere difference in the formulations of the criteria can still yield different results in weight assignment.
We consider two questions:
  • Can different ways of framing criteria have an impact on how decision makers evaluate them?
  • If framing and loss aversion biases are induced at early stages of weight elicitation (i.e., at the stage of pairwise comparisons), does it affect both the final ranking of criteria weights and the final ranking of alternatives? (see Figure 1)
As to the first question, we hypothesize that if a criterion is framed in terms of losses (e.g., jobs lost following a reorganization), it might be perceived as being more important than when it is framed in terms of gains (e.g., jobs saved following a reorganization). In order to elicit weights, we chose to use the AHP method, which is suited to both individual and group decision making [28]. Although many other methods have since been developed, the AHP remains one of the most widely used MCDM methods across different fields [29]. As the AHP is believed to be in accordance with psychological principles [30], it can be considered particularly suitable for testing robustness to psychological biases.
As to the second question, it is important to identify the specific stage at which the framing and loss aversion biases are induced and, crucially, to test for their influence on the subsequent stages of the MCDM. As is common practice, we chose a different method than AHP, namely TOPSIS, to obtain the final ranking of the alternatives [31] (TOPSIS was used instead of AHP as the former is less time-consuming for the participants; moreover, in order to have better control over our experimental variable, we wanted to limit the involvement of human judgment to a single MCDM stage, whereas alternative selection using AHP would require additional input from the decision makers). Note, though, that this method was chosen for reasons of simplicity, as one of the most common classical MCDM methods, and other methods could have been equally suitable to test our hypotheses. We will thus target the following three stages of the MCDM:
  • the stage of individual pairwise comparisons (AHP)
  • the stage of criteria ranking (AHP)
  • the stage of alternative selection (TOPSIS)
All three of these stages have to be tested, as an effect of framing and loss aversion at one stage could disappear in the subsequent stages. For instance, both cognitive biases could impact the ratings given by the decision makers at the stage of an individual pairwise comparison, but this effect might be mitigated by other pairwise comparisons, subsequent normalization or weight calculation techniques, and thus result in no significant effect on the final ranking of alternatives. For example, [32] studied criteria weight elicitation using three different techniques and showed that although each technique led to different criteria weights, all techniques led to the selection of the same final alternative. An effect of framing and loss aversion found only at the stage of individual pairwise comparisons, but not in later steps, would indicate that AHP helps in diminishing the influence of cognitive biases on decision making. If, however, different ways of framing the same criteria lead to the assignment of different criteria ranks and to the final selection of different alternatives, this would be evidence for the sensitivity of MCDM to the framing and loss aversion biases and, hence, for the need for effective debiasing techniques.
In this research we evaluated the effect of the framing and loss aversion biases on the judgments of decision makers in a controlled online experiment. Two groups of participants were asked to make pairwise comparisons of logically equivalent criteria which were framed in two different ways (positive vs. negative frames). Following recommendations to keep the number of criteria at seven or fewer for consistency and redundancy reasons [33], we chose to include six criteria. As the experiment was planned at the beginning of the COVID-19 global pandemic, when Lithuania (where the experiment was conducted) was under quarantine, we chose to design an MCDM problem revolving around the topic of COVID-19. As noted by O’Keefe [11], many experimental studies within OR are artificial, as they use students as participants and present problems that are not necessarily relevant to them. In the current study we addressed this criticism and chose a topic that was relevant to a great number of citizens, considering the global epidemiological situation. Moreover, we recruited most participants through COVID-19-related groups on social media (Facebook), thus both ensuring they were personally interested in the issue and reaching a larger and more varied sample of participants. Finally, the MCDM problem used in our experiment involved qualitative criteria, such as policy impacts, which are particularly suited to be analyzed using AHP [34].
In the next subsection, we will present Prospect theory together with the framing and loss aversion biases in further detail, followed by a brief description of the AHP and TOPSIS methods.

1.3. Theoretical Background

1.3.1. The Prospect Theory and the Framing Bias

The most famous illustration of the framing effect comes from the seminal work by Tversky and Kahneman [35]. In their experiment, two groups of participants were presented with a hypothetical scenario in which they had to imagine that the US is preparing for an outbreak of a deadly Asian disease, which is expected to kill 600 people. They were told to choose one of two alternative programs to combat the disease. The first group received the following formulation of the two programs and their consequences:
If Program A is adopted, 200 people will be saved.
If Program B is adopted, there is a 1/3 probability that 600 people will be saved, and a 2/3 probability that no people will be saved.
72% of the participants in this group chose Program A.
The other group of participants received a different description of the two programs:
If Program C is adopted, 400 people will die.
If Program D is adopted, there is a 1/3 probability that nobody will die, and a 2/3 probability that 600 people will die.
In the second group only 22% of the respondents opted for Program C. Thus, although programs A and C were logically equivalent (200 saved and 400 dead), participants radically shifted their preferences depending on the formulation of the alternatives. The observation that people make different decisions when presented with different but objectively equivalent descriptions (i.e., frames) of the same problem is contrary to the principle of description invariance inherent to rational choice [7]. In order to explain this and related behavioral effects, Kahneman & Tversky [1,26] proposed an alternative to expected utility theory, called Prospect theory. The evaluation proposed in Prospect theory is similar to that of earlier models using weighting functions, namely:
$$V = \sum_{i=1}^{n} \pi(p_i)\, v(x_i),$$
where $V$ is the expected utility, $x_i$ are the potential outcomes, $p_i$ the respective probabilities of these outcomes, $\pi$ the probability weighting function, and $v$ a function that assigns a value to an outcome $x_i$. However, this value function is different from the standard utility function, as it passes through the reference point, is s-shaped and asymmetrical (Figure 2).
That is, the value of a potential outcome is set to $v(x) = x^{\alpha}$ for $x \geq 0$, and to $v(x) = -\lambda(-x)^{\beta}$ for $x < 0$, where $v$ is the function that assigns a value to an outcome $x$, and $\alpha$ and $\beta$ denote adjustable coefficients, $\alpha$ specifying the concavity and $\beta$ the convexity of the value function. The value function satisfies the constraints $0 < \alpha \leq 1$ and $0 < \beta \leq 1$. The parameter $\lambda$ denotes loss aversion, with $\lambda = 2.25$ as estimated from experimental data [25].
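To make this asymmetry concrete, the following minimal Python sketch evaluates the value function with α = β = 0.88 and λ = 2.25, the estimates reported by Tversky and Kahneman, and applies it to the Asian disease problem described above. The probability weighting parameters (0.61 for gains, 0.69 for losses) are illustrative assumptions borrowed from the same 1992 estimates, not values used in this paper.

```python
import numpy as np

def v(x, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect-theory value function: concave for gains,
    convex and steeper (loss aversion) for losses."""
    x = np.asarray(x, dtype=float)
    return np.where(x >= 0, x ** alpha, -lam * (-x) ** beta)

def pi(p, gamma):
    """Tversky-Kahneman (1992) probability weighting function."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

# Loss aversion: a loss of 10 weighs more than a gain of 10.
print(v(10), v(-10))                      # ~7.59 vs ~-17.07

# Asian disease problem, gain frame (lives saved):
V_A = v(200)                              # sure gain    -> ~105.9
V_B = pi(1/3, 0.61) * v(600)              # risky gain   -> ~93.5, so A is preferred

# Loss frame (lives lost):
V_C = v(-400)                             # sure loss    -> ~-438.7
V_D = pi(2/3, 0.69) * v(-600)             # risky loss   -> ~-352.8, so D is preferred
print(V_A, V_B, V_C, V_D)
```

Under these (assumed) parameters, the sure option wins in the gain frame and the gamble wins in the loss frame, reproducing the preference reversal observed in the experiment.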
Thus, the value function proposed by this theory reflects three major cognitive biases:
  • Reference dependence bias: the value function (gains and losses) is defined in terms of deviations from a reference point and not in terms of absolute magnitudes. Thus, decisions are made relative to some status quo or baseline and are sensitive to framing. This differs from expected utility theory, in which a rational agent is indifferent to the reference point.
  • Loss aversion bias: the value function is steeper for losses than for gains. This suggests that the same amount of losses is perceived as being larger than the same amount of gains (e.g., the aversion of losing 10 euros seems stronger than the attractiveness of gaining 10 euros). In other words, humans are oversensitive to losses. Again, this differs from expected utility theory where individuals should value the same amount equally, independently of whether it is a gain or a loss.
  • Risk aversion bias: the value function is concave for gains and convex for losses. This means that when choices involve gains, people are risk averse and prefer a certain gain to a probable gain, even if the latter has equal or greater expected utility. Conversely, when choices involve losses, people are risk seeking and prefer options that help avoid sure losses.
Turning back to the Asian disease problem, the problem presented to the first group was framed positively, in terms of the number of lives saved (gains), while for the second group it was framed negatively, in terms of lives lost (losses). Therefore, participants exhibited risk aversion in the first group and chose the solution with a certain outcome of saving 200 lives. In the second group, however, they exhibited loss aversion and risk seeking, as they opted for the risky option that helped avoid a sure loss of 400 lives. Although the Asian disease problem was an artificial laboratory experiment, it is somewhat similar to multiple criteria decision making, as participants have to choose an alternative based on several conflicting criteria. Thus, if the biases described above have very strong effects in this type of experiment, it is likely that comparable effects might occur in real MCDM decision making.
Although all three cognitive biases characterized by the Prospect theory are relevant to MCDM, we will focus in this paper on the first two, namely, reference-dependence and loss aversion, which are directly linked to the framing bias. Specifically, framing occurs because individuals evaluate information relative to a reference point, which is often the status quo, and they evaluate it differently, depending on whether this information is related to gains vs. losses with respect to this reference point. Thus, we hypothesize that when evaluating the importance of a criterion, decision makers will be sensitive to the frame used to formulate this criterion. For instance, in a pairwise comparison the comparison between Criterion 1 and Criterion 3 will depend on the way they are framed:
$$\left(\text{Criterion } 1^{-} \text{ vs. Criterion } 3^{\text{constant}}\right) \neq \left(\text{Criterion } 1^{+} \text{ vs. Criterion } 3^{\text{constant}}\right),$$
where Criterion 1− is a negatively framed criterion, Criterion 1+ is the same, but positively framed, criterion, and the framing of Criterion 3constant is constant.

1.3.2. AHP

The AHP technique can handle both quantitative and qualitative information [36]. In AHP the decision problem is decomposed into a hierarchical structure incorporating the goal of the decision problem, the criteria and the alternatives. In order to calculate the priority of each criterion with respect to the goal, and the priority of each alternative with respect to each criterion, the AHP uses comparative judgments carried out on pairs of criteria or alternatives. Importantly, the decision maker compares only two elements at a time, thus ensuring that they can concentrate on the properties of the elements in question without having to think about the other elements [37]. The pairwise comparisons are done on a fundamental scale from 1 to 9 (Table 1).
This yields a reciprocal pairwise comparison matrix A [38], where $A_1, \dots, A_n$ denote the objects to be compared (criteria or alternatives) and $w_1, \dots, w_n$ denote their weights:
$$A = \begin{pmatrix} w_1/w_1 & w_1/w_2 & \cdots & w_1/w_n \\ w_2/w_1 & w_2/w_2 & \cdots & w_2/w_n \\ \vdots & \vdots & \ddots & \vdots \\ w_n/w_1 & w_n/w_2 & \cdots & w_n/w_n \end{pmatrix}$$
Local priorities (weights) are calculated from the comparison matrix with the eigenvalue method [38,39]:
$$A \cdot p = \lambda_{max} \cdot p,$$
where
  • $A$ is the pairwise comparison matrix
  • $p$ is the priorities vector
  • $\lambda_{max}$ is the maximal eigenvalue
The consistency of judgements the decision maker provided during the pairwise comparisons is checked using the consistency index (CI):
$$CI = \frac{\lambda_{max} - n}{n - 1},$$
where
  • $\lambda_{max}$ is the maximal eigenvalue of the matrix A
  • $n$ is the size of the comparison matrix A
The Consistency Ratio (CR) is then calculated as
$$CR = \frac{CI}{RI},$$
using the Random Index (RI), which is the average CI value obtained from a random simulation of 500 pairwise comparison matrices (the values for RI are given in Table 2). If CR ≤ 0.1, the inconsistency is acceptable; otherwise, the judgments should be reviewed. Note, though, that acceptable levels of consistency can still be obtained when aggregating multiple inconsistent individual comparison matrices, if the number of individual decision makers is sufficiently high [40].
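As a concrete illustration of the eigenvalue method and the consistency check described above, the following minimal Python sketch derives priorities from a hypothetical 3 × 3 comparison matrix; the judgments are invented, and RI = 0.58 is the tabulated random index for n = 3.

```python
import numpy as np

# Hypothetical reciprocal comparison matrix for three criteria (judgments invented).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

# Eigenvalue method: the priority vector is the principal eigenvector of A.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
lam_max = eigvals.real[k]
p = np.abs(eigvecs[:, k].real)
p = p / p.sum()                       # normalize priorities to sum to 1

# Consistency check, following the formulas above.
n = A.shape[0]
CI = (lam_max - n) / (n - 1)          # consistency index
RI = 0.58                             # tabulated random index for n = 3
CR = CI / RI                          # acceptable if CR <= 0.1

print(np.round(p, 3), round(lam_max, 3), round(CR, 3))
```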

1.3.3. TOPSIS

The Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) is used only for the choice of an optimal alternative; the criteria weights are elicited using a different method prior to the application of TOPSIS. TOPSIS is based on the ranking of alternatives through distance measures. Specifically, the optimal alternative is defined as the one that is the closest to the ideal solution and, at the same time, the most distant from the negative ideal solution. Both of these solutions are derived within the method. While the ideal solution minimizes the cost criteria and maximizes the benefit criteria, the negative ideal solution minimizes the benefit and maximizes the cost criteria.
First, a decision matrix A covering the m alternatives and n criteria is created, where $x_{ij}$ denotes the score of alternative $j$ on criterion $i$:
$$A = (x_{ij})_{n \times m}$$
Then this matrix is normalized to obtain $R = (r_{ij})_{n \times m}$ by applying the normalization method:
$$r_{ij} = \frac{x_{ij}}{\sqrt{\sum_{k=1}^{m} x_{ik}^2}}, \quad i = 1, \dots, n; \; j = 1, \dots, m.$$
The third step involves generating the weighted normalized decision matrix $T = (t_{ij})_{n \times m}$, expressed as:
$$t_{ij} = r_{ij} \cdot \omega_i, \quad i = 1, \dots, n; \; j = 1, \dots, m,$$
where $\omega_i$ is the weight of the $i$-th criterion and $\sum_{i=1}^{n} \omega_i = 1$. The weight for each criterion can be derived using various methods, such as AHP [37,41], the simple multiattribute rating technique (SMART) [42], tradeoff weighting [43], etc.
Then the ideal solution $S^{+}$ and the negative ideal solution $S^{-}$ are defined as follows:
$$S^{+} = \{t_1^{+}, \dots, t_n^{+}\}, \quad t_i^{+} = \begin{cases} \max_j t_{ij}, & i \in B^{+} \\ \min_j t_{ij}, & i \in B^{-} \end{cases}$$
$$S^{-} = \{t_1^{-}, \dots, t_n^{-}\}, \quad t_i^{-} = \begin{cases} \min_j t_{ij}, & i \in B^{+} \\ \max_j t_{ij}, & i \in B^{-} \end{cases}$$
where $B^{+}$ and $B^{-}$ are associated with benefit and cost criteria, respectively.
Using the n-dimensional Euclidean distance, the distance $D_j^{+}$ between every alternative and the ideal solution is calculated as
$$D_j^{+} = \sqrt{\sum_{i=1}^{n} \left(t_{ij} - t_i^{+}\right)^2}, \quad j = 1, \dots, m,$$
and the distance $D_j^{-}$ between every alternative and the negative ideal solution as
$$D_j^{-} = \sqrt{\sum_{i=1}^{n} \left(t_{ij} - t_i^{-}\right)^2}.$$
Finally, the relative closeness of each alternative to the ideal solution (denoted as $C_j$) is obtained by the following equation:
$$C_j = \frac{D_j^{-}}{D_j^{+} + D_j^{-}}$$
The alternatives are ranked according to their $C_j$ value (the highest value represents the best solution).
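The steps above translate directly into code. The following Python sketch implements them for an invented decision matrix, keeping the indexing used in the text (criteria in rows, alternatives in columns); it is an illustration, not the exact implementation used in this study.

```python
import numpy as np

def topsis(X, w, benefit):
    """Rank alternatives following the steps above.
    X       : (n_criteria, n_alternatives) matrix, x_ij = score of alternative j
              on criterion i (the indexing used in the text)
    w       : criteria weights summing to 1
    benefit : boolean mask per criterion, True = benefit, False = cost
    Returns the relative closeness C_j of each alternative (higher is better)."""
    R = X / np.sqrt((X ** 2).sum(axis=1, keepdims=True))     # normalization
    T = R * np.asarray(w)[:, None]                           # weighted matrix
    s_pos = np.where(benefit, T.max(axis=1), T.min(axis=1))  # ideal solution
    s_neg = np.where(benefit, T.min(axis=1), T.max(axis=1))  # negative ideal solution
    d_pos = np.sqrt(((T - s_pos[:, None]) ** 2).sum(axis=0))
    d_neg = np.sqrt(((T - s_neg[:, None]) ** 2).sum(axis=0))
    return d_neg / (d_pos + d_neg)

# Invented example: 3 criteria (rows) x 3 alternatives (columns); the first two
# criteria are benefit criteria, the third is a cost criterion.
X = np.array([[7.0, 5.0, 9.0],
              [4.0, 8.0, 6.0],
              [3.0, 6.0, 2.0]])
C = topsis(X, w=[0.5, 0.3, 0.2], benefit=np.array([True, True, False]))
print(np.round(C, 3), "best alternative:", int(C.argmax()))
```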

2. Materials and Methods

2.1. Participants

Participants were recruited on a voluntary basis through COVID-19-related groups on the social media platform Facebook. A small number of participants were recruited among university students. Participants completed the experiment on the online experimentation platform Qualtrics. Following the experimental part, participants were asked to fill in a short sociological questionnaire and provide information on their gender (55% were female in group A and 53% in group B), education level (80% had a higher education diploma in group A and 75% in group B) and age (in group A participants were aged between 18 and 63, with a mean age of 36; in group B participants were aged between 18 and 59, with a mean age of 33). Participants were free to quit the experiment whenever they wanted, thus helping ensure that only interested and fully engaged participants completed the questionnaire. In total, 248 participants completed both the experimental and the sociological parts of the test (41 participants were removed, as they did not answer all questions).

2.2. Procedure

The experiment was based on a classical multiple-criteria decision problem. Participants were randomly assigned to one of two groups, A or B. They were first presented with a short description of the decision problem, in which they had to help the government choose the best policy to stop a second wave of the COVID-19 virus. Specifically, they were informed that the state institutions have to choose the best policy among three alternatives. Participants were told that the financial and human resources of the state are limited, and therefore the policy alternatives had to be evaluated based on six conflicting criteria. These criteria described the estimated outcomes of each of the policies. The first three criteria were: the proportion of doctors infected with the virus (depends on the availability of protective equipment in hospitals and testing capacities); the number of new local COVID-19 outbreaks (depends on the funds and human resources allocated to testing, tracing and isolating new cases); the number of new imported COVID-19 cases (depends on the funds and human resources allocated to controlling borders and ensuring people self-isolate after travelling to countries with a high infection risk). The other three criteria were the availability of hospital beds for COVID-19 patients, the availability of PCR tests, and the availability of disinfection liquid and other protective equipment—these three criteria depended on the allocation of funds from the government, the effective managing of medical care units/laboratories and the organization of supply shipping.
Both the criteria and alternatives were formulated based on real information on COVID-19-related policies taken from official websites of the Ministry of Health. Participants were told that they would first evaluate the importance of the criteria in pairwise comparisons on a scale from 1 to 9 (the classical AHP scale). Once the weights for each criterion were obtained by means of AHP, the alternatives were evaluated and ranked using TOPSIS. For this we used a pre-filled decision matrix with the scores of each alternative on each of the six criteria.
The criteria weight elicitation involved 15 pairwise comparisons (this number is derived from the formula $n(n-1)/2$, where $n$ is the number of criteria). The experimental manipulation consisted in framing two of the six criteria (“Hospital beds” and “Doctors”) differently for each of the groups A and B. Specifically, the two criteria were framed in terms of losses in one group (“Proportion of hospital beds already occupied by COVID-19 patients” and “Proportion of doctors infected with COVID-19”) and in terms of gains in the other (“Proportion of hospital beds still available for COVID-19 patients” and “Proportion of doctors who avoided COVID-19 infection”). The other four criteria were identical across both groups. To test for the presence of the framing and loss aversion biases at the pairwise comparison level, we concentrated on the first 8 pairwise comparisons, four in the test condition and four in the control condition (see the structure of the test and control conditions in Table 3), while the remaining pairwise comparisons were used only in subsequent stages of MCDM. In the test condition one member of each pair was framed either in terms of losses (Criteria 1− and 2− in participant group A) or in terms of gains (Criteria 1+ and 2+ in participant group B). The framing of the second member (Criteria 3constant and 4constant) was kept constant across experimental groups. The index minus (as in Criterion 1−) indicates that the criterion was framed in terms of losses, the index plus (as in Criterion 1+) indicates that the criterion was framed in terms of gains, while the index ‘constant’ (as in Criterion 3constant) indicates that the framing of this criterion was kept constant.
Criteria 3constant and 4constant correspond to the “number of new local COVID-19 outbreaks” and “the number of new imported COVID-19 cases”. In the control condition the framing of both members of each pair was kept constant across groups, i.e., both groups received identical criteria pairs in the control condition. Note that, although the constant criteria were also framed in terms of gains or losses, the framing of these criteria was not manipulated and its valence remained the same across groups. Thus, if there is an effect of framing on pairwise comparisons, we expect to find a difference between participant groups A and B in the test condition specifically. Importantly, a small difference between groups A and B could occur in the control condition as well, due to individual differences. However, this difference should be much smaller than in the test condition.
Two simple additional questions served as controls for whether participants were actively engaged in the experiment (the first question was: “The sum of 5 + 3 equals to:”; the second was formulated as a pairwise comparison between “funds allocated for vaccines against COVID-19” and “funds allocated for controlling the number of customers in supermarkets”). These questions were interspersed among the experimental questions and were excluded from the analyses.

3. Results

3.1. Stage of Pairwise Comparisons

Prior to the analysis, we inspected the answers of the participants to the two simple additional questions, as well as the Consistency Ratio of the pairwise comparisons they provided. Participants who did not answer the additional control questions accurately and those whose pairwise comparisons were inconsistent were excluded from the analysis (23.4%; of the removed participants, only five were removed due to inaccurate answers on both additional control questions), resulting in a total of 102 participants in group A and 96 participants in group B.
As our data were nested (each participant had to make several pairwise comparisons), we analyzed the datasets using linear mixed effects regression modeling (R package lme4 [44]). We constructed a model with the scores assigned by participants in the pairwise comparisons as the dependent variable, Group (A vs. B) and Condition (Test vs. Control) as contrast-coded fixed effects, and intercepts for Participants and Pairwise comparisons as random factors. P-values were obtained by likelihood ratio tests of the full model against the model without the effect or interaction in question. We found significant effects of Group ($\beta = 0.44$, $SE = 0.12$, $\chi^2(1) = 12.71$, $p < 0.001$) and a Group × Condition interaction ($\beta = 1.48$, $SE = 0.21$, $\chi^2(1) = 48.16$, $p < 0.001$), but no effect of Condition ($p > 0.5$). Separate models for the Test and Control conditions revealed that the interaction was due to the fact that in the control condition the effect of Group was not significant, while in the test condition there was a significant difference between group A and group B ($\beta = 1.18$, $SE = 0.25$, $\chi^2(1) = 21.82$, $p < 0.001$). Thus, our prediction that the framing and loss aversion biases impact the scores assigned to criteria in the pairwise comparisons was borne out. Moreover, the fact that a difference between groups was found only in the test (framing) condition, but not in the control (no-framing) condition, shows that these differences were indeed caused by framing and loss aversion, and not by mere individual differences between participants in the two groups.
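For reference, the model structure can be reproduced outside of R. The analysis itself was run with lme4; the sketch below shows an approximately equivalent specification in Python's statsmodels, where crossed random intercepts are expressed as variance components. The data frame and all column names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per pairwise comparison; hypothetical columns: 'score', 'group' (A/B),
# 'condition' (test/control), 'participant' and 'item' (the identity of the
# pairwise comparison).
df = pd.read_csv("pairwise_scores.csv")          # hypothetical file

# Crossed random intercepts for participants and items are expressed as
# variance components within a single all-encompassing group.
df["one_group"] = 1
model = smf.mixedlm(
    "score ~ C(group) * C(condition)",
    data=df,
    groups="one_group",
    re_formula="0",                              # no extra random intercept
    vc_formula={"participant": "0 + C(participant)",
                "item": "0 + C(item)"},
)
print(model.fit().summary())
```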

3.2. Criteria Weights

Once the pairwise comparisons were elicited from participants, criteria weights were calculated and aggregated using the R package ahpsurvey [45]. This package allows the researcher to adjust comparisons based on consistency, as well as to extract, calculate and aggregate the weights of the criteria. We transformed the scores of the pairwise comparisons to the balanced scale [46], as it was shown to increase the accuracy of the results and to decrease the spread of weights and the inconsistency compared to the 1-to-9 scale [47].
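For reference, the balanced scale replaces the 1-to-9 scores with ratios of evenly spaced local weights. The sketch below shows the common formulation of this transformation, which we assume matches the one used in [46]; the exact mapping should be treated as an assumption.

```python
def balanced_scale(s):
    """Map a 1-9 Saaty score to the balanced scale: local weights w are spaced
    evenly from 0.5 to 0.9 and the scale entry is the ratio w / (1 - w).
    Assumed to match the transformation of [46]."""
    w = 0.45 + 0.05 * s
    return w / (1.0 - w)

print([round(balanced_scale(s), 2) for s in range(1, 10)])
# [1.0, 1.22, 1.5, 1.86, 2.33, 3.0, 4.0, 5.67, 9.0]
```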
There is a variety of methods to aggregate the judgments of individual decision makers. Two of the most common are the aggregation of individual priorities (AIP) and the aggregation of individual judgments (AIJ). The former is used when the group of decision makers is assumed to act as separate individuals, while the latter is preferred when members of the group are thought to act together as one unit [48]. The decision problem in the current paper spans multiple disciplines in which full specialization is not achievable. Thus, the participants of this study could be seen either as individuals having different backgrounds and thus different approaches to the problem, or as a single category of citizens dealing with the same problem on a daily basis. As a result, we used both aggregation techniques and compared their outcomes, as sketched below.
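A minimal sketch of the two aggregation schemes, both using the geometric mean as in our analysis; the two decision makers' inputs are invented:

```python
import numpy as np

def aggregate_aip(priorities):
    """AIP: geometric mean of the individual priority vectors
    (one weight vector per decision maker), renormalized to sum to 1."""
    P = np.asarray(priorities, dtype=float)     # shape: (n_dm, n_criteria)
    g = np.exp(np.log(P).mean(axis=0))
    return g / g.sum()

def aggregate_aij(matrices):
    """AIJ: element-wise geometric mean of the individual pairwise comparison
    matrices; the result stays reciprocal, and group weights are then derived
    from this single matrix (e.g., with the eigenvalue method)."""
    M = np.asarray(matrices, dtype=float)       # shape: (n_dm, n, n)
    return np.exp(np.log(M).mean(axis=0))

p1, p2 = [0.5, 0.3, 0.2], [0.4, 0.4, 0.2]
print(np.round(aggregate_aip([p1, p2]), 3))     # [0.45  0.349 0.201]
```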

3.2.1. Aggregation Using Individual Priority Weights (AIP)

The individual priority weights, which are the weights per criterion for each participant, were computed using the Dominant Eigenvalues method described in [49]. The list of individual priorities for each participant can be found in Appendix A. The geometric mean was used to aggregate individual priority weights. The aggregated priority weights for each participant group can be found in Table 4. The weights and ranks of the three most important criteria that were affected by framing are in bold.
The results reveal that the ranking of criteria weights is different across groups. While in group A the top three criteria were ranked in the descending order C2 >> C1 >> C3, in group B the order changed to C3 >> C2 >> C1. This rank reversal stems from the weights assigned to the two criteria which underwent framing, whereby the criteria “Hospital beds” and “Doctors” received much higher weights in group A than in group B (note that the criterion “Hospital beds” received a 51% higher weight in group A (0.262) than in group B (0.174)). Together, criteria C1 and C2 obtained almost 11 percentage points more weight in group A (C1 + C2 = 0.454) than in group B (C1 + C2 = 0.346). Consequently, the proportion of weight given to the remaining criteria in group A is much smaller than in group B. Crucially, this difference between groups is clearly caused by the framing effect and loss aversion—the two criteria in question were framed as losses in group A and thus received more weight than in group B, where those same criteria were framed as gains. The implications of these findings will be discussed in the Discussion section.

3.2.2. Aggregation Using Individual Judgements (AIJ)

AIJ aggregates the individual judgements of all decision makers into a single pairwise comparison matrix. The geometric mean was used to aggregate the individual comparison matrices instead of the arithmetic mean, as it provides the advantage of increasing the consistency for the whole group [36]. The aggregated pairwise comparison matrices for group A and group B are presented in Table 5 and Table 6, respectively. The aggregated weights for each participant group can be found in Table 7. The weights and ranks of the three most important criteria that were affected by framing are in bold.
Similarly to the AIP aggregation method, the results of weight aggregation using AIJ reveal that the ranking of criteria weights is different across groups A and B. This difference lies in the ranking of the top three criteria, whereby the criteria “Hospital beds” and “Doctors” received much higher weights in group A than in group B. Here too, the criterion “Hospital beds” received a 52% higher weight in group A (0.300) than in group B (0.197). Moreover, criteria C1 and C2 together obtained 13 percentage points more weight in group A (C1 + C2 = 0.523) than in group B (C1 + C2 = 0.39). Thus, more than half of the weight was assigned to the framed criteria. Consequently, the proportion of weight given to the remaining criteria in group A was much smaller than in group B. As mentioned above, this effect is caused by the framing and loss aversion biases—the two criteria responsible for the rank reversal were specifically the framed ones. They received more weight when framed as losses (group A) than when framed as gains (group B).
Although the final weights attributed to criteria by means of the two aggregation techniques differed, the order of criteria ranking remained the same using both AIJ and AIP methods.

3.3. Alternative Ranking

In order to rank the three alternatives, we used a pre-filled decision matrix with the scores of each alternative on each of the six criteria. The two sets of aggregated criteria weights, one obtained using the AIP and one using the AIJ aggregation technique, were taken from the previous step. Thus, we calculated the final alternative ranking in two ways (Table 8 shows the results when the AIP aggregated weights were used, and Table 9 shows the results when the AIJ aggregated weights were used). The results show that the best alternative obtained in groups A and B is different. Namely, in group A Alternative3 was chosen as the best solution, whereas in group B the winner was Alternative2. The same alternative ranking was obtained both when using weights from AIP and from AIJ.

4. Discussion

Dealing with the limitations of human information processing is anything but straightforward. Cognitive biases, which are discrepancies of human behavior from the behavior of perfectly rational agents, have been shown to strongly impact decision making. Therefore, MCDM methods that involve judgments by decision makers are also likely to be affected by these biases. In the current study we focused on two of the most studied cognitive biases, namely, the framing and loss aversion biases. The results of our study show that:
  • By framing the criteria in a particular way, it is possible to influence the responses given by decision makers during AHP pairwise comparisons.
  • This caused the rank reversal of criteria weights across groups and resulted in the choice of different best alternatives.
  • The exact influence of different framings is predictable by the Prospect theory and can be explained by the loss aversion bias.
In this section we will first discuss our findings in light of Prospect theory; we will then discuss the implications of these results for MCDM methods; finally, we will propose ways of avoiding or diminishing the effects of these cognitive biases.

4.1. Discussion of Results and Interpretation in Light of Prospect Theory

In this research we tested whether the framing and loss aversion biases can influence in a predictable way both the criteria weights elicited using the AHP method and the final ranking of alternatives derived employing the TOPSIS technique. The framing bias describes the tendency of people to make different decisions if presented with different but objectively equivalent descriptions (i.e., frames) of a situation or object. Specifically, things framed in terms of losses tend to be perceived as being more important than when they are framed in terms of gains. This extreme sensitivity to losses is predictable and characterized as loss aversion. In this paper we conducted a controlled experiment and presented participants with a real-world multi-criteria problem. Two groups of participants were asked to make pairwise comparisons of logically equivalent criteria which were framed in two different ways (positive vs. negative frames).
First, we hypothesized that participants who are presented with criteria framed as losses will perceive them as being more important compared to participants presented with the same criteria framed as gains, which would result in differences between groups in the evaluation of the framed criteria during the AHP pairwise comparisons. Second, we hypothesized that the subsequent stages of MCDM will in turn be impacted by these cognitive biases. The results of the study confirmed our hypotheses. We found that at the stage of pairwise comparisons of criteria, the two groups of participants assigned significantly different scores to the criteria when they were framed differently (as losses vs. as gains). Moreover, the difference between groups was only found in the test condition, which involved these framing differences. Conversely, in the control condition, where participants received criteria with exactly the same framing, no significant difference between groups was observed. This confirms that the differences found in the test condition were indeed caused by the framing and loss aversion biases, and not by mere individual differences between the participants of the two groups. Thus, our results show that the two cognitive biases did occur at an early stage of MCDM, i.e., during criteria weight elicitation.
Second, we calculated group-aggregated weights of criteria for each of the groups by using two techniques, the aggregation of individual priorities (AIP) and the aggregation of individual judgments (AIJ). We found that, independently of the aggregation technique used, the ranking of criteria weights was different across groups. While in group A the top three criteria were ranked in the descending order C2 >> C1 >> C3, in group B the order changed to C3 >> C2 >> C1. Crucially, this rank reversal was caused by the two framed criteria, “Hospital beds” and “Doctors”, which received more weight when framed as losses (group A) than when framed as gains (group B). These findings are in line with the predictions of Prospect theory, which postulates that framing occurs because individuals evaluate information not in isolation, but relative to a reference point, which is often the status quo. Furthermore, they evaluate this information differently, depending on whether it is perceived as gains or losses with respect to this reference point. Thus, in our experiment the participants of both groups evaluated the importance of the criteria based on the subjective status quo, or the situation as it is. Participant group A was presented with the criterion “Hospital beds” framed in terms of losses (“Proportion of hospital beds already occupied by COVID-19 patients”). The negatively framed criterion suggested a loss of hospital beds, and this triggered loss aversion, i.e., the unwillingness and regret to lose something that we have in the present situation. On the contrary, the positively framed version of this criterion (“Proportion of hospital beds still available for COVID-19 patients”) did not signal any negative change in the current situation, and thus participants were less sensitive to it. Therefore, although both versions of this criterion were logically equivalent (e.g., 55% of available beds would be equivalent to 45% of occupied beds), the participants of this experiment did not evaluate them equally. This points to the crucial role that the formulation of criteria can have on the outcomes of weight elicitation.
Finally, we used TOPSIS to obtain the ranking of the alternatives. The results revealed that here, too, the framing and loss aversion biases induced during the previous stages of MCDM influenced the choice of the best alternative. We found that different alternatives were chosen due to criteria weight differences between the two participant groups: while in group A the best solution was Alternative3, in group B the winner was Alternative2. These results provide evidence that the effect of cognitive biases remains strong throughout the process of multiple-criteria decision-making.

4.2. Implications for MCDM

The results of this paper show that the framing and loss aversion biases influenced the responses of decision makers during pairwise comparisons, which in turn caused the rank reversal of criteria weights across groups and resulted in the choice of a different best alternative. Note also that only two of the six criteria were framed differently between the groups, and this was sufficient to influence the entire final result of the MCDM. This suggests that a conscious or unconscious framing of even a small proportion of criteria can strongly impact the outcomes of the MCDM procedure. In the former case, there is a risk that the MCDM process can potentially be manipulated, as Prospect theory provides accurate predictions as to how each frame affects the decision makers. That is, in order to force the selection of a particular alternative, the criterion on which this alternative scores best would first be identified. Then this criterion would be framed so as to receive more weight from the decision makers. Thus, one could frame the criteria in such a way that a wanted criteria ordering would be achieved. This would artificially raise the probability that the alternative in question is selected. This could undermine the whole process of MCDM and even be dangerous if wrong solutions are chosen in crucial decision-making processes, such as the choice of medical [50,51,52] or engineering [53] solutions. In the case when framing and loss aversion are induced involuntarily, the results of the MCDM process could be compromised as well, as they would reflect the sensitivity of decision makers to cognitive biases instead of their true opinion about the decision problem in question. As noted by Hämäläinen et al. [6], cognitive biases could cause an erroneous interpretation of MCDM results: a successful intervention could be falsely attributed to the MCDM method, while a failure of such an intervention could be attributed to factors other than the method itself.
Although this experiment was performed on non-experts and on a specific topic, evidence from previous studies in other fields suggests that the effect of cognitive biases could persist in a different population making decisions about different decision problems. Recall that cognitive biases, unlike individual differences and motivational biases, are much more universal, culturally independent and predictable [13,14]. A variety of studies in psychology and behavioral economics have shown that the framing and loss aversion biases influence both novices and experts, even though experts might be less sensitive to these effects (for reviews, see [20,22]). Interestingly, the framing bias was found to affect the decisions of mathematically trained participants [54] and even of professionals in business and finance [3]. Thus, even fields which involve technical and statistical knowledge, which could be expected to promote “rationality”, are not immune to the framing and loss aversion biases. Similarly, these cognitive biases have been found to influence decision making in a variety of fields, such as management [55], medical science [9,56], finance [57,58], engineering [59,60], law [61] and others. It is therefore likely that MCDM problems close to any of these fields might suffer from the influence of cognitive biases. Indeed, MCDM methods have been widely used in a variety of technical environments related to engineering, industry and finance, for instance, in the selection of a suitable sewer network plan for a city [62], in the selection of the best waste lubricant oil regenerative technology [63] and in financial risk evaluation [64]. Furthermore, MCDM methods have also been extensively applied when dealing with policy, economics and societal issues, such as the economic development of government units [65], low-carbon energy technology policies [66] and economics more broadly [67]. All these examples point to the popularity of MCDM methods and to their importance in the decision-making process in major fields of our everyday life. It is therefore essential to guarantee the accuracy of these tools and to carefully test MCDM methods for their sensitivity to cognitive biases.

4.3. Solutions

If the framing and loss aversion biases can have such negative effects on the outcomes of MCDM techniques, what solutions could reduce them? The first and simplest solution would be to ensure that the biases do not occur. As the occurrence of cognitive biases is predictable, these psychological effects can be prevented in a systematic way. Therefore, debiasing techniques should be carefully studied and applied whenever human judgment is involved in the process of MCDM. Several ways of reducing the framing and loss aversion biases have been proposed. For example, an experimental study [68] showed that the magnitude of the framing effect could be reduced or even eliminated if the participants are warned about the possibility of bias. The authors found that both weak and strong warning conditions were effective in reducing bias in participants who were highly involved in the task, while only strong warning messages helped the participants with low involvement. In a similar way, [10] suggest that the loss aversion bias can be reduced when the logic of the symmetry of gains and losses is explicitly shown to the participants. In addition to this, listing the advantages and disadvantages of each criterion prior to decision making could be an effective debiasing technique [50]. Further experimental studies should evaluate the efficiency of these and other debiasing methods on MCDM.
In addition to these debiasing techniques, another option for dealing with cognitive biases would be to use MCDM methods that could potentially be less subject to these psychological effects. Although most MCDM methods do not take into account the psychological states of decision makers during decision making [69], fuzzy AHP does allow dealing with vagueness and uncertainty [70]. This method is a development of the classical AHP in which decision makers can give vague or imprecise responses during pairwise comparison, instead of the crisp or exact numerical values used in classical AHP [71]. For that purpose, fuzzy linguistic assessment variables are used in fuzzy AHP [72]. A number of studies [73,74,75] have provided evidence that fuzzy AHP is more effective in solving the problem of the imprecise judgments of decision makers compared to the traditional AHP method. Thus, it could be the case that the cognitive biases observed in our study could be reduced or circumvented when using the fuzzy version of the AHP. However, there is a risk that these cognitive biases might prevail even when using this method, as the framing and loss aversion biases were induced by the framing of the criteria, and not by the type of scale used. Thus, it would be useful to test experimentally whether the use of the linguistic scale and fuzzy logic in AHP could act as a counterforce to these cognitive biases.
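To illustrate the general idea, the sketch below implements one standard fuzzy AHP variant (Buckley's geometric mean method with triangular fuzzy numbers and centroid defuzzification). It is an illustration under our own assumptions, not the specific procedure of [70,71,72], and the fuzzification of the example judgment is invented.

```python
import numpy as np

def fuzzy_ahp_weights(F):
    """Buckley-style fuzzy AHP: geometric mean of triangular fuzzy judgments,
    fuzzy division by the column total, centroid defuzzification.
    F : (n, n, 3) array of triangular fuzzy comparisons (l, m, u)."""
    F = np.asarray(F, dtype=float)
    # Fuzzy geometric mean of each row, component-wise on (l, m, u).
    r = np.exp(np.log(F).mean(axis=1))          # shape (n, 3)
    total = r.sum(axis=0)                       # (sum_l, sum_m, sum_u)
    # Fuzzy division: (l, m, u) / (sum_u, sum_m, sum_l).
    w_fuzzy = r / total[::-1]
    # Defuzzify with the centroid (l + m + u) / 3, then renormalize.
    w = w_fuzzy.mean(axis=1)
    return w / w.sum()

# Two criteria; "criterion 1 is moderately more important" fuzzified as (2, 3, 4).
F = np.array([[[1, 1, 1], [2, 3, 4]],
              [[1/4, 1/3, 1/2], [1, 1, 1]]])
print(np.round(fuzzy_ahp_weights(F), 3))        # approx. [0.74 0.26]
```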
Finally, attempts have been made in recent years to reduce the effect of cognitive biases post hoc, i.e., in the MCDM method itself. One such example is the study by Phochanikorn & Tan [76]. The authors argue that once loss aversion is induced, it should be accounted for by the MCDM method. Thus, they define and calculate a loss aversion parameter (λ) based on the gain and loss value of each alternative. The results of the study show that the ranking of the alternatives changes as the loss aversion parameter increases. In a similar way, Deniz [77] proposed a way to counteract the expected loss aversion bias during criteria weight elicitation. After calculating criteria weights using the AHP method, the author calculated a debiased version of these weights by distributing the difference between the first and the second highest weight to the other criteria, proportionally to their initial weights. This way of debiasing the criteria weights, however, seems somewhat arbitrary, as it presupposes that the two highest ranked criteria are always biased. While this might sometimes be the case, in other cases the framed criteria could be less important, and might thus induce a loss aversion bias at a lower level in the criteria ranking. Still, the idea of predicting the amount of induced loss aversion is noteworthy, as it could make it possible to at least partially reverse the negative effect of this cognitive bias. Several other recent studies have used the theoretical background of Prospect theory to develop more accurate MCDM techniques [78,79,80]; however, none of them addressed the issue of cognitive biases.
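As a rough illustration of the redistribution idea, under our own reading of the description of [77] above (which may not match the original procedure exactly), the following sketch deflates the largest weight and spreads the difference proportionally:

```python
import numpy as np

def redistribute_top_weight(w):
    """Deflate the largest weight to the level of the second largest and spread
    the difference over the remaining criteria in proportion to their initial
    weights; the total weight is preserved. One possible reading of [77]."""
    w = np.asarray(w, dtype=float)
    order = np.argsort(w)[::-1]
    top, gap = order[0], w[order[0]] - w[order[1]]
    out = w.copy()
    out[top] = w[order[1]]                        # remove the suspected inflation
    rest = np.delete(np.arange(len(w)), top)
    out[rest] += gap * w[rest] / w[rest].sum()    # proportional redistribution
    return out

print(np.round(redistribute_top_weight([0.40, 0.25, 0.20, 0.15]), 3))
# -> [0.25  0.312 0.25  0.188]
```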
In conclusion, our paper provides the first experimental evidence of the impact of the framing and loss aversion biases on MCDM. We showed that these cognitive biases can strongly influence the responses of decision makers in the pairwise comparisons of criteria. This in turn caused a rank reversal in criteria weights and resulted in the choice of different best alternatives. In other words, our results point to the fact that different framings can influence both the weights of criteria and the selection of the best alternative. As these effects are predictable by Prospect theory, we call attention to the risk of conscious or unconscious manipulation of results in MCDM. We highlight possible debiasing techniques that could reduce or eliminate the framing and loss aversion biases. Further studies should test experimentally to what extent these techniques are effective and what the most appropriate ways to deal with this problem could be.

Author Contributions

Conceptualization, G.A.M.-L. and G.D.; methodology, G.A.M.-L.; validation, G.D.; formal analysis, G.A.M.-L. and G.D.; investigation, G.A.M.-L.; data curation, G.A.M.-L.; writing—original draft preparation, G.A.M.-L.; writing—review and editing, G.D. and G.A.M.-L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by a postdoctoral fellowship from Vilnius University to Gerda Ana Melnik-Leroy.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data that support the findings of this study are available on request from author G.A.M.-L.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Individual priorities for each participant in group A.

Group A | C1 | C2 | C3 | C4 | C5 | C6
1 | 0.13 | 0.41 | 0.07 | 0.06 | 0.27 | 0.06
2 | 0.43 | 0.11 | 0.04 | 0.15 | 0.26 | 0.02
3 | 0.22 | 0.14 | 0.06 | 0.22 | 0.22 | 0.14
4 | 0.22 | 0.09 | 0.20 | 0.20 | 0.20 | 0.08
5 | 0.25 | 0.08 | 0.07 | 0.12 | 0.43 | 0.05
6 | 0.27 | 0.36 | 0.15 | 0.05 | 0.14 | 0.03
7 | 0.19 | 0.60 | 0.12 | 0.03 | 0.04 | 0.02
8 | 0.13 | 0.10 | 0.19 | 0.18 | 0.34 | 0.06
9 | 0.07 | 0.43 | 0.17 | 0.20 | 0.08 | 0.05
10 | 0.26 | 0.26 | 0.27 | 0.05 | 0.14 | 0.02
11 | 0.38 | 0.36 | 0.09 | 0.04 | 0.10 | 0.03
12 | 0.21 | 0.20 | 0.22 | 0.09 | 0.24 | 0.03
13 | 0.22 | 0.29 | 0.19 | 0.08 | 0.18 | 0.05
14 | 0.17 | 0.25 | 0.39 | 0.09 | 0.09 | 0.02
15 | 0.22 | 0.37 | 0.30 | 0.06 | 0.02 | 0.02
16 | 0.12 | 0.21 | 0.37 | 0.04 | 0.21 | 0.05
17 | 0.11 | 0.49 | 0.19 | 0.05 | 0.10 | 0.05
18 | 0.18 | 0.29 | 0.05 | 0.05 | 0.40 | 0.02
19 | 0.26 | 0.27 | 0.20 | 0.09 | 0.12 | 0.06
20 | 0.24 | 0.43 | 0.15 | 0.05 | 0.12 | 0.02
21 | 0.27 | 0.14 | 0.15 | 0.19 | 0.17 | 0.07
22 | 0.18 | 0.44 | 0.22 | 0.03 | 0.07 | 0.05
23 | 0.17 | 0.30 | 0.16 | 0.08 | 0.26 | 0.04
24 | 0.23 | 0.24 | 0.14 | 0.15 | 0.22 | 0.02
25 | 0.20 | 0.33 | 0.16 | 0.09 | 0.17 | 0.05
26 | 0.33 | 0.33 | 0.12 | 0.14 | 0.08 | 0.02
27 | 0.05 | 0.47 | 0.15 | 0.21 | 0.10 | 0.02
28 | 0.25 | 0.26 | 0.22 | 0.14 | 0.10 | 0.03
29 | 0.03 | 0.57 | 0.17 | 0.06 | 0.17 | 0.02
30 | 0.31 | 0.29 | 0.12 | 0.09 | 0.15 | 0.05
31 | 0.22 | 0.35 | 0.11 | 0.08 | 0.18 | 0.06
32 | 0.15 | 0.17 | 0.25 | 0.18 | 0.24 | 0.02
33 | 0.28 | 0.41 | 0.08 | 0.04 | 0.16 | 0.02
34 | 0.33 | 0.29 | 0.08 | 0.08 | 0.18 | 0.04
35 | 0.21 | 0.28 | 0.16 | 0.14 | 0.10 | 0.09
36 | 0.14 | 0.22 | 0.17 | 0.05 | 0.39 | 0.03
37 | 0.22 | 0.37 | 0.23 | 0.06 | 0.10 | 0.02
38 | 0.29 | 0.40 | 0.10 | 0.08 | 0.11 | 0.03
39 | 0.15 | 0.18 | 0.23 | 0.21 | 0.20 | 0.03
40 | 0.09 | 0.18 | 0.18 | 0.04 | 0.43 | 0.08
41 | 0.19 | 0.23 | 0.21 | 0.15 | 0.16 | 0.06
42 | 0.19 | 0.23 | 0.21 | 0.23 | 0.09 | 0.05
43 | 0.21 | 0.31 | 0.11 | 0.07 | 0.26 | 0.04
44 | 0.15 | 0.49 | 0.14 | 0.09 | 0.09 | 0.04
45 | 0.16 | 0.39 | 0.17 | 0.06 | 0.21 | 0.02
46 | 0.36 | 0.36 | 0.06 | 0.13 | 0.07 | 0.02
47 | 0.33 | 0.31 | 0.08 | 0.05 | 0.21 | 0.03
48 | 0.19 | 0.21 | 0.29 | 0.19 | 0.09 | 0.03
49 | 0.22 | 0.23 | 0.22 | 0.10 | 0.16 | 0.08
50 | 0.34 | 0.41 | 0.07 | 0.10 | 0.04 | 0.04
51 | 0.12 | 0.29 | 0.29 | 0.12 | 0.16 | 0.02
52 | 0.08 | 0.18 | 0.22 | 0.08 | 0.40 | 0.05
53 | 0.25 | 0.25 | 0.17 | 0.15 | 0.16 | 0.03
54 | 0.11 | 0.11 | 0.28 | 0.39 | 0.07 | 0.04
55 | 0.24 | 0.31 | 0.19 | 0.08 | 0.14 | 0.05
56 | 0.12 | 0.19 | 0.24 | 0.19 | 0.16 | 0.09
57 | 0.23 | 0.22 | 0.12 | 0.15 | 0.15 | 0.13
58 | 0.13 | 0.21 | 0.28 | 0.28 | 0.08 | 0.02
59 | 0.44 | 0.14 | 0.09 | 0.09 | 0.21 | 0.03
60 | 0.19 | 0.10 | 0.17 | 0.34 | 0.15 | 0.05
61 | 0.19 | 0.50 | 0.19 | 0.08 | 0.02 | 0.02
62 | 0.25 | 0.15 | 0.13 | 0.12 | 0.29 | 0.06
63 | 0.24 | 0.36 | 0.17 | 0.06 | 0.12 | 0.05
64 | 0.11 | 0.27 | 0.20 | 0.19 | 0.17 | 0.05
65 | 0.20 | 0.44 | 0.09 | 0.07 | 0.12 | 0.08
66 | 0.19 | 0.50 | 0.07 | 0.04 | 0.10 | 0.09
67 | 0.30 | 0.41 | 0.11 | 0.02 | 0.12 | 0.03
68 | 0.18 | 0.06 | 0.52 | 0.19 | 0.02 | 0.02
69 | 0.27 | 0.46 | 0.09 | 0.05 | 0.10 | 0.03
70 | 0.25 | 0.44 | 0.11 | 0.06 | 0.11 | 0.03
71 | 0.28 | 0.18 | 0.11 | 0.07 | 0.26 | 0.09
72 | 0.28 | 0.30 | 0.07 | 0.25 | 0.06 | 0.03
73 | 0.17 | 0.18 | 0.10 | 0.07 | 0.46 | 0.03
74 | 0.07 | 0.13 | 0.43 | 0.20 | 0.13 | 0.05
75 | 0.14 | 0.45 | 0.09 | 0.07 | 0.16 | 0.09
76 | 0.29 | 0.21 | 0.13 | 0.13 | 0.19 | 0.06
77 | 0.24 | 0.30 | 0.16 | 0.11 | 0.13 | 0.07
78 | 0.28 | 0.32 | 0.04 | 0.03 | 0.28 | 0.05
79 | 0.24 | 0.38 | 0.16 | 0.06 | 0.13 | 0.03
80 | 0.17 | 0.09 | 0.39 | 0.25 | 0.06 | 0.03
81 | 0.24 | 0.27 | 0.14 | 0.09 | 0.21 | 0.06
82 | 0.48 | 0.21 | 0.19 | 0.04 | 0.07 | 0.02
83 | 0.19 | 0.33 | 0.07 | 0.06 | 0.26 | 0.09
84 | 0.31 | 0.36 | 0.12 | 0.03 | 0.16 | 0.02
85 | 0.08 | 0.12 | 0.09 | 0.36 | 0.27 | 0.09
86 | 0.26 | 0.24 | 0.16 | 0.13 | 0.15 | 0.07
87 | 0.21 | 0.60 | 0.05 | 0.03 | 0.04 | 0.07
88 | 0.15 | 0.29 | 0.24 | 0.23 | 0.07 | 0.02
89 | 0.12 | 0.36 | 0.21 | 0.14 | 0.13 | 0.04
90 | 0.27 | 0.29 | 0.25 | 0.09 | 0.07 | 0.03
91 | 0.35 | 0.13 | 0.10 | 0.16 | 0.18 | 0.09
92 | 0.27 | 0.24 | 0.15 | 0.11 | 0.15 | 0.08
93 | 0.04 | 0.31 | 0.21 | 0.21 | 0.18 | 0.06
94 | 0.16 | 0.18 | 0.23 | 0.13 | 0.27 | 0.02
95 | 0.33 | 0.36 | 0.10 | 0.08 | 0.08 | 0.05
96 | 0.27 | 0.13 | 0.25 | 0.14 | 0.13 | 0.08
97 | 0.07 | 0.60 | 0.11 | 0.11 | 0.07 | 0.04
98 | 0.23 | 0.50 | 0.03 | 0.03 | 0.19 | 0.03
99 | 0.24 | 0.19 | 0.18 | 0.14 | 0.18 | 0.07
100 | 0.24 | 0.20 | 0.16 | 0.14 | 0.18 | 0.08
101 | 0.02 | 0.19 | 0.20 | 0.04 | 0.50 | 0.05
102 | 0.23 | 0.22 | 0.11 | 0.12 | 0.14 | 0.17
103 | 0.41 | 0.41 | 0.05 | 0.04 | 0.05 | 0.05
Table A2. Individual priorities for each participant in group B.

Group B | C1 | C2 | C3 | C4 | C5 | C6
1 | 0.20 | 0.18 | 0.24 | 0.16 | 0.12 | 0.10
2 | 0.33 | 0.25 | 0.11 | 0.08 | 0.14 | 0.08
3 | 0.21 | 0.25 | 0.24 | 0.09 | 0.18 | 0.03
4 | 0.13 | 0.30 | 0.25 | 0.20 | 0.09 | 0.03
5 | 0.27 | 0.13 | 0.23 | 0.24 | 0.08 | 0.05
6 | 0.26 | 0.16 | 0.15 | 0.16 | 0.20 | 0.07
7 | 0.53 | 0.13 | 0.11 | 0.05 | 0.13 | 0.05
8 | 0.24 | 0.17 | 0.16 | 0.11 | 0.30 | 0.02
9 | 0.18 | 0.17 | 0.16 | 0.17 | 0.17 | 0.17
10 | 0.17 | 0.16 | 0.23 | 0.19 | 0.20 | 0.05
11 | 0.11 | 0.18 | 0.34 | 0.10 | 0.22 | 0.05
12 | 0.25 | 0.16 | 0.21 | 0.20 | 0.13 | 0.05
13 | 0.17 | 0.13 | 0.23 | 0.15 | 0.24 | 0.09
14 | 0.16 | 0.10 | 0.13 | 0.38 | 0.16 | 0.07
15 | 0.06 | 0.07 | 0.27 | 0.37 | 0.21 | 0.02
16 | 0.09 | 0.10 | 0.16 | 0.59 | 0.03 | 0.03
17 | 0.22 | 0.11 | 0.45 | 0.11 | 0.08 | 0.02
18 | 0.13 | 0.15 | 0.35 | 0.24 | 0.10 | 0.03
19 | 0.17 | 0.08 | 0.22 | 0.13 | 0.37 | 0.03
20 | 0.19 | 0.19 | 0.15 | 0.16 | 0.18 | 0.15
21 | 0.16 | 0.16 | 0.20 | 0.16 | 0.16 | 0.15
22 | 0.13 | 0.10 | 0.24 | 0.24 | 0.23 | 0.07
23 | 0.04 | 0.44 | 0.20 | 0.05 | 0.25 | 0.03
24 | 0.30 | 0.17 | 0.12 | 0.12 | 0.19 | 0.10
25 | 0.15 | 0.15 | 0.23 | 0.18 | 0.23 | 0.08
26 | 0.08 | 0.05 | 0.19 | 0.51 | 0.15 | 0.02
27 | 0.16 | 0.39 | 0.18 | 0.13 | 0.11 | 0.03
28 | 0.36 | 0.18 | 0.07 | 0.04 | 0.33 | 0.02
29 | 0.21 | 0.08 | 0.28 | 0.09 | 0.29 | 0.05
30 | 0.07 | 0.48 | 0.16 | 0.10 | 0.11 | 0.08
31 | 0.12 | 0.34 | 0.21 | 0.12 | 0.13 | 0.07
32 | 0.20 | 0.12 | 0.30 | 0.10 | 0.16 | 0.12
33 | 0.12 | 0.08 | 0.43 | 0.24 | 0.10 | 0.04
34 | 0.35 | 0.06 | 0.14 | 0.26 | 0.16 | 0.03
35 | 0.27 | 0.09 | 0.14 | 0.14 | 0.24 | 0.12
36 | 0.15 | 0.28 | 0.18 | 0.11 | 0.25 | 0.03
37 | 0.16 | 0.10 | 0.31 | 0.34 | 0.05 | 0.04
38 | 0.24 | 0.10 | 0.12 | 0.43 | 0.07 | 0.05
39 | 0.09 | 0.09 | 0.35 | 0.25 | 0.15 | 0.07
40 | 0.26 | 0.25 | 0.25 | 0.04 | 0.16 | 0.03
41 | 0.11 | 0.11 | 0.31 | 0.34 | 0.09 | 0.03
42 | 0.23 | 0.37 | 0.17 | 0.06 | 0.14 | 0.03
43 | 0.20 | 0.06 | 0.16 | 0.17 | 0.29 | 0.12
44 | 0.15 | 0.47 | 0.18 | 0.07 | 0.11 | 0.02
45 | 0.20 | 0.15 | 0.19 | 0.16 | 0.18 | 0.12
46 | 0.20 | 0.28 | 0.25 | 0.10 | 0.10 | 0.08
47 | 0.22 | 0.23 | 0.20 | 0.08 | 0.23 | 0.05
48 | 0.21 | 0.20 | 0.16 | 0.13 | 0.25 | 0.04
49 | 0.19 | 0.44 | 0.07 | 0.03 | 0.25 | 0.02
50 | 0.08 | 0.18 | 0.06 | 0.37 | 0.25 | 0.06
51 | 0.16 | 0.24 | 0.35 | 0.08 | 0.14 | 0.03
52 | 0.07 | 0.31 | 0.19 | 0.09 | 0.30 | 0.04
53 | 0.17 | 0.17 | 0.18 | 0.12 | 0.16 | 0.20
54 | 0.28 | 0.14 | 0.06 | 0.08 | 0.39 | 0.04
55 | 0.24 | 0.22 | 0.11 | 0.14 | 0.17 | 0.11
56 | 0.17 | 0.23 | 0.15 | 0.13 | 0.17 | 0.16
57 | 0.34 | 0.27 | 0.16 | 0.10 | 0.12 | 0.02
58 | 0.35 | 0.10 | 0.06 | 0.18 | 0.12 | 0.18
59 | 0.10 | 0.14 | 0.15 | 0.37 | 0.18 | 0.06
60 | 0.11 | 0.51 | 0.14 | 0.12 | 0.05 | 0.07
61 | 0.14 | 0.13 | 0.22 | 0.25 | 0.19 | 0.08
62 | 0.11 | 0.18 | 0.36 | 0.22 | 0.10 | 0.02
63 | 0.16 | 0.20 | 0.19 | 0.25 | 0.12 | 0.09
64 | 0.25 | 0.28 | 0.12 | 0.14 | 0.17 | 0.05
65 | 0.37 | 0.31 | 0.12 | 0.08 | 0.10 | 0.02
66 | 0.10 | 0.36 | 0.12 | 0.21 | 0.18 | 0.03
67 | 0.14 | 0.12 | 0.16 | 0.32 | 0.13 | 0.13
68 | 0.26 | 0.18 | 0.12 | 0.10 | 0.19 | 0.15
69 | 0.22 | 0.31 | 0.12 | 0.14 | 0.18 | 0.04
70 | 0.16 | 0.12 | 0.13 | 0.24 | 0.29 | 0.06
71 | 0.07 | 0.34 | 0.21 | 0.14 | 0.21 | 0.04
72 | 0.17 | 0.17 | 0.17 | 0.17 | 0.17 | 0.17
73 | 0.15 | 0.17 | 0.21 | 0.12 | 0.29 | 0.06
74 | 0.22 | 0.11 | 0.03 | 0.06 | 0.56 | 0.02
75 | 0.41 | 0.03 | 0.23 | 0.24 | 0.07 | 0.03
76 | 0.19 | 0.19 | 0.39 | 0.10 | 0.11 | 0.02
77 | 0.05 | 0.30 | 0.35 | 0.10 | 0.18 | 0.02
78 | 0.17 | 0.20 | 0.25 | 0.13 | 0.21 | 0.05
79 | 0.17 | 0.17 | 0.17 | 0.17 | 0.17 | 0.17
80 | 0.13 | 0.32 | 0.21 | 0.10 | 0.19 | 0.06
81 | 0.29 | 0.24 | 0.12 | 0.11 | 0.22 | 0.02
82 | 0.15 | 0.12 | 0.31 | 0.14 | 0.16 | 0.11
83 | 0.19 | 0.32 | 0.10 | 0.13 | 0.13 | 0.13
84 | 0.26 | 0.06 | 0.32 | 0.10 | 0.20 | 0.06
85 | 0.11 | 0.10 | 0.22 | 0.42 | 0.11 | 0.04
86 | 0.26 | 0.26 | 0.19 | 0.09 | 0.18 | 0.02
87 | 0.13 | 0.40 | 0.26 | 0.06 | 0.12 | 0.03
88 | 0.22 | 0.39 | 0.17 | 0.09 | 0.07 | 0.05
89 | 0.06 | 0.24 | 0.16 | 0.15 | 0.37 | 0.02
90 | 0.14 | 0.33 | 0.22 | 0.14 | 0.09 | 0.07
91 | 0.15 | 0.16 | 0.29 | 0.26 | 0.10 | 0.04
92 | 0.35 | 0.12 | 0.06 | 0.05 | 0.33 | 0.08
93 | 0.41 | 0.24 | 0.11 | 0.04 | 0.16 | 0.04
94 | 0.24 | 0.15 | 0.42 | 0.09 | 0.06 | 0.03
95 | 0.29 | 0.24 | 0.12 | 0.14 | 0.13 | 0.08
96 | 0.15 | 0.06 | 0.23 | 0.32 | 0.20 | 0.04

References

  1. Kahneman, D.; Tversky, A. Prospect Theory: An Analysis of Decision under Risk. Econometrica 1979, 47, 263–292.
  2. Tversky, A. Elimination by aspects: A theory of choice. Psychol. Rev. 1972, 79, 281–299.
  3. Loke, W.H.; Tan, K.F. Effects of framing and missing information in expert and novice judgment. Bull. Psychon. Soc. 1992, 30, 187–190.
  4. Turskis, Z.; Dzitac, S.; Stankiuviene, A.; Šukys, R. A Fuzzy Group Decision-making Model for Determining the Most Influential Persons in the Sustainable Prevention of Accidents in the Construction SMEs. Int. J. Comput. Commun. Control 2019, 14, 90–106.
  5. Kazak, J.K.; van Hoof, J. Decision support systems for a sustainable management of the indoor and built environment. Indoor Built Environ. 2018, 27, 1303–1306.
  6. Hämäläinen, R.P.; Luoma, J.; Saarinen, E. On the importance of behavioral operational research: The case of understanding and communicating about dynamic systems. Eur. J. Oper. Res. 2013, 228, 623–634.
  7. Kahneman, D.; Tversky, A. Choices, values, and frames. Am. Psychol. 1984, 39, 341–350.
  8. Barberis, N.C. Thirty years of prospect theory in economics: A review and assessment. J. Econ. Perspect. 2013, 27, 173–196.
  9. Gong, J.; Zhang, Y.; Yang, Z.; Huang, Y.; Feng, J.; Zhang, W. The framing effect in medical decision-making: A review of the literature. Psychol. Health Med. 2013, 18, 645–653.
  10. Montibeller, G.; von Winterfeldt, D. Cognitive and Motivational Biases in Decision and Risk Analysis. Risk Anal. 2015, 35, 1230–1251.
  11. O’Keefe, R.M. Experimental behavioural research in operational research: What we know and what we might come to know. Eur. J. Oper. Res. 2016, 249, 899–907.
  12. Borrero, S.; Henao, F. Can managers be really objective? Bias in multicriteria decision analysis. Acad. Strateg. Manag. J. 2017, 16, 244–259.
  13. Tversky, A.; Kahneman, D. The Framing of Decisions and the Psychology of Choice. In Environmental Impact Assessment, Technology Assessment, and Risk Analysis; Springer: Berlin/Heidelberg, Germany, 1985; Volume 1, pp. 107–129. ISBN 9781315196350.
  14. Arnott, D. Cognitive biases and decision support systems development: A design science approach. Inf. Syst. J. 2006, 16, 55–78.
  15. Montibeller, G.; von Winterfeldt, D. Biases and debiasing in multi-criteria decision analysis. Proc. Annu. Hawaii Int. Conf. Syst. Sci. 2015, 1218–1226.
  16. George, J.F.; Duffy, K.; Ahuja, M. Countering the anchoring and adjustment bias with decision support systems. Decis. Support Syst. 2000, 29, 195–206.
  17. Ahn, H.; Vazquez Novoa, N. The decoy effect in relative performance evaluation and the debiasing role of DEA. Eur. J. Oper. Res. 2016, 249, 959–967.
  18. Ferretti, V.; Guney, S.; Montibeller, G.; von Winterfeldt, D. Testing best practices to reduce the overconfidence bias in multi-criteria decision analysis. Proc. Annu. Hawaii Int. Conf. Syst. Sci. 2016, 2016, 1547–1555.
  19. Levin, I.P.; Schneider, S.L.; Gaeth, G.J. All Frames Are Not Created Equal: A Typology and Critical Analysis of Framing Effects. Organ. Behav. Hum. Decis. Process. 1998, 76, 149–188.
  20. Kühberger, A. The Influence of Framing on Risky Decisions: A Meta-analysis. Organ. Behav. Hum. Decis. Process. 1998, 75, 23–55.
  21. Steiger, A.; Kühberger, A. A meta-analytic re-appraisal of the framing effect. J. Psychol. 2018, 226, 45–55.
  22. Beratšová, A.; Krchová, K.; Gažová, N.; Jirásek, M. Framing and Bias: A Literature Review of Recent Findings. Cent. Eur. J. Manag. 2018, 3.
  23. Piñon, A.; Gambara, H. A meta-analytic review of framing effect: Risky, attribute and goal framing. Psicothema 2005, 17, 325–331.
  24. Kahneman, D.; Knetsch, J.L.; Thaler, R.H. Anomalies: The Endowment Effect, Loss Aversion, and Status Quo Bias. J. Econ. Perspect. 1991, 5, 193–206.
  25. Tversky, A.; Kahneman, D. Advances in Prospect Theory: Cumulative Representation of Uncertainty. J. Risk Uncertain. 1992, 5, 297–323.
  26. Tversky, A.; Kahneman, D. Loss Aversion in Riskless Choice: A Reference-Dependent Model. Q. J. Econ. 1991, 106, 1039–1061.
  27. Weber, M.; Borcherding, K. Behavioral influences on weight judgments in multiattribute decision making. Eur. J. Oper. Res. 1993, 67, 1–12.
  28. Emrouznejad, A.; Marra, M. The state of the art development of AHP (1979–2017): A literature review with a social network analysis. Int. J. Prod. Res. 2017, 55, 6653–6675.
  29. Turskis, Z.; Zavadskas, E.K.; Peldschus, F. Multi-criteria optimization system for decision making in construction design and management. Eng. Econ. 2009, 1, 7–17.
  30. Saaty, T.L. Decision making—The Analytic Hierarchy and Network Processes (AHP/ANP). J. Syst. Sci. Syst. Eng. 2004, 13, 1–35.
  31. Russo, R.D.F.S.M.; Camanho, R. Criteria in AHP: A systematic review of literature. Procedia Comput. Sci. 2015, 55, 1123–1132.
  32. Pöyhönen, M.; Hämäläinen, R.P. On the convergence of multiattribute weighting methods. Eur. J. Oper. Res. 2001, 129, 569–585.
  33. Saaty, T.L.; Vargas, L.G. Decision Making with the Analytic Network Process: Economic, Political, Social and Technological Applications with Benefits, Opportunities, Costs and Risks; Springer: Pittsburgh, PA, USA, 2006; ISBN 9780387338590.
  34. Kolios, A.; Mytilinou, V.; Lozano-Minguez, E.; Salonitis, K. A comparative study of multiple-criteria decision-making methods under stochastic inputs. Energies 2016, 9, 566.
  35. Tversky, A.; Kahneman, D. The framing of decisions and the psychology of choice. Science 1981, 211, 453–458.
  36. Ossadnik, W.; Schinke, S.; Kaspar, R.H. Group Aggregation Techniques for Analytic Hierarchy Process and Analytic Network Process: A Comparative Analysis. Group Decis. Negot. 2016, 25, 421–457.
  37. Saaty, T.L. How to make a decision: The Analytic Hierarchy Process. Eur. J. Oper. Res. 1990, 48, 9–26.
  38. Saaty, T.L. A scaling method for priorities in hierarchical structures. J. Math. Psychol. 1977, 15, 234–281.
  39. Ishizaka, A.; Lusti, M. How to derive priorities in AHP: A comparative study. Cent. Eur. J. Oper. Res. 2006, 14, 387–400.
  40. Aull-Hyde, R.; Erdogan, S.; Duke, J.M. An experiment on the consistency of aggregated comparison matrices in AHP. Eur. J. Oper. Res. 2006, 171, 290–295.
  41. Saaty, T.L. How to Make a Decision: The Analytic Hierarchy Process. Interfaces 1994, 24, 19–43.
  42. Edwards, W. How to Use Multiattribute Utility Measurement for Social Decisionmaking. IEEE Trans. Syst. Man Cybern. 1977, 7, 326–340.
  43. Keeney, R.L.; Raiffa, H.; Rajala, D.W. Decisions with Multiple Objectives: Preferences and Value Trade-Offs. IEEE Trans. Syst. Man Cybern. 1979, 9, 403.
  44. Bates, D.M.; Maechler, M.; Bolker, B.; Walker, S. Fitting Linear Mixed-Effects Models Using lme4. J. Stat. Softw. 2015, 67, 1–48.
  45. Cho, F. Analytic Hierarchy Process for Survey Data in R: Vignettes for the ahpsurvey Package (ver. 0.4.0). 2019. Available online: https://cran.r-project.org/web/packages/ahpsurvey/vignettes/my-vignette.html (accessed on 30 October 2020).
  46. Salo, A.A.; Hämäläinen, R.P. On the measurement of preferences in the analytic hierarchy process. J. Multi-Criteria Decis. Anal. 1997, 6, 309–319.
  47. Pöyhönen, M.A.; Hämäläinen, R.P.; Salo, A.A. An experiment on the numerical modelling of verbal ratio statements. J. Multi-Criteria Decis. Anal. 1997, 6, 1–10.
  48. Forman, E.; Peniwati, K. Aggregating individual judgments and priorities with the analytic hierarchy process. Eur. J. Oper. Res. 1998, 108, 165–169.
  49. Saaty, T.L. Decision-making with the AHP: Why is the principal eigenvector necessary. Eur. J. Oper. Res. 2003, 145, 85–91.
  50. Almashat, S.; Ayotte, B.; Edelstein, B.; Margrett, J. Framing effect debiasing in medical decision making. Patient Educ. Couns. 2008, 71, 102–107.
  51. Ivlev, I.; Kneppo, P.; Bartak, M. Multicriteria decision analysis: A multifaceted approach to medical equipment management. Technol. Econ. Dev. Econ. 2014, 20, 576–589.
  52. Oddershede, A.M.; Quezada, L.E.; Cordova, F.M.; Carrasco, R.A. Decision support for healthcare ICT network system appraisal. Int. J. Comput. Commun. Control 2012, 7, 924–932.
  53. Lukic, D.; Cep, R.; Vukman, J.; Antic, A.; Djurdjev, M.; Milosevic, M. Multi-Criteria Selection of the Optimal Parameters for High-Speed Machining of Aluminum Alloy Al7075 Thin-Walled Parts. Metals 2020, 10, 1570.
  54. Loke, W.H.; Lau, S.L.L. Effects of framing and mathematical experience on judgments. Bull. Psychon. Soc. 1992, 30, 393–395.
  55. Cornelissen, J.P.; Werner, M.D. Putting Framing in Perspective: A Review of Framing and Frame Analysis across the Management and Organizational Literature. Acad. Manag. Ann. 2014, 8, 181–235.
  56. O’Sullivan, E.; Schofield, S. Cognitive bias in clinical medicine. J. R. Coll. Physicians Edinb. 2018, 48, 225–232.
  57. Ritter, J.R. Behavioral finance. Pac. Basin Financ. J. 2003, 11, 429–437.
  58. Zindel, M.L.; Zindel, T.; Quirino, M.G. Cognitive Bias and their Implications on the Financial Market. Int. J. Eng. Technol. 2014, 14.
  59. Baybutt, P. The validity of engineering judgment and expert opinion in hazard and risk analysis: The influence of cognitive biases. Process Saf. Prog. 2018, 37, 205–210.
  60. Vermillion, S.D.; Malak, R.J.; Smallman, R.; Linsey, J. A Study on Outcome Framing and Risk Attitude in Engineering Decisions under Uncertainty. J. Mech. Des. Trans. ASME 2015, 137, 1–4.
  61. Zamir, E. Law, Psychology, and Morality: The Role of Loss Aversion; Oxford University Press: New York, NY, USA, 2015.
  62. Wu, Z.; Abdul-Nour, G. Comparison of Multi-Criteria Group Decision-Making Methods for Urban Sewer Network Plan Selection. CivilEng 2020, 1, 26–48.
  63. Maghsoodi, A.I.; Afezalkotob, A.; Ari, I.A.; Maghsoodi, S.I.; Hafezalkotob, A. Selection of waste lubricant oil regenerative technology using entropy-weighted risk-based fuzzy axiomatic design approach. Informatica 2018, 29, 41–74.
  64. Peng, X.; Huang, H. Fuzzy Decision Making Method Based on CoCoSo with CRITIC for Financial Risk Evaluation. Technol. Econ. Dev. Econ. 2020, 26, 695–724.
  65. Just, M.; Luczak, A. Assessment of conditional dependence structures in commodity futures markets using copula-GARCH models and fuzzy clustering methods. Sustainability 2020, 12, 2571.
  66. Kokkinos, K.; Karayannis, V. Supportiveness of low-carbon energy technology policy using fuzzy multicriteria decision-making methodologies. Mathematics 2020, 8, 1178.
  67. Zavadskas, E.K.; Turskis, Z. Multiple criteria decision making (MCDM) methods in economics: An overview. Technol. Econ. Dev. Econ. 2011, 17, 397–427.
  68. Cheng, F.F.; Wu, C.S. Debiasing the framing effect: The effect of warning and involvement. Decis. Support Syst. 2010, 49, 328–334.
  69. Tian, X.; Xu, Z.; Gu, J. An extended TODIM based on cumulative prospect theory and its application in venture capital. Informatica 2019, 30, 413–429.
  70. Wang, T.C.; Chen, Y.H. A new method on decision-making using fuzzy linguistic assessment variables and fuzzy preference relations. In Proceedings of the WMSCI 2005—The 9th World Multi-Conference on Systemics, Cybernetics and Informatics, Orlando, FL, USA, 10–13 July 2005; Volume 1, pp. 360–363.
  71. Chandrawati, T.B.; Ratna, A.A.P.; Sari, R.F. Path selection using fuzzy weight aggregated sum product assessment. Int. J. Comput. Commun. Control 2020, 15, 1–19.
  72. Chen, Y.-H.; Wang, T.-C.; Wu, C.-Y. Multi-criteria decision making with fuzzy linguistic preference relations. Appl. Math. Model. 2011, 35, 1322–1330.
  73. Kabir, G.; Ahsan Akhtar Hasin, M. Comparative Analysis of AHP and Fuzzy AHP Models for Multicriteria Inventory Classification. Int. J. Fuzzy Log. Syst. 2011, 1, 1–16.
  74. Mulubrhan, F.; Mokhtar, A.A.; Muhammad, M. Comparative analysis between fuzzy and traditional analytical hierarchy process. MATEC Web Conf. 2014, 13, 01006.
  75. Reig-Mullor, J.; Pla-Santamaria, D.; Garcia-Bernabeu, A. Extended fuzzy analytic hierarchy process (E-FAHP): A general approach. Mathematics 2020, 8, 2014.
  76. Phochanikorn, P.; Tan, C. An Integrated Multi-Criteria Decision-Making Model Based on Prospect Theory for Green Supplier Selection under Uncertain Environment: A Case Study of the Thailand Palm Oil Products Industry. Sustainability 2019, 11, 1872.
  77. Deniz, N. Cognitive biases in MCDM methods: An embedded filter proposal through sustainable supplier selection problem. J. Enterp. Inf. Manag. 2020, 33, 947–963.
  78. Cheng, M.Y.; Yeh, S.H.; Chang, W.C. Multi-criteria decision making of contractor selection in mass rapid transit station development using Bayesian fuzzy prospect model. Sustainability 2020, 12, 4606.
  79. Li, N.; Zhang, H.; Zhang, X.; Ma, X.; Guo, S. How to select the optimal electrochemical energy storage planning program? A hybrid MCDM method. Energies 2020, 13, 931.
  80. Zhao, M.; Wei, G.; Wei, C.; Wu, J.; Guo, Y. Extended TODIM Based on Cumulative Prospect Theory for Picture Fuzzy Multiple Attribute Group Decision Making. Informatica 2020, 1–22.
Figure 1. A schematic representation of the hypothesized influence of the framing and loss aversion biases on the different stages of MCDM.
Figure 2. The S-shaped value function proposed by Prospect Theory [13].
Table 1. The fundamental Saaty scale.

Intensity of Importance on an Absolute Scale | Definition | Explanation
1 | Equal importance | Two activities contribute equally to the objective
3 | Moderate importance | Experience and judgment slightly favor one activity over another
5 | Essential or strong importance | Experience and judgment strongly favor one activity over another
7 | Very strong importance | An activity is very strongly favored and its dominance demonstrated in practice
9 | Extreme importance | The evidence favoring one activity over another is of the highest possible order of affirmation
Reciprocals | If activity i has one of the above numbers assigned to it when compared with activity j, then j has the reciprocal value when compared with i |
2, 4, 6, 8 | Intermediate values can be used when compromise is needed |
Table 2. Values of the Random Index (RI) for different n values [41].

n | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10
RI | 0 | 0 | 0.52 | 0.89 | 1.11 | 1.25 | 1.35 | 1.40 | 1.40 | 1.49
Table 3. The distribution of criteria types across the test and control conditions. Criteria in bold underwent experimental framing.

Condition | Group A | Group B
test | Criterion 1, Criterion 3 constant | Criterion 1+, Criterion 3 constant
test | Criterion 2, Criterion 4 constant | Criterion 2+, Criterion 4 constant
test | Criterion 2, Criterion 3 constant | Criterion 2+, Criterion 3 constant
test | Criterion 1, Criterion 4 constant | Criterion 1+, Criterion 4 constant
control | Criterion 5 constant, Criterion 3 constant | Criterion 5 constant, Criterion 3 constant
control | Criterion 6 constant, Criterion 4 constant | Criterion 6 constant, Criterion 4 constant
control | Criterion 6 constant, Criterion 3 constant | Criterion 6 constant, Criterion 3 constant
control | Criterion 5 constant, Criterion 4 constant | Criterion 5 constant, Criterion 4 constant
Table 4. Aggregated priority weights and criteria ranking for each participant group.

Criterion | Weight (Group A) | Rank (Group A) | Weight (Group B) | Rank (Group B)
C1: Doctors | 0.192 | 2 | 0.172 | 3
C2: Hospital beds | 0.262 | 1 | 0.174 | 2
C3: Local outbreaks | 0.144 | 3 | 0.180 | 1
C4: Imported cases | 0.094 | 5 | 0.140 | 5
C5: PCR tests | 0.139 | 4 | 0.157 | 4
C6: Disinfectant & protective equipment | 0.041 | 6 | 0.050 | 6
Table 5. Aggregated pairwise comparison matrix for group A.

A Group | C1 | C2 | C3 | C4 | C5 | C6
C1 | 1.00 | 0.96 | 1.03 | 2.07 | 1.49 | 4.33
C2 | 1.04 | 1.00 | 2.37 | 3.09 | 1.89 | 5.49
C3 | 0.97 | 0.42 | 1.00 | 1.60 | 0.96 | 3.27
C4 | 0.48 | 0.32 | 0.62 | 1.00 | 0.67 | 2.61
C5 | 0.67 | 0.53 | 1.04 | 1.49 | 1.00 | 3.59
C6 | 0.23 | 0.18 | 0.31 | 0.38 | 0.28 | 1.00
CR = 0.008
Table 6. Aggregated pairwise comparison matrix for group B.

B Group | C1 | C2 | C3 | C4 | C5 | C6
C1 | 1.00 | 1.01 | 1.01 | 1.14 | 1.07 | 3.06
C2 | 0.99 | 1.00 | 0.97 | 1.31 | 1.08 | 3.21
C3 | 0.99 | 1.03 | 1.00 | 1.45 | 0.98 | 3.55
C4 | 0.87 | 0.76 | 0.69 | 1.00 | 0.92 | 3.14
C5 | 0.94 | 0.93 | 1.02 | 1.09 | 1.00 | 3.00
C6 | 0.33 | 0.31 | 0.28 | 0.32 | 0.33 | 1.00
CR = 0.002
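The aggregated matrices in Tables 5 and 6 are the input for the criteria weights in Table 7: the priorities are the normalized principal eigenvector of each matrix, and the reported CR values follow Saaty's consistency check with the RI constants from Table 2. The sketch below is a generic implementation of this standard computation, shown for illustration; it is not the exact analysis pipeline used in the study.

```python
import numpy as np

# Random Index values for n = 1..10 (Table 2).
RI = {1: 0.0, 2: 0.0, 3: 0.52, 4: 0.89, 5: 1.11,
      6: 1.25, 7: 1.35, 8: 1.40, 9: 1.40, 10: 1.49}

def ahp_weights_and_cr(A):
    """Principal-eigenvector priorities and Saaty's consistency ratio
    CR = CI / RI, where CI = (lambda_max - n) / (n - 1)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)                  # principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()
    ci = (eigvals[k].real - n) / (n - 1)
    return w, (ci / RI[n] if RI[n] else 0.0)

# Aggregated matrix of group B (Table 6):
B = [[1.00, 1.01, 1.01, 1.14, 1.07, 3.06],
     [0.99, 1.00, 0.97, 1.31, 1.08, 3.21],
     [0.99, 1.03, 1.00, 1.45, 0.98, 3.55],
     [0.87, 0.76, 0.69, 1.00, 0.92, 3.14],
     [0.94, 0.93, 1.02, 1.09, 1.00, 3.00],
     [0.33, 0.31, 0.28, 0.32, 0.33, 1.00]]
w, cr = ahp_weights_and_cr(B)
print(np.round(w, 3), round(cr, 3))   # weights close to Table 7 (B group), CR ~ 0.002
```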
Table 7. Aggregated individual judgements and criteria ranking for each participant group.

Criterion | Weight (Group A) | Rank (Group A) | Weight (Group B) | Rank (Group B)
C1: Doctors | 0.223 | 2 | 0.193 | 3
C2: Hospital beds | 0.300 | 1 | 0.197 | 2
C3: Local outbreaks | 0.163 | 3 | 0.203 | 1
C4: Imported cases | 0.107 | 5 | 0.162 | 5
C5: PCR tests | 0.160 | 4 | 0.185 | 4
C6: Disinfectant & protective equipment | 0.048 | 6 | 0.059 | 6
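Tables 4 and 7 correspond to the two standard group aggregation schemes [48]: AIP aggregates the individual priority vectors (Tables A1 and A2), whereas AIJ first aggregates the individual pairwise comparison matrices (yielding Tables 5 and 6) and derives the weights from the result. Below is a minimal sketch of both operators, assuming the geometric mean in each case, which is the usual choice because it preserves the reciprocity of pairwise matrices; the exact operators used in the study are defined in its methodology section.

```python
import numpy as np

def aip(priority_vectors):
    """Aggregation of Individual Priorities: geometric mean of each
    criterion's weight across decision makers, renormalized to sum to 1."""
    W = np.asarray(priority_vectors, dtype=float)
    g = np.exp(np.log(W).mean(axis=0))
    return g / g.sum()

def aij(pairwise_matrices):
    """Aggregation of Individual Judgments: element-wise geometric mean
    of the individual pairwise matrices; AHP is then run on the result."""
    M = np.asarray(pairwise_matrices, dtype=float)
    return np.exp(np.log(M).mean(axis=0))

# Example: AIP over the first three group A vectors from Table A1.
group_a_sample = [[0.13, 0.41, 0.07, 0.06, 0.27, 0.06],
                  [0.43, 0.11, 0.04, 0.15, 0.26, 0.02],
                  [0.22, 0.14, 0.06, 0.22, 0.22, 0.14]]
print(aip(group_a_sample))
```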
Table 8. Results from TOPSIS for both participant groups, when AIP aggregated weights were used.

Alternative | Relative Closeness (Group A) | Rank (Group A) | Relative Closeness (Group B) | Rank (Group B)
Alternative 1 | 0.3769 | 3 | 0.4054 | 3
Alternative 2 | 0.4930 | 2 | 0.5242 | 1
Alternative 3 | 0.5618 | 1 | 0.4582 | 2
Table 9. Results from TOPSIS for both participant groups, when AIJ aggregated weights were used.

Alternative | Relative Closeness (Group A) | Rank (Group A) | Relative Closeness (Group B) | Rank (Group B)
Alternative 1 | 0.3804 | 3 | 0.4046 | 3
Alternative 2 | 0.4896 | 2 | 0.5245 | 1
Alternative 3 | 0.5607 | 1 | 0.4604 | 2
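The relative closeness values in Tables 8 and 9 follow the standard TOPSIS steps: vector normalization of the decision matrix, weighting, distances to the ideal and anti-ideal points, and the closeness ratio. The sketch below implements these steps generically; since the case study's decision matrix is not reproduced in this appendix, the usage example relies on hypothetical numbers.

```python
import numpy as np

def topsis(X, w, benefit):
    """Relative closeness to the ideal solution for a decision matrix X
    (alternatives x criteria), weights w, and a boolean benefit mask."""
    X = np.asarray(X, dtype=float)
    R = X / np.linalg.norm(X, axis=0)        # vector normalization
    V = R * np.asarray(w, dtype=float)       # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)           # higher closeness = better

# Hypothetical 3 alternatives x 2 benefit criteria:
X = [[7, 9], [8, 7], [9, 6]]
closeness = topsis(X, w=[0.6, 0.4], benefit=np.array([True, True]))
print(np.argsort(closeness)[::-1] + 1)       # alternatives, best first
```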
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
