1 Introduction
Data envelopment analysis (DEA), which was introduced by Charnes et al. (1985), is a methodology that was first used in operational research to compare the productive efficiency of different independent units. Since then, many extensions of DEA have been developed, such as those proposed by Banker et al. (1984), Cooper et al. (1999), and Petersen (1990). DEA has greatly enriched microeconomic theory (Liu 2009) and its application in production function technology. At the same time, the method has notable advantages that should not be underestimated, such as avoiding subjective factors, simplifying algorithms, and reducing errors.
Decision-making units (DMUs) (Yang et al. 2011) are grouped into two sets, namely efficient and inefficient DMUs. In some cases, a complete ranking of the DMUs is required; hence, studies on ranking methods have become popular. Andersen and Petersen (1993) developed the super-efficiency approach, which establishes a ranking value for an efficient DMU by excluding it from the linear constraints of the evaluation. In the benchmarking ranking method (Wen and Li 2009), a DMU is highly ranked if it is chosen as an effective target for many other DMUs.
The original DEA models assume that inputs and outputs are measured by exact values. In many situations, however, inputs and outputs are volatile and complex and thus are difficult to measure accurately. Consequently, some researchers have applied probability theory to establish stochastic DEA models, including Sengupta (1982), Banker (1993), Olesen and Petersen (1995), and Cooper et al. (1996). In addition, numerous studies have addressed fuzzy DEA, in which some inputs or outputs are fuzzy numbers. Kao and Liu (2000) and Liu (2008) developed a method for determining the membership functions of fuzzy efficiency scores. Other researchers, including Entani et al. (2002), Guo and Tanaka (2001), Rashidi and Cullinane (2019), and Lertworasirikul et al. (2003), further explored possibility measures.
Liu (2007) presented uncertainty theory to deal mathematically with human belief degrees, and he refined the theory in Liu (2010). Obtaining a probability distribution requires numerous samples. Sometimes, however, no sample is available for economic or technological reasons. In such cases, we have to invite domain experts to evaluate the belief degree that each possible event will happen. Given that humans tend to overweight unlikely events (Kahneman and Tversky 1979), the belief degree has a considerably larger variance than the frequency and thus cannot be regarded as the probability distribution of a random variable. Instead, the belief degree can be treated as the uncertainty distribution of an uncertain variable and managed using uncertainty theory. Several researchers have applied uncertainty theory and proposed different uncertain DEA models. Nejad and Ghaffari-Hadigheh (2018) proposed a model that finds the highest belief degree at which the evaluated DMU is efficient. However, the feasibility of this model cannot be proved, and the model is infeasible for some instances at high belief degrees. Jiang et al. (2018) introduced an uncertain DEA model for scale efficiency evaluation using imprecise inputs and outputs; additionally, they provided a sensitivity and stability analysis of the uncertain DEA model for scale efficiency. In another paper, Jiang et al. (2018) used uncertain variables and an uncertain DEA model to measure the sustainability efficiency of Chinese seaports in 2018; moreover, they quantified the improvement required in each output. Lio and Liu (2018) introduced a new uncertain DEA model in which the inputs and outputs are considered uncertain variables. However, this model is very sensitive to data changes, which hinders its practical application. In this study, the inputs and outputs of DMUs are treated as uncertain variables, and a new ranking method based on uncertainty theory is developed.
In previous works, the optimistic criterion and the pessimistic criterion, which are both extreme cases, have been widely used in uncertain environments. To balance these two extremes, the Hurwicz criterion was proposed by Hurwicz (1951a, b). Since then, this criterion has been applied to many problems, such as the facility location–allocation problem. Inspired by this work, the current study presents a new ranking method for uncertain DEA using the Hurwicz criterion. In this way, the results are neither too optimistic nor too pessimistic.
This paper is organized as follows: In Sect. 2, we introduce the basic concepts and some results of uncertainty theory. In Sect. 3, we present a new uncertain ranking method using the Hurwicz criterion. Finally, a numerical example is provided in Sect. 4 to illustrate the proposed uncertain ranking method.
3 Uncertain ranking method
The original DEA models assume that inputs and outputs are measured by exact values. In many cases, however, inputs and outputs cannot be given exactly and thus can be considered uncertain variables. Wen (2014) provided an uncertain DEA model, wherein the symbols and notations are given as follows:
\(\hbox {DMU}_i\): the \(i\hbox {th}\) DMU, \(i=1,2,\ldots ,n\); \(\hbox {DMU}_0\): the target DMU;
\({\widetilde{\varvec{x}}}_k=({\widetilde{x}}_{k1},{\widetilde{x}}_{k2},\ldots , {\widetilde{x}}_{kp})\): the uncertain inputs vector of \(\hbox {DMU}_k\), \(k=1,2,\ldots ,n\);
\(\Phi _{ki}(x)\): the uncertainty distribution of \({\widetilde{x}}_{ki}\), \(k=1,2,\ldots ,n\), \(i=1,2,\ldots ,p\);
\({\varvec{\Phi }}_k(x)=(\Phi _{k1}(x),\Phi _{k2}(x),\ldots , \Phi _{kp}(x))\): the uncertainty distribution vector of \({\widetilde{\varvec{x}}}_k=({\widetilde{x}}_{k1},{\widetilde{x}}_{k2},\ldots , {\widetilde{x}}_{kp})\), \(k=1,2,\ldots ,n\);
\({\widetilde{\varvec{x}}}_0=({\widetilde{x}}_{01},{\widetilde{x}}_{02},\ldots ,{\widetilde{x}}_{0p})\): the uncertain inputs vector of the target \(\hbox {DMU}_0\);
\(\Phi _{0i}(x)\): the uncertainty distribution of \({\widetilde{x}}_{0i}\), \(i=1,2,\ldots ,p\);
\({\widetilde{\varvec{y}}}_k=({\widetilde{y}}_{k1},{\widetilde{y}}_{k2},\ldots , {\widetilde{y}}_{kq})\): the uncertain outputs vector of \(\hbox {DMU}_k\), \(k=1,2,\ldots ,n\);
\(\Psi _{kj}(x)\): the uncertainty distribution of \({\widetilde{y}}_{kj}\), \(k=1,2,\ldots ,n\), \(j=1,2,\ldots ,q\);
\({\varvec{\Psi }}_k(x)=(\Psi _{k1}(x),\Psi _{k2}(x),\ldots , \Psi _{kq}(x))\): the uncertainty distribution vector of \({\widetilde{\varvec{y}}}_k=({\widetilde{y}}_{k1},{\widetilde{y}}_{k2},\ldots , {\widetilde{y}}_{kq})\), \(k=1,2,\ldots ,n\);
\({\widetilde{\varvec{y}}}_0=({\widetilde{y}}_{01},{\widetilde{y}}_{02},\ldots ,{\widetilde{y}}_{0q})\): the uncertain outputs vector of the target \(\hbox {DMU}_0\);
\(\Psi _{0j}(x)\): the uncertainty distribution of \({\widetilde{y}}_{0j}\), \(j=1,2,\ldots ,q\).
The model is given by Wen (2014) as follows:
$$\begin{aligned} \left\{ \begin{array}{l} \theta _1=\max \ \sum \limits _{i=1}^{p}{s_{i}^{-}}+\sum \limits _{j=1}^{q}{s_{j}^{+}}, \\ \text{ which } \text{ is } \text{ subject } \text{ to } \text{: }\\ \quad \quad \quad \quad M\left\{ \sum \limits _{k=1}^{n}{{\widetilde{x}}_{ki}\lambda _{k}}\le {\widetilde{x}}_{0i}-s_{i}^{-}\right\} \ge \alpha _1,\quad i=1,2,\ldots , p;\\ \quad \quad \quad \quad M\left\{ \sum \limits _{k=1}^{n}{{\widetilde{y}}_{kj}\lambda _{k}} \ge {\widetilde{y}}_{0j}+s_{j}^{+}\right\} \ge \alpha _1, \quad j=1,2,\ldots ,q;\\ \quad \quad \quad \quad \sum \limits _{k=1}^{n}{\lambda _{k}}=1; \\ \quad \quad \quad \quad \lambda _k\ge 0, \quad k=1,2,\ldots ,n;\\ \quad \quad \quad \quad s_i^{-}\ge 0,\quad i=1,2,\ldots , p;\\ \quad \quad \quad \quad s_j^{+}\ge 0, \quad j=1,2,\ldots ,q, \end{array}\right. \end{aligned}$$
(11)
which considers the total distance to an efficient frontier. The higher the optimal objective value, the lower the efficiency ranking of \(\hbox {DMU}_0\). In this model, the DMUs are compared with the best performances; hence, the method can be regarded as optimistic. By contrast, Jahanshahloo and Afzalinejad (2006) presented a model that compares DMUs with the worst performances; hence, that method can be regarded as pessimistic. Similarly, the pessimistic model in uncertain environments, which considers the total distance to an inefficient frontier, can be given as follows:
$$\begin{aligned} \left\{ \begin{array}{l} \theta _2=\max \ \sum \limits _{i=1}^{p}{s_{i}^{-}}+\sum \limits _{j=1}^{q}{s_{j}^{+}}, \\ \text{ which } \text{ is } \text{ subject } \text{ to } \text{: }\\ \quad \quad \quad \quad M\left\{ \sum \limits _{k=1}^{n}{{\widetilde{x}}_{ki}\lambda _{k}}\ge {\widetilde{x}}_{0i}+s_{i}^{-}\right\} \ge \alpha _2,\quad i=1,2,\ldots , p;\\ \quad \quad \quad \quad M\left\{ \sum \limits _{k=1}^{n}{{\widetilde{y}}_{kj}\lambda _{k}} \le {\widetilde{y}}_{0j}-s_{j}^{+}\right\} \ge \alpha _2, \quad j=1,2,\ldots ,q;\\ \quad \quad \quad \quad \sum \limits _{k=1}^{n}{\lambda _{k}}=1; \\ \quad \quad \quad \quad \lambda _k\ge 0, \quad k=1,2,\ldots ,n;\\ \quad \quad \quad \quad s_i^{-}\ge 0,\quad i=1,2,\ldots , p;\\ \quad \quad \quad \quad s_j^{+}\ge 0, \quad j=1,2,\ldots ,q. \end{array}\right. \end{aligned}$$
(12)
These definitions are closely aligned with the deterministic optimistic and pessimistic models. However, they also differ because the uncertain measure \(M\) is involved. For example, depending on the choice of \(\alpha \), there remains a risk that \(\hbox {DMU}_0\) will not actually be efficient or inefficient even when the conditions of Definition 3 or 4 are satisfied.
Given that \(j=0\) corresponds to one of the \(\hbox {DMU}_j\), a solution with \(\lambda _0=1\), \(\lambda _j=0 \ (j\ne 0)\), and all slacks equal to zero always satisfies the constraints. Thus, the uncertain DEA models (11) and (12) always have a feasible solution with \(s_i^{-}=s_j^{+}=0\) for all \(i,\ j\), and hence their optimal values satisfy \(\theta _1^*\ge 0\) and \(\theta _2^*\ge 0\).
The aforementioned two models are both extreme cases: the first is too optimistic and the second is too pessimistic. Thus, we employ the Hurwicz criterion, which was proposed by Hurwicz (1951a). This criterion incorporates a measure for both extremes by assigning a percentage weight \(\beta \) to \(\theta _1^*\) and \(1-\beta \) to \(-\theta _2^*\), with \(0\le \beta \le 1\):
$$\begin{aligned} \theta ^*=\beta \theta _1^*+(1-\beta )(-\theta _2^*) \end{aligned}$$
(13)
which can be rewritten as follows:
$$\begin{aligned} \theta ^*=\beta \theta _1^*-(1-\beta )\theta _2^*. \end{aligned}$$
(14)
Ranking criterion: The greater the value of \(\theta ^*\), the lower the efficiency ranking of \(\hbox {DMU}_0\).
In the Hurwicz criterion, the parameter \(\beta \in [0,1]\), which reflects the optimism degree of the decision maker, must be chosen by the decision maker. In general, determining an appropriate \(\beta \) is difficult because this value varies from person to person. By varying the parameter \(\beta \), the Hurwicz criterion is transformed into various models. For example, when \(\beta =1\), the criterion reduces to the optimistic model (11); meanwhile, when \(\beta =0\), the criterion is transformed into the pessimistic model (12). This indicates that the Hurwicz criterion is fairly flexible.
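As a quick numerical illustration of ranking criterion (14), the following sketch combines hypothetical optimal values \(\theta _1^*\) and \(\theta _2^*\) for four DMUs; the labels A–D, all numbers, and the choice \(\beta =0.6\) are invented purely for illustration and are not taken from any model run.

```python
beta = 0.6                      # optimism degree chosen by the decision maker

# Hypothetical optimal values: theta1* (distance to the efficient frontier)
# and theta2* (distance to the inefficient frontier) for four DMUs.
theta1 = {"A": 0.0, "B": 1.2, "C": 0.4, "D": 2.5}
theta2 = {"A": 3.1, "B": 0.8, "C": 2.0, "D": 0.0}

# Hurwicz ranking value (14): theta* = beta * theta1* - (1 - beta) * theta2*
theta = {d: beta * theta1[d] - (1 - beta) * theta2[d] for d in theta1}

# The smaller theta*, the more efficient the DMU, so sort in ascending order.
ranking = sorted(theta, key=theta.get)
print(ranking)                  # -> ['A', 'C', 'B', 'D']
```

Here DMU A, which is on the efficient frontier (\(\theta _1^*=0\)) and far from the inefficient one, ranks first for any \(\beta \), while B and C may swap positions as \(\beta \) varies.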
In some cases, \(\theta _1^*=0\) and \(\theta _2^*=0\), and thus the ranking value \(\theta ^*=0\), which indicates that \(\hbox {DMU}_0\) is both efficient and inefficient. This phenomenon occurs when \(\hbox {DMU}_0\) is the best in some inputs or outputs and the worst in others. For example, if \(\hbox {DMU}_0\) is the only DMU with the largest value of input 1 and the smallest value of input 2, then \(\hbox {DMU}_0\) is both efficient and inefficient.
In Wen (2014), the uncertain DEA model (11) is converted into the following crisp model:
$$\begin{aligned} \left\{ \begin{array}{l} \theta _1^*=\max \quad \sum \limits _{i=1}^{p}{s_{i}^{-}}+\sum \limits _{j=1}^{q}{s_{j}^{+}}, \\ \hbox {which is subject to :}\\ \quad \quad \quad \quad \sum \limits _{k=1,k\ne 0}^{n}\lambda _{k}\Phi _{ki}^{-1}(\alpha _1)+\lambda _{0}\Phi _{0i}^{-1}(1-\alpha _1)\\ \quad \quad \quad \quad \quad \le \Phi _{0i}^{-1}(1-\alpha _1)-s_{i}^{-}, \qquad i=1,2,\ldots , p; \\ \quad \quad \quad \quad \sum \limits _{k=1,k\ne 0}^{n}\lambda _{k}\Psi _{kj}^{-1}(1-\alpha _1)+\lambda _{0}\Psi _{0j}^{-1}(\alpha _1)\\ \quad \quad \quad \quad \quad \ge \Psi _{0j}^{-1}(\alpha _1)+s_{j}^{+}, \ \quad \qquad j=1,2,\ldots , q; \\ \quad \quad \quad \quad \sum \limits _{k=1}^{n}{\lambda _{k}}=1, \\ \quad \quad \quad \quad \lambda _k\ge 0, \quad k=1,2,\ldots ,n;\\ \quad \quad \quad \quad s_i^{-}\ge 0, \quad i=1,2,\ldots , p;\\ \quad \quad \quad \quad s_j^{+}\ge 0,\quad j=1,2,\ldots ,q, \end{array}\right. \end{aligned}$$
(15)
which is a linear programming model. Thus, this model can be easily solved by many traditional methods.
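As a rough sketch of how the crisp model (15) can be solved as a linear program, the code below assumes every input and output is a linear uncertain variable \(\mathcal {L}(a,b)\), whose inverse uncertainty distribution is \(\Phi ^{-1}(\alpha )=(1-\alpha )a+\alpha b\). The helper names (`inv`, `theta1`), the data, and the solver choice (`scipy.optimize.linprog`) are illustrative assumptions, not part of Wen (2014).

```python
import numpy as np
from scipy.optimize import linprog

def inv(ab, alpha):
    """Inverse distribution of a linear uncertain variable L(a, b)."""
    a, b = ab
    return (1 - alpha) * a + alpha * b

def theta1(inputs, outputs, t, alpha):
    """Optimal value of crisp model (15) for target DMU index t."""
    n, p, q = len(inputs), len(inputs[0]), len(outputs[0])
    nvar = n + p + q                      # lambda_1..n, s^-_1..p, s^+_1..q
    c = np.zeros(nvar); c[n:] = -1.0      # maximize sum of slacks
    A_ub, b_ub = [], []
    for i in range(p):                    # input constraints of (15)
        row = np.zeros(nvar)
        for k in range(n):
            row[k] = inv(inputs[k][i], 1 - alpha if k == t else alpha)
        row[n + i] = 1.0                  # + s_i^-
        A_ub.append(row); b_ub.append(inv(inputs[t][i], 1 - alpha))
    for j in range(q):                    # output constraints, sign flipped
        row = np.zeros(nvar)
        for k in range(n):
            row[k] = -inv(outputs[k][j], alpha if k == t else 1 - alpha)
        row[n + p + j] = 1.0              # + s_j^+
        A_ub.append(row); b_ub.append(-inv(outputs[t][j], alpha))
    A_eq = [np.concatenate([np.ones(n), np.zeros(p + q)])]   # sum lambda = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * nvar)
    return -res.fun

# Hypothetical data: 3 DMUs, one input and one output, each given as L(a, b)
inputs  = [[(2, 4)], [(3, 5)], [(6, 8)]]
outputs = [[(5, 7)], [(4, 6)], [(3, 5)]]
print([round(theta1(inputs, outputs, t, alpha=0.9), 4) for t in range(3)])
```

A DMU with optimal value zero lies on the efficient frontier at the chosen belief degree; a positive value measures its total slack distance from that frontier.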
Table 1 Evaluation criteria of investment in human resources \(X_1\)
Level 1 | More than 10 | 10
Level 2 | 5–9 | 5
Level 3 | 1–4 | 2
Similarly, the pessimistic model (12) can be transformed into the following linear programming model:
$$\begin{aligned} \left\{ \begin{array}{l} \theta _2^*=\max \quad \sum \limits _{i=1}^{p}{s_{i}^{-}}+\sum \limits _{j=1}^{q}{s_{j}^{+}}, \\ \hbox {which is subject to :}\\ \quad \quad \quad \quad \sum \limits _{k=1,k\ne 0}^{n}\lambda _{k}\Phi _{ki}^{-1}(1-\alpha _2)+\lambda _{0}\Phi _{0i}^{-1}(\alpha _2)\\ \quad \quad \quad \quad \quad \ge \Phi _{0i}^{-1}(\alpha _2)+s_{i}^{-}, \qquad \qquad i=1,2,\ldots , p; \\ \quad \quad \quad \quad \sum \limits _{k=1,k\ne 0}^{n}\lambda _{k}\Psi _{kj}^{-1}(\alpha _2)+\lambda _{0}\Psi _{0j}^{-1}(1-\alpha _2)\\ \quad \quad \quad \quad \quad \le \Psi _{0j}^{-1}(1-\alpha _2)-s_{j}^{+}, \qquad j=1,2,\ldots , q; \\ \quad \quad \quad \quad \sum \limits _{k=1}^{n}{\lambda _{k}}=1; \\ \quad \quad \quad \quad \lambda _k\ge 0, \quad k=1,2,\ldots ,n; \\ \quad \quad \quad \quad s_i^{-}\ge 0, \quad i=1,2,\ldots , p;\\ \quad \quad \quad \quad s_j^{+}\ge 0,\quad j=1,2,\ldots ,q. \end{array}\right. \end{aligned}$$
(16)
The preceding two models are both crisp models. Thus, they can be easily solved using many traditional methods.
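The pessimistic crisp model (16) can be sketched in the same way. As before, the code assumes linear uncertain variables \(\mathcal {L}(a,b)\) with inverse distribution \(\Phi ^{-1}(\alpha )=(1-\alpha )a+\alpha b\); the data and helper names are hypothetical, and the resulting value \(\theta _2^*\) would then be combined with \(\theta _1^*\) from model (15) through the Hurwicz formula (14).

```python
import numpy as np
from scipy.optimize import linprog

def inv(ab, alpha):
    """Inverse distribution of a linear uncertain variable L(a, b)."""
    a, b = ab
    return (1 - alpha) * a + alpha * b

def theta2(inputs, outputs, t, alpha):
    """Optimal value of the pessimistic crisp model (16) for target DMU t."""
    n, p, q = len(inputs), len(inputs[0]), len(outputs[0])
    nvar = n + p + q                      # lambda_1..n, s^-_1..p, s^+_1..q
    c = np.zeros(nvar); c[n:] = -1.0      # maximize sum of slacks
    A_ub, b_ub = [], []
    for i in range(p):   # inputs: combination must exceed target by s_i^-
        row = np.zeros(nvar)
        for k in range(n):
            row[k] = -inv(inputs[k][i], alpha if k == t else 1 - alpha)
        row[n + i] = 1.0
        A_ub.append(row); b_ub.append(-inv(inputs[t][i], alpha))
    for j in range(q):   # outputs: combination must fall below target by s_j^+
        row = np.zeros(nvar)
        for k in range(n):
            row[k] = inv(outputs[k][j], 1 - alpha if k == t else alpha)
        row[n + p + j] = 1.0
        A_ub.append(row); b_ub.append(inv(outputs[t][j], 1 - alpha))
    A_eq = [np.concatenate([np.ones(n), np.zeros(p + q)])]   # sum lambda = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * nvar)
    return -res.fun

# Hypothetical data: 3 DMUs, one input and one output, each given as L(a, b)
inputs  = [[(2, 4)], [(3, 5)], [(6, 8)]]
outputs = [[(5, 7)], [(4, 6)], [(3, 5)]]
print([round(theta2(inputs, outputs, t, alpha=0.9), 4) for t in range(3)])
```

Here a large \(\theta _2^*\) is favorable: it means the DMU is far from the inefficient frontier, which lowers \(\theta ^*\) in (14) and improves the ranking.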