Decision Aiding
Singular value decomposition in AHP
Introduction
The analytic hierarchy process (AHP) [23] has been accepted as a leading multiattribute decision model by both practitioners and academics. AHP can solve decision problems in various fields by prioritizing alternatives. The heart of the most familiar version of the AHP is Saaty's eigenvector method (EM), which approximates an (n×n) positive reciprocal matrix A=(aij), aji=1/aij, i,j=1,…,n, by a vector w=(w1,…,wn)T ∈ R+n, where R+n is the positive orthant of the n-dimensional Euclidean space Rn, such that the matrix of ratios (wi/wj), i,j=1,…,n, is the best approximation to A in some sense. It is emphasized that the EM results in a priority vector and an inconsistency number λmax.
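As a concrete illustration of the EM, the sketch below computes the principal eigenvector of a positive reciprocal matrix with NumPy and normalizes it into a priority vector; the judgment matrix is hypothetical, chosen only for the example.

```python
import numpy as np

# Hypothetical 3x3 positive reciprocal judgment matrix (a_ji = 1/a_ij).
A = np.array([[1.0,  3.0, 5.0],
              [1/3., 1.0, 2.0],
              [1/5., 1/2., 1.0]])

# Eigenvector method (EM): the priority vector is the principal right
# eigenvector of A, normalized so its components sum to one.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)          # index of the maximal eigenvalue
lam_max = eigvals.real[k]            # Saaty's inconsistency number
w = np.abs(eigvecs[:, k].real)
w = w / w.sum()                      # normalized priority vector

print(lam_max)                       # lam_max >= n, equality iff A is consistent
print(w)
```

For a perfectly consistent matrix, lam_max equals n exactly; the excess lam_max − n is the raw material of Saaty's consistency measures.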
However, the EM has been criticised from both the prioritization and the consistency points of view, and several new techniques have been developed. There are two different approaches in AHP: deterministic and statistical (stochastic). In the statistical approach, it is assumed that the preference judgments are random variables associated with an unknown probability distribution [3].
In the deterministic approach, the underlying assumption is that one can observe the preferences with certainty, and the only uncertainty lies in the elicitation of these preferences, which gives rise to inconsistency. The EM belongs to the deterministic approach. Other approaches include the least-squares method (LSM) and the weighted least-squares method (WLSM) proposed by Chu et al. [6]; the logarithmic least-squares method (LLSM), introduced and considered as a model for a regression estimation of priorities by De Jong [8] and Crawford and Williams [7]; and the chi-square method (CSM) [15] — the distance-minimizing methods defined in Table 1 — as well as the logarithmic goal programming method (LGPM) [5] and the fuzzy programming method (FPM) [20]. The LSM was studied by Jensen [15], [16], the WLSM by Blankmeyer [4] and the LLSM by Barzilai et al. [1]. In the case of consistent matrices, all these approaches yield the same solution. From a theoretical point of view, the LSM solution seems to be the most desirable, but the corresponding nonlinear optimization problem has no tractable special form and generally has multiple solutions; it is therefore difficult to solve numerically. To eliminate this drawback of the LSM, the other methods introduce various relaxations and modifications.
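For the LLSM in particular, the optimum under a multiplicative normalization is known in closed form: the priorities are the row geometric means of A (the Crawford-Williams result). A minimal sketch, again on a hypothetical judgment matrix:

```python
import numpy as np

# Hypothetical positive reciprocal judgment matrix.
A = np.array([[1.0,  3.0, 5.0],
              [1/3., 1.0, 2.0],
              [1/5., 1/2., 1.0]])

# LLSM: minimize sum of (ln a_ij - ln w_i + ln w_j)^2.  The closed-form
# solution is the vector of row geometric means, here normalized to sum 1.
g = np.exp(np.log(A).mean(axis=1))   # row geometric means
w_llsm = g / g.sum()                 # normalized LLSM priorities
print(w_llsm)
```

Because the LLSM has this closed form, it avoids the multiple-optima difficulty of the LSM entirely.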
Some comparative analyses of the above scaling methods can be found in the literature, but the conclusions are often contradictory. Some authors, e.g., Crawford and Williams [7], Takeda et al. [31] and Barzilai [2], assert that the LLSM outperforms the EM. Saaty and Vargas [26], [27] claim that the EM is superior to the LLSM. In [24], Saaty compares the EM with the LLSM and states that the EM captures the inconsistency of dominance, while the LLSM minimizes a distance function and searches for symmetry without an explicit attempt to capture dominance; inconsistent dominance judgments, however, are asymmetric. In this case, the EM and the LLSM give rise to different derived scales and, at times, to a different choice. Moreover, he summarized 10 reasons for not using the LLSM instead of the EM.
In [6], the WLSM is compared with the EM. The WLSM has the advantage of requiring only the solution of a system of linear algebraic equations and is thus conceptually easier to understand than the EM. Comparisons show that the sums for the WLSM are less than those for the EM, and the dominant weight seems to be larger for the WLSM. An excellent comparative analysis of the commonly used methods for deriving priorities — the EM, the LSM, the WLSM and the LLSM — is given in Golany and Kress [12]. They conclude that there is no prioritization method that is superior to the others in all cases.
Throughout Saaty's work, only the right eigenvectors are used. However, theoretically, the use of left eigenvectors should be equally justified. This problem was first observed by Johnson et al. [17]. In [9], a new method, the modified AHP (MAHP), was proposed with the claim that the right-left eigenvector inconsistency problem can be effectively reduced. In [32], an attempt was made to compare the AHP and the MAHP by using 42 models comprising 294 reciprocal matrices; it revealed that the MAHP is not better than the AHP. The geometry of the scaling problem is studied in Euclidean spaces in [34], and the scaling problem under the multiplicative and additive normalizations of weights, in order to obtain a unique solution, is studied in [2].
Singular value decomposition (SVD) is an important tool of matrix algebra that has been applied to a number of areas, for example, principal component analysis, canonical correlation in statistics, the determination of the Moore–Penrose generalized inverse, and low rank approximation of matrices, Kennedy and Gentle [18]; Eckart and Young [10]; Greenacre [13]. The matrix algebra and computational aspects of SVD are discussed in Kennedy and Gentle [18], and statistical applications are described in Greenacre [13]. The aim of this paper is to show that the SVD seems to be a good tool to provide a theoretically well-founded solution in AHP both for the scaling and consistency problems. First, the main features of SVD will be presented, then the SVD of pairwise comparison matrices will be studied by developing further the results of Gass and Rapcsák [11], in order to derive the weight vector in a convenient and consistent way in AHP. Finally, some examples will be presented and the EM and SVD solutions will be compared.
EM, consistency index and ratio
A basic tool of the AHP methodology is the pairwise comparison method [23]. Here, the preferences of the DM are represented by pairwise comparison matrices related to a set of attributes and to all the alternatives with respect to every attribute. The pairwise comparisons by the DM are scaled from 1 to 9 and arrayed in n×n matrices, where n denotes the number of attributes or alternatives. The elements aij, i,j=1,…,n, are the estimations of the dominance of object i over object j.
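Saaty's consistency index and ratio built from λmax can be sketched as follows; the random-index table below holds the standard values for n = 3,…,9, and the example matrix is consistent by construction, so its CR should vanish.

```python
import numpy as np

def consistency_ratio(A):
    """Saaty's consistency index CI = (lam_max - n)/(n - 1) and
    consistency ratio CR = CI / RI(n), using the standard random-index table."""
    n = A.shape[0]
    lam_max = np.max(np.linalg.eigvals(A).real)
    ci = (lam_max - n) / (n - 1)
    RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}
    return ci / RI[n]

# Consistent example: a_ij = w_i / w_j for w = (4, 2, 1), so CR ~ 0.
A = np.array([[1.0,  2.0, 4.0],
              [0.5,  1.0, 2.0],
              [0.25, 0.5, 1.0]])
print(consistency_ratio(A))
```

In practice a matrix is usually accepted when CR ⩽ 0.1, the threshold Saaty recommends.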
Singular value decomposition
The SVD of a general matrix A is a transformation into a product of three matrices, each of which has a simple special form and geometric interpretation. This SVD representation is given by the following theorem [13]. Theorem 3.1. Any real (m×n) matrix A of rank k (k ⩽ min(m,n)) can be expressed in the form A = UDVT, where D is a (k×k) diagonal matrix with positive diagonal elements α1,…,αk, U is an (m×k) matrix and V is an (n×k) matrix such that UTU = I and VTV = I, i.e., the columns of U and V are orthonormal in Rm and Rn, respectively.
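Theorem 3.1 can be verified numerically with NumPy's thin SVD; the random matrix below is arbitrary.

```python
import numpy as np

# Verify Theorem 3.1: any real m x n matrix factors as A = U D V^T with
# orthonormal columns in U and V and positive singular values on D's diagonal.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))

U, s, Vt = np.linalg.svd(A, full_matrices=False)   # thin SVD: U is 4x3, Vt is 3x3
assert np.allclose(U.T @ U, np.eye(3))             # U^T U = I
assert np.allclose(Vt @ Vt.T, np.eye(3))           # V^T V = I
assert np.allclose(A, U @ np.diag(s) @ Vt)         # A = U D V^T
print(s)                                           # singular values, descending
```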
Singular value decomposition of pairwise comparison matrices
In this section, the SVD of pairwise comparison matrices is discussed. Theorem 4.1 [11]. The SVD of a positive, consistent matrix A is the dyad A = (c1/c2) u vT, where u = c2(w1,…,wn)T, v = (1/c1)(1/w1,…,1/wn)T, and c1 and c2 are positive constants such that c12 = ∑i=1n (1/wi2) and c22 = 1/∑i=1n wi2; the right singular vector is equal to the left eigenvector multiplied by a normalizing constant, the left singular vector to the right eigenvector multiplied by a second normalizing constant, R+n is the positive orthant, and (c1/c2) is the only positive singular value of A.
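The theorem is easy to check numerically for an arbitrary positive weight vector: a consistent matrix A = (wi/wj) has exactly one nonzero singular value, and its value is c1/c2 with c1 and c2 as defined above.

```python
import numpy as np

# Arbitrary positive weights (hypothetical) and the consistent matrix they induce.
w = np.array([0.6, 0.25, 0.1, 0.05])
A = np.outer(w, 1.0 / w)                 # consistent: a_ij = w_i / w_j

# Constants from Theorem 4.1.
c1 = np.sqrt(np.sum(1.0 / w**2))         # c1^2 = sum(1/w_i^2)
c2 = 1.0 / np.sqrt(np.sum(w**2))         # c2^2 = 1/sum(w_i^2)

s = np.linalg.svd(A, compute_uv=False)
print(s)  # only the first singular value is (numerically) nonzero
```

The first singular value matches c1/c2, and the rest are zero up to rounding, confirming that a consistent matrix is exactly a rank-one dyad.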
Deriving weights by SVD in AHP
In this part, it will be shown how to derive uniquely a positive weight vector, based on SVD in AHP. Though Saaty reduced this problem to a matrix eigenvalue problem, various techniques have been proposed and analyzed for approximating an inconsistent pairwise comparison matrix A by a matrix of rank one. The LSM minimizes the Frobenius norm ∥A − (wi/wj)∥F under some normalizing constraint for the weight vector, and tends to produce ratios (wi/wj), for all i,j, within the range of the elements of A.
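One way to turn the best rank-one approximation of A into weights is sketched below. For a consistent matrix, the first left singular vector u1 is proportional to w and the first right singular vector v1 to the reciprocals 1/wi, so both factors carry the priorities; summing u1 and the componentwise reciprocal of v1 is one plausible combination, used here purely for illustration and not necessarily the exact formula of the paper.

```python
import numpy as np

def svd_weights(A):
    """Weights from the rank-one SVD approximation of A (illustrative sketch):
    combine u1 (proportional to w for consistent A) with the componentwise
    reciprocal of v1 (also proportional to w), then normalize to sum 1."""
    U, s, Vt = np.linalg.svd(A)
    u1 = np.abs(U[:, 0])          # first left singular vector
    v1 = np.abs(Vt[0, :])         # first right singular vector
    w = u1 + 1.0 / v1             # both terms are proportional to w when A is consistent
    return w / w.sum()

# Hypothetical (slightly inconsistent) judgment matrix.
A = np.array([[1.0,  3.0, 5.0],
              [1/3., 1.0, 2.0],
              [1/5., 1/2., 1.0]])
print(svd_weights(A))
```

For an exactly consistent matrix this recovers the generating weights, since both u1 and 1/v1 are then scalar multiples of w.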
Measuring of consistency based on the SVD
In this part, it will be shown that the consistency of pairwise comparison matrices can be effectively measured on an absolute scale based on the SVD approach and by using the Frobenius norm.
Let A be an n×n pairwise comparison matrix and B an n×n positive and consistent matrix. A lower bound and an upper bound can be given for the Frobenius norm of the difference A − B, depending on the norm ∥A∥F and the size of the matrix A. Lemma 6.1. Let A be an (n×n) matrix of rank k. Then …
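In this spirit, an illustrative inconsistency indicator is the Frobenius distance between A and its closest rank-one matrix, which by the Eckart-Young theorem equals the root-sum-square of the tail singular values. The paper's IM may be scaled differently; the sketch below shows only the raw distance.

```python
import numpy as np

def rank_one_distance(A):
    """Frobenius distance from A to its best rank-one approximation.
    By Eckart-Young this is sqrt(s_2^2 + ... + s_n^2)."""
    s = np.linalg.svd(A, compute_uv=False)
    return np.sqrt(np.sum(s[1:]**2))

# Consistent matrix from hypothetical weights: distance should be ~0.
w = np.array([0.4, 0.3, 0.2, 0.1])
A_cons = np.outer(w, 1.0 / w)

# Perturb one judgment (and its reciprocal) to introduce inconsistency.
A_pert = A_cons.copy()
A_pert[0, 3] *= 2.0
A_pert[3, 0] = 1.0 / A_pert[0, 3]

print(rank_one_distance(A_cons), rank_one_distance(A_pert))
```

The distance is zero exactly when A is consistent (rank one), and grows as judgments drift away from any single ratio scale.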
Examples
In this part, some examples show the EM and the SVD weights.
For the first example matrix, the EM weights are (0.742, 0.083, 0.1, 0.071) with CR = 0.02 and IM = 2.263, while the SVD weights are (0.75, 0.08, 0.0972, 0.0755) with IM = 1.825.
For the second example matrix, the EM weights are (0.617, 0.224, 0.097, 0.062) with CR = 0.04 and IM = 3.355, while the SVD weights are (0.6444, 0.1762, 0.1005, 0.07889) with IM = 2.652.
For the third example matrix, the EM and the SVD weights coincide: (0.7, 0.1,
Comparison of EM and SVD in AHP
In this part, a comparison is made between the EM and the SVD approach, each of which provides a priority vector and a consistency measure. Comparisons among different techniques can be found in [6], [15], [27], [32].
In the EM, the priority vector is equal to the normalized principal right eigenvector corresponding to the maximal eigenvalue of A, i.e., the normalized weight vector w is the solution to the homogeneous linear equations (A − λmaxI)w = 0, where λmax is the maximal eigenvalue of the matrix A.
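For positive matrices, the eigen-equation Aw = λmax w can be solved by the power method, which converges to the principal (Perron) eigenpair; the example matrix is consistent, so the result is exact.

```python
import numpy as np

def power_method(A, iters=200):
    """Power iteration for the principal eigenpair of a positive matrix."""
    w = np.ones(A.shape[0])
    for _ in range(iters):
        w = A @ w
        w = w / w.sum()              # renormalize each step
    lam_max = (A @ w)[0] / w[0]      # componentwise quotient at convergence
    return lam_max, w

# Consistent example: a_ij = w_i / w_j for w proportional to (6, 3, 1).
A = np.array([[1.0,  2.0,  6.0],
              [0.5,  1.0,  3.0],
              [1/6., 1/3., 1.0]])
lam, w = power_method(A)
print(lam, w)
```

Here lam converges to 3 (= n, since A is consistent) and w to (0.6, 0.3, 0.1), the normalized generating weights.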
Acknowledgements
This research was supported in part by the Hungarian National Research Foundation, Grant No. OTKA-T029572.
References (33)
- Consistent weights for judgements matrices of the relative importance of alternatives, Operations Research Letters (1987)
- The categorical data analysis approach for ratio model of pairwise comparisons, European Journal of Operational Research (2001)
- A note on the analysis of subjective judgment matrices, Journal of Mathematical Psychology (1985)
- A statistical approach to Saaty's scaling method for priorities, Journal of Mathematical Psychology (1984)
- A note on synthesizing group decisions, Decision Support Systems (1998)
- A multicriteria evaluation of methods for obtaining weights from ratio-scale matrices, European Journal of Operational Research (1993)
- An alternative scaling method for priorities in hierarchical structures, Journal of Mathematical Psychology (1984)
- Right-left asymmetry in an eigenvector ranking procedure, Journal of Mathematical Psychology (1979)
- On sensitivity analysis for a class of decision systems, Decision Support Systems (1996)
- Eigenvector and logarithmic least squares, European Journal of Operational Research (1990)