Decision Aiding
Singular value decomposition in AHP

https://doi.org/10.1016/S0377-2217(02)00755-5

Abstract

The analytic hierarchy process (AHP) has been accepted as a leading multiattribute decision-aiding model both by practitioners and academics. The foundation of the AHP is Saaty’s eigenvector method (EM) and the associated inconsistency index, which are based on the largest eigenvalue and associated eigenvector of an (n×n) positive reciprocal matrix. The elements of the matrix are the decision maker’s (DM) numerical estimates of the preference of n alternatives with respect to a criterion when they are compared pairwise using the 1–9 AHP fundamental comparison scale. The components of the normalized eigenvector provide approximations of the unknown weights of the criteria (alternatives), and the deviation of the largest eigenvalue from n yields a measure of how inconsistent the DM is with respect to the pairwise comparisons.

Singular value decomposition (SVD) is an important tool of matrix algebra that has been applied to a number of areas, e.g., principal component analysis, canonical correlation in statistics, the determination of the Moore–Penrose generalized inverse, and low rank approximation of matrices. In this paper, using the SVD and the theory of low rank approximation of a (pairwise comparison) matrix, we offer a new approach for determining the associated weights. We prove that the rank one left and right singular vectors, that is, the vectors associated with the largest singular value, yield theoretically justified weights. We suggest that an inconsistency measure for these weights is the Frobenius norm of the difference between the original pairwise comparison matrix and the one formed from the SVD-determined weights. How this measure can be applied in practice as a means of measuring the confidence the DM should place in the SVD weights is still an open question. We illustrate the SVD approach and compare it to the EM for some numerical examples.

Introduction

The analytic hierarchy process (AHP) [23] has been accepted as a leading multiattribute decision model both by practitioners and academics. The AHP can solve decision problems in various fields through the prioritization of alternatives. The heart of the most familiar version of the AHP is Saaty’s eigenvector method (EM), which approximates an (n×n) positive reciprocal matrix A=(aij), aji=1/aij, i,j=1,…,n, by a vector w=(wi)∈R+^n, where R+^n is the positive orthant of the n-dimensional Euclidean space R^n, such that the matrix of ratios (wi/wj), i,j=1,…,n, is the best approximation to A in some sense. It is emphasized that the EM results in a priority vector w=(wi)∈R+^n and an inconsistency number λmax.

However, the EM has been criticized from both the prioritization and consistency points of view, and several new techniques have been developed. There are two different approaches in the AHP: deterministic and statistical (or stochastic). In the statistical approach, it is assumed that the preference judgments are random variables associated with an unknown probability distribution [3].

In the deterministic approach, the underlying assumption is that one can observe the preferences with certainty, and the only uncertainty lies in the elicitation of these preferences, which gives rise to inconsistency. The EM belongs to the deterministic approach. Other approaches include the least-squares method (LSM) and the weighted least-squares method (WLSM) proposed by Chu et al. [6], the logarithmic least-squares method (LLSM) introduced and considered as a model for a regression estimation of priorities by De Jong [8] and Crawford and Williams [7], and the chi-square method (CSM) [15], all of which are distance-minimizing methods defined in Table 1, as well as the logarithmic goal programming method (LGPM) [5] and the fuzzy programming method (FPM) [20]. The LSM was studied by Jensen [15], [16], the WLSM by Blankmeyer [4] and the LLSM by Barzilai et al. [1]. In the case of consistent matrices, all these approaches yield the same solution. From a theoretical point of view, the LSM solution seems to be the most desirable, but the corresponding nonlinear optimization problem has no special tractable form, generally has multiple solutions, and is therefore difficult to solve numerically. To eliminate this drawback of the LSM, the other methods introduce various relaxations and modifications.

Some comparative analyses of the above scaling methods can be found in the literature, but the conclusions are often contradictory. Some authors, e.g., Crawford and Williams [7], Takeda et al. [31] and Barzilai [2], assert that the LLSM outperforms the EM. Saaty and Vargas [26], [27] claim that the EM is superior to the LLSM. In [24], Saaty compares the EM with the LLSM and states that the EM captures the inconsistency of dominance, while the LLSM minimizes a distance function and searches for symmetry without an explicit attempt to capture dominance. But inconsistent dominance judgments are asymmetric. In this case, the EM and the LLSM give rise to different derived scales and, at times, to a different choice. Moreover, Saaty summarized ten reasons for not using the LLSM instead of the EM.

In [6], the WLSM is compared with the EM. The WLSM has the advantage of involving the solution of a system of linear algebraic equations and is thus conceptually easier to understand than the EM. Comparisons show that the deviation sums for the WLSM are less than those for the EM, and the dominant weight seems to be larger for the WLSM. An excellent comparative analysis of the commonly used methods, the EM, the LSM, the WLSM and the LLSM, for deriving priorities is given in Golany and Kress [12]. They conclude that no prioritization method is superior to the others in all cases.

Throughout Saaty’s work, only the right eigenvectors are used. However, theoretically, the use of left eigenvectors should be equally justified. This problem was first observed by Johnson et al. [17]. In [9], a new method, known as the modified AHP (MAHP), was proposed with the claim that it effectively reduces the right and left eigenvector inconsistency problem. In [32], an attempt was made to compare the AHP and the MAHP by using 42 models comprising 294 reciprocal matrices. It was revealed that the MAHP is not better than the AHP. The geometry of the scaling problem is studied in Euclidean spaces in [34], and the scaling problem under the multiplicative and additive normalizations of weights, in order to obtain a unique solution, in [2].

Singular value decomposition (SVD) is an important tool of matrix algebra that has been applied to a number of areas, for example, principal component analysis, canonical correlation in statistics, the determination of the Moore–Penrose generalized inverse, and low rank approximation of matrices, Kennedy and Gentle [18]; Eckart and Young [10]; Greenacre [13]. The matrix algebra and computational aspects of SVD are discussed in Kennedy and Gentle [18], and statistical applications are described in Greenacre [13]. The aim of this paper is to show that the SVD seems to be a good tool to provide a theoretically well-founded solution in AHP both for the scaling and consistency problems. First, the main features of SVD will be presented, then the SVD of pairwise comparison matrices will be studied by developing further the results of Gass and Rapcsák [11], in order to derive the weight vector in a convenient and consistent way in AHP. Finally, some examples will be presented and the EM and SVD solutions will be compared.

Section snippets

EM, consistency index and ratio

A basic tool in the AHP methodology is the pairwise comparison method [23]. Here, the preferences of the DM are represented by pairwise comparison matrices evaluated for a set of attributes and for all the alternatives with respect to every attribute. The pairwise comparisons made by the DM are scaled from 1 to 9 and arrayed in n×n matrices, where n denotes the number of attributes or alternatives. The elements aij, i,j=1,…,n, are the estimates of the dominance of object i

Singular value decomposition

The SVD of a general matrix A is a transformation into a product of three matrices, each of which has a simple special form and geometric interpretation. This SVD representation is given by the following theorem [13].

Theorem 3.1

Any real (m×n) matrix A of rank k (k⩽min(m,n)) can be expressed in the form

A = UDV^T,

where D is a (k×k) diagonal matrix with positive diagonal elements α1,…,αk, U is an (m×k) matrix and V is an (n×k) matrix such that U^TU=I, V^TV=I, i.e., the columns of U and V are orthonormal in
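As a quick numerical illustration of Theorem 3.1 (a sketch using NumPy, which is not part of the paper), the factorization can be computed and verified directly; the matrix below is the first pairwise comparison matrix from the Examples section.

```python
import numpy as np

# Numerical check of Theorem 3.1: a real (m x n) matrix A factors as
# A = U D V^T, where D is diagonal with positive entries alpha_1 >= ... >= alpha_k
# and the columns of U and V are orthonormal.
A = np.array([[1.0, 9.0, 9.0, 9.0],
              [1/9, 1.0, 1.0, 1.0],
              [1/9, 1.0, 1.0, 2.0],
              [1/9, 1.0, 0.5, 1.0]])

U, s, Vt = np.linalg.svd(A, full_matrices=False)  # s holds the singular values
A_rebuilt = U @ np.diag(s) @ Vt

print(np.allclose(A, A_rebuilt))        # True: A = U D V^T exactly
print(np.allclose(U.T @ U, np.eye(4)))  # True: columns of U are orthonormal
```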

Singular value decomposition of pairwise comparison matrices

In this section, the SVD of pairwise comparison matrices is discussed.

Theorem 4.1

[11] The SVD of a positive, consistent matrix A is the dyad

A = (c1/c2) (c2w1, c2w2, …, c2wn)^T (1/(c1w1), 1/(c1w2), …, 1/(c1wn)), w=(wi)∈R+^n,

where c1 and c2 are positive constants such that c1^2 = ∑i=1..n (1/wi^2), c2^2 = 1/∑i=1..n wi^2, the right singular vector is equal to the left eigenvector multiplied by a normalizing constant, the left singular vector to the right eigenvector multiplied by a second normalizing constant, R+^n is the positive orthant, and (c1/c2) is
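Theorem 4.1 can be checked numerically for any positive weight vector (a sketch with an assumed example vector w; NumPy is not part of the paper): for the consistent matrix A = (wi/wj), the largest singular value is c1/c2 and the leading left singular vector is c2·w up to sign.

```python
import numpy as np

# Verify Theorem 4.1 on a consistent matrix A = (w_i / w_j) built from an
# assumed weight vector w: the largest singular value equals c1/c2 and the
# leading left singular vector is c2 * w (up to sign).
w = np.array([0.5, 0.3, 0.2])                 # assumed example weights
A = np.outer(w, 1.0 / w)                      # consistent: a_ij = w_i / w_j

c1 = np.sqrt(np.sum(1.0 / w**2))              # c1^2 = sum of 1/w_i^2
c2 = 1.0 / np.sqrt(np.sum(w**2))              # c2^2 = 1 / (sum of w_i^2)

U, s, Vt = np.linalg.svd(A)
print(np.isclose(s[0], c1 / c2))              # True: largest singular value is c1/c2
print(np.allclose(np.abs(U[:, 0]), c2 * w))   # True: left singular vector is c2*w
print(np.allclose(s[1:], 0))                  # True: a consistent matrix has rank one
```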

Deriving weights by SVD in AHP

In this part, it will be shown how to derive uniquely a positive weight vector, based on SVD in AHP. Though Saaty reduced this problem to a matrix eigenvalue problem, various techniques have been proposed and analyzed for approximating an inconsistent pairwise comparison matrix A by a matrix of rank one. The LSM minimizes the Frobenius norm

∑i=1..n ∑j=1..n (aij − wi/wj)^2

under some normalizing constraint for the weight vector and tends to produce ratios (wi/wj) for all i,j within the range of the matrix A.
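The snippet breaks off before the paper's exact construction, but the general idea can be sketched as follows. This is an illustrative simplification, not the authors' precise formula: the weights here are taken as the normalized leading left singular vector, while the paper's construction also involves the right singular vector.

```python
import numpy as np

# Illustrative sketch only: derive a positive weight vector from the leading
# left singular vector of a pairwise comparison matrix, normalized to sum to 1.
# (The paper's actual SVD weights also use the right singular vector.)
A = np.array([[1.0,  4.0,  6.0, 7.0],
              [0.25, 1.0,  3.0, 4.0],
              [1/6,  1/3,  1.0, 2.0],
              [1/7,  0.25, 0.5, 1.0]])

U, s, Vt = np.linalg.svd(A)
u = np.abs(U[:, 0])          # leading left singular vector; positive by Perron theory
w_svd = u / u.sum()          # normalize so the weights sum to one
print(w_svd)                 # weights decrease from alternative 1 to alternative 4
```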

Measuring of consistency based on the SVD

In this part, it will be shown that the consistency of pairwise comparison matrices can be effectively measured on an absolute scale based on the SVD approach and by using the Frobenius norm.

Let A be an (n×n) pairwise comparison matrix and Ā an (n×n) positive and consistent matrix. A lower bound and an upper bound can be given for the Frobenius norm of the matrix (A−Ā), depending on the norm ∥A∥F and the size of the matrix A.

Lemma 6.1

Let Ā be an (n×n) matrix of rank k. Then, 0 ⩽ ∥A−Ā∥F ⩽ ∥A∥F + (41/9)n.

Proof

It follows
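The consistency measure itself can be sketched in code. This is a hedged illustration, not the paper's exact IM: the consistent rank-one matrix Ā is rebuilt here from sketch weights based on the leading left singular vector alone.

```python
import numpy as np

def svd_inconsistency(A):
    """Frobenius-norm distance between A and a consistent rank-one matrix
    (w_i / w_j) built from SVD-derived sketch weights. Illustrative only:
    the paper's IM uses its own SVD weight construction."""
    U, s, Vt = np.linalg.svd(A)
    w = np.abs(U[:, 0])
    w = w / w.sum()
    A_bar = np.outer(w, 1.0 / w)            # consistent matrix from the weights
    return np.linalg.norm(A - A_bar, 'fro')

# A perfectly consistent matrix has zero inconsistency.
w = np.array([0.5, 0.3, 0.2])
print(round(svd_inconsistency(np.outer(w, 1.0 / w)), 10))   # 0.0
```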

Examples

In this part, some examples show the EM and the SVD weights.

If

A =
  1       9   9    9
  0.1111  1   1    1
  0.1111  1   1    2
  0.1111  1   0.5  1

then the EM-weights are (0.742, 0.083, 0.1, 0.071), CR=0.02, IM=2.263, and the SVD-weights are (0.75, 0.08, 0.0972, 0.0755), IM=1.825.

If

A =
  1       4       6    7
  0.25    1       3    4
  0.1667  0.3333  1    2
  0.1429  0.25    0.5  1

then the EM-weights are (0.617, 0.224, 0.097, 0.062), CR=0.04, IM=3.355, and the SVD-weights are (0.6444, 0.1762, 0.1005, 0.07889), IM=2.652.

If

A =
  1       7  7  7
  0.1429  1  1  1
  0.1429  1  1  1
  0.1429  1  1  1

then the EM- and the SVD-weights are (0.7, 0.1,
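The EM weights reported in the first example can be reproduced with a few lines (a sketch, assuming NumPy; the eigenvector computation itself is standard).

```python
import numpy as np

# Reproduce the EM weights for the first example matrix: the priority vector
# is the normalized principal right eigenvector of A.
A = np.array([[1.0, 9.0, 9.0, 9.0],
              [1/9, 1.0, 1.0, 1.0],
              [1/9, 1.0, 1.0, 2.0],
              [1/9, 1.0, 0.5, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)       # index of the maximal eigenvalue lambda_max
w = np.abs(eigvecs[:, k].real)
w = w / w.sum()                   # normalize so e^T w = 1
print(np.round(w, 3))             # close to the reported (0.742, 0.083, 0.1, 0.071)
```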

Comparison of EM and SVD in AHP

In this part, a comparison is made between the EM and the SVD approach, each of which provides a priority vector and a consistency measure. Comparisons among different techniques can be found in [6], [15], [27], [32].

In the EM, the priority vector is equal to the normalized principal right eigenvector corresponding to the maximal eigenvalue of A, i.e., the normalized weight vector w is the solution to the homogeneous linear equations

Aw = λmax w, e^T w = 1, w ∈ R+^n,

where λmax is the maximal eigenvalue of the matrix

Acknowledgements

This research was supported in part by the Hungarian National Research Foundation, Grant No. OTKA-T029572.

References (33)

  • T.L. Saaty et al.

    Inconsistency and rank preservation

    Journal of Mathematical Psychology

    (1984/1)
  • T.L. Saaty et al.

    Comparison of eigenvalue, logarithmic least squares, and least squares methods in estimating ratios

    Mathematical Modelling

    (1984/2)
  • E. Takeda et al.

    Estimating criterion weights using eigenvectors: A comparative study

    European Journal of Operational Research

    (1987)
  • S.L. Tung et al.

    A comparison of the Saaty’s AHP and modified AHP for right and left eigenvector inconsistency

    European Journal of Operational Research

    (1998)
  • S. Zahir

    Geometry of decision making and the vector space formulation of the analytic hierarchy process

    European Journal of Operational Research

    (1999)
  • J. Barzilai

    Deriving weights from pairwise comparison matrices

    Journal of the Operational Research Society

    (1997)