The Moore–Penrose Pseudoinverse: A Tutorial Review of the Theory

  • General and Applied Physics
  • Brazilian Journal of Physics

Abstract

In recent decades, the Moore–Penrose pseudoinverse has found a wide range of applications in many areas of science and has become a useful tool for physicists dealing, for instance, with optimization problems, with data analysis, and with the solution of linear integral equations. The existence of such applications alone should attract the interest of students and researchers in the Moore–Penrose pseudoinverse and in related subjects, such as the singular value decomposition theorem for matrices. In this note, we present a tutorial review of the theory of the Moore–Penrose pseudoinverse. We present the first definitions and some motivations, and after obtaining some basic results, we center our discussion on the spectral theorem and present an algorithmically simple expression for the computation of the Moore–Penrose pseudoinverse of a given matrix. We do not claim originality of the results. We rather intend to present a complete and self-contained tutorial review, useful for those more devoted to applications, for those more theoretically oriented, and for those who already have some working knowledge of the subject.


Notes

  1. Tikhonov’s argument in [1] is actually more complicated, since he does not consider the regularized equation \(\big(K^*K + \mu\mathbb{1} \big)u_{\mu}=K^*f\), but a more general version where the identity operator \(\mathbb{1}\) is replaced by a Sturm–Liouville operator.

References

  1. A.N. Tikhonov, Solution of incorrectly formulated problems and the regularization method, Sov. Math., Dokl. 4, 1035–1038 (1963). English translation of Dokl. Akad. Nauk USSR 151, 501–504 (1963)

  2. A.N. Tikhonov, V.A. Arsenin, Solution of Ill-Posed Problems (Winston, Washington, DC, 1977)

  3. D.K. Sodickson, C.A. McKenzie, A generalized approach to parallel magnetic resonance imaging, Med. Phys. 28, 1629 (2001). doi:10.1118/1.1386778

  4. R. Van De Walle, H.H. Barrett, K.J. Myers, M.I. Aitbach, B. Desplanques, A.F. Gmitro, J. Cornelis, I. Lemahieu, Reconstruction of MR images from data acquired on a general nonregular grid by pseudoinverse calculation, IEEE Trans. Med. Imag. 19, 1160–1167 (2000). doi:10.1109/42.897806

  5. H. Ammari, An Introduction to Mathematics of Emerging Biomedical Imaging. Mathématiques et Applications, vol. 62 (Springer, New York, 2008). ISBN 978-3-540-79552-0

  6. G. Lohmann, K. Müller, V. Bosch, H. Mentzel, S. Hessler, L. Chen, S. Zysset, D.Y. von Cramon, Lipsia—a new software system for the evaluation of functional magnetic resonance images of the human brain, Comput. Med. Imaging Graph. 25, 449–457 (2001). doi:10.1016/S0895-6111(01)00008-8

  7. X. Li, D. Coyle, L. Maguire, T.M. McGinnity, D.R. Watson, H. Benali, A least angle regression method for fMRI activation detection in phase-encoded experimental designs, NeuroImage 52, 1390–1400 (2010). doi:10.1016/j.neuroimage.2010.05.017

  8. V. Stefanescu, D.A. Stoichescu, C. Faye, A. Florescu, Modeling and simulation of cylindrical PET 2D using direct inversion method, in 16th International Symposium for Design and Technology in Electronic Packaging (SIITME), 2010 (IEEE, Piscataway, 2010), pp. 107–112. doi:10.1109/SIITME.2010.5653649

  9. M. Bertero, P. Boccacci, Introduction to Inverse Problems in Imaging (Taylor & Francis, Bristol, 1998). ISBN: 978-0750304351

  10. J.-Z. Wang, S.J. Williamson, L. Kaufman, Magnetic source imaging based on the minimum-norm least-squares inverse, Brain Topogr. 5(4), 365–371 (1993). doi:10.1007/BF01128692

  11. N.G. Gençer, S.J. Williamson, Magnetic source images of human brain functions, Behav. Res. Meth. 29(1), 78–83 (1997). doi:10.3758/BF03200570

  12. J.-Z. Wang, S.J. Williamson, L. Kaufman, Spatio-temporal model of neural activity of the human brain based on the MNLS inverse, in Biomagnetism: Fundamental Research and Clinical Applications. Studies in Applied Electromagnetics and Mechanics, Vol. 7. International Conference on Biomagnetism 1995 (University of Vienna), ed. by C. Baumgartner, L. Deecke, G. Stroink, S.J. Williamson (Elsevier, Amsterdam, 1995). ISBN: 978-9051992335

  13. M.T. Page, S. Custódio, R.J. Archuleta, J.M. Carlson, Constraining earthquake source inversions with GPS data 1: Resolution based removal of artifacts, J. Geophys. Res. 114, B01314 (2009). doi:10.1029/2007JB005449

  14. S. Atzori, A. Antonioli, Optimal fault resolution in geodetic inversion of coseismic data, Geophys. J. Int. 185, 529–538 (2011). doi:10.1111/j.1365-246X.2011.04955.x

  15. R.D. Pascual-Marqui, Standardized low resolution brain electromagnetic tomography (sLORETA): technical details, Methods Find. Exp. Clin. Pharmacol. 24D, 5–12 (2002)

  16. C. Hennig, N. Christlieb, Validating visual clusters in large datasets: fixed point clusters of spectral features, Comput. Stat. Data Anal. 40, 723–739 (2002). doi:10.1016/S0167-9473(02)00077-4

  17. L. Bedini, D. Herranz, E. Salerno, C. Baccigalupi, E.E. Kuruoǧlu, A. Tonazzini, Separation of correlated astrophysical sources using multiple-lag data covariance matrices, EURASIP J. Appl. Signal Process. 15, 2400–2412 (2005)

  18. T.V. Ricci, J.E. Steiner, R.B. Menezes, NGC 7097: the active galactic nucleus and its mirror, revealed by principal component analysis tomography, Astrophys. J. 734, L10 (2011). doi:10.1088/2041-8205/734/1/L10

  19. J.E. Steiner, R.B. Menezes, T.V. Ricci, A.S. Oliveira, PCA tomography: how to extract information from data cubes, in Monthly Notices of the Royal Astronomical Society, vol. 395 (2009), p. 64

  20. M.E. Wall, A. Rechtsteiner, L.M. Rocha, Singular values decomposition and principal component analysis, in A Practical Approach to Microarray Data Analysis, ed. by D.P. Berrar, W. Dubitzky, M. Granzow (Kluwer, Norwell, 2003), pp. 91–109. LANL LA-UR-02-4001

  21. O. Troyanskaya, M. Cantor, G. Sherlock, P. Brown, T. Hastie, R. Tibshirani, D. Botstein, R.B. Altman, Missing value estimation methods for DNA microarrays, Bioinformatics 17(6), 520–525 (2001)

  22. H. Andrews, C. Patterson III, Singular values decomposition (SVD) image coding, IEEE Trans. Commun. 24(4), 425–432 (1976)

  23. S. Chountasis, V.N. Katsikis, D. Pappas, Applications of the Moore–Penrose inverse in digital image restoration. Math. Probl. Eng. 2009, Article ID 170724, 12 pp. (2009). doi:10.1155/2009/170724

  24. S. Chountasis, V.N. Katsikis, D. Pappas, Digital image reconstruction in the spectral domain utilizing the Moore–Penrose inverse. Math. Probl. Eng. 2010, Article ID 750352, 14 pp. (2010). doi:10.1155/2010/750352

  25. C.W. Groetsch, Integral equations of the first kind, inverse problems and regularization: a crash course, J. Phys.: Conf. Ser. 73, 012001 (2007). doi:10.1088/1742-6596/73/1/012001

  26. E.H. Moore, On the reciprocal of the general algebraic matrix. Bull. Am. Math. Soc. 26, 394–395 (1920)

  27. R. Penrose, A generalized inverse for matrices, Proc. Camb. Philos. Soc. 51, 406–413 (1955)

  28. R. Penrose, On best approximate solution of linear matrix equations, Proc. Camb. Philos. Soc. 52, 17–19 (1956)

  29. M. Reed, B. Simon, Methods of Modern Mathematical Physics. Vol. 1: Functional Analysis (Academic, New York, 1972–1979)

  30. O. Bratteli, D.W. Robinson, Operator Algebras and Quantum Statistical Mechanics I (Springer, New York)

Acknowledgments

We are especially grateful to Prof. Nestor Caticha for providing us with some references on applications of the Moore–Penrose pseudoinverse and of the singular values decomposition. This work was supported in part by the Brazilian Agencies CNPq and FAPESP.

Author information

Correspondence to João Carlos Alves Barata.

Appendices

Appendix 1: A Brief Review of Hilbert Space Theory and Linear Algebra

In this appendix, we collect the more important definitions and results on linear algebra and Hilbert space theory that we used in the main part of this paper. For the benefit of the reader, especially of students, we provide all results with proofs.

1.1 Hilbert Spaces: Basic Definitions

A scalar product in a complex vector space \({\mathcal V}\) is a function \({\mathcal V}\times{\mathcal V}\to{\mathbb{C}}\), denoted here by \({\langle }\cdot,\;\cdot{\rangle }\), such that the following conditions are satisfied: (1) For all \(u\in{\mathcal V}\), one has \({\langle } u, \; u{\rangle }\geq 0\) and \({\langle } u, \; u{\rangle }=0\) if and only if u = 0; (2) for all \(u, \; v_1, \; v_2\in{\mathcal V}\) and all α 1, α 2 ∈ ℂ, one has \({\big\langle } u, \; (\alpha_1v_1+\alpha_2v_2){\big\rangle } = \alpha_1 {\langle } u, \; v_1{\rangle }+\alpha_2 {\langle } u, \; v_2{\rangle } \) and \({\big\langle } (\alpha_1v_1+\alpha_2v_2), \; u{\big\rangle } = \overline{\alpha_1} {\langle } v_1, \; u {\rangle }+\overline{\alpha_2} {\langle } v_2, \; u {\rangle } \); and (3) \(\overline{{\langle } u, \; v{\rangle }}={\langle } v, \; u{\rangle }\) for all \(u, \; v\in{\mathcal V}\).

The norm associated to the scalar product \({\langle }\cdot,\;\cdot{\rangle }\) is defined by \(\|u\|\mathrel{\mathop:}=\sqrt{{\langle } u,\;u{\rangle }}\), for all \(u\in{\mathcal V}\). As one easily verifies using the defining properties of a scalar product, this norm satisfies the so-called parallelogram identity: For all \( a, \; b \in {\mathcal V}\), one has

$$ \| a + b\|^2 + \|a-b\|^2 \; = \; 2 \|a\|^2 + 2 \|b\|^2 \; . \label{sldkjfbnhv} $$
(44)

We say that a sequence \(\{v_n\in{\mathcal V}, \;n\in{\mathbb{N}} \}\) of vectors in \({\mathcal V}\) converges to an element \(v\in{\mathcal V}\) if for all ε > 0 there exists an N(ε) ∈ ℕ such that ||v n  − v|| ≤ ε for all n ≥ N(ε). In this case, we write v = lim n→∞  v n . A sequence \(\{v_n\in{\mathcal V}, \;n\in{\mathbb{N}} \}\) of vectors in \({\mathcal V}\) is said to be a Cauchy sequence if for all ε > 0 there exists an N(ε) ∈ ℕ such that ||v n  − v m || ≤ ε for all n, m ∈ ℕ such that n ≥ N(ε) and m ≥ N(ε). A complex vector space \({\mathcal V}\) is said to be a Hilbert space if it is endowed with a scalar product and if it is complete with respect to the associated norm, i.e., if every Cauchy sequence in \({\mathcal V}\) converges to an element of \({\mathcal V}\).

1.2 The Best Approximant Theorem

A subset A of a Hilbert space \({\mathcal H}\) is said to be convex if for all u, v ∈ A and all μ ∈ [0, 1] one has μu + (1 − μ)v ∈ A. A subset A of a Hilbert space \({\mathcal H}\) is said to be closed if every sequence {u n  ∈ A, n ∈ ℕ } of elements of A that converges in \({\mathcal H}\) converges to an element of A. The following theorem is of fundamental importance in the theory of Hilbert spaces.

Theorem A.1

(Best Approximant Theorem) Let A be a convex and closed subset of a Hilbert space \({\mathcal H}\) . Then, for all \( x\in {\mathcal H}\) there exists a unique y ∈ A such that || x − y|| equals the smallest possible distance between x and A, that is, \( \| x - y\| = \inf_{y'\in A} \big\|x - y'\big\| \) .

Proof

Let D ≥ 0 be defined by \( D = \inf_{y'\in A} \|x - y'\| \). For each n ∈ ℕ, let us choose a vector y n  ∈ A with the property that \( \| x - y_n \|^2 < D^2 + \frac{1}{n}\). Such a choice is always possible, by the definition of the infimum of a set of real numbers bounded from below.

Let us now prove that the sequence y n , n ∈ ℕ is a Cauchy sequence in \( {\mathcal H}\). Let us take a = x − y n and b = x − y m in the parallelogram identity (44). Then, \( \big\|2 x - (y_m + y_n)\big\|^2 + \|y_m - y_n\|^2 = 2 \|x - y_n\|^2 + 2 \|x-y_m\|^2 \). This can be written as \( \|y_m - y_n\|^2 = 2 \|x - y_n\|^2 + 2 \|x-y_m\|^2 - 4 \big\|x - (y_m + y_n)/2\big\|^2 \). Now, using the fact that \(\| x - y_n \|^2 < D^2 + \frac{1}{n}\) for each n ∈ ℕ, we get

$$ \begin{array}{rll} \|y_m - y_n\|^2 &\leq& 4D^2 + 2\left( \frac{1}{n}+\frac{1}{m}\right)\\ &&-\, 4 \big\|x - (y_m + y_n)/2\big\|^2 . \end{array} $$

Now (y m  + y n )/2 ∈ A, since it is a convex linear combination of elements of the convex set A. Hence, by the definition of D, \( \big\|x - (y_m + y_n)/2\big\|^2 \geq D^2 \). Therefore, we have

$$ \begin{array}{rll} \|y_m - y_n\|^2 & \leq& 4D^2 + 2\left( \frac{1}{n}+\frac{1}{m}\right) - 4D^2\\ &=& 2\left( \frac{1}{n}+\frac{1}{m}\right) . \end{array} $$

The right-hand side can be made arbitrarily small by taking both m and n large enough, proving that {y n } n ∈ ℕ is a Cauchy sequence. Since \( {\mathcal H}\) is complete, this sequence converges to some \(y\in{\mathcal H}\) and, since A is closed, one has y ∈ A.

Now we prove that ||x − y || = D. In fact, for all n ∈ ℕ, one has

$$ \begin{array}{rll} \|x-y \| &=& \big\| (x - y_n) - (y - y_n) \big\|\\ & \leq& \| x - y_n \|+\| y - y_n \|\\ &<& \sqrt{D^2 + \frac{1}{n}} + \| y - y_n \| . \end{array} $$

Taking n→ ∞ and using the fact that y n converges to y, we conclude that ||x − y || ≤ D . On the other hand, ||x − y || ≥ D by the definition of D and we must have ||x − y || = D.

At last, it remains to prove the uniqueness of y. Assume that there is another y′ ∈ A such that \( \big\|x-y' \big\| = D\). Using again the parallelogram identity (44), but now with a = x − y and b = x − y′, we get

$$ \begin{array}{rll} \big\|2 x &-& (y + y')\big\|^2 + \big\|y - y'\big\|^2\\ &=& 2 \big\|x - y\big\|^2 + 2 \big\|x-y'\big\|^2 = 4D^2 , \end{array} $$

which means that

$$ \begin{array}{rll} \big\|y - y'\big\|^2 &=& 4D^2 - \big\|2 x - (y + y')\big\|^2\\ &=& 4D^2 - 4 \Big\| x - \big(y + y'\big)/2 \Big\|^2 . \end{array} $$

Since (y + y′)/2 ∈ A (for A being convex), it follows that \( \big\| x - (y + y')/2\big\|^2 \geq D^2 \) and, hence, \( \big\|y - y'\big\|^2 \leq 0 \), proving that y = y′.□

1.3 Orthogonal Complements

If E is a subset of a Hilbert space \( {\mathcal H}\), we define its orthogonal complement E  ⊥  as the set of vectors in \( {\mathcal H}\) orthogonal to all vectors in E: \( E^\perp = \Big\{ y \in {\mathcal H} \big| \;\; {\langle } y, \; x {\rangle } =0 \mbox{ for all } x\in E \Big\} \). The following proposition is of fundamental importance:

Proposition A.2

The orthogonal complement E  ⊥  of a subset E of a Hilbert space \( {\mathcal H}\) is a closed linear subspace of \({\mathcal H}\) .

Proof

If x, y ∈ E  ⊥ , then, for any α, β ∈ ℂ, one has \( {\langle } \alpha x + \beta y, \; z {\rangle } = \overline{\alpha} {\langle } x, \; z{\rangle } + \overline{\beta} {\langle } y, \; z{\rangle } = 0 \) for any z ∈ E, showing that αx + βy ∈ E  ⊥ . Hence, E  ⊥  is a linear subspace of \({\mathcal H}\). If x n is a sequence in E  ⊥  converging to \( x \in {\mathcal H}\), then, for all z ∈ E, one has \(\displaystyle {\langle } x, \; z {\rangle } = \left\langle \lim_{n\to \infty}x_n, \; z \right\rangle = \lim_{n\to \infty} {\langle } x_n, \; z {\rangle } = 0 \), since \({\langle } x_n, \; z {\rangle } = 0 \) for all n. Hence, x ∈ E  ⊥ , showing that E  ⊥  is closed. Above, in the first equality, we used the continuity of the scalar product.□

1.4 The Orthogonal Decomposition Theorem

Theorem A.3

(Orthogonal Decomposition Theorem) Let \( {\mathcal M} \) be a closed and linear (and therefore convex) subspace of a Hilbert space \( {\mathcal H}\) . Then every \(x \in {\mathcal H}\) can be written in a unique way in the form x = y + z , with \( y\in {\mathcal M}\) and \( z \in {\mathcal M}^\perp\) . The vector y is such that \( \|x -y\| = \inf_{y'\in {\mathcal M}}\big\|x - y'\big\|\) , i.e., is the best approximant of x in \( {\mathcal M} \) .

Proof

Let x be an arbitrary element of \({\mathcal H}\). Since \({\mathcal M}\) is convex and closed, we may invoke Theorem A.1 and choose y as the (unique) element of \( {\mathcal M}\) such that \( \|x -y\| = \inf_{y'\in {\mathcal M}}\big\|x - y'\big\|\). Defining \(z \mathrel{\mathop:}= x-y\), all we have to do is to show that \( z\in {\mathcal M}^\perp\) and to establish the uniqueness of y and z. Let us first prove that \(z\in {\mathcal M}^\perp\). By the definition of y, one has \( \| x - y\|^2 \leq \big\| x - y -\lambda y'\big\|^2 \) for all λ ∈ ℂ and all \(y'\in {\mathcal M}\). By the definition of z, it follows that \( \| z\|^2 \; \leq \; \big\| z -\lambda y'\big\|^2 \) for all λ ∈ ℂ. Writing the right-hand side as \( {\big\langle } z -\lambda y' , \; z -\lambda y'{\big\rangle } \), we get \( \| z\|^2 \leq \| z\|^2 - 2 \mathrm{Re}\big(\lambda {\langle } z, \; y'{\rangle } \big) + |\lambda|^2 \big\|y'\big\|^2 \). Hence,

$$ 2 \mathrm{Re}\big(\lambda {\langle } z, \; y'{\rangle } \big) \; \leq \; |\lambda|^2 \big\|y'\big\|^2 \; .\label{sdrvbwert} $$
(45)

Now, write \(\big\langle z, \; y'\big\rangle = \big|{\langle } z, \; y'{\rangle }\big|e^{i\alpha} \), for some α ∈ ℝ. Since (45) holds for all λ ∈ ℂ, we can pick λ in the form \(\lambda = t e^{-i\alpha}\), with t > 0, and (45) becomes \( 2 t \big|{\langle } z, \; y'{\rangle }\big| \leq t^2 \big\|y'\big\|^2 \). Hence, \( \big|{\langle } z, \; y'{\rangle }\big| \leq \frac{t}{2} \big\|y'\big\|^2 \), for all t > 0. But this is only possible if the left-hand side vanishes: \(\big|{\langle } z, \; y'{\rangle }\big| =0 \). Since y′ is an arbitrary element of \( {\mathcal M}\), this shows that \( z\in {\mathcal M}^\perp\).

To prove uniqueness, assume that x = y′ + z′ with \( y'\in {\mathcal M}\) and \( z'\in {\mathcal M}^\perp\). We would have y − y′ = z′ − z. But \(y - y'\in {\mathcal M}\) and \(z'- z \in {\mathcal M}^\perp\). Hence, both belong to \({\mathcal M}\cap {\mathcal M}^\perp=\{0\}\), showing that y − y′ = z′ − z = 0.□
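
As a concrete finite-dimensional illustration of Theorem A.3, the following NumPy sketch (an illustration added for this review, assuming that \({\mathcal M}\) is realized as the column space of a given matrix) builds the orthogonal projector onto the subspace from a QR factorization and checks that x = y + z with y in the subspace and z orthogonal to it.

```python
import numpy as np

rng = np.random.default_rng(1)

# M = column space of a 5x2 complex matrix, a closed linear subspace of C^5
M_cols = rng.standard_normal((5, 2)) + 1j * rng.standard_normal((5, 2))
Q, _ = np.linalg.qr(M_cols)        # orthonormal basis of the subspace
P = Q @ Q.conj().T                 # orthogonal projector onto the subspace

x = rng.standard_normal(5) + 1j * rng.standard_normal(5)
y = P @ x                          # component of x in the subspace (best approximant)
z = x - y                          # component in the orthogonal complement

assert np.allclose(x, y + z)
assert np.allclose(M_cols.conj().T @ z, 0)   # z is orthogonal to the subspace
```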

1.5 The Spectrum of a Matrix

The spectrum of a matrix A ∈ Mat (ℂ, n), denoted by σ(A), is the set of all λ ∈ ℂ for which the matrix \(\lambda\mathbb{1} -A \) has no inverse.

The characteristic polynomial of a matrix A ∈ Mat (ℂ, n) is defined by \( p_A(z) \mathrel{\mathop:}= \det (z\mathbb{1} -A) \). It is clearly a polynomial of degree n in z. It follows readily from these definitions that σ(A) coincides with the set of roots of p A . The elements of σ(A) are said to be the eigenvalues of A. If λ is an eigenvalue of A, the matrix \(A -\lambda\mathbb{1}\) has no inverse, and therefore, there exists at least one non-vanishing vector v ∈ ℂn such that \((A -\lambda\mathbb{1})v=0\), that is, such that Av = λv. Such a vector is said to be an eigenvector of A with eigenvalue λ. The set of all eigenvectors associated to a given eigenvalue (plus the null vector) is a linear subspace of ℂn, as one easily sees.

The multiplicity of a root λ of the characteristic polynomial of a matrix A ∈ Mat (ℂ, n) is called the algebraic multiplicity of the eigenvalue λ. The dimension of the subspace generated by the eigenvectors associated to the eigenvalue λ is called the geometric multiplicity of the eigenvalue λ. The algebraic multiplicity of an eigenvalue is always larger than or equal to its geometric multiplicity.

1.6 The Neighborhood of Singular Matrices

Proposition A.4

Let A ∈ Mat (ℂ, n) be arbitrary and let B ∈ Mat (ℂ, n) be a non-singular matrix. Then, there exist constants M 1 and M 2 (depending on A and B ) with 0 < M 1 ≤ M 2 such that A + μB is invertible for all μ ∈ ℂ with 0 < |μ| < M 1 and for all μ ∈ ℂ with |μ| > M 2 .

Proof

Since B has an inverse, we may write \(A+\mu B=\left(\mu\mathbb{1} + AB^{-1}\right)B\). Hence, A + μB has an inverse if and only if \(\mu\mathbb{1} + AB^{-1}\) is non-singular.

Let C ≡ − AB  − 1 and let {λ 1 , ..., λ n } ⊂ ℂ be the n not necessarily distinct roots of the characteristic polynomial p C of C. If all roots vanish, we take M 1 = M 2 > 0, arbitrary. Otherwise, let us define \(M_1\mathrel{\mathop:}=\min\{ |\lambda_k|, \; \lambda_k\neq 0\}\) and \(M_2\mathrel{\mathop:}=\max\{ |\lambda_k|, \; k=1, \, \ldots , \, n\}\). Then, the sets {μ ∈ ℂ| 0 < |μ| < M 1} and {μ ∈ ℂ| |μ| > M 2} do not contain roots of p C , and therefore, for μ in these sets, the matrix \(\mu\mathbb{1}-C=\mu\mathbb{1} + AB^{-1}\) is non-singular. □

1.7 Similar Matrices

Two matrices A ∈ Mat (ℂ, n) and B ∈ Mat (ℂ, n) are said to be similar if there is a non-singular matrix P ∈ Mat (ℂ, n) such that P  − 1 AP = B. One has the following elementary fact:

Proposition A.5

Let A and B ∈ Mat (ℂ, n) be two similar matrices. Then their characteristic polynomials coincide, p A  = p B , and therefore, their spectra also coincide, σ(A) = σ(B), as well as the geometric multiplicities of their eigenvalues.

Proof

Let P ∈ Mat (ℂ, n) be such that P  − 1 AP = B. Then, \( p_A(z) = \det(z \mathbb{1} -A) = \det\Big(P^{-1}(z \mathbb{1} -A)P\Big) = \det\big(z \mathbb{1} -P^{-1}AP\big) = \det(z\mathbb{1} -B) = p_B(z) \), for all z ∈ ℂ.□

1.8 The Spectrum of Products of Matrices

The next proposition contains a non-evident consequence of Propositions A.4 and A.5.

Proposition A.6

Let A, B ∈ Mat (ℂ, n). Then, the characteristic polynomials of the matrices AB and BA coincide: p AB  = p BA . Therefore, their spectra also coincide, σ(AB) = σ(BA), as well as the algebraic multiplicities of their eigenvalues.

Proof

If A or B (or both) are non-singular, then AB and BA are similar. In fact, in the first case, we can write AB = A(BA)A  − 1 and in the second one has AB = B  − 1(BA)B. In both cases, the claim follows from Proposition A.5. Let us now consider the case where neither A nor B is invertible. We know from Proposition A.4 that there exists M > 0 such that \(A+\mu \mathbb{1}\) is non-singular for all μ ∈ ℂ with 0 < |μ| < M. Hence, for such values of μ, we have by the argument above that \(p_{(A+\mu\mathbb{1})B}=p_{B(A+\mu\mathbb{1})}\). Now the coefficients of the polynomials \(p_{(A+\mu\mathbb{1})B}\) and \(p_{B(A+\mu\mathbb{1})}\) are themselves polynomials in μ and, therefore, continuous functions of μ. Hence, the equality \(p_{(A+\mu\mathbb{1})B}=p_{B(A+\mu\mathbb{1})}\) remains valid in the limit μ→0, leading to p AB  = p BA . □

Proposition A.6 can be extended to products of non-square matrices:

Proposition A.7

Let A ∈ Mat (ℂ, m, n) and B ∈ Mat (ℂ, n, m). Clearly, AB ∈ Mat (ℂ, m) and BA ∈ Mat (ℂ, n). Then, one has \(x^n p_{AB}(x)=x^m p_{BA}(x)\) . Therefore, σ(AB) ∖ {0} = σ(BA) ∖ {0}, i.e., the set of non-zero eigenvalues of AB coincides with the set of non-zero eigenvalues of BA .

Proof

Consider the two (m + n)×(m + n) matrices defined by

$$ A' \; \mathrel{\mathop:}= \; \begin{pmatrix} A & \mathbb{0}_{m, \; m} \\ \mathbb{0}_{n, \; n} & \mathbb{0}_{n, \; m} \end{pmatrix} \quad\mbox{ and }\quad B' \; \mathrel{\mathop:}= \; \begin{pmatrix} B & \mathbb{0}_{n, \; n} \\ \mathbb{0}_{m, \; m} & \mathbb{0}_{m, \; n} \end{pmatrix} \; \; . $$

See (8). It is easy to see that

$$ A'B' \!=\! \begin{pmatrix} AB & \mathbb{0}_{m, \; n} \\ \mathbb{0}_{n, \; m} & \mathbb{0}_{n, \; n} \end{pmatrix} \;\;\mbox{ and\;that }\;\; B'A' \!=\! \begin{pmatrix} BA & \mathbb{0}_{n, \; m} \\ \mathbb{0}_{m, \; n} & \mathbb{0}_{m, \; m} \end{pmatrix} . $$

From this, it is now easy to see that \(p_{A'B'}(x)=x^n p_{AB}(x)\) and that \(p_{B'A'}(x)=x^m p_{BA}(x)\). By Proposition A.6, one has \(p_{A'B'}(x) = p_{B'A'}(x)\), completing the proof.□
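
Proposition A.7 is easy to check numerically. The sketch below (a hedged illustration added for this review) compares the eigenvalues of AB and BA for randomly chosen rectangular matrices; the threshold used to discard the numerically vanishing eigenvalues is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 3, 5
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
B = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))

ev_AB = np.linalg.eigvals(A @ B)   # m eigenvalues
ev_BA = np.linalg.eigvals(B @ A)   # n eigenvalues; n - m of them vanish

# keep the non-zero eigenvalues only and sort them for comparison
nonzero = lambda ev: np.sort_complex(ev[np.abs(ev) > 1e-10])
assert np.allclose(nonzero(ev_AB), nonzero(ev_BA))
```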

1.9 Diagonalizable Matrices

A matrix A ∈ Mat (ℂ, n) is said to be diagonalizable if it is similar to a diagonal matrix. Hence, A ∈ Mat (ℂ, n) is diagonalizable if there exists a non-singular matrix P ∈ Mat (ℂ, n) such that P  − 1 AP is diagonal. The next theorem gives a necessary and sufficient condition for a matrix to be diagonalizable:

Theorem A.8

A matrix A ∈ Mat (ℂ, n) is diagonalizable if and only if it has n linearly independent eigenvectors, i.e., if the subspace generated by its eigenvectors is n-dimensional.

Proof

Let us assume that A has n linearly independent eigenvectors {v 1, ..., v n }, whose eigenvalues are { d 1 , ..., d n }, respectively. Let P ∈ Mat (ℂ, n) be defined by \( P \; = \; {{\Big[\!\! \Big[}} v^1, \; \ldots , \; v^n {{\Big]\!\! \Big]}} \). By (12), one has

$$ A P \; = \; {{\Big[\!\! \Big[}} Av^1, \; \ldots , \; Av^n {{\Big]\!\! \Big]}} \; = \; {{\Big[\!\! \Big[}} d_1 v^1, \; \ldots , \; d_n v^n {{\Big]\!\! \Big]}} $$

and by (13), one has \( {{\Big[\!\! \Big[}} d_1 v^1, \; \ldots , \; d_n v^n {{\Big]\!\! \Big]}} = PD \), where \(D \mathrel{\mathop:}= \mathrm{diag}\,\big(d_1, \; \ldots , \; d_n\big)\). Therefore, AP = PD. Since the columns of P are linearly independent, P is non-singular and one has P  − 1 AP = D, showing that A is diagonalizable.

Let us now assume that A is diagonalizable and that there is a non-singular P ∈ Mat (ℂ, n) such that \( P^{-1} A P \; = \; D \; = \; \mathrm{diag}\,\big(d_1, \; \ldots , \; d_n\big) \). It is evident that the vectors of the canonical basis (10) are eigenvectors of D, with D e a  = d a e a . Therefore, v a  = P e a are eigenvectors of A, since \( Av_a = A P{\mathbf e}_a = PD {\mathbf e}_a = P\big(d_a{\mathbf e}_a\big) = d_a P{\mathbf e}_a = d_a v_a \). To show that these vectors v a are linearly independent, assume that there are complex numbers α 1, ..., α n such that α 1 v 1 + ⋯ + α n v n  = 0 . Multiplying by P  − 1 from the left, we get α 1 e 1 + ⋯ + α n e n  = 0 , implying α 1 = ⋯ = α n  = 0, since the elements e a of the canonical basis are linearly independent.□

The spectral theorem is one of the fundamental results of functional analysis, and its version for bounded and unbounded self-adjoint operators in Hilbert spaces is of fundamental importance for the so-called probabilistic interpretation of quantum mechanics. Here we prove its simplest version for square matrices.

Theorem A.9

(Spectral Theorem for Matrices) A matrix A ∈ Mat (ℂ, n) is diagonalizable if and only if there exist r ∈ ℕ, 1 ≤ r ≤ n, scalars α 1, ..., α r  ∈ ℂ, and non-zero distinct projectors E 1 , ..., E r  ∈ Mat (ℂ, n) such that

$$ A \; = \; \sum\limits_{a=1}^r \alpha_a E_a \; , \label{decomposicaoespectral} $$
(46)

and

$$ \mathbb{1} \; = \; \sum\limits_{a=1}^r E_a \;, \label{OIUHYPoOprimo} $$
(47)

with E i E j  = δ i, j E j . The numbers α 1, ..., α r are the distinct eigenvalues of A.

The projectors E a in (46) are called the spectral projectors of A. The decomposition (46) is called spectral decomposition of A. In Proposition A.11, we will show how the spectral projections E a can be expressed in terms of polynomials in A. In Proposition A.12, we establish the uniqueness of the spectral decomposition of a diagonalizable matrix.

Proof of Theorem A.9

If A ∈ Mat (ℂ, n) is diagonalizable, there exists P ∈ Mat (ℂ, n) such that \(P^{-1}AP = D = \mathrm{diag}\, (\lambda_1 , \; \ldots , \; \lambda_n)\), where λ 1 , ..., λ n are the eigenvalues of A. Let us denote by { α 1 , ..., α r }, 1 ≤ r ≤ n, the set of all distinct eigenvalues of A.

One can clearly write \( D = \sum_{a=1}^{r} \alpha_a K_a \), where K a  ∈ Mat (ℂ, n) are diagonal matrices having 0 or 1 as diagonal elements, so that

$$ (K_a)_{ij} \; = \; \left\{ \begin{array}{ll} 1 \; , & \mbox{if }\; i=j \;\mbox{ and }\; (D)_{ii} = \alpha_a \; ,\\ 0 \; , & \mbox{if }\; i=j \;\mbox{ and }\; (D)_{ii} \neq \alpha_a \; ,\\ 0 \; , & \mbox{if }\; i\neq j \; . \end{array} \right. $$

Hence, (K a ) ij  = 1 if i = j and (D) ii  = α a and (K a ) ij  = 0 otherwise. It is trivial to see that

$$ \sum\limits_{a=1}^{r} K_a \; = \; \mathbb{1} \label{PoiOujjuJ} $$
(48)

and that

$$ K_a K_b \; = \; \delta_{a, \, b}\: K_a . \label{seifybvOIni} $$
(49)

Since A = PDP  − 1, one has \( A = \sum_{a=1}^{r} \alpha_a E_a \), where \(E_a \mathrel{\mathop:}= PK_a P^{-1}\). It is easy to prove from (48) that \( \mathbb{1} = \sum_{a=1}^r E_a \) and from (49) that E i E j  = δ i, j E j .

Reciprocally, let us now assume that A has a representation like (46), with the E a ’s having the above-mentioned properties. Let us first notice that for any vector x and for k ∈ {1, ..., r}, one has by (46)

$$ AE_k x \; = \; \sum\limits_{j=1}^{r}\alpha_j E_j E_k x \; = \; \alpha_k E_k x \; . $$

Hence, E k x is either zero or an eigenvector of A with eigenvalue α k . Therefore, the subspace \({\mathcal S}\) generated by all vectors \(\{E_k x, \; x\in{\mathbb{C}}^n, \; k=1, \; \ldots , \; r\}\) is a subspace of the space \({\mathcal A}\) generated by all eigenvectors of A. However, from (47), one has, for all x ∈ ℂn, \( x = \mathbb{1} x = \sum_{k=1}^{r} E_k x \), and this reveals that \({\mathbb{C}}^n={\mathcal S}\subset {\mathcal A}\). Hence, \( {\mathcal A}={\mathbb{C}}^n\), and by Theorem A.8, A is diagonalizable. □

The spectral theorem has the following corollary, known as the functional calculus:

Theorem A.10

(Functional Calculus) Let A ∈ Mat (ℂ, n) be diagonalizable and let \(\displaystyle A = \sum\limits_{a=1}^r \alpha_a E_a \) be its spectral decomposition. Then, for any polynomial p , one has \(\displaystyle p(A) = \sum\limits_{a=1}^r p(\alpha_a) E_a \) .

Proof

By the properties of the spectral projectors E a , one sees easily that \(\displaystyle A^2 = \sum\limits_{a, \; b=1}^r \alpha_a\alpha_b E_a E_b = \sum\limits_{a, \; b=1}^r \alpha_a\alpha_b \delta_{a, \; b }E_a = \sum\limits_{a=1}^r (\alpha_a)^2 E_a \). It is then easy to prove by induction that \(\displaystyle A^m = \sum\limits_{a=1}^r (\alpha_a)^m E_a \), for all m ∈ ℕ0 (by adopting the convention that \(A^0=\mathbb{1}\), the case m = 0 is simply (47)). From this, the rest of the proof is elementary.□

One can also easily show that for a non-singular diagonalizable matrix A ∈ Mat (ℂ, n) , one has

$$ A^{-1} \; = \; \sum\limits_{a=1}^r \frac{1}{\alpha_a} E_a \; . \label{eq:representacaoespectraldeinvesa} $$
(50)

1.10 Getting the Spectral Projections

One of the most useful consequences of the functional calculus is an explicit formula for the spectral projections of a diagonalizable matrix A in terms of polynomials in A.

Proposition A.11

Let A ∈ Mat (ℂ, n) be non-zero and diagonalizable and let A = α 1 E 1 + ⋯ + α r E r be its spectral decomposition. Let the polynomials p j , j = 1, ..., r, be defined by

$$ p_j(x) \; \mathrel{\mathop:}= \; \prod\limits_{{l=1}\atop {l\neq j}}^r \left(\frac{x-\alpha_l}{\alpha_j-\alpha_l}\right) \; . \label{eq:definicaodospolinoomiospj-ytivuytivTYRyr} $$
(51)

Then,

$$ E_j \; = \; p_j(A) \; = \; \left( \prod\limits_{{k=1}\atop {k\neq j}}^r \frac{1}{\alpha_j-\alpha_k} \right) \prod\limits_{{l=1}\atop {l\neq j}}^r \Big( A-\alpha_l\mathbb{1} \Big) \label{eq:obtendoosprojetoresespectrais-YTvTRVytrvUYtv} $$
(52)

for all j = 1, ..., r.

Proof

By the definition of the polynomials p j , it is evident that p j (α k ) = δ j, k . Hence, by Theorem A.10, \( p_j(A) = \sum\limits_{k=1}^r p_j(\alpha_k) E_k = E_j \).□
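
Formula (52) translates directly into a short computation. The sketch below (an illustration added for this review) assumes a generic matrix, whose eigenvalues are distinct and which is therefore diagonalizable; the eigenvalues themselves are obtained with a standard eigensolver. It then builds the spectral projectors as polynomials in A and verifies (46), (47), and (50).

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

alphas = np.linalg.eigvals(A)      # generically, n distinct eigenvalues (so r = n)
Id = np.eye(n, dtype=complex)

# spectral projectors E_j = prod_{l != j} (A - alpha_l 1)/(alpha_j - alpha_l), eq. (52)
E = []
for j, aj in enumerate(alphas):
    Ej = Id.copy()
    for l, al in enumerate(alphas):
        if l != j:
            Ej = Ej @ (A - al * Id) / (aj - al)
    E.append(Ej)

assert np.allclose(sum(E), Id)                                      # eq. (47)
assert np.allclose(sum(a * Ej for a, Ej in zip(alphas, E)), A)      # eq. (46)
assert np.allclose(sum(Ej / a for a, Ej in zip(alphas, E)), np.linalg.inv(A))  # eq. (50)
```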

1.11 Uniqueness of the Spectral Decomposition

Proposition A.12

The spectral decomposition of a diagonalizable matrix A ∈ Mat (ℂ, n) is unique.

Proof

Let \(\displaystyle A= \sum\limits_{k=1}^r \alpha_k E_k\) be the spectral decomposition of A as described in Theorem A.9, where α k , k = 1, ..., r, with 1 ≤ r ≤ n, are the distinct eigenvalues of A. Let \(\displaystyle A= \sum\limits_{k=1}^{s} \beta_k F_k\) be a second representation of A, where the β k ’s are distinct and where the F k ’s are non-vanishing and satisfy F j F l  = δ j, l F l and \(\displaystyle \mathbb{1}= \sum\limits_{k=1}^{s} F_k\). For a vector x ≠ 0, it holds \(x=\sum\limits_{k=1}^{s} F_kx\), so that not all vectors F k x vanish. Let \(F_{k_0}x\neq 0\). One has \(AF_{k_0}x=\sum\limits_{k=1}^{s} \beta_k F_kF_{k_0}x=\beta_{k_0}F_{k_0}x\). This shows that \(\beta_{k_0}\) is one of the eigenvalues of A and, hence, { β 1, ..., β s } ⊂ { α 1, ..., α r } and we must have s ≤ r. Let us order both sets such that β k  = α k for all 1 ≤ k ≤ s. Hence,

$$ A \; = \; \sum\limits_{k=1}^r \alpha_k E_k \; = \; \sum\limits_{k=1}^{s} \alpha_k F_k \; . \label{eq:JboiuyBIuytvIUtyuiyt} $$
(53)

Now, consider the polynomials p j , j = 1, ..., r, defined in (51), for which p j (α j ) = 1 and p j (α k ) = 0 for all k ≠ j. By the functional calculus, it follows from (53) that, for 1 ≤ j ≤ s,

$$ p_j(A) = \underbrace{\sum\limits_{k=1}^r p_j(\alpha_k) E_k}_{= E_j} = \underbrace{\sum\limits_{k=1}^{s} p_j(\alpha_k) F_k}_{= F_j} , \;\;\; \therefore \;\;\; E_j = F_j . $$

(The equality \(p_j(A)=\sum_{k=1}^{s} p_j(\alpha_k) F_k\) follows from the fact that the E k ’s and the F k ’s satisfy the same algebraic relations and, hence, the functional calculus also holds for the representation of A in terms of the F k ’s). Since \(\displaystyle \mathbb{1} = \sum\limits_{k=1}^r E_k \) and \(\displaystyle \mathbb{1} = \sum\limits_{k=1}^{s} F_k = \sum\limits_{k=1}^{s} E_k \) (because E j  = F j for all 1 ≤ j ≤ s), one has \(\displaystyle\sum\limits_{k=s+ 1}^r E_k=\mathbb{0}\). Hence, multiplying by E l , with s + 1 ≤ l ≤ r, it follows that \(E_l = \mathbb{0}\) for all s + 1 ≤ l ≤ r. This is only possible if r = s, since the E k ’s are non-vanishing. This completes the proof.□

1.12 Self-Adjointness and Diagonalizability

Let A ∈ Mat (ℂ, m, n). The adjoint matrix A* ∈ Mat (ℂ, n, m) is defined as the unique matrix for which the equality

$$ {\big\langle } u, \; Av {\big\rangle } \; = \; {\big\langle } A^*u, \; v {\big\rangle } $$

holds for all u ∈ ℂm and all v ∈ ℂn. If A ij are the matrix elements of A in the canonical basis, it is an easy exercise to show that \(\big(A^*\big)_{ij}=\overline{A_{ji}}\), where the bar denotes complex conjugation. It is trivial to prove that the following properties hold: 1. \(\big(\alpha_1 A_1 + \alpha_2 A_2\big)^*=\overline{\alpha_1}A_1^*+ \overline{\alpha_2}A_2^*\) for all A 1, A 2 ∈ Mat (ℂ, m, n) and all α 1, α 2 ∈ ℂ; 2. \(\big(AB\big)^*=B^*A^*\) for all A ∈ Mat (ℂ, m, n) and B ∈ Mat (ℂ, p, m); 3. \(A^{**}\equiv\big(A^*\big)^*=A\) for all A ∈ Mat (ℂ, m, n).
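
These relations, and the convention that the scalar product is antilinear in its first argument, can be checked with a few lines of NumPy (an added sketch, not part of the original text; note that numpy.vdot conjugates its first argument, matching the convention used here).

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 4
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
u = rng.standard_normal(m) + 1j * rng.standard_normal(m)
v = rng.standard_normal(n) + 1j * rng.standard_normal(n)

A_star = A.conj().T                      # (A*)_{ij} = conjugate of A_{ji}

# np.vdot(a, b) = sum(conj(a) * b), i.e. the scalar product <a, b> of the text
assert np.isclose(np.vdot(u, A @ v), np.vdot(A_star @ u, v))   # <u, Av> = <A*u, v>
assert np.allclose(A_star.conj().T, A)                          # A** = A
```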

A square matrix A ∈ Mat (ℂ, n) is said to be self-adjoint if A = A*. A square matrix U ∈ Mat (ℂ, n) is said to be unitary if U  − 1 = U*. Self-adjoint matrices have real eigenvalues. In fact, if A is self-adjoint, λ ∈ σ(A), and v ∈ ℂn is a normalized (i.e., ||v|| = 1) eigenvector of A with eigenvalue λ, then \(\lambda=\lambda{\langle } v, \; v{\rangle }= {\langle } v, \; \lambda v{\rangle }={\langle } v, \; Av{\rangle }= {\langle } Av, \; v{\rangle }={\langle } \lambda v, \; v{\rangle }= \overline{\lambda}{\langle } v, \; v{\rangle }= \overline{\lambda} \), showing that λ ∈ ℝ.

1.13 Projectors and Orthogonal Projectors

A matrix E ∈ Mat (ℂ, n) is said to be a projector if E 2 = E, and it is said to be an orthogonal projector if it is a self-adjoint projector: E 2 = E and E* = E. An important example of an orthogonal projector is the following. Let v ∈ ℂn be such that ||v|| = 1 and define

$$ P_v u \; \mathrel{\mathop:}= \; {\langle } v, \; u {\rangle }\: v\; , \label{eq:defdePv-IUyboBYtibuyg} $$
(54)

for each u ∈ ℂn. In the canonical basis, the matrix elements of P v are given by \(\big(P_v\big)_{ij}=\overline{v_j}v_i\), where the v k ’s are the components of v. One has,

$$ P_v^2 u = {\langle } v, \; u {\rangle }\: P_v v = {\langle } v, \; u {\rangle }\: {\langle } v, \; v {\rangle }\: v = {\langle } v, \; u {\rangle }\: v = P_v u \; , $$

proving that \(P_v^2 = P_v\). On the other hand, for any a, b ∈ ℂn, we get

$$ \begin{array}{rll} {\langle } a , \; P_v b {\rangle } &=& {\big\langle } a, \; {\langle } v, \; b {\rangle }\: v {\big\rangle } = {\langle } v, \; b {\rangle }\: {\langle } a, \; v {\rangle }\\ &=& \left\langle\overline{{\langle } a, \; v {\rangle }}\: v, \; b \right\rangle = {\big\langle } {\langle } v, \; a {\rangle }\: v, \; b {\big\rangle } = {\langle } P_v a , \; b {\rangle } , \end{array} $$

showing that \(P_v^* = P_v\). Another relevant fact is that if v 1 and v 2 are orthogonal unit vectors, i.e., \({\langle } v_i, \; v_j {\rangle } =\delta_{ij}\), then \(P_{v_1} P_{v_2} = P_{v_2} P_{v_1} =0\). In fact, for any a ∈ ℂn, one has

$$ \begin{array}{rll} P_{v_1} \big(P_{v_2} a\big) &=& P_{v_1} \big({\langle } v_2, \; a {\rangle }\: v_2\big) = {\langle } v_2, \; a {\rangle }\: P_{v_1} v_2\\ &=& {\langle } v_2, \; a {\rangle }\: {\langle } v_1, \; v_2 {\rangle }\: v_1 = 0 . \end{array} $$

This shows that \(P_{v_1} P_{v_2}=0\) and, since both are self-adjoint, one has also \(P_{v_2} P_{v_1}=0\).
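
In matrix form, the component formula \(\big(P_v\big)_{ij}=\overline{v_j}v_i\) says that P v is the rank-one matrix v v*. A minimal NumPy sketch of the properties just derived (added here for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
v = rng.standard_normal(4) + 1j * rng.standard_normal(4)
v = v / np.linalg.norm(v)                    # normalize, so that ||v|| = 1

P_v = np.outer(v, v.conj())                  # (P_v)_{ij} = v_i * conj(v_j)

assert np.allclose(P_v @ P_v, P_v)           # P_v is a projector
assert np.allclose(P_v.conj().T, P_v)        # P_v is self-adjoint
u = rng.standard_normal(4) + 1j * rng.standard_normal(4)
assert np.allclose(P_v @ u, np.vdot(v, u) * v)   # P_v u = <v, u> v, eq. (54)
```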

1.14 Spectral Theorem for Self-Adjoint Matrices

The following theorem establishes a fundamental fact about self-adjoint matrices.

Theorem A.13

(Spectral Theorem for Self-Adjoint Matrices) If A ∈ Mat (ℂ, n) is self-adjoint, one can find an orthonormal set {v 1, ..., v n } of eigenvectors of A with real eigenvalues λ 1, ..., λ n , respectively, and one has the spectral representation

$$ \label{eq:decomposicaoespectralparaautoadjuntos0} A \; = \; \lambda_1 P_{v_1} + \cdots + \lambda_n P_{v_n} \; , $$
(55)

where \(P_{v_k}u\mathrel{\mathop:}= {\langle } v_k, \; u{\rangle } v_k\) satisfy \(P_{v_k}^*=P_{v_k}\) and \(P_{v_j}P_{v_k}=\delta_{jk}P_{v_k}\) and one has \(\sum\limits_{k=1}^nP_{v_k}=\mathbb{1}\) .

Therefore, if A ∈ Mat (ℂ, n) is a self-adjoint matrix, it is diagonalizable. Moreover, there is a unitary P ∈ Mat (ℂ, n) such that \(P^{-1}AP=\mathrm{diag}\,\big(\lambda_1, \; \ldots , \; \lambda_n\big)\) .

Proof

Let λ 1 ∈ ℝ be an eigenvalue of A and let v 1 be a corresponding eigenvector. Let us choose ||v 1|| = 1. Define A 1 ∈ Mat (ℂ, n) by \( A_1 \mathrel{\mathop:}= A - \lambda_1 P_{v_1} \). Since both A and \(P_{v_1}\) are self-adjoint, so is A 1, since λ 1 is real.

It is easy to check that A 1 v 1 = 0. Moreover, \([v_1]^\perp\), the subspace orthogonal to v 1, is invariant under the action of A 1. In fact, for \(w \in [v_1]^\perp\), one has \( {\langle } A_1 w , \; v_1 {\rangle } = {\langle } w , \;A_1 v_1 {\rangle } = 0 \), showing that \(A_1 w\in[v_1]^\perp\).

It is therefore obvious that the restriction of A 1 to \([v_1]^\perp\) is also a self-adjoint operator. Let \(v_2 \in [v_1]^\perp\) be an eigenvector of this self-adjoint restriction with eigenvalue λ 2 and choose ||v 2|| = 1. Define

$$ A_2 \; \mathrel{\mathop:}= \; A_1 - \lambda_2 P_{v_2} \; = \; A - \lambda_1 P_{v_1} - \lambda_2 P_{v_2} \; . $$

Since λ 2 is real, A 2 is self-adjoint. Moreover, A 2 annihilates the vectors in the subspace [v 1 , v 2] and keeps \([v_1 , \; v_2]^\perp\) invariant. In fact, \(A_2 v_1=A v_1 - \lambda_1 P_{v_1} v_1 - \lambda_2 P_{v_2}v_1 = \lambda_1 v_1 - \lambda_1 v_1 - \lambda_2 {\langle} v_2 , \; v_1{\rangle } v_2 = 0\), since \({\langle } v_2 , \; v_1{\rangle } = 0\). Analogously, \(A_2 v_2 = A_1 v_2 - \lambda_2 P_{v_2} v_2 = \lambda_2 v_2 - \lambda_2 v_2 = 0\). Finally, for any α, β ∈ ℂ and \(w \in [v_1 , \; v_2]^\perp\), one has \( {\big\langle } A_2 w , \; (\alpha v_1 + \beta v_2){\big\rangle } = {\big\langle } w , \; A_2 (\alpha v_1 + \beta v_2){\big\rangle } = 0 \), showing that \([v_1 , \; v_2]^\perp\) is invariant by the action of A 2.

Proceeding inductively, we find a set of vectors {v 1 , ..., v n }, with ||v k || = 1 and with v a  ∈ [v 1 , ..., \( \; v_{a-1}]^\perp \) for 2 ≤ a ≤ n, and a set of real numbers { λ 1 , ..., λ n } such that \( A_n = A - \lambda_1 P_{v_1} - \cdots -\lambda_n P_{v_n} \) annihilates the subspace [v 1 , ..., v n ]. But, since {v 1 , ..., v n } is an orthonormal set, one must have \( [v_1 , \; \ldots , \; v_{n}]={\mathbb{C}}^n\), and therefore, we must have A n  = 0, meaning that

$$ \label{decomposicaoespectralparaautoadjuntos} A \; = \; \lambda_1 P_{v_1} + \cdots + \lambda_n P_{v_n} \; . $$
(56)

One has \(P_{v_k}P_{v_l} = \delta_{k, \, l}\: P_{v_k}\), since \({\langle } v_k, \; v_l{\rangle }=\delta_{kl}\). Moreover, since {v 1 , ..., v n } is a basis in ℂn, one has

$$ \label{osiPIUYBfO} x \; = \; \alpha_1 v_1 + \cdots + \alpha_n v_n $$
(57)

for all x ∈ ℂn. By taking the scalar product with v k , one gets that \(\alpha_k={\langle } v_k, \; x{\rangle }\) and, hence,

$$ \begin{array}{rll} x &=& {\langle } v_1 , \; x {\rangle } v_1 + \cdots + {\langle } v_n , \; x {\rangle } v_n\\ &=& P_{v_1}x + \cdots + P_{v_n}x = \left( P_{v_1} + \cdots + P_{v_n} \right)x . \end{array} $$

Since x was an arbitrary element of ℂn, we established that \( P_{v_1} + \cdots + P_{v_n} = \mathbb{1} \).

It follows from (56) that Av a  =  λ a v a . Hence, each v k is an eigenvector of A with eigenvalue λ k . By Theorem A.8, A is diagonalizable: There is P ∈ Mat (ℂ, n) such that \(P^{-1}AP=\mathrm{diag}\,\big(\lambda_1, \; \ldots , \; \lambda_n\big)\). As we saw in the proof of Theorem A.8, we can choose \( P = {{\Big[\!\! \Big[}} v^1, \; \ldots , \; v^n {{\Big]\!\! \Big]}} \). This is, however, a unitary matrix, since, as one easily checks,

$$ P^* P \; = \; \begin{pmatrix} {\langle } v_1 , \; v_1 {\rangle } & \cdots & {\langle } v_1 , \; v_n {\rangle } \\ \vdots & \ddots & \vdots \\ {\langle } v_n , \; v_1 {\rangle } & \cdots & {\langle } v_n , \; v_n {\rangle } \end{pmatrix} \; = \; \mathbb{1} \;, $$

because \({\langle } v_a , \; v_b {\rangle } = \delta_{a, \, b}\).□
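
Numerically, the content of Theorem A.13 is reproduced by any Hermitian eigensolver. The sketch below (an added illustration) uses numpy.linalg.eigh rather than the inductive construction of the proof and checks the representation (56), the resolution of the identity, and the unitarity of P.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (B + B.conj().T) / 2                   # a self-adjoint (Hermitian) matrix

lam, P = np.linalg.eigh(A)                 # real eigenvalues, orthonormal eigenvectors
assert np.allclose(P.conj().T @ P, np.eye(n))            # P is unitary
assert np.allclose(P.conj().T @ A @ P, np.diag(lam))     # P^{-1} A P is diagonal

# spectral representation A = sum_k lambda_k P_{v_k}, eq. (56)
projs = [np.outer(P[:, k], P[:, k].conj()) for k in range(n)]
assert np.allclose(sum(l * Pk for l, Pk in zip(lam, projs)), A)
assert np.allclose(sum(projs), np.eye(n))                # resolution of the identity
```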

1.15 The Polar Decomposition Theorem for Square Matrices

It is well known that every complex number z can be written in the so-called polar form z = |z|e , where |z| ≥ 0 and θ ∈ [ − π, π), with \(|z|\mathrel{\mathop:}=\sqrt{\overline{z}z}\) and \(e^{i\theta} \mathrel{\mathop:}= z|z|^{-1}\). There is an analogous claim for square matrices A ∈ Mat (ℂ, n). This is the content of the so-called Polar Decomposition Theorem, Theorem A.14, below. Let us make some preliminary remarks.

Let A ∈ Mat (ℂ, n) and consider A* A. One has (A* A)* = A* A ** = A* A , and hence, A* A is self-adjoint. By Theorem A.13, we can find an orthonormal set {v k , k = 1, ..., n} of eigenvectors of A* A, with eigenvalues d k , k = 1, ..., n, respectively, with the matrix

$$ P\; \mathrel{\mathop:}= \; {{\Big[\!\! \Big[}} v_1, \; \ldots , \; v_n {{\Big]\!\! \Big]}} \label{eq:iuybuytrTYFGJhddd} $$
(58)

being unitary and such that \(P^*\big(A^* A\big)P = D\mathrel{\mathop:}=\) diag (d 1, ..., d n ). One has d k  ≥ 0 since \( d_k \|v_k\|^2 = d_k {\langle } v_k, \; v_k{\rangle } = {\langle } v_k, \; A^*Av_k{\rangle } = {\langle } Av_k, \; Av_k{\rangle } = \|Av_k\|^2 \) and, hence, \(d_k=\|Av_k\|^2/\|v_k\|^2 \geq 0\).

Define \( D^{1/2} \mathrel{\mathop:}= \mathrm{diag}\, \left(\sqrt{d_1}, \; \ldots , \; \sqrt{d_n}\right)\). One has \(\left(D^{1/2}\right)^2 = D\). Moreover, \(\left(D^{1/2}\right)^*= D^{1/2}\), since every \(\sqrt{d_k}\) is real. The non-negative numbers \(\sqrt{d_1}, \; \ldots , \; \sqrt{d_n}\) are called the singular values of A.

Define the matrix \(\sqrt{A^*A}\in\mathrm{Mat}\,({\mathbb{C}}, \; n)\) by

$$ \sqrt{A^*A} \; \mathrel{\mathop:}= \; PD^{1/2}P^* \; . \label{eq:definicaoderaizdeAstarA} $$
(59)

The matrix \(\sqrt{A^*A}\) is self-adjoint, since \(\left(\sqrt{A^*A}\right)^*=\left(PD^{1/2}P^* \right)^* =PD^{1/2}P^*= \sqrt{A^*A}\). Notice that \(\left(\sqrt{A^*A}\right)^2 = P(D^{1/2})^2P^* =PDP^* =A^*A\). From this, it follows that

$$ \begin{array}{rll} \left(\det\left(\sqrt{A^*A}\right)\right)^2 &=& \det\left(\left(\sqrt{A^*A}\right)^2\right)\\ &=& \det(A^*A) = \det(A^*)\det(A)\\ &=& \overline{\det(A)}\det(A) = |\det(A)|^2 . \end{array} $$

Hence, \(\det\left(\sqrt{A^*A}\right)=|\det(A)|\) and, therefore, \(\sqrt{A^*A}\) is invertible if and only if A is invertible.

We will denote \(\sqrt{A^*A}\) by |A|, following the analogy suggested by the complex numbers. Now we can formulate the polar decomposition theorem for matrices:

Theorem A.14

(Polar Decomposition Theorem) If A ∈ Mat (ℂ, n), there is a unitary matrix U ∈ Mat (ℂ, n) such that

$$ A \; = \; U\sqrt{A^*A} \; . \label{eq:adecomposicaopolardematrizes} $$
(60)

If A is non-singular, then U is unique. The representation (60) is called the polar representation of A .

Proof

As above, let d k , k = 1, ..., n be the eigenvalues of A*A and let v k , k = 1, ..., n be a corresponding orthonormal set of eigenvectors: \(A^*Av_k=d_kv_k\) and \({\langle } v_k, \; v_l{\rangle }=\delta_{k\, l}\) (see Theorem A.13).

Since d k  ≥ 0 we order them in a way that d k  > 0 for all k = 1, ..., r and d k  = 0 for all k = r + 1, ..., n. Hence,

$$ Av_k \; = \; 0 \mbox{ for all } k\; = \; r+1, \; \ldots , \; n \; , \label{eq:sidpfoD3susudygs} $$
(61)

because \(A^*Av_k=0\) implies \(0={\langle } v_k, \; A^*Av_k{\rangle }={\langle } Av_k,\) \( \; Av_k{\rangle } =\|Av_k\|^2\).

For k = 1, ..., r, let w k be the vectors defined by

$$ w_k \; \mathrel{\mathop:}= \; \frac{1}{\sqrt{d_k}}Av_k \; , \quad k\; =\; 1, \, \ldots , \, r \; . \label{eq:nkhbfcrtrdtfRR} $$
(62)

It is easy to see that

$$ \begin{array}{rll} {\langle } w_k, \; w_l{\rangle } &=& \frac{1}{\sqrt{d_kd_l}}{\langle } A v_k, \; Av_l{\rangle } = \frac{1}{\sqrt{d_kd_l}}{\langle } A^*A v_k, \; v_l{\rangle }\\ &=& \frac{d_k}{\sqrt{d_kd_l}}{\langle } v_k, \; v_l{\rangle } = \frac{d_k}{\sqrt{d_kd_l}}\delta_{k\, l} = \delta_{k\, l} , \end{array} $$

for all k, l = 1, ..., r. Hence, {w k , k = 1, ..., r} is an orthonormal set. We can add to this set an additional orthonormal set {w k , k = r + 1, ..., n} in the orthogonal complement of the set generated by {w k , k = 1, ..., r} and get a new orthonormal set {w k , k = 1, ..., n} as a basis for ℂn.

Let P ∈ Mat (ℂ, n) be defined as in (58) and let Q and U be elements of Mat (ℂ, n) defined by

$$ Q \; \mathrel{\mathop:}= \; {{\Big[\!\! \Big[}} w_1, \; \ldots , \; w_n {{\Big]\!\! \Big]}} \; , \qquad U \; \mathrel{\mathop:}= \; QP^* \;. $$

Since \(\big\{v_k,\; k =1, \, \ldots , \, n\big\}\) and \(\big\{w_k,\; k =1, \, \ldots , \, n\big\}\) are orthonormal sets, one easily sees that P and Q are unitary and, therefore, U is also unitary.

It is easy to show that AP = QD 1/2, where \(D^{1/2}\mathrel{\mathop:}=\mathrm{diag}\,\left(\sqrt{d_1}, \; \ldots , \; \sqrt{d_n}\right)\). In fact,

$$ \begin{array}{rll} AP &\stackrel{\text{(58)}}{=}& A{{\Big[\!\! \Big[}} v_1, \; \ldots , \; v_n {{\Big]\!\! \Big]}} \stackrel{\text{(12)}}{=} {{\Big[\!\! \Big[}} Av_1, \; \ldots , \; Av_n {{\Big]\!\! \Big]}}\\ &\stackrel{\text{(61)}}{=}& {{\Big[\!\! \Big[}} Av_1, \; \ldots , \; Av_r, \; 0, \; \ldots , \;0 {{\Big]\!\! \Big]}} \\ &\stackrel{\text{(62)}}{=}& {{\Big[\!\! \Big[}} \sqrt{d_1}w_1, \; \ldots , \; \sqrt{d_r}w_r, \; 0, \; \ldots , \;0 {{\Big]\!\! \Big]}}\\ &\stackrel{\text{(13)}}{=}& {{\Big[\!\! \Big[}} w_1, \; \ldots , \; w_n {{\Big]\!\! \Big]}} D^{1/2} = QD^{1/2} . \end{array} $$

Now, since AP = QD 1/2, it follows that \(A=QD^{1/2}P^* = UPD^{1/2}P^* \stackrel{\text{(59)}}{=}U\sqrt{A^*A}\), as we wanted to show.

To show that U is uniquely determined if A is invertible, assume that there exists U′ such that \(A=U\sqrt{A^*A}=U'\sqrt{A^*A}\). We noticed above that \(\sqrt{A^*A}\) is invertible if and only if A is invertible. Hence, if A is invertible, the equality \(U\sqrt{A^*A}=U'\sqrt{A^*A}\) implies U = U′. If A is not invertible, the arbitrariness of U lies in the choice of the orthonormal set {w k , k = r + 1, ..., n}.□
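
In practice, the polar decomposition is conveniently obtained from a singular value decomposition (anticipating Theorem A.16): if A = V 0 SW*, then \(\sqrt{A^*A}=WSW^*\) and U = V 0 W*. The following NumPy sketch (an added illustration, not the constructive proof above) verifies (60) along this route.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

V0, s, Wh = np.linalg.svd(A)                  # A = V0 @ diag(s) @ Wh, with Wh = W*
sqrt_AstarA = Wh.conj().T @ np.diag(s) @ Wh   # sqrt(A* A), self-adjoint
U = V0 @ Wh                                   # the unitary factor

assert np.allclose(U.conj().T @ U, np.eye(n))   # U is unitary
assert np.allclose(U @ sqrt_AstarA, A)          # A = U sqrt(A* A), eq. (60)
```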

The following corollary is elementary:

Theorem A.15

Let A ∈ Mat (ℂ, n). Then, there exists a unitary matrix V ∈ Mat (ℂ, n) such that

$$ A \; = \; \sqrt{AA^*}\, V \; . \label{eq:adecomposicaopolardematrizes-II} $$
(63)

If A is non-singular, then V is unique.

Proof

For the matrix A*, relation (60) says that \(A^*=U_0\sqrt{(A^*)^*A^*}=U_0\sqrt{AA^*}\) for some unitary U 0. Since \(\sqrt{AA^*}\) is self-adjoint, one has \(A=\sqrt{AA^*}\, U_0^*\). Identifying \(V\equiv U_0^*\), we get what we wanted.□

The polar decomposition theorem can be generalized to bounded or closed unbounded operators acting on Hilbert spaces and even to C*-algebras. See, e.g., [29] and [30].

1.16 Singular Values Decomposition

The polar decomposition theorem, Theorem A.14, has a corollary of particular interest.

Theorem A.16

(Singular Values Decomposition Theorem) Let A ∈ Mat (ℂ, n). Then, there exist unitary matrices V and W ∈ Mat (ℂ, n) such that

$$ A \; = \; VSW^* \; , \label{eq:adecomposicaoemvaloressingularesdematrizes} $$
(64)

where S ∈ Mat (ℂ, n) is a diagonal matrix whose diagonal elements are the singular values of A , i.e., the eigenvalues of \(\sqrt{A^*A}\) .

Proof

The claim follows immediately from (60) and from (59) by taking V = UP, W = P, and S = D 1/2.□
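
As a quick numerical check (added here for illustration), the singular values returned by a standard SVD routine coincide with the square roots of the eigenvalues of A*A, i.e., with the eigenvalues of \(\sqrt{A^*A}\):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

s = np.linalg.svd(A, compute_uv=False)        # singular values, in decreasing order
d = np.linalg.eigvalsh(A.conj().T @ A)        # eigenvalues of A*A, in increasing order
assert np.allclose(s, np.sqrt(d[::-1]))       # singular values = sqrt(eigenvalues of A*A)
```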

Theorem A.16 can be generalized to rectangular matrices. In what follows, m, n ∈ ℕ and we will use definitions (4), (8), and relation (9), which allow one to map rectangular matrices injectively into certain square matrices.

Theorem A.17

(Singular Values Decomposition Theorem. General Form) Let A ∈ Mat (ℂ, m, n). Then, there exist unitary matrices V and W ∈ Mat (ℂ, m + n) such that

$$ A \; = \; I_{m, \; m+n}VSW^*J_{ m+n , \; n} \; , \label{eq:adecomposicaoemvaloressingularesdematrizes-GERAL} $$
(65)

where S ∈ Mat (ℂ, m + n) is a diagonal matrix whose diagonal elements are the singular values of A′ (defined in (8)), i.e., are the eigenvalues of \(\sqrt{(A')^*A'}\) .

Proof

The matrix A′ ∈ Mat (ℂ, m + n) is a square matrix, and by Theorem A.16, it can be written in terms of a singular value decomposition A′ = VSW* with V and W ∈ Mat (ℂ, m + n), both unitary, and S ∈ Mat (ℂ, m + n) being a diagonal matrix whose diagonal elements are the singular values of A′. Therefore, (65) follows from (9). □

Appendix 2: Singular Values Decomposition and Existence of the Moore–Penrose Pseudoinverse

We will now present a second proof of the existence of the Moore–Penrose pseudoinverse of a general matrix A ∈ Mat (ℂ, m , n) making use of Theorem A.16. We first consider square matrices and later consider general rectangular matrices.

2.1 The Moore–Penrose Pseudoinverse of Square Matrices

Let us first consider square diagonal matrices. If D ∈ Mat (ℂ, n) is a diagonal matrix, its Moore–Penrose pseudoinverse is given by D  +  ∈ Mat (ℂ, n), where, for i = 1, ..., n, one has

$$ \big(D^+\big)_{ii} \; = \; \left\{ \begin{array}{cl} \big(D_{ii}\big)^{-1} \;, & \mbox{if } D_{ii}\neq 0\;, \\ 0 \; , & \mbox{if } D_{ii} = 0\;. \end{array} \right. $$

It is elementary to check that DD  +  D = D, D  +  DD  +  = D  +  and that DD  +  and D  +  D are self-adjoint. Actually, DD  +  = D  +  D, a diagonal matrix whose diagonal elements are either 0 or 1:

$$ \big(DD^+\big)_{ii}=\big(D^+D\big)_{ii} \; = \; \left\{ \begin{array}{cl} 1 \; , & \mbox{if } D_{ii}\neq 0\;, \\ 0 \; , & \mbox{if } D_{ii} = 0\;. \end{array} \right. $$

Now, let A ∈ Mat (ℂ, n) and let A = VSW* be its singular values decomposition (Theorem A.16). We claim that its Moore–Penrose pseudoinverse A  +  is given by

$$ A^+ \; = \; WS^+V^* \; . $$
(66)

In fact, \(AA^+A=\big(VSW^*\big)\big(WS^+V^*\big)\big(VSW^*\big)=VSS^+SW^* = VSW^* = A\) and

$$ \begin{array}{rll} A^+AA^+ &=& \big(WS^+V^*\big)\big(VSW^*\big)\big(WS^+V^*\big)\\ &=& WS^+SS^+V^* = WS^+V^* = A^+ . \end{array} $$

Moreover, \(AA^+=\big(VSW^*\big)\big(WS^+V^*\big)=V\big(SS^+\big)V^*\) is self-adjoint, since SS  +  is a diagonal matrix with diagonal elements 0 or 1. Analogously, \(A^+A=\big(WS^+V^*\big)\) \(\big(VSW^*\big)=W\big(S^+S\big)W^*\) is self-adjoint.
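
Equation (66) can be implemented in a few lines. The sketch below (an illustration added for this review) builds A +  from an SVD, using an arbitrary cutoff to decide which singular values are treated as zero, verifies the four defining properties, and compares the result with numpy.linalg.pinv.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A[:, 0] = A[:, 1]                            # make A singular, so A+ is not A^{-1}

V, s, Wh = np.linalg.svd(A)                  # A = V @ diag(s) @ Wh
s_plus = np.array([1.0 / x if x > 1e-12 else 0.0 for x in s])
A_plus = Wh.conj().T @ np.diag(s_plus) @ V.conj().T     # A+ = W S+ V*, eq. (66)

assert np.allclose(A @ A_plus @ A, A)                   # A A+ A = A
assert np.allclose(A_plus @ A @ A_plus, A_plus)         # A+ A A+ = A+
assert np.allclose((A @ A_plus).conj().T, A @ A_plus)   # A A+ is self-adjoint
assert np.allclose((A_plus @ A).conj().T, A_plus @ A)   # A+ A is self-adjoint
assert np.allclose(A_plus, np.linalg.pinv(A))           # agrees with NumPy's pinv
```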

2.2 The Moore–Penrose Pseudoinverse of Rectangular Matrices

Consider now A ∈ Mat (ℂ, m, n) and let A′ ∈ Mat (ℂ, m + n) be the (m + n)×(m + n) matrix defined in (8). Since A′ is a square matrix, it has, by the comments above, a unique Moore–Penrose pseudoinverse ( A′) +  satisfying

  1. \(A'\big(A'\big)^+A'=A'\),

  2. \(\big(A'\big)^+A'\big(A'\big)^+=\big(A'\big)^+\),

  3. \(A'\big(A'\big)^+\) and \(\big(A'\big)^+A'\) are self-adjoint.

In what follows, we will show that A  +  ∈ Mat (ℂ, n, m) is given by

$$ A^+ \; \mathrel{\mathop:}= \; I_{n,\; m+n } \big(A'\big)^+ J_{m+n, \; m} \;, \label{eq:apseudoinversaparamatrizesretangulares} $$
(67)

with the definitions (4) and (5), i.e.,

$$ A^+ \; = \; I_{ n, \; m+n} \Big( J_{m+n, \; m} A I_{ n, \; m+n } \Big)^+ J_{m+n, \; m} \;. \label{eq:apseudoinversaparamatrizesretangulares-2} $$
(68)

The starting point is the existence of the Moore–Penrose pseudoinverse of the square matrix A′. Relation \(A'\big(A'\big)^+A'=A'\) means, using definition (8), that \( J_{ m+n, \; m} A \Big[ I_{ n, \; m+n } \big(A'\big)^+ J_{m+n, \; m}\Big] A I_{ n, \; m+n } = J_{m+n, \; m} A I_{ n, \; m+n } \) and from (6) and (7) it follows, by multiplying to the left by I m, m + n and to the right by J m + n , n , that A A  +  A = A , one of the relations we wanted to prove.

Relation \(\big(A'\big)^+A'\big(A'\big)^+=\big(A'\big)^+\) means, using definition (8), that \( \big(A'\big)^+ J_{m+n, \; m} A I_{ n, \; m+n } \big(A'\big)^+ = \big(A'\big)^+ \). Multiplying to the left by I n, m + n and to the right by J m + n, m , this establishes that A  +  A A  +  = A  +  .

Since \(A'\big(A'\big)^+\) is self-adjoint, it follows from the definition (8) that \(J_{ m+n, \; m} A I_{ n, \; m+n } \big(A'\big)^+ \) is self-adjoint, i.e.,

$$ J_{m+n, \; m} A I_{ n, \; m+n } \big(A'\big)^+ \; = \; \left( A I_{ n, \; m+n } \big(A'\big)^+\right)^* I_{m, \; m+n} \; . $$

Therefore, multiplying to left by I m, m + n and to the right by J m + n, m , it follows from (6) that

$$ \begin{array}{rll} A I_{ n, \; m+n } \big(A'\big)^+ J_{m+n, \; m} &=& I_{m, \; m+n}\Big( A I_{ n, \; m+n } (A')^+\Big)^*\\ &=& \left( A I_{ n, \; m+n } \big(A'\big)^+ J_{m+n, \; m} \right)^* , \end{array} $$

proving that A A  +  is self-adjoint.

Finally, since \(\big(A'\big)^+A'\) is self-adjoint, it follows from definition (8) that \(\big(A'\big)^+ J_{m+n, \; m} A I_{ n, \; m+n }\) is self-adjoint, i.e.,

$$ \big(A'\big)^+ J_{m+n, \; m} A I_{ n, \; m+n } \; = \; J_{ m+n , \; n} \left( \big(A'\big)^+ J_{ m+n, \; m} A \right)^* \;. $$

Hence, multiplying to the left by I n, m + n and to the right by J m + n , n , it follows from (7) that

$$ \begin{array}{rll} I_{n, \; m+n}(A')^+ J_{m+n, \; m} A &=& \left( \big(A'\big)^+ J_{m+n, \; m} A \right)^*J_{ m+n , \; n}\\ &=& \left( I_{ n, \; m+n} \big(A'\big)^+ J_{m+n, \; m} A \right)^* , \end{array} $$

establishing that A  +  A is self-adjoint. This proves that A  +  given in (67) is the Moore–Penrose pseudoinverse of A.
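
This construction is easy to check numerically: realizing A′ as A padded with zeros to an (m + n)×(m + n) matrix, as in (8) and as in the proof of Proposition A.7, relation (67) amounts to extracting the upper-left n×m block of ( A′) + . The sketch below (an added illustration, with the zero padding playing the role of the maps J and I) uses numpy.linalg.pinv for both pseudoinverses.

```python
import numpy as np

rng = np.random.default_rng(9)
m, n = 3, 5
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))

# A' : A padded with zeros to an (m+n) x (m+n) square matrix, as in (8)
A_prime = np.zeros((m + n, m + n), dtype=complex)
A_prime[:m, :n] = A

# eq. (67): A+ is the upper-left n x m block of (A')+
A_plus = np.linalg.pinv(A_prime)[:n, :m]

assert np.allclose(A_plus, np.linalg.pinv(A))   # the same Moore-Penrose pseudoinverse
```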

About this article

Cite this article

Barata, J.C.A., Hussein, M.S. The Moore–Penrose Pseudoinverse: A Tutorial Review of the Theory. Braz J Phys 42, 146–165 (2012). https://doi.org/10.1007/s13538-011-0052-z
