
2015 | Original Paper | Book Chapter

1. Direct Methods for Linear Systems

Author: Åke Björck

Published in: Numerical Methods in Matrix Computations

Publisher: Springer International Publishing


Abstract

By a matrix we mean a rectangular array of real or complex numbers ordered in \(m\) rows and \(n\) columns:
\[
A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix}.
\]
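To make the row/column indexing convention concrete, here is a minimal illustration (not from the book) of an \(m\times n\) array stored as a nested list in Python:

```python
# A rectangular array with m = 2 rows and n = 3 columns, stored as a
# nested list. A[i][j] holds the entry in row i, column j (0-based here,
# whereas the text numbers rows 1..m and columns 1..n).
A = [[1.0, 2.0, 3.0],
     [4.0, 5.0, 6.0]]

m = len(A)     # number of rows
n = len(A[0])  # number of columns
print(m, n)     # 2 3
print(A[1][2])  # 6.0
```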


Footnotes
1

James Joseph Sylvester (1814–1893), English mathematician, studied at St. John’s College, Cambridge. Because of his Jewish faith, Sylvester could not find an adequate research position in England. His most productive period was 1877–1884, when he held a chair at Johns Hopkins University, USA. Much of the terminology in linear algebra is due to him, e.g., “canonical form”, “minor”, and “nullity”.

 
2

Arthur Cayley (1821–1895), English mathematician, studied at Trinity College, Cambridge. He worked as a lawyer before being appointed Sadleirian Professor of Pure Mathematics at Cambridge in 1863. In 1858 he published “Memoir on the theory of matrices”, which contained the first abstract definition of a matrix. Besides developing the algebra of matrices, his most important work was in geometry and group theory.

 
3

To add to the confusion, in computer literature flops means floating-point operations per second.

 
4

Determinants of \(3\times 3\) matrices were first introduced by Seki and Leibniz in 1683. Cramer gave the general rule for \(n\times n\) matrices in 1750. In 1770 Laplace gave the expansion of a determinant now named after him. The term “determinant” was coined by Gauss in 1801 in a paper discussing quadratic forms. A paper from 1812 by Cauchy is the most complete of the early works on determinants.

 
5

Named after the Swiss mathematician Gabriel Cramer (1704–1752).

 
6

Issai Schur (1875–1941) was born in Russia, but studied at the University of Berlin, where he became full professor in 1919. Schur is mainly known for his fundamental work on the theory of groups, but he worked also in the field of matrices.

 
7

Tadeusz Banachiewicz (1882–1954) was a Polish astronomer and mathematician. In 1919 he became director of Krakow (Cracow) Observatory. In 1925 he developed a special kind of matrix algebra for “Cracovians” that brought him international recognition.

 
8

Hermann Minkowski (1864–1909) was born in Alexotas, Russian Empire (now Kaunas, Lithuania). He studied mathematics in Königsberg, where he became close friends with David Hilbert. In 1887 he obtained a professorship at Bonn and four years later he was appointed to ETH, Zürich, where Einstein attended several of his lectures. In 1902 he accepted a chair at Göttingen. Minkowski’s earlier work had been in quadratic forms and continued fractions, but in Göttingen he started to work on problems in mathematical physics. He developed a new view of space and time as a four-dimensional non-Euclidean space. This provided a mathematical framework for the theory of electrodynamics and relativity.

 
9

A function \(f(x)\) is convex on a convex set \(S\) if for any \(x_1\) and \(x_2\) in \(S\) and any \(\lambda \) with \(0 < \lambda < 1\), we have \(f(\lambda x_1 + (1-\lambda )x_2) \le \lambda f(x_1) + (1-\lambda )f(x_2)\).
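The defining inequality is easy to check numerically; a small spot-check (not from the book) for \(f(x) = x^2\), which is convex on all of \(\mathbb{R}\):

```python
# Numerical spot-check of the convexity inequality
# f(lam*x1 + (1-lam)*x2) <= lam*f(x1) + (1-lam)*f(x2)
# for f(x) = x^2, which is convex on all of R.
def f(x):
    return x * x

x1, x2 = -1.5, 4.0
checks = []
for lam in (0.1, 0.25, 0.5, 0.75, 0.9):
    lhs = f(lam * x1 + (1 - lam) * x2)
    rhs = lam * f(x1) + (1 - lam) * f(x2)
    checks.append(lhs <= rhs)
print(all(checks))  # True
```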

 
10

Ferdinand Georg Frobenius (1849–1917), German mathematician, received his doctorate at the University of Berlin, supervised by Weierstrass. In 1875 he took up a position as professor at ETH, Zürich. He remained there until 1892, when he succeeded Kronecker in Berlin, where he became the leading mathematician. His contributions to linear algebra include fundamental results in the theory of irreducible matrices. Issai Schur was Frobenius’ doctoral student.

 
11

John von Neumann was born János Neumann in Budapest in 1903, and died in Washington, D.C. in 1957. He studied under Hilbert in Göttingen (1926–1927), was appointed professor at Princeton University in 1931, and in 1933 joined the newly founded Institute for Advanced Study in Princeton. He built a framework for quantum mechanics, worked in game theory, and was one of the pioneers of computer science. His contributions to modern numerical analysis are surveyed by Grcar [114, 2011].

 
12

From the Latin verb specere meaning “to look”.

 
13

Aleksei Nikolaevich Krylov (1863–1945), Russian mathematician, joined the department of ship construction at the Maritime Academy of St. Petersburg. In 1931 he found a new method for determining the frequency of vibrations in mechanical systems using these subspaces.

 
14

Eugenio Beltrami (1835–1900) studied applied mathematics in Pavia and Milan. In 1864 he was appointed to the chair of geodesy at the University of Pisa and from 1866 on he was professor of rational mechanics in Bologna. In 1873 he moved to Rome and after three years he went back to Pavia. Beltrami made major contributions to the differential geometry of curves and surfaces.

 
15

George E. Forsythe (1917–1972), American mathematician, graduated from Brown University in 1939. As a meteorologist, he became interested in numerical analysis and computing, and in 1948 he joined the Institute for Numerical Analysis at UCLA. In 1957 he took up a position at Stanford University as a professor of mathematics. One of his PhD students was Cleve Moler, who later invented Matlab. At this time computer science was hardly thought of as a special discipline. Forsythe became president of the Association for Computing Machinery. In 1961 he created the Computer Science Department at Stanford University, which under his leadership had a profound influence on the development of the subject.

 
16

The first to interpret GE as triangular factorization seems to have been Banachiewicz in 1937.

 
17

James Hardy Wilkinson (1919–1986), English mathematician, graduated from Trinity College, Cambridge. He became Alan Turing’s assistant at the National Physical Laboratory in London in 1946, where he worked on the ACE computer project. He did pioneering work on numerical methods for solving linear systems and eigenvalue problems and developed software and libraries of numerical routines.

 
18

Myrick Doolittle (1830–1911) worked for the U.S. Coast and Geodetic Survey.

 
19

In the days of hand computations these algorithms had the advantage that they did away with the necessity in GE to write down \(\approx n^3/3\) intermediate results—one for each multiplication.

 
20

Named after Wilhelm Jordan (1842–1899), a German geodesist who made surveys in Germany and Africa. He used this method to compute the covariance matrix in least squares problems.

 
21

Abraham van der Sluis (1928–2004) became the doyen of Numerical Mathematics in the Netherlands, counting Henk van der Vorst among his students.

 
22

André-Louis Cholesky (1875–1918) was a French military officer involved in geodesy and surveying in Crete and North Africa just before World War I. He developed the algorithm named after him. His work was posthumously published by Benoit [15, 1924].

 
23

Sylvester published the theorem in [188, 1852], but the result was later found in notes of Jacobi dated 1847 and published posthumously.

 
24

Traditionally, a conic section is defined as the intersection between a circular cone and a plane. The Greek mathematician Apollonius of Perga (died 190 BC) wrote an eight-volume treatise Conic Sections, which summarized early knowledge.

 
25

Alan Mathison Turing (1912–1954), English mathematician and fellow of King’s College, Cambridge. For his work on undecidable mathematical propositions he invented the “Turing machine”, which proved to be of fundamental importance in mathematics and computer science. During WW2 he led the group at Bletchley Park that broke the Enigma coding machine used by the German Luftwaffe and Navy. After the end of the war, Turing worked at the National Physical Laboratory in London on the design of the Pilot ACE computer. In 1948 he moved to Manchester to work on the design of subroutines and numerical analysis, and wrote a remarkable paper in which he formulated the LU factorization and introduced matrix condition numbers.

 
26

It was suggested that the IEEE 754 standard should require inner products to be precisely specified, but that did not happen.

 
27

INTLAB Version 8 is available from http://www.ti3.tuhh.de.

 
28

Named after the German mathematician and engineer Karl Hessenberg (1904–1959). These matrices first appeared in [126, 1940].

 
29

George Dantzig (1914–2005), American mathematician, started graduate studies at UC Berkeley in 1939. In 1941 he went to Washington to do statistical work for the Air Force at the Combat Analysis Branch. At the end of the war he became mathematical adviser to the Defense Department, where he worked on mechanizing planning processes. From 1952 Dantzig worked for the RAND Corporation on implementing the simplex method for computers. In 1960 he was appointed professor at the Operations Research Center at UC Berkeley. In 1966 he moved to Stanford University, where he was to remain for the rest of his career.

 
30

Leopold Kronecker (1823–1891), German mathematician, is also known for his remark “God created the integers, all else is the work of man”.

 
31

Otto Toeplitz (1881–1940), German mathematician. While in Göttingen 1906–1913, influenced by Hilbert’s work on integral equations, he studied summation processes and discovered what are now known as Toeplitz operators. He emigrated to Palestine in 1939.

 
32

Hermann Hankel (1839–1873), German mathematician, studied determinants of the class of matrices now named after him in his thesis [119, 1861].

 
33

Complex symmetric matrices have special properties. For example, they have a symmetric SVD, which can be computed by an algorithm given by Bunse-Gerstner and Gragg [34, 1988].

 
34

Note that \(\Pi _N^T = \Pi _N^{-1}\) is the so-called perfect shuffle permutation, in which the permuted vector \(\Pi _N^T f\) is obtained by splitting \(f\) in half and then “shuffling” the top and bottom halves.
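The shuffling described here can be sketched concretely; a minimal illustration (not from the book), under the assumption that the shuffle interleaves the two halves element by element, like riffling a deck of cards:

```python
# Perfect shuffle of a vector of even length N = 2n: split f into
# top and bottom halves and interleave them element by element.
def perfect_shuffle(f):
    n = len(f) // 2
    assert len(f) == 2 * n, "vector length must be even"
    top, bottom = f[:n], f[n:]
    out = []
    for t, b in zip(top, bottom):
        out.extend([t, b])
    return out

print(perfect_shuffle([0, 1, 2, 3, 4, 5]))  # [0, 3, 1, 4, 2, 5]
```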

 
35

Augustin Cauchy (1789–1857) is the father of modern analysis and the creator of complex analysis. He defined a complex function of a complex variable for the first time in 1829. He produced no less than 729 papers on all the then known areas of mathematics.

 
References
  1. Aasen, J.O.: On the reduction of a symmetric matrix to tridiagonal form. BIT 11, 233–242 (1971)
  2. Ammar, G., Gragg, W.B.: Superfast solution of real positive definite Toeplitz systems. SIAM J. Matrix Anal. Appl. 9(1), 61–76 (1988)
  3. Andersen, B.S., Waśniewski, J., Gustavson, F.G.: A recursive formulation of Cholesky factorization of a packed matrix. ACM Trans. Math. Softw. 27(2), 214–244 (2001)
  4. Anderson, E., Bai, Z., Bischof, C.H., Blackford, L.S., Demmel, J.W., Dongarra, J.J., Du Croz, J.J., Greenbaum, A., Hammarling, S.J., McKenney, A., Sorensen, D.C.: LAPACK Users’ Guide, 3rd edn. SIAM, Philadelphia (1999)
  5. Ashcraft, C., Grimes, R.G., Lewis, J.G.: Accurate symmetric indefinite linear system solvers. SIAM J. Matrix Anal. Appl. 38(1), 513–561 (1998)
  6. Asplund, E.: Inverses of matrices \(\{a_{ij}\}\) which satisfy \(a_{ij} = 0\) for \(j > i + p\). Math. Scand. 7, 57–60 (1959)
  7. Autonne, L.: Sur les matrices hypohermitiennes et les unitaires. Comptes Rendus de l’Academie Sciences, Paris 156, 858–860 (1913)
  8. Banach, S.: Sur les opérations dans les ensembles abstraits et les applications aux équations integrales. Fundementa Mathematicae 3, 133–181 (1922)
  9. Barker, V.A., Blackford, L.S., Dongarra, J., Croz, J.J.D., Hammarling, S.J., Marinova, M., Waśniewski, J., Yalamov, P.: LAPACK 95 Users’ Guide. SIAM, Philadelphia (2001)
  10. Bartels, R.H., Golub, G.H.: The simplex method of linear programming using LU decomposition. Commun. ACM 12, 266–268 (1969)
  11. Bartels, R.H., Stoer, J., Zenger, C.: A realization of the simplex method based on triangular decompositions. In: Wilkinson, J.H., Reinsch, C. (eds.) Handbook for Automatic Computation. Linear Algebra, vol. II, pp. 152–190. Springer, New York (1971)
  12. Bauer, F.L.: Genauigkeitsfragen bei der Lösung linearer Gleichungssysteme. Z. Angew. Math. Mech. 46(7), 409–421 (1966)
  13. Bellman, R.: Introduction to Matrix Analysis, 2nd edn. McGraw-Hill, New York (1970). Republished by SIAM, Philadelphia (1997)
  14. Beltrami, E.: Sulle funzioni bilineari. Giornale di Matematiche ad Uso degli Studenti Delle Universita 11, 98–106 (1873)
  15. Benoit, C.: Sur la méthode de résolution des équations normales, etc. (procédés du commandant Cholesky). Bull. Géodésique 2, 67–77 (1924)
  16. Bixby, R.E.: Implementing the simplex method: the initial basis. ORSA J. Comput. 4, 267–284 (1992)
  17. Björck, Å.: Iterative refinement and reliable computing. In: Cox, M.G., Hammarling, S.J. (eds.) Reliable Numerical Computation, pp. 249–266. Clarendon Press, Oxford (1990)
  18. Björck, Å., Pereyra, V.: Solution of Vandermonde systems of equations. Math. Comp. 24, 893–903 (1970)
  19. Blackford, L.S., Choi, J., Cleary, A., D’Azevedo, E., Demmel, J.W., Dhillon, I., Dongarra, J.J., Hammarling, S.J., Henry, G., Petitet, A., Stanley, K., Walker, D., Whaley, R.C.: ScaLAPACK Users’ Guide, 3rd edn. SIAM, Philadelphia (1997)
  20. Blackford, L.S., Demmel, J.W., Dongarra, J.J., Duff, I., Hammarling, S.J., Henry, G., Heroux, M., Kaufman, L., Lumsdaine, A., Petitet, A., Pozo, R., Remington, K., Whaley, R.C.: An updated set of basic linear algebra subprograms (BLAS). ACM Trans. Math. Softw. 28(2), 135–151 (2002)
  21. Boisvert, R.F., Pozo, R., Remington, K., Barret, R., Dongarra, J.J.: Matrix Market: A web resource for test matrix collections. In: Boisvert, R.F. (ed.) Quality of Numerical Software. Assessment and Enhancement, pp. 125–137. Chapman & Hall, London (1997)
  22. de Boor, C.: An empty exercise. ACM SIGNUM Newsl. 25, 2–6 (1990)
  23. de Boor, C., Pinkus, A.: Backward error analysis for totally positive linear systems. Numer. Math. 27, 485–490 (1977)
  24. Boros, T., Kailath, T., Olshevsky, V.: A fast parallel Björck-Pereyra-type algorithm for parallel solution of Cauchy linear systems. Linear Algebra Appl. 302–303, 265–293 (1999)
  25. Bothe, Z.: Bounds for rounding errors in the Gaussian elimination for band systems. J. Inst. Math. Appl. 16, 133–142 (1975)
  26. Boyd, D.W.: The power method for \(\ell ^p\) norms. Linear Algebra Appl. 9, 95–101 (1974)
  27. Brent, R.P.: Algorithms for matrix multiplications. Technical Report No. CS-157, Computer Science Department, Stanford University, CA (1970)
  28. Brent, R.P., Percival, C., Zimmermann, P.: Error bounds on complex floating-point multiplication. Math. Comp. 76, 1469–1481 (2007)
  29. Bunch, J.R.: Partial pivoting strategies for symmetric matrices. SIAM J. Numer. Anal. 11, 521–528 (1974)
  30. Bunch, J.R.: Stability of methods for solving Toeplitz systems of equations. SIAM J. Sci. Stat. Comp. 6(2), 349–364 (1985)
  31. Bunch, J.R., Kaufman, L.: Some stable methods for calculating inertia and solving symmetric linear systems. Math. Comp. 31, 163–179 (1977)
  32. Bunch, J.R., Parlett, B.N.: Direct methods for solving symmetric indefinite systems of linear equations. SIAM J. Numer. Anal. 8(4), 639–655 (1971)
  33. Bunch, J.R., Kaufman, L., Parlett, B.N.: Decomposition of a symmetric matrix. Numer. Math. 27(1), 95–110 (1976)
  34. Bunse-Gerstner, A., Gragg, W.B.: Singular value decomposition of complex symmetric matrices. SIAM J. Matrix. Anal. Appl. 21(1), 41–54 (1988)
  35. Calvetti, D., Reichel, L.: On the solution of Cauchy systems of linear equations. Electron. Trans. Numer. Anal. 4, 125–137 (1996)
  36. Cauchy, A.: Mémoire sur les fonctiones alternées et sur les sommes alternées. Exercises d’Analyse et de Phys. Math., vol. 2, Paris (1841)
  37. Chandrasekaran, S., Gu, M.: Fast and stable algorithms for banded plus semiseparable systems of linear equations. SIAM J. Matrix Anal. Appl. 25(2), 373–384 (2003)
  38. Chandrasekaran, S., Gu, M., Sun, X., Xia, J., Zhu, J.: A superfast algorithm for Toeplitz systems of linear equations. SIAM J. Matrix Anal. Appl. 29(4), 1247–1266 (2007)
  39. Chang, X.-W., Paige, C.C., Titley-Peloquin, D.: Characterizing matrices that are consistent with given solutions. SIAM J. Matrix Anal. Appl. 30(4), 1408–1420 (2008)
  40. Chu, M.T., Funderlic, R.E., Golub, G.H.: A rank-one reduction formula and its application to matrix factorization. SIAM Rev. 37(3), 512–530 (1995)
  41. Ciarlet, P.G.: Introduction to Numerical Linear Algebra and Optimization. Cambridge University Press, Cambridge (1989)
  42. Cline, A.K., Moler, C.B., Stewart, G.W., Wilkinson, J.H.: An estimate for the condition number of a matrix. SIAM J. Numer. Anal. 16(2), 368–375 (1979)
  43. Cooley, J.W., Tukey, J.W.: An algorithm for machine calculation of complex Fourier series. Math. Comp. 19, 297–301 (1965)
  44. Coppersmith, D., Winograd, S.: Matrix multiplication via arithmetic progressions. J. Symb. Comput. 9, 251–280 (1990)
  45. Crout, P.D.: A short method for evaluating determinants and solving systems of linear equations with real or complex coefficients. Trans. Am. Inst. Electr. Eng. 60, 1235–1240 (1941)
  46. Cuthill, E., McKee, J.: Reducing the bandwidth of sparse symmetric matrices. In: Proceedings of 24th National Conference on Association for Computing Machinery, ACM Publications P-69, pp. 157–172. ACM Publications, New York (1969)
  47. Cybenko, G.: The numerical stability of the Levinson-Durbin algorithm for Toeplitz systems of equations. SIAM J. Sci. Stat. Comp. 1(3), 303–309 (1980)
  48. Dahlquist, G., Björck, Å.: Numerical Methods in Scientific Computing, vol. I. SIAM, Philadelphia (2008)
  49. Dantzig, G.B.: Linear Programming and Extensions, 2nd edn. Princeton University Press, Princeton (1965)
  50. Davidson, E.R.: The iterative calculation of a few of the lowest eigenvalues and corresponding eigenvectors of large real symmetric matrices. J. Comput. Phys. 17, 87–94 (1975)
  51. Davis, T.A.: Direct Methods for Sparse Linear Systems. Fundamental of Algorithms, vol. 2. SIAM, Philadelphia (2006)
  52. Davis, T.A., Duff, I.S.: An unsymmetric-pattern multifrontal method for sparse LU factorization. SIAM J. Matrix Anal. Appl. 18(1), 140–158 (1997)
  53. Davis, T.A., Hu, Y.: The University of Florida sparse matrix collection. ACM Trans. Math. Softw. 38(1), 1:1–1:25 (2011)
  54. Demmel, J.W.: Applied Numerical Linear Algebra. SIAM, Philadelphia (1997)
  55. Demmel, J.W., Koev, P.: The accurate and efficient solution of a totally positive generalized Vandermonde linear system. SIAM J. Matrix Anal. Appl. 27, 142–152 (2005)
  56. Demmel, J.W., Higham, N.J., Schreiber, R.S.: Stability of block LU factorizations. Numer. Linear Algebra Appl. 2, 173–190 (1995)
  57. Demmel, J.W., Eisenstat, S.C., Gilbert, J.R. Lin, X.S., Liu, J.W.H.: A supernodal approach to sparse partial pivoting. SIAM J. Matrix Anal. Appl. 20, 720–755 (1999)
  58. Demmel, J.W., Hida, Y., Kahan, W.M., Li, X.S., Mukherjee, S., Riedy, E.J.: Error bounds from extra-precise iterative refinement. ACM Trans. Math. Softw. 32(2), 325–351 (2006)
  59. Dongarra, J.J., Eijkhout, V.: Numerical linear algebra and software. J. Comput. Appl. Math. 123, 489–514 (2000)
  60. Dongarra, J.J., Luszek, P.: How elegant code evolves with hardware: The case of Gaussian elimination. In: Oram, A., Wilson G. (eds.) Beautiful Code, pp. 229–267. O’Reilly, Sebastopol (2007)
  61. Dongarra, J.J., Walker, D.W.: Software libraries for linear algebra computations on high performance computers. SIAM Rev. 37, 151–180 (1995)
  62. Dongarra, J.J., Bunch, J.R., Moler, C.B., Stewart, G.W.: LINPACK Users’ Guide. SIAM, Philadelphia (1979)
  63. Dongarra, J.J., Du Croz, J., Duff, I.S., Hammarling, S.J.: A set of level 3 basic linear algebra subprograms. ACM Trans. Math. Softw. 16(1), 1–17 (1990)
  64. Dongarra, J.J., Du Croz, J., Duff, I.S., Hammarling, S.J.: Algorithm 679: A set of level 3 basic linear algebra subprograms: Model implementations and test programs. ACM Trans. Math. Softw. 16(1), 18–32 (1990)
  65. Dongarra, J.J., Du Croz, J., Hammarling, S., Hanson, R.J.: Algorithm 656. An extended set of FORTRAN Basic Linear Algebra Subprograms: Model implementation and test programs. ACM Trans. Math. Softw. 14, 18–32 (1988)
  66. Dongarra, J.J., Du Croz, J., Hammarling, S., Hanson, R.J.: An extended set of FORTRAN basic linear algebra subprograms. ACM Trans. Math. Softw. 14, 1–17 (1988)
  67. Dongarra, J.J., Duff, I.S., Sorensen, D.C., van der Vorst, H.A.: Numerical Linear Algebra for High Performance Computers. SIAM, Philadelphia (1998)
  68. Doolittle, M.H.: Method employed in the solution of normal equations and the adjustment of a triangularization. In: U. S. Coast and Geodetic Survey, Report, pp. 115–120 (1878)
  69. Du Croz, J., Higham, N.J.: Stability of methods for matrix inversion. IMA J. Numer. Anal. 12, 1–19 (1992)
  70. Duff, I.S.: On algorithms for obtaining a maximum transversal. ACM Trans. Math. Softw. 7, 315–330 (1981)
  71. Duff, I.S.: Sparse numerical linear algebra: Direct methods and preconditioning. In: Duff, I.S., Watson, G.A. (eds.) The State of the Art in Numerical Analysis, pp. 27–62. Oxford University Press, London (1997)
  72. Duff, I.S., Reid, J.K.: An implementation of Tarjan’s algorithm for the block triangularization of a matrix. ACM Trans. Math. Softw. 4, 137–147 (1978)
  73. Duff, I.S., Reid, J.K.: The multifrontal solution of indefinite sparse symmetric linear systems. ACM Trans. Math. Softw. 9, 302–325 (1983)
  74. Duff, I.S., Erisman, A.M., Reid, J.K.: Direct Methods for Sparse Matrices. Oxford University Press, London (1986)
  75. Duff, I.S., Grimes, R.G., Lewis, J.G.: Sparse matrix test problems. ACM Trans. Math. Softw. 15(1), 1–14 (1989)
  76. Duff, I.S., Heroux, M.A., Pozo, R.: An overview of the sparse basic linear algebra subprograms: The new standard from the BLAS technical forum. ACM Trans. Math. Softw. 28(2), 239–257 (2002)
  77. Durbin, J.: The fitting of time-series models. Rev. Int. Stat. Inst. 28, 229–249 (1959)
  78. Eckart, C., Young, G.: The approximation of one matrix by another of lower rank. Psychometrika 1, 211–218 (1936)
  79. Eisenstat, S.C., Gursky, M.C., Schultz, M.H., Sherman, A.H.: The Yale sparse matrix package, 1. The symmetric code. Int. J. Numer. Meth. Eng. 18, 1145–1151 (1982)
  80. Faddeev, D.K., Faddeeva, V.N.: Computational Methods of Linear Algebra. W. H. Freeman, San Francisco (1963)
  81. Fiedler, M.: Special Matrices and Their Applications in Numerical Mathematics, 2nd edn. Dover, Mineola (2008)
  82. Fletcher, R.: Factorizing symmetric indefinite matrices. Linear Algebra Appl. 14, 257–272 (1976)
  83. Forrest, J., Tomlin, J.A.: Updated triangular factors of the basis to maintain sparsity in the product form simplex method. Math. Program. 2, 263–278 (1972)
  84. Forsythe, G.E.: Tentative classification of methods and bibliography on solving systems of linear equations. Nat. Bur. Stand. Appl. Math. Ser. 29, 1–28 (1953)
  85. Forsythe, G.E., Malcolm, M.A., Moler, C.B.: Computer Methods for Mathematical Computations. Prentice-Hall, Englewood Cliffs (1977)
  86. Gander, W., Golub, G.H.: Cyclic reduction—history and applications. In: Luk, F.T., Plemmons, R.J. (eds.) Scientific Computing, pp. 73–85. Springer, Singapore (1997)
  87. Gantmacher, F.R.: The Theory of Matrices, vol. I, x+374 pp. Chelsea Publishing Co, New York (1959)
  88. Gantmacher, F.R.: The Theory of Matrices. vol. II, ix+276 pp. Chelsea Publishing Co, New York (1959)
  89. Garbow, B.S., Boyle, J.M., Dongarra, J.J., Stewart, G.W.: Matrix Eigensystems Routines: EISPACK Guide Extension. Lecture Notes in Computer Science, vol. 51. Springer, New York (1977)
  90. Gasca, M., Peña, J.M.: On factorization of totally positive matrices. In: Gasca, M., Micchelli, C.A. (eds.) Total Positivity, pp. 109–130. Kluwer Academic Publishers, Dordrecht (1996)
  91. Gastinel, N.: Inversions d’une matrice generalisant la matrice de Hilbert. Chiffres 3, 149–152 (1960)
  92. Gastinel, N.: Linear Numerical Analysis. Academic Press, London (1970)
  93. van de Geijn, R.A.: Using PLAPACK. The MIT Press, Boston (1997)
  94. George, A., Liu, J.W.H.: User guide for SPARSPAK, Waterloo Sparse Linear Equation Package. Technical Report Res. CS 78–30, Department of Computer Science, University of Waterloo, Waterloo, Ontario, Canada (1980)
  95. George, A., Liu, J.W.H.: Computer Solution of Large Sparse Positive Definite Systems. Prentice-Hall, Englewood Cliffs (1981)
  96. George, A., Liu, J.W.H.: The evolution of the minimum degree ordering algorithm. SIAM Rev. 31(1), 1–19 (1989)
  97. George, A., Ikramov, K.D., Kucherov, A.B.: On the growth factor in Gaussian elimination for generalized Higham matrices. Numer. Linear Algebra Appl. 9, 107–114 (2002)
  98. George, J.A., Ng, E.G.: An implementation of Gaussian elimination with partial pivoting for sparse systems. SIAM J. Sci. Stat. Comput. 6(2), 390–409 (1985)
  99. George, J.A., Ng, E.G.: Symbolic factorization for sparse Gaussian elimination with partial pivoting. SIAM J. Sci. Stat. Comput. 8(6), 877–898 (1987)
  100. Gilbert, J.R.: Predicting structure in sparse matrix computations. SIAM J. Matrix Anal. Appl. 15(1), 62–79 (1994)
  101. Gilbert, J.R., Peierls, T.: Sparse partial pivoting in time proportional to arithmetic operations. SIAM J. Sci. Stat. Comput. 9(5), 862–874 (1988)
  102. Gilbert, J.R., Moler, C.B., Schreiber, R.: Sparse matrices in Matlab: Design and implementation. SIAM J. Matrix Anal. Appl. 13(1), 333–356 (1992)
  103. Gill, P.E., Murray, W., Saunders, M.A.: SNOPT: An SQP algorithm for large-scale constrained optimization. SIAM Rev. 47(1), 99–131 (2005)
  104. Gill, P.E., Murray, W., Wright, M.H.: Numerical Linear Algebra and Optimization, vol. 1. Addison-Wesley, London (1991)
  105. Gill, P.E., Murray, W., Saunders, M.A., Wright, M.H.: Sparse matrix methods in optimization. SIAM J. Sci. Stat. Comput. 5(3), 562–589 (1984)
  106. Gill, P.E., Murray, W., Saunders, M.A., Wright, M.H.: Maintaining LU factors of a general sparse matrix. Linear Algebra Appl. 88(89), 239–270 (1987)
  107. Gohberg, I., Kailath, T., Kohltracht, I.: Linear complexity algorithms for semiseparable matrices. J. Integr. Equ. Oper. Theor. 8, 780–804 (1985)
  108. Gohberg, I., Kailath, T., Olshevsky, V.: Fast Gaussian elimination with partial pivoting for matrices with displacement structure. Math. Comp. 64, 1557–1576 (1995)
  109. Golub, G.H., Kahan, W.: Calculating the singular values and pseudoinverse of a matrix. SIAM J. Numer. Anal. Ser. B 2, 205–224 (1965)
  110. Golub, G.H., Reinsch, C.: Singular value decomposition and least squares solutions. In: Wilkinson, J.H., Reinsch, C. (eds.) Handbook for Automatic Computation. Linear Algebra. vol. II, pp. 134–151. Springer, New York (1970). Prepublished in Numer. Math. 14, 403–420 (1970)
  111. Golub, G.H., Van Loan, C.F.: Matrix Computations. Johns Hopkins University Press, Baltimore (1983)
  112. Golub, G.H., Van Loan, C.F.: Matrix Computations, 4th edn. Johns Hopkins University Press, Baltimore (2013)
  113. Graham, S.L., Snir, M., Patterson, C.A. (eds.): Getting up to Speed. The Future of Supercomputing. The National Academies Press, Washington, D.C. (2004)
  114. Grcar, J.F.: John von Neumann’s analysis of Gaussian elimination and the origin of modern numerical analysis. SIAM Rev. 53(4), 607–682 (2011)
  115. Gu, M.: Stable and efficient algorithms for structured systems of linear equations. SIAM J. Matrix Anal. Appl. 19(2), 279–306 (1998)
  116. Gustavson, F.G.: Recursion leads to automatic variable blocking for dense linear-algebra algorithms. IBM J. Res. Dev. 41(6), 737–754 (1997)
  117. Hager, W.W.: Condition estimates. SIAM J. Sci. Stat. Comput. 5(2), 311–316 (1984)
  118. Hager, W.W.: Updating the inverse of a matrix. SIAM Rev. 31(2), 221–239 (1989)
  119. Hankel, H.: Über eine besondere Klasse der symmetrischen Determinanten. Universität Göttingen, Germany, Inaugural Diss. (1861)
  120. Hansen, E.: Interval arithmetic in matrix computations. II. SIAM J. Numer. Anal. 4(1), 1–9 (1965)
  121. Hansen, E.: Topics in Interval Analysis. Oxford University Press, Oxford (1969)
  122. Hargreaves, G.I.: Interval analysis in MATLAB. Numerical Analysis Report 418, Department of Mathematics, University of Manchester (2002)
  123. Heinig, G.: Inversion of generalized Cauchy matrices and other classes of structured matrices. IMA Vol. Math. Appl. 69, 95–114 (1994)
  124. Henderson, H.V., Searle, S.R.: The vec-permutation matrix, the vec operator and Kronecker products: A review. Linear Multilinear Algebra 9, 271–288 (1981)
  125. Henderson, H.V., Searle, S.R.: On deriving the inverse of a sum of matrices. SIAM Rev. 23(1), 53–60 (1981)
  126. Hessenberg, K.: Behandlung linearer Eigenwertaufgaben mit Hilfe der Hamilton-Cayleyschen Gleichung. Technical Report IPM-1, Institute of Practical Mathematics, Technische Hochschule, Darmstadt (1940)
  127. Higham, N.J.: FORTRAN codes for estimating the one-norm of a real or complex matrix, with application to condition estimation. ACM Trans. Math. Softw. 14(4), 381–396 (1988)
  128. Higham, N.J.: Stability of the diagonal pivoting method with partial pivoting. SIAM J. Matrix Anal. Appl. 18(1), 52–65 (1997)
  129. Higham, N.J.: Accuracy and Stability of Numerical Algorithms, 2nd edn. SIAM, Philadelphia (2002)
  130. Hogben, L. (ed.): Handbook of Linear Algebra, 2nd edn. Chapman & Hall/CRC Press, Boca Raton (2013)
  131. Horn, R.A., Johnson, C.R.: Topics in Matrix Analysis. Cambridge University Press, Cambridge (1991)
  132. Horn, R.A., Johnson, C.R.: Matrix Analysis, 2nd edn. Cambridge University Press, Cambridge (2012)
  133. Hotelling, H.: Some new methods in matrix calculus. Ann. Math. Statist. 14, 1–34 (1943)
  134. Householder, A.S.: The Theory of Matrices in Numerical Analysis, xi+257 pp. Dover, New York (1975) (Corrected republication of work first published in 1964 by Blaisdell Publ. Co, New York)
  135. IEEE Standard for Binary Floating-Point Arithmetic: ANSI/IEEE Standard 754-1985. SIGPLAN Notices 22(2), 9–25 (1987)
  136. Ikebe, Y.: On inverses of Hessenberg matrices. Linear Algebra Appl. 24, 93–97 (1979)
  137. Irons, B.M.: A frontal solution program for finite element analysis. Int. J. Numer. Meth. Eng. 2, 5–32 (1970)
  138. Jordan, C.: Mémoires sur les formes bilinéaires. J. Math. Pures Appl., Deuxième Série 19, 35–54 (1874)
  139. Kågström, B., Ling, P., Van Loan, C.F.: GEMM-based level 3 BLAS high performance model implementation and performance evaluation benchmarks. ACM Trans. Math. Softw. 24(3), 268–302 (1998)
  140. Kahan, W.M.: Accurate eigenvalues of a symmetric tridiagonal matrix. Technical Report No. CS-41, Revised June 1968, Computer Science Department. Stanford University, CA (1966)
  141. Kahan, W.M.: Numerical linear algebra. Canad. Math. Bull. 9, 757–801 (1966)
  142. Kailath, T., Sayed, A.H.: Displacement structures: Theory and applications. SIAM Rev. 37, 297–386 (1995)
  143. Kiełbasiński, A., Schwetlick, H.: Numerische Lineare Algebra. Eine Computerorientierte Einführung. VEB Deutscher Verlag der Wissenschaften, Berlin (1988)
  144. Knuth, D.E.: The Art of Computer Programming. Seminumerical Algorithms, vol. 2, 3rd edn. Addison-Wesley, Reading (1998)
  145. Lancaster, P., Tismenetsky, M.: The Theory of Matrices. With Applications. Academic Press, New York (1985)
  146. Lawson, C.L., Hanson, R.J., Kincaid, D.R., Krogh, F.T.: Basic Linear Algebra Subprograms for Fortran usage. ACM Trans. Math. Softw. 5, 308–323 (1979)
  147. Levinson, N.: The Wiener root-mean square error criterion in filter design and prediction. J. Math. Phys. 25, 261–278 (1947)
  148. Li, X.S., Demmel, J.W., Bailey, D.H., Henry, G., Hida, Y., Iskandar, J., Kahan, W.M., Kang, S.Y., Kapur, A., Martin, M.C., Thompson, B.J., Tung, T., Yoo, D.J.: Design, implementation and testing of extended and mixed precision BLAS. ACM Trans. Math. Softw. 27(2), 152–205 (2002)
  149. Lipton, R.J., Rose, D.J., Tarjan, R.E.: Generalized nested dissection. SIAM J. Numer. Anal. 16, 346–358 (1979)
  150. Liu, J.W.H.: The role of elimination trees in sparse matrix factorization. SIAM J. Matrix Anal. Appl. 11(1), 134–172 (1990)
  151. Liu, J.W.H.: The multifrontal method for sparse matrix solution: theory and practice. SIAM Rev. 34(1), 82–109 (1992)
  152. Marcus, M., Minc, H.: A Survey of Matrix Theory and Matrix Inequalities. Allyn and Bacon, Boston (1964). Reprinted by Dover, Mineola (1992)
  153. Markowitz, H.M.: The elimination form of the inverse and its application to linear programming. Manage. Sci. 3, 255–269 (1957)
  154. Markowitz, H.M.: Autobiography. In: Frängsmyr, T. (ed.) The Nobel Prizes 1990. Nobel Foundation, Stockholm (1991)
  155. Meinguet, J.: Refined error analysis of Cholesky factorization. SIAM J. Numer. Anal. 20(6), 1243–1250 (1983)
  156. Minkowski, H.: Theorie der Konvexen Körper insbesondere Begründung ihres Oberflächenbegriffs. In: Hilbert, D. (ed.) Minkowski Abhandlung, pp. 191–203. Teubner Verlag (1911)
  157. Moler, C.B., Eddins, S.: Cleve's Corner: Fast finite Fourier transforms. MATLAB News & Notes, Winter 2001, 14–15 (2001)
  158. Muller, J.-M., Brisebarre, N., de Dinechin, F., Jeannerod, C.-P., Lefèvre, V., Melquiond, G., Stehlé, D., Torres, S.: Handbook of Floating-Point Arithmetic. Birkhäuser, Boston (2010)
  159. Neal, L., Poole, G.: A geometric analysis of Gaussian elimination. II. Linear Algebra Appl. 173, 239–264 (1992)
  160. Neal, L., Poole, G.: The rook’s pivoting strategy. J. Comput. Appl. Math. 123, 353–369 (2000)
  161. von Neumann, J.: Some matrix-inequalities and metrization of matrix-space. Tomsk Univ. Rev. 1, 286–300 (1937)
  162. von Neumann, J., Goldstine, H.H.: Numerical inverting of matrices of high order. Bull. Amer. Math. Soc. 53, 1021–1099 (1947)
  163. Oettli, W., Prager, W.: Compatibility of approximate solution of linear equations with given error bounds for coefficients and right-hand sides. Numer. Math. 6, 404–409 (1964)
  164. Parter, S.V.: The use of linear graphs in Gauss elimination. SIAM Rev. 3, 119–130 (1961)
  165. Peters, G., Wilkinson, J.H.: On the stability of Gauss-Jordan elimination with pivoting. Comm. Assoc. Comput. Mach. 18, 20–24 (1975)
  166. Picard, É.: Quelques remarques sur les équations intégrales de première espèce et sur certains problèmes de Physique mathématique. Comptes Rendus de l'Académie des Sciences, Paris 148, 1563–1568 (1909)
  167. Reid, J.K.: A note on the stability of Gaussian elimination. J. Inst. Maths. Applics. 8(3), 374–375 (1971)
  168. Reid, J.K.: A sparsity-exploiting variant of the Bartels-Golub decomposition for linear programming bases. Math. Program. 24, 55–69 (1982)
  169. Rigal, J.L., Gaches, J.: On the compatibility of a given solution with the data of a linear system. J. Assoc. Comput. Mach. 14(3), 543–548 (1967)
  170. Robinson, S.: Toward an optimal algorithm for matrix multiplication. SIAM News 38(9), 2–3 (2005)
  171. Rose, D.J.: A graph-theoretic study of the numerical solution of sparse positive definite systems of linear equations. In: Read, R.C. (ed.) Graph Theory and Computing. pp. 183–217. Academic Press, New York (1972)
  172. Rump, S.M.: Fast and parallel interval arithmetic. BIT 39(3), 534–554 (1999)
  173. Rump, S.M.: INTLAB—INTerval LABoratory. In: Csendes, T. (ed.) Developments in Reliable Computing, pp. 77–104. Kluwer Academic Publishers, Dordrecht (1999)
  174. Saunders, M.A.: A fast stable implementation of the simplex method using Bartels-Golub updating. In: Bunch, J.R., Rose, D.J. (eds.) Sparse Matrix Computations, pp. 213–226. Academic Press, New York (1976)
  175. Schreiber, R.S.: A new implementation of sparse Gaussian elimination. ACM Trans. Math. Softw. 8(3), 256–276 (1982)
  176. Schur, I.: Über Potenzreihen, die im Innern des Einheitskreises beschränkt sind. J. Reine Angew. Math. 147, 205–232 (1917)
  177. Sherman, J., Morrison, W.J.: Adjustment of an inverse matrix corresponding to a change in one element of a given matrix. Ann. Math. Stat. 21, 124–127 (1950)
  178. Skeel, R.D.: Scaling for stability in Gaussian elimination. J. Assoc. Comput. Mach. 26, 494–526 (1979)
  179. Skeel, R.D.: Iterative refinement implies numerical stability for Gaussian elimination. Math. Comput. 35, 817–832 (1980)
  180. van der Sluis, A.: Condition numbers and equilibration of matrices. Numer. Math. 14, 14–23 (1969)
  181. Smith, B.T., Boyle, J.M., Dongarra, J.J., Garbow, B.S., Ikebe, Y., Klema, V.C., Moler, C.B.: Matrix Eigensystems Routines–EISPACK Guide. Lecture Notes in Computer Science, vol. 6, 2nd edn. Springer, New York (1976)
  182. Stewart, G.W.: Introduction to Matrix Computations. Academic Press, New York (1973)
  183. Stewart, G.W.: Matrix Algorithms Volume I: Basic Decompositions. SIAM, Philadelphia (1998)
  184. Stewart, G.W.: The decompositional approach to matrix computation. Comput. Sci. Eng. 2(1), 50–59 (2000)
  185. Stewart, G.W.: Matrix Algorithms Volume II: Eigensystems. SIAM, Philadelphia (2001)
  186. Stewart, G.W., Sun, J.: Matrix Perturbation Theory. Academic Press, New York (1990)
  187. Strassen, V.: Gaussian elimination is not optimal. Numer. Math. 13, 354–356 (1969)
  188. Sylvester, J.J.: A demonstration of the theorem that every homogeneous quadratic polynomial is reducible by real orthogonal substitutions to the form of a sum of positive and negative squares. Philos. Mag. 2, 138–142 (1852)
  189. Tarjan, R.E.: Depth-first search and linear graph algorithms. SIAM J. Comput. 1, 146–159 (1972)
  190. Taub, A.H.: John von Neumann Collected Works, vol. V, ix+784 pp. Design of Computers, Theory of Automata and Numerical Analysis. Pergamon Press, Oxford (1963)
  191. Tinney, W.F., Walker, J.W.: Direct solution of sparse network equations by optimally ordered triangular factorization. Proc. IEEE 55, 1801–1809 (1967)
  192. Toledo, S.: Locality of reference in LU decomposition with partial pivoting. SIAM J. Matrix Anal. Appl. 18(4), 1065–1081 (1997)
  193. Trefethen, L.N., Bau III, D.: Numerical Linear Algebra. SIAM, Philadelphia (1997)
  194. Trefethen, L.N., Schreiber, R.S.: Average-case stability of Gaussian elimination. SIAM J. Matrix Anal. Appl. 11(3), 335–360 (1990)
  195. Trench, W.F.: An algorithm for the inversion of finite Toeplitz matrices. J. SIAM 12, 515–522 (1964)
  196. Turing, A.M.: Rounding-off errors in matrix processes. Q. J. Mech. Appl. Math. 1, 287–308 (1948)
  197. Van Loan, C.F.: Computational Framework for the Fast Fourier Transform. SIAM, Philadelphia (1992)
  198. Van Loan, C.F.: The ubiquitous Kronecker product. J. Comput. Appl. Math. 123, 85–100 (2000)
  199. Vandebril, R., Van Barel, M., Mastronardi, N.: A note on the representation and definition of semiseparable matrices. Numer. Linear Algebra Appl. 12, 839–858 (2005)
  200. Vandebril, R., Van Barel, M., Mastronardi, N.: Matrix Computations and Semiseparable Matrices. Linear Systems, vol. 1. The Johns Hopkins University Press, Baltimore (2007)
  201. Vandebril, R., Van Barel, M., Mastronardi, N.: Matrix Computations and Semiseparable Matrices. Eigenvalue and Singular Value Methods, vol. 2. The Johns Hopkins University Press, Baltimore (2008)
  202. Varah, J.M.: On the solution of block-tridiagonal systems arising from certain finite-difference equations. Math. Comp. 26(120), 859–869 (1972)
  203. Watkins, D.S.: Fundamentals of Matrix Computation, 2nd edn. Wiley-InterScience, New York (2002)
  204. Wilkinson, J.H.: Error analysis of direct methods of matrix inversion. J. Assoc. Comput. Mach. 8, 281–330 (1961)
  205. Wilkinson, J.H.: Rounding Errors in Algebraic Processes. Notes on Applied Science No. 32, vi+161 pp. Her Majesty's Stationery Office, London (1963). Republished in 1994 by Dover, Mineola
  206. Wilkinson, J.H.: The Algebraic Eigenvalue Problem. Clarendon Press, Oxford (1965)
  207. Wilkinson, J.H.: A priori error analysis of algebraic processes. In: Proceedings International Congress Mathematicians, pp. 629–639. Izdat. Mir, Moscow (1968)
  208. Wilkinson, J.H.: Error analysis revisited. IMA Bull. 22, 192–200 (1968)
  209. Wilkinson, J.H.: Modern error analysis. SIAM Rev. 13(4), 548–568 (1971)
  210. Wilkinson, J.H., Reinsch, C. (eds.): Handbook for Automatic Computation. Linear Algebra, vol. II. Springer, New York (1971)
  211. Woodbury, M.A.: Inverting modified matrices. Memorandum Report 42, Statistical Research Group, Princeton (1950)
  212. Wright, S.J.: Primal-Dual Interior-Point Methods. SIAM, Philadelphia (1997)
  213. Yannakakis, M.: Computing the minimum fill-in is NP-complete. SIAM J. Algebraic Disc. Math. 2(1), 77–79 (1981)
  214. Zhang, F. (ed.): The Schur Complement and Its Application. Number 4 in Numerical Methods and Algorithms. Springer, New York (2005)
DOI: https://doi.org/10.1007/978-3-319-05089-8_1