
About this Book

This book presents state-of-the-art and interdisciplinary topics on solving matrix eigenvalue problems, particularly using recent petascale and upcoming post-petascale supercomputers. It gathers selected topics presented at the International Workshops on Eigenvalue Problems: Algorithms, Software and Applications in Petascale Computing (EPASA2014 and EPASA2015), which brought together leading researchers working on the numerical solution of matrix eigenvalue problems to discuss and exchange ideas – and, in so doing, helped to create a community of researchers in eigenvalue problems. The topics presented in the book, including novel numerical algorithms, high-performance implementation techniques, software developments and sample applications, will contribute to various fields that involve solving large-scale eigenvalue problems.

Table of Contents

Frontmatter

An Error Resilience Strategy of a Complex Moment-Based Eigensolver

Recently, complex moment-based eigensolvers have been actively developed for highly parallel environments to solve large and sparse eigenvalue problems. In this paper, we provide an error resilience strategy for a Rayleigh–Ritz type complex moment-based parallel eigensolver for generalized eigenvalue problems. Our strategy is based on an error bound for the eigensolver in the case that soft errors such as bit flips occur. Using this error bound, we achieve an inherent error resilience of the eigensolver that requires neither standard checkpointing nor replication techniques in the most time-consuming part.

Akira Imakura, Yasunori Futamura, Tetsuya Sakurai

Numerical Integral Eigensolver for a Ring Region on the Complex Plane

In the present paper, we propose an extension of the Sakurai-Sugiura projection method (SSPM) for a ring region on the complex plane. The SSPM finds eigenvalues in a specified region of the complex plane, and the corresponding eigenvectors, by using numerical quadrature. The original SSPM has also been extended to compute the eigenpairs near the circumference of a circle on the complex plane. However, these extensions can result in division by zero if eigenvalues are located at the quadrature points set on the circumference. Here, we propose a new extension of the SSPM that avoids the loss of computational accuracy in the eigenpairs caused by quadrature points lying near the eigenvalues. We implement the proposed method in the SLEPc library, and examine its performance on a supercomputer cluster with many-core architecture.

Yasuyuki Maeda, Tetsuya Sakurai, James Charles, Michael Povolotskyi, Gerhard Klimeck, Jose E. Roman
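As a concrete illustration of the contour-integral idea behind the SSPM, the following NumPy sketch implements a basic (non-block, Hankel-type) Sakurai-Sugiura eigensolver for a circular region. The function name, quadrature rule, and truncation tolerance are our own choices for illustration, not the ring-region algorithm of this chapter:

```python
import numpy as np

def ss_hankel_eigs(A, B, center, radius, n_quad=64, m=8):
    """Sketch of a Sakurai-Sugiura-type (Hankel) eigensolver: eigenvalues of
    A x = lam B x inside the circle |z - center| = radius, recovered from
    contour-integral moments computed by the trapezoidal rule."""
    n = A.shape[0]
    rng = np.random.default_rng(0)
    u = rng.standard_normal(n)
    v = rng.standard_normal(n)
    mu = np.zeros(2 * m, dtype=complex)
    for j in range(n_quad):
        zeta = np.exp(2j * np.pi * (j + 0.5) / n_quad)
        z = center + radius * zeta
        y = np.linalg.solve(z * B - A, B @ v)  # one linear solve per node
        w = radius * zeta / n_quad             # quadrature weight: dz/(2*pi*i)
        for k in range(2 * m):
            mu[k] += w * (radius * zeta) ** k * (u @ y)
    # Hankel matrices of moments; their pencil reveals the interior eigenvalues.
    H0 = np.array([[mu[i + j] for j in range(m)] for i in range(m)])
    H1 = np.array([[mu[i + j + 1] for j in range(m)] for i in range(m)])
    U, s, Vh = np.linalg.svd(H0)
    r = int(np.sum(s > 1e-10 * s[0]))          # number of eigenvalues found
    T = (U[:, :r].conj().T @ H1 @ Vh[:r, :].conj().T) / s[:r]
    return center + np.linalg.eigvals(T)
```

The SVD truncation of the Hankel pencil is what keeps the method stable when fewer than `m` eigenvalues lie inside the contour.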

A Parallel Bisection and Inverse Iteration Solver for a Subset of Eigenpairs of Symmetric Band Matrices

The tridiagonalization of real symmetric dense matrices and its back-transformation, used for computing eigenpairs, are known to be the execution-time bottleneck in parallel processing owing to the communication cost and the number of floating-point operations. To overcome this problem, we focus on the real symmetric band eigensolvers proposed by Gupta and by Murata, since these eigensolvers are composed of the bisection and inverse iteration algorithms and involve neither the tridiagonalization of real symmetric band matrices nor its back-transformation. In this paper, the following parallel solver for computing a subset of eigenpairs of real symmetric band matrices is proposed on the basis of Murata's eigensolver: the desired eigenvalues of the target band matrices are computed directly by a parallel version of Murata's bisection algorithm, and the corresponding eigenvectors are computed by a block inverse iteration algorithm with reorthogonalization, which can be parallelized with lower communication cost than the classical inverse iteration algorithm. Numerical experiments on shared-memory multi-core processors show that the proposed eigensolver is faster than conventional solvers.

Hiroyuki Ishigami, Hidehiko Hasegawa, Kinji Kimura, Yoshimasa Nakamura
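The bisection half of such a solver rests on the Sturm count: the number of negative pivots in the factorization of T − σI equals the number of eigenvalues below σ. A minimal pure-Python sketch for the tridiagonal (bandwidth-1) case — the band algorithms of Gupta and Murata generalize this counting step:

```python
def count_below(d, e, sigma):
    """Number of eigenvalues of the symmetric tridiagonal matrix with
    diagonal d and off-diagonal e that are less than sigma, obtained from
    the signs of the pivots of T - sigma*I (Sturm count).
    Robust implementations also guard against a pivot q hitting exactly 0."""
    count = 0
    q = d[0] - sigma
    if q < 0:
        count += 1
    for i in range(1, len(d)):
        q = d[i] - sigma - e[i - 1] ** 2 / q
        if q < 0:
            count += 1
    return count

def bisect_eigenvalue(d, e, k, lo, hi, tol=1e-12):
    """k-th smallest eigenvalue (k = 1, 2, ...) in [lo, hi] by bisection."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if count_below(d, e, mid) >= k:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

Each eigenvalue can be bisected independently, which is what makes this approach embarrassingly parallel across the desired subset.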

The Flexible ILU Preconditioning for Solving Large Nonsymmetric Linear Systems of Equations

The ILU factorization is one of the most popular preconditioners for Krylov subspace methods such as GMRES. The properties of a preconditioner derived from an ILU factorization depend strongly on the dropping rules. Recently, Zhang et al. (Numer Linear Algebra Appl 19:555–569, 2011) proposed a flexible incomplete Cholesky (IC) factorization for symmetric linear systems. This paper studies the extension of that IC factorization to the nonsymmetric case. The new algorithm, called the Crout version of the flexible ILU factorization, attempts to reduce the number of nonzero elements in the preconditioner and the computation time during the GMRES iterations. Numerical results show that our approach is effective and useful.

Takatoshi Nakamura, Takashi Nodera
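For readers who want to experiment with ILU-preconditioned GMRES, here is a minimal SciPy sketch. This is not the authors' Crout-type flexible ILU: SciPy's `spilu` applies a threshold-based dropping rule controlled by `drop_tol` and `fill_factor`, which illustrates the same trade-off between preconditioner sparsity and strength:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Nonsymmetric, diagonally dominant tridiagonal test matrix.
n = 400
A = sp.diags([-1.3, 4.0, -0.7], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Incomplete LU with dropping: a looser drop_tol gives a sparser but
# weaker preconditioner; a tighter one approaches the exact LU.
ilu = spla.spilu(A, drop_tol=1e-3, fill_factor=5)
M = spla.LinearOperator((n, n), matvec=ilu.solve)

x, info = spla.gmres(A, b, M=M, restart=30, maxiter=200)
res = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
```

Varying `drop_tol` and watching the iteration count is a quick way to see the effect a dropping rule has on the Krylov solver.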

Improved Coefficients for Polynomial Filtering in ESSEX

The ESSEX project is an ongoing effort to provide exascale-enabled sparse eigensolvers, especially for quantum physics and related application areas. In this paper we first briefly summarize some key achievements that have been made within this project. Then we focus on a projection-based eigensolver with polynomial approximation of the projector. This eigensolver can be used for computing hundreds of interior eigenvalues of large sparse matrices. We describe techniques that allow using lower-degree polynomials than are possible with the standard Chebyshev expansion of the window function and kernel smoothing. With these polynomials, the degree, and thus the number of matrix–vector multiplications, can typically be reduced by roughly one half, resulting in comparable savings in runtime.

Martin Galgon, Lukas Krämer, Bruno Lang, Andreas Alvermann, Holger Fehske, Andreas Pieper, Georg Hager, Moritz Kreutzer, Faisal Shahzad, Gerhard Wellein, Achim Basermann, Melven Röhrig-Zöllner, Jonas Thies
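The baseline that the chapter improves upon — a Chebyshev expansion of the window (indicator) function with kernel smoothing — can be sketched as follows. The Jackson damping convention used here is one common choice from the kernel polynomial method literature, not ESSEX's improved coefficients:

```python
import numpy as np

def cheb_window_coeffs(a, b, degree):
    """Chebyshev coefficients c_k of the indicator function of [a, b]
    inside [-1, 1], damped with Jackson factors to suppress Gibbs
    oscillations (one common convention; others differ slightly)."""
    k = np.arange(degree + 1)
    ta, tb = np.arccos(b), np.arccos(a)   # arccos reverses the order of a, b
    c = np.empty(degree + 1)
    c[0] = (tb - ta) / np.pi
    c[1:] = 2.0 * (np.sin(k[1:] * tb) - np.sin(k[1:] * ta)) / (k[1:] * np.pi)
    N = degree + 1
    g = ((N - k + 1) * np.cos(np.pi * k / (N + 1))
         + np.sin(np.pi * k / (N + 1)) / np.tan(np.pi / (N + 1))) / (N + 1)
    return c * g

def eval_cheb(c, x):
    """Evaluate sum_k c_k T_k(x) by the three-term recurrence."""
    t0, t1 = np.ones_like(x), x
    y = c[0] * t0 + c[1] * t1
    for ck in c[2:]:
        t0, t1 = t1, 2 * x * t1 - t0
        y += ck * t1
    return y
```

Applied to a matrix, the same recurrence turns each coefficient into one matrix–vector multiplication, which is why lowering the degree translates directly into runtime savings.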

Eigenspectrum Calculation of the O(a)-Improved Wilson-Dirac Operator in Lattice QCD Using the Sakurai-Sugiura Method

We have developed a computer code to find eigenvalues and eigenvectors of non-Hermitian sparse matrices arising in lattice quantum chromodynamics (lattice QCD). The Sakurai-Sugiura (SS) method (Sakurai and Sugiura, J Comput Appl Math 159:119, 2003) is employed here; it is based on a contour integral and allows us to obtain the desired eigenvalues located inside a given contour in the complex plane. We apply the method to calculating several low-lying eigenvalues of the non-Hermitian O(a)-improved Wilson-Dirac operator D (Sakurai et al., Comput Phys Commun 181:113, 2010). Evaluating the low-lying eigenvalues is crucial, since they determine the sign of the determinant det D, an important quantity in lattice QCD. We are particularly interested in cases in which the lowest eigenvalues are equal or close to zero in the complex plane. Our implementation is tested for the Wilson-Dirac operator in the free case, for which the eigenvalues are analytically known. We also carry out several numerical experiments using different sets of gauge field configurations obtained in the quenched approximation as well as in full QCD simulation almost at the physical point. Various lattice sizes L_x × L_y × L_z × L_t are considered, from 8³ × 16 to 96⁴, the matrix order 12 L_x L_y L_z L_t thus ranging from 98,304 to 1,019,215,872.

Hiroya Suno, Yoshifumi Nakamura, Ken-Ichi Ishikawa, Yoshinobu Kuramashi, Yasunori Futamura, Akira Imakura, Tetsuya Sakurai

Properties of Definite Bethe–Salpeter Eigenvalue Problems

The Bethe–Salpeter eigenvalue problem is solved in condensed matter physics to estimate the absorption spectrum of solids. It is a structured eigenvalue problem, whose special structure also appears in other approaches for studying electron excitation in molecules or solids. When the Bethe–Salpeter Hamiltonian matrix is definite, the corresponding eigenvalue problem can be reduced to a symmetric eigenvalue problem. Nevertheless, the special structure leads to a number of interesting spectral properties. We describe these properties, which are crucial for developing efficient and reliable numerical algorithms for solving this class of problems.

Meiyue Shao, Chao Yang

Preconditioned Iterative Methods for Eigenvalue Counts

We describe preconditioned iterative methods for estimating the number of eigenvalues of a Hermitian matrix within a given interval. Such an estimate is useful in a number of applications; it can also be used to develop an efficient spectrum-slicing strategy for computing many eigenpairs of a Hermitian matrix. Our method is based on Lanczos- and Arnoldi-type iterations. We show that, with a properly defined preconditioner, only a few iterations may be needed to obtain a good estimate of the number of eigenvalues within a prescribed interval. We also demonstrate that the number of iterations required by the proposed preconditioned schemes is independent of the size and condition number of the matrix. The efficiency of the methods is illustrated on several problems arising from density functional theory based electronic structure calculations.

Eugene Vecharynski, Chao Yang
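A simple (unpreconditioned) variant of such an estimator combines Hutchinson's stochastic trace estimator with an approximate spectral projector. In the sketch below — our own illustration, not the chapter's method — the projector is applied by contour quadrature with dense solves:

```python
import numpy as np

def estimate_count(A, alpha, beta, n_quad=16, n_samples=30, seed=0):
    """Stochastic estimate of the number of eigenvalues of a symmetric
    matrix A in [alpha, beta]: Hutchinson trace of the spectral projector,
    applied via contour-integral quadrature (one linear solve per node).
    Dense solves are used purely for illustration."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    c, r = 0.5 * (alpha + beta), 0.5 * (beta - alpha)
    total = 0.0
    for _ in range(n_samples):
        v = rng.choice([-1.0, 1.0], size=n)      # Rademacher probe vector
        pv = np.zeros(n, dtype=complex)
        for j in range(n_quad):
            theta = 2 * np.pi * (j + 0.5) / n_quad
            z = c + r * np.exp(1j * theta)
            w = r * np.exp(1j * theta) / n_quad  # weight for dz/(2*pi*i)
            pv += w * np.linalg.solve(z * np.eye(n) - A, v)
        total += v @ pv.real                     # sample of trace(P)
    return total / n_samples
```

In a practical spectrum-slicing code the dense solves would be replaced by a preconditioned iterative solver, which is exactly where the chapter's contribution lies.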

Comparison of Tridiagonalization Methods Using High-Precision Arithmetic with MuPAT

In general, when computing the eigenvalues of a symmetric matrix, the matrix is first tridiagonalized using some orthogonal transformation. The Householder transformation, one such tridiagonalization method, is accurate and stable for dense matrices, but is not applicable to sparse matrices because of the required memory space. The Lanczos and Arnoldi methods are also used for tridiagonalization and are applicable to sparse matrices, but these methods are sensitive to computational errors. To obtain a stable algorithm, it is necessary either to apply numerous techniques to the original algorithm, or simply to use more accurate arithmetic in the original algorithm. In floating-point arithmetic, computation errors are unavoidable, but they can be reduced by using high-precision arithmetic, such as double-double (DD) or quad-double (QD) arithmetic. In the present study, we compare double, double-double, and quad-double arithmetic for three tridiagonalization methods: the Householder method, the Lanczos method, and the Arnoldi method. To evaluate the robustness of these methods, we applied them to dense matrices that are appropriate for the Householder method. It was found that, using high-precision arithmetic, the Arnoldi method can produce good tridiagonal matrices for some problems, whereas the Lanczos method cannot.

Ryoya Ino, Kohei Asami, Emiko Ishiwata, Hidehiko Hasegawa
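Double-double arithmetic builds on error-free transformations such as Knuth's TwoSum. The following minimal sketch (addition only, without the normalization refinements of a full DD library) shows how the low-order word recovers digits that plain doubles discard:

```python
def two_sum(a, b):
    """Error-free transformation: returns (s, e) with a + b = s + e exactly,
    where s = fl(a + b) and e is the rounding error (Knuth's TwoSum)."""
    s = a + b
    t = s - a
    e = (a - (s - t)) + (b - t)
    return s, e

def dd_add(x, y):
    """Add two double-double numbers x = (hi, lo) and y = (hi, lo):
    the high parts are combined error-free, the errors accumulate in lo."""
    s, e = two_sum(x[0], y[0])
    e += x[1] + y[1]
    hi, lo = two_sum(s, e)
    return hi, lo
```

Accumulating 1e-17 into 1.0 is invisible in plain double precision but is retained exactly in the low-order word, which is precisely the effect that stabilizes the Lanczos and Arnoldi recurrences in the study above.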

Computation of Eigenvectors for a Specially Structured Banded Matrix

For a specially structured nonsymmetric banded matrix, which is related to a discrete integrable system, we propose a novel method to compute all the eigenvectors. We show that the eigenvector entries are arranged radiating out from the origin on the complex plane. This property enables us to efficiently compute all the eigenvectors. Although the intended matrix has complex eigenvalues, the proposed method can compute all the complex eigenvectors using only arithmetic of real numbers.

Hiroshi Takeuchi, Kensuke Aihara, Akiko Fukuda, Emiko Ishiwata

Monotonic Convergence to Eigenvalues of Totally Nonnegative Matrices in an Integrable Variant of the Discrete Lotka-Volterra System

The Lotka-Volterra (LV) system describes a simple predator-prey model in mathematical biology. The hungry Lotka-Volterra (hLV) system, in which each predator is assumed to prey on two or more species, is an extension; variants involving summations and products of nonlinear terms are referred to as summation-type and product-type hLV systems, respectively. Time-discretizations of these systems are considered in the study of integrable systems, and have been shown to be applicable to computing eigenvalues of totally nonnegative (TN) matrices. Monotonic convergence to the eigenvalues of TN matrices, however, has not yet been observed in the time-discretization of the product-type hLV system. In this paper, we show the existence of a center manifold associated with the time-discretization of the product-type hLV system, and then clarify how the solutions approach an equilibrium corresponding to the eigenvalues of TN matrices in the final phase of convergence.

Akihiko Tobita, Akiko Fukuda, Emiko Ishiwata, Masashi Iwasaki, Yoshimasa Nakamura

Accuracy Improvement of the Shifted Block BiCGGR Method for Linear Systems with Multiple Shifts and Multiple Right-Hand Sides

We consider solving linear systems with multiple shifts and multiple right-hand sides. In order to solve these linear systems efficiently, we develop the Shifted Block BiCGGR method. This method is based on the shift-invariance property of block Krylov subspaces. Using this property, the shifted systems can be solved in the process of solving the seed system without additional matrix-vector multiplications. The Shifted Block BiCGGR method can generate highly accurate approximate solutions of the seed system. However, the accuracy of the approximate solutions of the shifted systems may deteriorate due to errors in the matrix multiplications appearing in the algorithm. In this paper, we improve the accuracy of the approximate solutions of the shifted systems generated by the Shifted Block BiCGGR method.

Hiroto Tadano, Shusaku Saito, Akira Imakura
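The shift-invariance property the method exploits — K_m(A + σI, b) = K_m(A, b) for any shift σ — is easy to verify numerically. In the sketch below, the basis construction and test problem are our own illustration:

```python
import numpy as np

def krylov_basis(M, b, m):
    """Orthonormal basis of K_m(M, b) = span{b, M b, ..., M^{m-1} b}.
    Columns are scaled to avoid overflow; QR does the orthogonalization."""
    K = np.empty((len(b), m))
    K[:, 0] = b / np.linalg.norm(b)
    for j in range(1, m):
        w = M @ K[:, j - 1]
        K[:, j] = w / np.linalg.norm(w)
    Q, _ = np.linalg.qr(K)
    return Q

rng = np.random.default_rng(1)
n, m, sigma = 30, 5, 2.5
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

Q = krylov_basis(A, b, m)
Qs = krylov_basis(A + sigma * np.eye(n), b, m)
# K_m(A, b) and K_m(A + sigma*I, b) are the same subspace, so projecting
# one orthonormal basis onto the other loses nothing.
err = np.linalg.norm(Qs - Q @ (Q.T @ Qs))
```

This is why one Krylov basis built for the seed system serves every shifted system for free: only the recurrence coefficients change with σ, not the matrix-vector products.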

Memory-Saving Technique for the Sakurai–Sugiura Eigenvalue Solver Using the Shifted Block Conjugate Gradient Method

In recent years, a numerical quadrature-based sparse eigensolver—the so-called Sakurai–Sugiura method—and its variants have attracted attention because of their highly coarse-grained parallelism. In this paper, we propose a memory-saving technique for a variant of the Sakurai–Sugiura method. The proposed technique can be utilized when the inner linear systems are solved with the shifted block conjugate gradient method. Using our technique, eigenvalues and residual norms can be obtained without explicitly computing the eigenvectors. This saves a considerable amount of memory space when eigenvectors are unnecessary. Our technique is also beneficial when eigenvectors are necessary, because the residual norms of the target eigenpairs can be cheaply computed and monitored during each iteration step of the inner linear solver.

Yasunori Futamura, Tetsuya Sakurai

Filter Diagonalization Method by Using a Polynomial of a Resolvent as the Filter for a Real Symmetric-Definite Generalized Eigenproblem

For a real symmetric-definite generalized eigenproblem Av = λBv (B > 0) of size N, we solve for those eigenpairs whose eigenvalues lie in a real interval [a, b] by the filter diagonalization method. In our present study, the filter we use is the real part of a polynomial of a resolvent: F = Re Σ_{k=1}^{n} γ_k {R(ρ)}^k. Here R(ρ) = (A − ρB)^{-1}B is the resolvent with a non-real complex shift ρ, and the γ_k are coefficients. In our experiments, the (half) degree n is 15 or 20. By tuning the shift ρ and the coefficients {γ_k} well, the filter passes those eigenvectors whose eigenvalues lie in a neighborhood of [a, b], but strongly reduces those whose eigenvalues are separated from the interval. We apply the filter to a set of sufficiently many B-orthonormal random vectors {x^(ℓ)} to obtain another set {y^(ℓ)}. From both sets of vectors and the properties of the filter, we construct a basis that approximately spans an invariant subspace whose eigenvalues lie in a neighborhood of [a, b]. Applying the Rayleigh-Ritz procedure to this basis gives approximations of all required eigenpairs. Experiments on banded problems showed that this approach works successfully.

Hiroshi Murakami

Off-Diagonal Perturbation, First-Order Approximation and Quadratic Residual Bounds for Matrix Eigenvalue Problems

When a symmetric block diagonal matrix [A₁ 0; 0 A₂] undergoes an off-diagonal perturbation [A₁ E₁₂; E₁₂ᵀ A₂], the eigenvalues of these matrices are known to differ only by O(‖E₁₂‖²/gap), which scales quadratically with the norm of the perturbation. Here gap measures the distance between eigenvalues, and plays a key role in the constant. Closely related is the first-order perturbation expansion for simple eigenvalues of a matrix. It turns out that the accuracy of the first-order approximation is also O(‖E‖²/gap), where E is the perturbation matrix. Also connected are the residual bounds for approximate eigenvalues obtained by the Rayleigh-Ritz process, whose accuracy again scales quadratically with the residual, and inverse-proportionally with the gap between eigenvalues. All these results are tightly linked, but the connection appears to be rarely discussed. This work elucidates the connection by showing that all these results can be understood in a unifying manner via the quadratic perturbation bounds for block diagonal matrices undergoing off-diagonal perturbation. These results are essentially known for a wide range of eigenvalue problems: symmetric eigenproblems (for which the explicit constant can be derived), and nonsymmetric and generalized eigenvalue problems. We also extend such results to matrix polynomials, show that the accuracy of a first-order expansion again scales as O(‖E‖²/gap), and argue that two-sided projection methods are to be preferred to one-sided projection methods for nonsymmetric eigenproblems, to obtain higher accuracy in the computed eigenvalues.

Yuji Nakatsukasa
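The quadratic scaling can be observed numerically. In the sketch below (our own toy example, not the chapter's analysis), the eigenvalue movement of a block diagonal matrix under an off-diagonal perturbation of size ε stays bounded by a modest multiple of ‖E₁₂‖²/gap as ε shrinks:

```python
import numpy as np

rng = np.random.default_rng(0)
# Block diagonal A = diag(A1, A2) whose two spectra are separated by a gap.
A1 = np.diag([1.0, 1.2, 1.5])
A2 = np.diag([5.0, 5.5, 6.0])
Z = np.zeros((3, 3))
A = np.block([[A1, Z], [Z, A2]])
gap = 5.0 - 1.5   # distance between the two spectra

ratios = []
for eps in [1e-1, 1e-2, 1e-3]:
    E12 = eps * rng.standard_normal((3, 3))
    P = np.block([[Z, E12], [E12.T, Z]])
    # eigvalsh returns sorted eigenvalues, so differences pair correctly.
    shift = np.max(np.abs(np.linalg.eigvalsh(A + P) - np.linalg.eigvalsh(A)))
    ratios.append(shift / (np.linalg.norm(E12, 2) ** 2 / gap))
```

Because each `ratio` stays O(1) while ε drops by factors of 10, the eigenvalue movement is quadratic in the perturbation norm, exactly the O(‖E₁₂‖²/gap) behavior discussed above.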

An Elementary Derivation of the Projection Method for Nonlinear Eigenvalue Problems Based on Complex Contour Integration

The Sakurai-Sugiura (SS) projection method for the generalized eigenvalue problem has been extended to the nonlinear eigenvalue problem A(z)w = 0, where A(z) is an analytic matrix valued function, by several authors. To the best of the authors’ knowledge, existing derivations of these methods rely on canonical forms of an analytic matrix function such as the Smith form or the theorem of Keldysh. While these theorems are powerful tools, they require advanced knowledge of both analysis and linear algebra and are rarely mentioned even in advanced textbooks of linear algebra. In this paper, we present an elementary derivation of the SS-type algorithm for the nonlinear eigenvalue problem, assuming that the wanted eigenvalues are all simple. Our derivation uses only the analyticity of the eigenvalues and eigenvectors of a parametrized matrix A(z), which is a standard result in matrix perturbation theory. Thus we expect that our approach will provide an easily accessible path to the theory of nonlinear SS-type methods.

Yusaku Yamamoto

Fast Multipole Method as a Matrix-Free Hierarchical Low-Rank Approximation

There has been a large increase in the amount of work on hierarchical low-rank approximation methods, where the interest is shared by multiple communities that previously did not intersect. The objective of this article is two-fold: to provide a thorough review of the recent advances in this field from both analytical and algebraic perspectives, and to present a comparative benchmark of two highly optimized implementations of contrasting methods for some simple yet representative test cases. To achieve the former objective, the first half of the paper takes the form of a survey. We categorize the recent advances in this field from the perspective of the compute-memory tradeoff, which has not previously been considered in much detail in this area. The benchmark tests reveal a large difference in memory consumption and performance between the different methods.

Rio Yokota, Huda Ibeid, David Keyes
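The algebraic viewpoint rests on the observation that interaction blocks between well-separated clusters are numerically low-rank. The toy example below (our own, using a 1/|x − y| kernel) makes the rank decay visible via the SVD:

```python
import numpy as np

# Two well-separated 1-D clusters of points and the 1/|x - y| kernel
# between them; the resulting interaction block is numerically low-rank.
x = np.linspace(0.0, 1.0, 200)   # source cluster
y = np.linspace(3.0, 4.0, 200)   # target cluster, separated by a gap of 2
K = 1.0 / np.abs(x[:, None] - y[None, :])

s = np.linalg.svd(K, compute_uv=False)
rank = int(np.sum(s > 1e-10 * s[0]))   # numerical rank at tolerance 1e-10
```

Hierarchical methods, whether analytic (FMM) or algebraic (H-matrices), exploit exactly this: a full 200 × 200 block compresses to a rank around ten, and the compute-memory tradeoff surveyed in the chapter comes from how that compression is computed and stored.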

Recent Progress in Linear Response Eigenvalue Problems

Linear response eigenvalue problems arise from the calculation of excitation states of many-particle systems in computational materials science. In this paper, from the point of view of numerical linear algebra and matrix computations, we review progress on linear response eigenvalue problems, in both theory and algorithms, since 2012.

Zhaojun Bai, Ren-Cang Li

Backmatter
