Usage Notes
An ordinary linear eigensystem problem is represented by the equation Ax = λx, where A denotes an n × n matrix. The value λ is an eigenvalue, and x ≠ 0 is the corresponding eigenvector. The eigenvector is determined only up to a scalar factor. In all functions, we have chosen this factor so that x has Euclidean length one and the component of x of largest magnitude is positive. If x is a complex vector, this component of largest magnitude is scaled to be real and positive; the entry where this component occurs can be arbitrary for eigenvectors having nonunique maximum magnitude values. The eigenvalues and corresponding eigenvectors are sorted and returned in order of largest to smallest complex magnitude.
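The normalization convention above can be sketched directly; the following uses NumPy for illustration only (the IMSL functions apply this scaling internally), and the input vector is a hypothetical example:

```python
import numpy as np

def normalize_eigenvector(x):
    """Scale x to Euclidean length one with its largest-magnitude
    component real and positive (the convention described above)."""
    x = np.asarray(x, dtype=complex)
    x = x / np.linalg.norm(x)         # unit Euclidean length
    k = np.argmax(np.abs(x))          # index of the largest-magnitude entry
    x = x * (np.abs(x[k]) / x[k])     # rotate that entry to be real and positive
    return x

v = normalize_eigenvector([3j, -4j])
```

Here the largest-magnitude component (the second entry) ends up real and positive, and the vector has unit length.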
A generalized linear eigensystem problem is represented by Ax = λBx where A and B are n × n matrices. The value λ is a generalized eigenvalue, and x is the corresponding generalized eigenvector. The generalized eigenvectors are normalized in the same manner as the ordinary eigensystem problem.
Error Analysis and Accuracy
The remarks in this section are for ordinary eigenvalue problems. Except in special cases, functions will not return the exact eigenvalue-eigenvector pair for the ordinary eigenvalue problem Ax = λx. Typically, the computed pair λ̃, x̃ is an exact eigenvalue-eigenvector pair for a "nearby" matrix A + E. Information about E is known only in terms of bounds of the form ∥E∥2 ≤ f(n)∥A∥2ɛ. The value of f(n) depends on the algorithm, but is typically a small fractional power of n. The parameter ɛ is the machine precision. By a theorem due to Bauer and Fike (see Golub and Van Loan 1989, p. 342),
min{∣λ − λ̃∣ : λ ∈ σ(A)} ≤ κ(X)∥E∥2
where σ(A) is the set of all eigenvalues of A (called the spectrum of A), X is the matrix of eigenvectors, ∥⋅∥2 is the Euclidean norm, and κ(X) is the condition number of X defined as κ(X) = ∥X∥2∥X⁻¹∥2. If A is a real symmetric or complex Hermitian matrix, then its eigenvector matrix X is respectively orthogonal or unitary. For these matrices, κ(X) = 1.
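The Bauer–Fike bound can be checked numerically; the following NumPy sketch (the random matrix and perturbation size are hypothetical, chosen only for the demonstration) verifies that every eigenvalue of A + E lies within κ(X)∥E∥2 of the spectrum of A:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
E = 1e-6 * rng.standard_normal((4, 4))   # small perturbation

lam, X = np.linalg.eig(A)                # spectrum and eigenvector matrix of A
kappa = np.linalg.cond(X, 2)             # kappa(X) = ||X||_2 * ||X^-1||_2
bound = kappa * np.linalg.norm(E, 2)     # Bauer-Fike bound

mu = np.linalg.eigvals(A + E)            # eigenvalues of the perturbed matrix
# largest distance from any perturbed eigenvalue to the spectrum of A
worst = max(min(abs(m - l) for l in lam) for m in mu)
```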
The accuracy of the computed eigenvalues λ̃j and eigenvectors x̃j can be checked by computing their performance index τ. The performance index is defined to be
τ = max{ ∥Ax̃j − λ̃jx̃j∥2 / (n ɛ ∥A∥2 ∥x̃j∥2) : 1 ≤ j ≤ n }
where ɛ is again the machine precision.
The performance index τ is related to the error analysis because
∥Ex̃j∥2 ≤ τ n ɛ ∥A∥2 ∥x̃j∥2
where E is the "nearby" matrix discussed above.
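The performance index can be evaluated directly from the residuals; a minimal NumPy sketch (the IMSL functions compute τ internally, and the test matrix here is illustrative):

```python
import numpy as np

def performance_index(A, lam, X):
    """tau = max_j ||A x_j - lam_j x_j||_2 / (n * eps * ||A||_2 * ||x_j||_2)."""
    n = A.shape[0]
    eps = np.finfo(float).eps
    resid = A @ X - X * lam                # column j holds A x_j - lam_j x_j
    num = np.linalg.norm(resid, axis=0)
    den = n * eps * np.linalg.norm(A, 2) * np.linalg.norm(X, axis=0)
    return float(np.max(num / den))

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
lam, X = np.linalg.eigh(A)                 # accurate symmetric eigensolve
tau = performance_index(A, lam, X)         # expect a small value here
```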
While the exact value of τ is precision- and data-dependent, the performance of an eigensystem analysis function is defined as excellent if τ < 1, good if 1 ≤ τ ≤ 100, and poor if τ > 100. This is an arbitrary definition, but large values of τ can serve as a warning that there is a significant error in the calculation.
If the condition number κ(X) of the eigenvector matrix X is large, there can be large errors in the eigenvalues even if τ is small. In particular, it is often difficult to recognize near-multiple eigenvalues or unstable mathematical problems from numerical results. This facet of the eigenvalue problem is often difficult for users to understand. Suppose the accuracy of an individual eigenvalue is desired. This question can be answered approximately by computing the condition number of an individual eigenvalue (see Golub and Van Loan 1989, pp. 344–345). For matrices A such that the computed array of normalized eigenvectors X is invertible, the condition number of λj is
κj = ∥ejᵀX⁻¹∥2,
the Euclidean length of the j-th row of X⁻¹, where ej denotes the j-th unit vector. Users can choose to compute X⁻¹ using function imsl_c_lin_sol_gen in Chapter 1, "Linear Systems." An approximate bound for the accuracy of a computed eigenvalue is then given by κjɛ∥A∥2. To compute an approximate bound for the relative accuracy of an eigenvalue, divide this bound by ∣λj∣.
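The κj can be computed as row norms of the inverse eigenvector matrix; in this NumPy sketch, `np.linalg.inv` stands in for imsl_c_lin_sol_gen, and the nearly defective test matrix is a hypothetical example chosen to make the κj large:

```python
import numpy as np

A = np.array([[1.0, 100.0],
              [0.0, 1.001]])               # nearly defective: ill-conditioned eigenvalues
lam, X = np.linalg.eig(A)
X = X / np.linalg.norm(X, axis=0)          # normalized eigenvector columns
Xinv = np.linalg.inv(X)                    # imsl_c_lin_sol_gen could compute this
kappa = np.linalg.norm(Xinv, axis=1)       # kappa_j: Euclidean length of row j of X^-1

eps = np.finfo(float).eps
abs_bound = kappa * eps * np.linalg.norm(A, 2)   # approximate absolute error bounds
rel_bound = abs_bound / np.abs(lam)              # approximate relative error bounds
```

Even though the residuals for this matrix are tiny, the large κj warn that the individual eigenvalues carry little relative accuracy.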
Reformulating Generalized Eigenvalue Problems
The eigenvalue problem Ax = λBx is often difficult for users to analyze because it is frequently ill-conditioned. Occasionally, changes of variables can be performed on the given problem to ease this ill-conditioning. Suppose that B is singular but A is nonsingular. Define the reciprocal μ = λ⁻¹. Then the roles of A and B are interchanged so that the reformulated problem Bx = μAx is solved. The generalized eigenvalues μj = 0 correspond to eigenvalues λj = ∞. The remaining λj = μj⁻¹. The generalized eigenvectors for λj correspond to those for μj.
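The role interchange can be sketched as follows (NumPy for illustration; the matrices are hypothetical): with B singular and A nonsingular, Bx = μAx becomes the ordinary problem (A⁻¹B)x = μx, and μj = 0 maps to λj = ∞:

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 1.0]])                 # nonsingular A
B = np.array([[1.0, 0.0],
              [0.0, 0.0]])                 # singular B

mu, X = np.linalg.eig(np.linalg.solve(A, B))   # solve (A^-1 B) x = mu x
# mu_j = 0 corresponds to lambda_j = infinity; otherwise lambda_j = 1/mu_j
lam = np.array([np.inf if abs(m) < 1e-12 else 1.0 / m for m in mu])
```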
Now suppose that B is nonsingular. The user can solve the ordinary eigenvalue problem Cx = λx where C = B⁻¹A. The matrix C is subject to perturbations due to ill-conditioning and rounding errors when computing B⁻¹A. Computing the condition numbers of the eigenvalues for C may, however, be helpful for analyzing the accuracy of results for the generalized problem.
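A minimal sketch of this reduction, again using NumPy in place of the IMSL functions (the matrices are illustrative only):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
B = np.array([[2.0, 0.0],
              [0.0, 1.0]])                 # nonsingular B

C = np.linalg.solve(B, A)                  # forms C = B^-1 A without an explicit inverse
lam, X = np.linalg.eig(C)                  # ordinary problem C x = lambda x
# each column of X also satisfies the generalized problem A x = lambda B x
```

Solving with `np.linalg.solve` rather than forming B⁻¹ explicitly limits, but does not remove, the rounding error mentioned above.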
There is another method that users can consider to reduce the generalized problem to an alternate ordinary problem. This technique is based on first computing a matrix decomposition B = PQ where both P and Q are matrices that are "simple" to invert. Then, the given generalized problem is equivalent to the ordinary eigenvalue problem Fy = λy. The matrix F = P⁻¹AQ⁻¹ and the unnormalized eigenvectors of the generalized problem are given by x = Q⁻¹y. An example of this reformulation is used in the case where A and B are real and symmetric, with B positive definite. The function imsl_f_eig_symgen uses P = Rᵀ and Q = R where R is an upper-triangular matrix obtained from a Cholesky decomposition, B = RᵀR. The matrix F = R⁻ᵀAR⁻¹ is symmetric and real. Computation of the eigenvalue-eigenvector expansion for F is based on function imsl_f_eig_sym.
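The Cholesky-based reduction can be sketched with NumPy standing in for imsl_f_eig_symgen (A and B below are illustrative): B = RᵀR, F = R⁻ᵀAR⁻¹ is symmetric, and x = R⁻¹y recovers the generalized eigenvectors.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])                 # real symmetric
B = np.array([[2.0, 0.0],
              [0.0, 1.0]])                 # symmetric positive definite

R = np.linalg.cholesky(B).T                # upper triangular with B = R^T R
Rinv = np.linalg.inv(R)
F = Rinv.T @ A @ Rinv                      # symmetric ordinary problem F y = lambda y
lam, Y = np.linalg.eigh(F)                 # symmetric eigensolve (cf. imsl_f_eig_sym)
X = Rinv @ Y                               # unnormalized generalized eigenvectors
```

Because F is symmetric, its eigenvector matrix is orthogonal, so the reduced problem is perfectly conditioned in the sense of κ(X) = 1 discussed earlier.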