Usage Notes
Solving Systems of Linear Equations
A square system of linear equations has the form Ax = b, where A is a user-specified n × n matrix, b is a given right-hand-side vector of length n, and x is the solution vector of length n. Each entry of A and b must be specified by the user. The entire vector x is returned as output.
When A is invertible, a unique solution to Ax = b exists. The most commonly used direct method for solving Ax = b factors the matrix A into a product of triangular matrices and solves the resulting triangular systems of linear equations. Functions that use direct methods for solving systems of linear equations all compute the solution to Ax = b. Thus, if function imsl_f_superlu or a function with the prefix “imsl_f_lin_sol” is called with the required arguments, a pointer to x is returned by default. Additional tasks, such as only factoring the matrix A into a product of triangular matrices, can be done using keywords.
Matrix Factorizations
In some applications, it is desirable to just factor the n × n matrix A into a product of two triangular matrices. This can be done by calling the appropriate function for solving the system of linear equations Ax = b. Suppose that, in addition to the solution x of a linear system of equations Ax = b, the LU factorization of A is desired. Use the keyword IMSL_FACTOR in the function imsl_f_lin_sol_gen to obtain access to the factorization. If only the factorization is desired, use the keywords IMSL_FACTOR_ONLY and IMSL_FACTOR. For function imsl_f_superlu, use keyword IMSL_RETURN_SPARSE_LU_FACTOR to obtain the LU factorization. If only the factorization is desired, then the keywords IMSL_RETURN_SPARSE_LU_FACTOR and IMSL_FACTOR_SOLVE with value 1 are required.
Besides the basic matrix factorizations, such as LU and LLT, additional matrix factorizations are also provided. For a real matrix A, its QR factorization can be computed by the function imsl_f_lin_least_squares_gen. Functions for computing the singular value decomposition (SVD) of a matrix are discussed in a later section.
Matrix Inversions
The inverse of an n × n nonsingular matrix can be obtained by using the keyword IMSL_INVERSE in functions for solving systems of linear equations. The inverse of a matrix need not be computed if the purpose is to solve one or more systems of linear equations. Even with multiple right-hand sides, solving a system of linear equations by computing the inverse and performing matrix multiplication is usually more expensive than the method discussed in the next section.
Multiple Right-Hand Sides
Consider the case where a system of linear equations has more than one right-hand-side vector. It is most economical to find the solution vectors by first factoring the coefficient matrix A into products of triangular matrices. Then, the resulting triangular systems of linear equations are solved for each right-hand side. When A is a real general matrix, access to the LU factorization of A is obtained by using the keywords IMSL_FACTOR and IMSL_FACTOR_ONLY in function imsl_f_lin_sol_gen. The solution xk for the k-th right-hand-side vector bk is then found by two triangular solves, Lyk = bk and Uxk = yk. The keyword IMSL_SOLVE_ONLY in the function imsl_f_lin_sol_gen is used to solve each right-hand side. These arguments are found in other functions for solving systems of linear equations. For function imsl_f_superlu, use the keywords IMSL_RETURN_SPARSE_LU_FACTOR and IMSL_FACTOR_SOLVE with value 1 to get the LU factorization, and then keyword IMSL_FACTOR_SOLVE with value 2 to get the solution for different right-hand sides.
Least-Squares Solutions and QR Factorizations
Least-squares solutions are usually computed for an over-determined system of linear equations Am×n x = b, where m > n. A least-squares solution x minimizes the Euclidean length of the residual vector r = Ax − b. The function imsl_f_lin_least_squares_gen computes a unique least-squares solution for x when A has full column rank. If A is rank-deficient, then the base solution for some variables is computed. These variables consist of the resulting columns after the interchanges. The QR decomposition, with column interchanges or pivoting, is computed such that AP = QR. Here, Q is orthogonal, R is upper-trapezoidal with its diagonal elements nonincreasing in magnitude, and P is the permutation matrix determined by the pivoting. The base solution xB is obtained by solving R(PT)x = QTb for the base variables. For details, see the “Description” section of function imsl_f_lin_least_squares_gen. The QR factorization of a matrix A such that AP = QR with P specified by the user can be computed using keywords.
Least-squares problems with linear constraints and one right-hand side can be solved. These equations are

Am×n x = b,

subject to constraints and simple bounds

bl ≤ Cx ≤ bu
xl ≤ x ≤ xu

Here A is the coefficient matrix of the least-squares equations, b is the right-hand side, and C is the coefficient matrix of the constraints. The vectors bl, bu, xl, and xu are the lower and upper bounds on the constraints and the variables. This general problem is solved with imsl_f_lin_lsq_lin_constraints.
For the special case where there are only non-negativity constraints, x ≥ 0, solve the problem with imsl_f_nonneg_least_squares.
Non-Negative Matrix Factorization
If the matrix Am×n ≥ 0, factor it as a product of two matrices, Am×n = Fm×k Gk×n. The matrices F and G are both non-negative and k ≤ min(m, n). The factors are computed so that the residual matrix E = A − FG has a sum-of-squares norm that is minimized. There are normalizations of Fm×k and Gk×n described in the documentation of imsl_f_nonneg_matrix_factorization.
Singular Value Decompositions and Generalized Inverses
The SVD of an m × n matrix A is a matrix decomposition A = USVT. With q = min(m, n), the factors Um×q and Vn×q are orthogonal matrices, and Sq×q is a non-negative diagonal matrix with nonincreasing diagonal terms. The function imsl_f_lin_svd_gen computes the singular values of A by default. Using keywords, part or all of the U and V matrices, an estimate of the rank of A, and the generalized inverse of A can also be obtained.
Ill-Conditioning and Singularity
An m × n matrix A is mathematically singular if there is an x ≠ 0 such that Ax = 0. In this case, the system of linear equations Ax = b does not have a unique solution. On the other hand, a matrix A is numerically singular if it is “close” to a mathematically singular matrix. Such problems are called ill-conditioned. If the numerical results with an ill-conditioned problem are unacceptable, users can either use more accuracy if it is available (for type float accuracy, switch to double) or they can obtain an approximate solution to the system. One form of approximation can be obtained using the SVD of A: if q = min(m, n) and

A = σ1u1v1T + … + σquqvqT

then the approximate solution is given by the following:

x = t1,1(bTu1)v1 + … + tk,k(bTuk)vk

The scalars ti,i are defined by ti,i = σi−1 if σi ≥ tol > 0, and ti,i = 0 otherwise.
The user specifies the value of tol. This value determines how “close” the given matrix is to a singular matrix. Further restrictions may apply to the number of terms in the sum, k ≤ q. For example, there may be a value of k ≤ q such that the scalars |(bTui)|, i > k are smaller than the average uncertainty in the right-hand side b. This means that these scalars can be replaced by zero; and hence, b is replaced by a vector that is within the stated uncertainty of the problem.