ConjugateGradient Class
Solves a real symmetric definite linear system using the conjugate gradient method with optional preconditioning.
Inheritance Hierarchy
System.Object
  Imsl.Math.ConjugateGradient

Namespace: Imsl.Math
Assembly: ImslCS (in ImslCS.dll) Version: 6.5.2.0
Syntax
[SerializableAttribute]
public class ConjugateGradient

The ConjugateGradient type exposes the following members.

Constructors
  Name                 Description
  ConjugateGradient    Conjugate gradient constructor.
Methods
  Name               Description
  Equals             Determines whether the specified object is equal to the current object. (Inherited from Object.)
  Finalize           Protected. Allows an object to try to free resources and perform other cleanup operations before it is reclaimed by garbage collection. (Inherited from Object.)
  GetHashCode        Serves as a hash function for a particular type. (Inherited from Object.)
  GetJacobi          Returns the Jacobi preconditioning matrix.
  GetType            Gets the Type of the current instance. (Inherited from Object.)
  MemberwiseClone    Protected. Creates a shallow copy of the current Object. (Inherited from Object.)
  SetJacobi          Defines a Jacobi preconditioner as the preconditioning matrix, that is, M is the diagonal of A.
  Solve              Solves a real symmetric positive or negative definite system Ax = b using a conjugate gradient method with or without preconditioning.
  ToString           Returns a string that represents the current object. (Inherited from Object.)
Properties
  Name             Description
  Iterations       The number of iterations needed by the conjugate gradient algorithm.
  MaxIterations    The maximum number of iterations allowed.
  RelativeError    The relative error used for stopping the algorithm.
Remarks

Class ConjugateGradient solves the symmetric positive or negative definite linear system Ax = b using the conjugate gradient method with optional preconditioning. This method is described in detail by Golub and Van Loan (1983, Chapter 10), and in Hageman and Young (1981, Chapter 7).

The preconditioning matrix M is a matrix that approximates A, and for which the linear system Mz = r is easy to solve. These two properties are in conflict; balancing them is a topic of current research. If no preconditioning matrix is specified, M is set to the identity, i.e., M = I.
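For example, with a Jacobi preconditioner (as set up by SetJacobi), M = diag(A), so solving Mz = r reduces to an elementwise division. The following is a minimal NumPy sketch of that step, not a use of the ImslCS API:

```python
import numpy as np

def jacobi_apply(A, r):
    """Solve M z = r with M = diag(A): an elementwise division."""
    return r / np.diag(A)

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
r = np.array([8.0, 9.0])
z = jacobi_apply(A, r)  # -> [2.0, 3.0], i.e. [8/4, 9/3]
```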

The number of iterations needed depends on the matrix and the error tolerance. As a rough guide,

\mathrm{itmax} = \sqrt{n} \quad \text{for } n \gg 1,
where n is the order of matrix A. See the references for details.

Let M be the preconditioning matrix, let b, p, r, x and z be vectors, and let \tau be the desired relative error. Then the algorithm used is as follows:

  • \lambda = -1
  • p_0 = x_0
  • r_1 = b-Ap_0
  • for k=1,\ldots,\mbox{{\tt{itmax}}}
    • z_k = M^{-1}r_k
    • if k=1 then
      • \beta_k = 1
      • p_k = z_k
    • else
      • \beta_k=(z_k^Tr_k)/(z_{k-1}^Tr_{k-1})
      • p_k=z_k+\beta_kp_{k-1}
    • \mbox{endif}
    • \alpha_k=(r_k^Tz_k)/(p_k^TAp_k)
    • x_k=x_{k-1}+\alpha_kp_k
    • r_{k+1}=r_k-\alpha_kAp_k
    • if (\|z_k\|_2 \leq \tau (1-\lambda)\|x_k\|_2) then
      • recompute \lambda
      • if (\|z_k\|_2 \leq \tau(1-\lambda)\|x_k\|_2) exit
    • endif
  • endfor
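The loop above can be sketched in NumPy as follows. This is an illustration of the algorithm, not the ImslCS implementation; in particular it replaces the adaptive eigenvalue estimate \lambda with a plain relative test on \|z_k\| (i.e., it takes \lambda = 0):

```python
import numpy as np

def pcg(A, b, x0=None, tol=1e-8, itmax=None):
    """Conjugate gradient with a Jacobi preconditioner M = diag(A).

    Simplified sketch: the adaptive lambda estimate in the algorithm
    above is replaced by the fixed test ||z_k|| <= tol * ||x_k||.
    """
    n = len(b)
    itmax = itmax if itmax is not None else 10 * n
    x = np.zeros(n) if x0 is None else x0.astype(float)
    m = np.diag(A)                       # Jacobi preconditioner M = diag(A)
    r = b - A @ x                        # r_1 = b - A p_0
    p = np.zeros(n)
    rz_old = 1.0
    for k in range(1, itmax + 1):
        z = r / m                        # z_k = M^{-1} r_k
        rz = r @ z
        if k == 1:
            p = z.copy()                 # beta_1 = 1, p_1 = z_1
        else:
            p = z + (rz / rz_old) * p    # beta_k = (z_k^T r_k)/(z_{k-1}^T r_{k-1})
        Ap = A @ p
        alpha = rz / (p @ Ap)            # alpha_k = (r_k^T z_k)/(p_k^T A p_k)
        x = x + alpha * p                # x_k = x_{k-1} + alpha_k p_k
        r = r - alpha * Ap               # r_{k+1} = r_k - alpha_k A p_k
        rz_old = rz
        if np.linalg.norm(z) <= tol * np.linalg.norm(x):
            break
    return x, k

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])          # symmetric positive definite
b = np.array([1.0, 2.0, 3.0])
x, iters = pcg(A, b)
```

In exact arithmetic CG converges in at most n steps for an n-by-n system; the relative-error test usually stops it earlier for large, well-conditioned problems.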

Here, \lambda is an estimate of \lambda_{\max}(\Gamma), the largest eigenvalue of the iteration matrix \Gamma=I-M^{-1}A. The stopping criterion is based on the result (Hageman and Young 1981, pp. 148-151)


             \frac{\|x_k-x\|_M}{\|x\|_M} \leq \left(\frac{1}{1-\lambda_{\max}(\Gamma)}\right)\left(\frac{\|z_k\|_M}{\|x_k\|_M}\right),
where

             \|x\|_M^2=x^TMx\,.
It is also known that
\lambda_{\max}(T_1) \leq \lambda_{\max}(T_2) \leq \ldots \leq \lambda_{\max}(\Gamma) < 1,
where the T_l are the symmetric, tridiagonal matrices

             T_l = \left[ \begin{array}{ccccc}
                    \mu_1 & \omega_2 & & & \\
                    \omega_2 & \mu_2 & \omega_3 & & \\
                    & \omega_3 & \mu_3 & \ddots & \\
                    & & \ddots & \ddots & \omega_l \\
                    & & & \omega_l & \mu_l
                   \end{array} \right]
with \mu_1=1-1/\alpha_1 and, for k=2,\ldots,l,

             \mu_k=1-\beta_k/\alpha_{k-1}-1/\alpha_k \quad \mbox{and} \quad \omega_k=\sqrt{\beta_k}/\alpha_{k-1}.

Usually, the eigenvalue computation is needed for only a few of the iterations.
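The estimate of \lambda can thus be obtained by assembling T_l from the stored \alpha_k and \beta_k and taking its largest eigenvalue. A NumPy sketch with hypothetical \alpha_k, \beta_k values (\beta_1 = 1 by definition; these numbers are for illustration only, not taken from any particular run):

```python
import numpy as np

# Hypothetical step sizes from l = 3 CG iterations (illustrative values).
alpha = [0.5, 0.4, 0.45]
beta = [1.0, 0.3, 0.2]   # beta_1 = 1 by convention

l = len(alpha)
T = np.zeros((l, l))
T[0, 0] = 1.0 - 1.0 / alpha[0]           # mu_1 = 1 - 1/alpha_1
for k in range(1, l):                    # python k maps to math index k+1
    T[k, k] = 1.0 - beta[k] / alpha[k - 1] - 1.0 / alpha[k]   # mu_k
    w = np.sqrt(beta[k]) / alpha[k - 1]                       # omega_k
    T[k, k - 1] = T[k - 1, k] = w        # symmetric tridiagonal

lam = np.linalg.eigvalsh(T).max()        # lambda_max(T_l) <= lambda_max(Gamma) < 1
```

Since \lambda_{\max}(T_l) increases monotonically toward \lambda_{\max}(\Gamma), each recomputation can only tighten the stopping test.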

See Also