UMCGG
Minimizes a function of N variables using a conjugate gradient algorithm and a user-supplied gradient.
Required Arguments
FCN — User-supplied subroutine to evaluate the function to be minimized. The usage is
CALL FCN (N, X, F), where
N – Length of X. (Input)
X – The point at which the function is evaluated. (Input)
X should not be changed by FCN.
F – The computed function value at the point X. (Output)
FCN must be declared EXTERNAL in the calling program.
GRAD — User-supplied subroutine to compute the gradient at the point X. The usage is
CALL GRAD (N, X, G), where
N – Length of X and G. (Input)
X – The point at which the gradient is evaluated. (Input)
X should not be changed by GRAD.
G – The gradient evaluated at the point X. (Output)
GRAD must be declared EXTERNAL in the calling program. A sketch of conforming FCN and GRAD routines is given after this argument list.
DFPRED — A rough estimate of the expected reduction in the function. (Input)
DFPRED is used to determine the size of the initial change to X.
X — Vector of length N containing the computed solution. (Output)
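For illustration, a conforming pair of user routines for the quadratic f(x) = sum of (x(i) - i)**2 might look as follows. The names QUADF and QUADG and the test function are hypothetical, not part of the library:

      SUBROUTINE QUADF (N, X, F)
!                                 Illustrative FCN: f = sum of (x(i)-i)**2
      INTEGER    N
      REAL       X(N), F
      INTEGER    I
      F = 0.0E0
      DO I=1, N
         F = F + (X(I) - REAL(I))**2
      END DO
      RETURN
      END
!
      SUBROUTINE QUADG (N, X, G)
!                                 Illustrative GRAD: g(i) = 2*(x(i)-i)
      INTEGER    N
      REAL       X(N), G(N)
      INTEGER    I
      DO I=1, N
         G(I) = 2.0E0*(X(I) - REAL(I))
      END DO
      RETURN
      END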
Optional Arguments
N — Dimension of the problem. (Input)
Default: N = SIZE (X,1).
XGUESS — Vector of length N containing the initial guess of the minimum. (Input)
Default: XGUESS = 0.0.
GRADTL — Convergence criterion. (Input)
The calculation ends when the sum of squares of the components of G is less than GRADTL.
Default: GRADTL = 1.e-4.
MAXFN — Maximum number of function evaluations. (Input)
Default: MAXFN = 100.
G — Vector of length N containing the components of the gradient at the final parameter estimates. (Output)
FVALUE — Scalar containing the value of the function at the computed solution. (Output)
FORTRAN 90 Interface
Generic: CALL UMCGG (FCN, GRAD, DFPRED, X [, …])
Specific: The specific interface names are S_UMCGG and D_UMCGG.
FORTRAN 77 Interface
Single: CALL UMCGG (FCN, GRAD, N, XGUESS, GRADTL, MAXFN, DFPRED, X, G, FVALUE)
Double: The double precision name is DUMCGG.
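For example, a call through the FORTRAN 77 interface supplies every argument positionally, since no defaults apply; the routines QUADF and QUADG from the sketch above and all settings shown are illustrative only:

      EXTERNAL   QUADF, QUADG
      INTEGER    N, MAXFN
      PARAMETER  (N=3)
      REAL       XGUESS(N), X(N), G(N), DFPRED, GRADTL, FVALUE
      DATA XGUESS/3*0.0E0/
!
      GRADTL = 1.0E-7
      MAXFN  = 500
      DFPRED = 1.0E0
      CALL UMCGG (QUADF, QUADG, N, XGUESS, GRADTL, MAXFN, DFPRED, &
                  X, G, FVALUE)
      END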
Description
The routine UMCGG uses a conjugate gradient method to find the minimum of a function f(x) of n variables. Function values and first derivatives are required.
The routine is based on the version of the conjugate gradient algorithm described in Powell (1977). The main advantage of the conjugate gradient technique is that it provides a fast rate of convergence without storing any matrices. It is therefore particularly suitable for unconstrained minimization calculations in which the number of variables is so large that matrices of dimension n cannot be kept in the main memory of the computer. For smaller problems, however, a routine such as IMSL routine UMING is usually more efficient, because each of its iterations makes use of additional information from previous iterations.
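For orientation, the core of such a method can be summarized as follows. The formula for β shown here is the Polak-Ribière choice on which Powell's algorithm is based; the routine itself adds restart tests and line-search safeguards, so this is a sketch rather than the exact internal update:

$$ s_0 = -g_0, \qquad x_{k+1} = x_k + \alpha_k s_k, \qquad s_{k+1} = -g_{k+1} + \beta_k s_k, \qquad \beta_k = \frac{g_{k+1}^{T}\,(g_{k+1} - g_k)}{g_k^{T} g_k}, $$

where $g_k = \nabla f(x_k)$ and the step length $\alpha_k$ is determined by a line search. Only a few vectors of length n need to be stored, which is the storage advantage cited above.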
Comments
1. Workspace may be explicitly provided, if desired, by use of U2CGG/DU2CGG. The reference is:
CALL U2CGG (FCN, GRAD, N, XGUESS, GRADTL, MAXFN, DFPRED, X, G, FVALUE, S, RSS, RSG, GINIT, XOPT, GOPT)
The additional arguments are as follows (a declaration sketch follows the list):
S — Vector of length N used for the search direction in each iteration.
RSS — Vector of length N containing conjugacy information.
RSG — Vector of length N containing conjugacy information.
GINIT — Vector of length N containing the gradient values at the start of an iteration.
XOPT — Vector of length N containing the parameter values which yield the least calculated value for FVALUE.
GOPT — Vector of length N containing the gradient values which yield the least calculated value for FVALUE.
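For example, the workspace version might be called as follows; the six work arrays are simply length-N real arrays supplied by the caller, and all settings shown are illustrative:

      EXTERNAL   FCN, GRAD
      INTEGER    N, MAXFN
      PARAMETER  (N=2)
      REAL       XGUESS(N), X(N), G(N), DFPRED, GRADTL, FVALUE
      REAL       S(N), RSS(N), RSG(N), GINIT(N), XOPT(N), GOPT(N)
      DATA XGUESS/2*0.0E0/
!
      GRADTL = 1.0E-4
      MAXFN  = 100
      DFPRED = 1.0E0
      CALL U2CGG (FCN, GRAD, N, XGUESS, GRADTL, MAXFN, DFPRED, X, &
                  G, FVALUE, S, RSS, RSG, GINIT, XOPT, GOPT)
      END

Here FCN and GRAD stand for user routines with the interfaces described under Required Arguments.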
2. Informational errors
Type   Code   Description
4      1      The line search of an iteration was abandoned. This error may be caused by an error in the gradient.
4      2      The calculation cannot continue because the search is uphill.
4      3      The iteration was terminated because MAXFN was exceeded.
3      4      The calculation was terminated because two consecutive iterations failed to reduce the function.
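A calling program can also test for these errors itself. The sketch below assumes the IMSL error-handling routines ERSET and IERCD described elsewhere in this manual; the exact argument settings for ERSET (intended here to make type-4 errors print but not stop) should be checked against that documentation:

      USE UMCGG_INT
      EXTERNAL   ROSBRK, ROSGRD
      INTEGER    IERCD
      REAL       DFPRED, X(2)
      DFPRED = 0.2
!                                 Assumed: type-4 errors print, do not stop
      CALL ERSET (4, 1, 0)
      CALL UMCGG (ROSBRK, ROSGRD, DFPRED, X, maxfn=10)
      IF (IERCD() .EQ. 3) THEN
!                                 MAXFN exceeded; X is the best point so far
      END IF
      END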
3. The routine performs no thorough checks on the user-supplied code that calculates the derivatives of the objective function. Because derivative calculation is a frequent source of error, the user should independently verify that the derivatives supplied to the routine are correct.
4. Because of the close relation between the conjugate-gradient method and the method of steepest descent, it is very helpful to choose the scale of the variables so that the magnitudes of the components of a typical gradient vector are balanced. The method can be particularly inefficient if a few components of the gradient are much larger than the rest.
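One way to achieve such balance, sketched below with hypothetical names, is to minimize in scaled variables y(i) = x(i)/s(i) by wrapping the user's FCN and GRAD; by the chain rule each gradient component is multiplied by s(i). FCN and GRAD here stand for the user's original routines:

      MODULE SCALES
!                                 Illustrative scale factors s(i)
      REAL, SAVE :: S(2) = (/1.0E0, 1.0E2/)
      END MODULE SCALES
!
      SUBROUTINE FCNSC (N, Y, F)
      USE SCALES
      INTEGER    N
      REAL       Y(N), F
      REAL       X(N)
      EXTERNAL   FCN
!                                 Evaluate original function at x = s*y
      X = S(1:N)*Y
      CALL FCN (N, X, F)
      END
!
      SUBROUTINE GRADSC (N, Y, G)
      USE SCALES
      INTEGER    N
      REAL       Y(N), G(N)
      REAL       X(N)
      EXTERNAL   GRAD
!                                 Chain rule: df/dy(i) = s(i)*df/dx(i)
      X = S(1:N)*Y
      CALL GRAD (N, X, G)
      G = S(1:N)*G
      END

FCNSC and GRADSC are then passed to UMCGG in place of FCN and GRAD, and the solution in the original variables is recovered as x(i) = s(i)*y(i).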
5. If the parameter GRADTL in the argument list of the routine is set to zero, the subroutine continues its calculation until it can no longer reduce the objective function. In this case, the usual outcome is that changes in the objective function become dominated by computer rounding errors before precision is lost in the gradient vector. Because the routine takes the point of view that the user requires the least possible value of the function, a value of the objective function that is small only because of rounding errors can prevent further progress. Hence, the precision of the final values of the variables may be only about half the number of significant digits in the computer arithmetic, but the least value of FVALUE is usually found to be quite accurate.
Example
The function

f(x) = 100(x_2 - x_1^2)^2 + (1 - x_1)^2

(the Rosenbrock function) is minimized and the solution is printed.
      USE UMCGG_INT
      USE UMACH_INT
      IMPLICIT NONE
!                                 Declaration of variables
      INTEGER    N
      PARAMETER  (N=2)
!
      INTEGER    I, NOUT
      REAL       DFPRED, FVALUE, G(N), GRADTL, X(N), &
                 XGUESS(N)
      EXTERNAL   ROSBRK, ROSGRD
!
      DATA XGUESS/-1.2E0, 1.0E0/
!
      DFPRED = 0.2
      GRADTL = 1.0E-7
!                                 Minimize the Rosenbrock function
      CALL UMCGG (ROSBRK, ROSGRD, DFPRED, X, xguess=xguess, &
                  gradtl=gradtl, g=g, fvalue=fvalue)
!                                 Print the results
      CALL UMACH (2, NOUT)
      WRITE (NOUT,99999) (X(I),I=1,N), FVALUE, (G(I),I=1,N)
99999 FORMAT (' The solution is ', 2F8.3, //, ' The function ', &
              'evaluated at the solution is ', F8.3, //, ' The ', &
              'gradient is ', 2F8.3, /)
!
      END
!
      SUBROUTINE ROSBRK (N, X, F)
      INTEGER    N
      REAL       X(N), F
!
      F = 1.0E2*(X(2)-X(1)*X(1))**2 + (1.0E0-X(1))**2
      RETURN
      END
!
      SUBROUTINE ROSGRD (N, X, G)
      INTEGER    N
      REAL       X(N), G(N)
!
      G(1) = -4.0E2*(X(2)-X(1)*X(1))*X(1) - 2.0E0*(1.0E0-X(1))
      G(2) = 2.0E2*(X(2)-X(1)*X(1))
!
      RETURN
      END
Output
 The solution is    1.000   1.000

 The function evaluated at the solution is    0.000

 The gradient is    0.000  -0.000