MinUncon Class
Namespace: Imsl.Math
The MinUncon type exposes the following members.
Constructors

Name | Description
---|---
MinUncon | Unconstrained minimum constructor for a smooth function of a single variable of type double.
Methods

Name | Description
---|---
ComputeMin | Returns the minimum of a smooth function of a single variable of type double using function values only, or using function values and derivatives.
Equals | Determines whether the specified object is equal to the current object. (Inherited from Object.)
Finalize | Allows an object to try to free resources and perform other cleanup operations before it is reclaimed by garbage collection. (Inherited from Object.)
GetHashCode | Serves as a hash function for a particular type. (Inherited from Object.)
GetType | Gets the Type of the current instance. (Inherited from Object.)
MemberwiseClone | Creates a shallow copy of the current Object. (Inherited from Object.)
ToString | Returns a string that represents the current object. (Inherited from Object.)
Properties

Name | Description
---|---
Accuracy | The required absolute accuracy in the final value returned by the ComputeMin method.
Bound | The amount by which x may be changed from its initial value, Guess.
DerivTolerance | The derivative tolerance used by member method ComputeMin to decide if the current point is a local minimum.
Guess | The initial guess of the minimum point of the input function.
Step | The stepsize to use when changing x.
Remarks

MinUncon uses two separate algorithms to compute the minimum, depending on what the user supplies as the function f.

If f defines the function whose minimum is to be found, MinUncon uses a safeguarded quadratic interpolation method to find a minimum point of a univariate function. Both the code and the underlying algorithm are based on the routine ZXLSF written by M.J.D. Powell at the University of Cambridge.
MinUncon finds the least value of a univariate function, f, where f is a MinUncon.IFunction. Optional data include an initial estimate of the solution and a positive bound. Let $x_0$ be the initial guess specified by the Guess property and let $b$ be the value of the Bound property; then x is restricted to the interval $[x_0 - b,\, x_0 + b]$. Usually, the algorithm begins the search by moving from $x_0$ to $x = x_0 + s$, where $s$ is the value of the Step property. If Step is not set, it defaults to 0.1; $s$ may be positive or negative. The first two function evaluations indicate the direction to the minimum point, and the search strides out along this direction until a bracket on a minimum point is found or until x reaches one of the bounds $x_0 \pm b$. During this stage, the step length increases by a factor of between two and nine per function evaluation; the factor depends on the position of the minimum point that is predicted by quadratic interpolation of the three most recent function values.
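As an illustration of this function-values-only path, a minimal usage sketch follows. The property names are those documented above; the member name F on MinUncon.IFunction is an assumption and should be checked against the interface definition shipped with your version of the library.

```csharp
using System;
using Imsl.Math;

// Hypothetical sketch: the member name F below is assumed for the
// MinUncon.IFunction interface; verify it against the shipped definition.
public class Parabola : MinUncon.IFunction
{
    // f(x) = (x - 3)^2 + 1, which has its minimum at x = 3.
    public double F(double x)
    {
        return (x - 3.0) * (x - 3.0) + 1.0;
    }
}

public class MinUnconExample
{
    public static void Main(string[] args)
    {
        MinUncon solver = new MinUncon();
        solver.Guess = 0.0;       // initial estimate x_0
        solver.Bound = 100.0;     // restrict x to [x_0 - 100, x_0 + 100]
        solver.Step = 0.1;        // first trial point is x_0 + 0.1
        solver.Accuracy = 1.0e-6; // required absolute accuracy

        double xmin = solver.ComputeMin(new Parabola());
        Console.WriteLine("Minimum found near x = " + xmin);
    }
}
```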
When an interval containing a solution has been found, we will have three points, $x_1$, $x_2$, and $x_3$, with $x_1 < x_2 < x_3$, $f(x_2) \le f(x_1)$, and $f(x_2) \le f(x_3)$. There are three main ingredients in the technique for choosing the new x from these three points. They are (i) the estimate of the minimum point that is given by quadratic interpolation of the three function values, (ii) a tolerance parameter $\varepsilon$ that depends on the closeness of f to a quadratic, and (iii) whether $x_2$ is near the center of the range between $x_1$ and $x_3$ or is relatively close to an end of this range. In outline, the new value of x is as near as possible to the predicted minimum point, subject to being at least $\varepsilon$ from $x_2$, and subject to being in the longer interval between $x_1$ and $x_2$, or between $x_2$ and $x_3$, when $x_2$ is particularly close to $x_1$ or $x_3$. There is some elaboration, however, when the distance between these points is close to the required accuracy, when the distance is close to the machine precision, or when $\varepsilon$ is relatively large.
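Ingredient (i), the quadratic prediction, can be sketched with the standard parabolic-interpolation formula below. This is an illustration of the technique only, not the library's internal code.

```csharp
static class ParabolicPrediction
{
    // Minimizer of the parabola through (x1,f1), (x2,f2), (x3,f3).
    // Standard successive parabolic interpolation, shown only to
    // illustrate the prediction step.
    public static double ParabolicMin(double x1, double f1,
                                      double x2, double f2,
                                      double x3, double f3)
    {
        double p = (x2 - x1) * (x2 - x1) * (f2 - f3)
                 - (x2 - x3) * (x2 - x3) * (f2 - f1);
        double q = (x2 - x1) * (f2 - f3) - (x2 - x3) * (f2 - f1);
        if (q == 0.0)
        {
            return x2;  // points are collinear; no unique parabola minimum
        }
        return x2 - 0.5 * p / q;
    }
}
```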
The algorithm is intended to provide fast convergence when f has a positive and continuous second derivative at the minimum and to avoid gross inefficiencies in pathological cases, such as

$$f(x) = x + 1.001\,|x|$$
The algorithm can make $\varepsilon$ large automatically in the pathological cases. In this case, it is usual for a new value of x to be at the midpoint of the longer interval that is adjacent to the least calculated function value. The midpoint strategy is used frequently when changes to f are dominated by computer rounding errors, which will almost certainly happen if the user requests an accuracy that is less than the square root of the machine precision. In such cases, the routine claims to have achieved the required accuracy if it knows that there is a local minimum point within distance $\delta$ of x, where $\delta$ is the absolute accuracy specified by the Accuracy property, even though the rounding errors in f may cause the existence of other local minimum points nearby. This difficulty is inevitable in minimization routines that use only function values, so high-precision arithmetic is recommended.
If f is a MinUncon.IDerivative, then MinUncon uses a descent method with either the secant method or cubic interpolation to find a minimum point of a univariate function. It starts with an initial guess and two endpoints. If any of the three points is a local minimum point and has the least function value, the routine terminates with a solution. Otherwise, the point with the least function value is used as the starting point.
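A minimal sketch of this derivative-based path follows. It assumes that MinUncon.IDerivative extends MinUncon.IFunction with a Derivative(double x) member; verify the member names against the shipped interface definition.

```csharp
using System;
using Imsl.Math;

// Hypothetical sketch: the members F and Derivative below are assumed
// for the MinUncon.IDerivative interface.
public class SmoothFunction : MinUncon.IDerivative
{
    // f(x) = e^x - 5x, minimized where f'(x) = e^x - 5 = 0, i.e. x = ln 5.
    public double F(double x)
    {
        return Math.Exp(x) - 5.0 * x;
    }

    public double Derivative(double x)
    {
        return Math.Exp(x) - 5.0;
    }
}

public class MinUnconDerivativeExample
{
    public static void Main(string[] args)
    {
        MinUncon solver = new MinUncon();
        solver.Guess = 1.0;
        solver.Accuracy = 1.0e-8;

        double xmin = solver.ComputeMin(new SmoothFunction());
        Console.WriteLine("Minimum near x = " + xmin + ", ln 5 = " + Math.Log(5.0));
    }
}
```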
From the starting point, say $x_c$, the function value $f_c = f(x_c)$, the derivative value $g_c = g(x_c)$, and a new point $x_n$ defined by $x_n = x_c - g_c$ are computed. The function value $f_n = f(x_n)$ and the derivative $g_n = g(x_n)$ are then evaluated. If either $f_n \ge f_c$ or $g_n$ has the opposite sign of $g_c$, then there exists a minimum point between $x_c$ and $x_n$, and an initial interval is obtained. Otherwise, since $x_c$ is kept as the point with the lowest function value, an interchange between $x_n$ and $x_c$ is performed. The secant method is then used to get a new point

$$x_s = x_c - g_c\,\frac{x_n - x_c}{g_n - g_c}$$

Let $x_n = x_s$ and repeat this process until an interval containing a minimum is found or one of the convergence criteria is satisfied. The convergence criteria are as follows:

Criterion 1: $|x_c - x_n| \le \varepsilon_c$

Criterion 2: $|g_c| \le \varepsilon_g$

where $\varepsilon_c = \max\{1.0, |x_c|\}\,\varepsilon$, $\varepsilon$ is a relative error tolerance, and $\varepsilon_g$ is a gradient tolerance.
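A minimal sketch of the secant update and the two stopping tests, using illustrative names rather than the library's internal code, is:

```csharp
using System;

static class SecantSearchSketch
{
    // One secant update toward a zero of the derivative g.
    // xc: current point with gc = g(xc); xn: trial point with gn = g(xn).
    public static double SecantStep(double xc, double gc, double xn, double gn)
    {
        // x_s = x_c - g_c * (x_n - x_c) / (g_n - g_c)
        return xc - gc * (xn - xc) / (gn - gc);
    }

    // Stopping tests: Criterion 1 bounds the step length, Criterion 2 the
    // gradient. eps is the relative error tolerance, epsG the gradient tolerance.
    public static bool Converged(double xc, double xn, double gc,
                                 double eps, double epsG)
    {
        double epsC = Math.Max(1.0, Math.Abs(xc)) * eps;
        return Math.Abs(xc - xn) <= epsC || Math.Abs(gc) <= epsG;
    }
}
```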
When convergence is not achieved, a cubic interpolation is performed to obtain a new point. The function and derivative are then evaluated at that point, and a smaller interval that contains a minimum point is chosen accordingly. A safeguarded method is used to ensure that the interval is reduced by at least a fraction of the previous interval. Another cubic interpolation is then performed, and this procedure is repeated until one of the stopping criteria is met.
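For reference, one standard form of the cubic (Hermite) interpolation step between two points with known function and derivative values is sketched below. The safeguarding described above is omitted, and this is an illustration rather than the library's internal code.

```csharp
using System;

static class CubicInterpolationSketch
{
    // Minimizer of the cubic matching f and f' at a and b (a standard
    // Hermite-interpolation step; see Nocedal & Wright, "Numerical
    // Optimization", Sec. 3.5). No interval-reduction safeguard is applied.
    public static double CubicMin(double a, double fa, double ga,
                                  double b, double fb, double gb)
    {
        double d1 = ga + gb - 3.0 * (fa - fb) / (a - b);
        double radicand = d1 * d1 - ga * gb;
        if (radicand < 0.0)
        {
            return 0.5 * (a + b);  // cubic has no interior minimizer; bisect
        }
        double d2 = Math.Sign(b - a) * Math.Sqrt(radicand);
        return b - (b - a) * (gb + d2 - d1) / (gb - ga + 2.0 * d2);
    }
}
```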