minUnconDeriv
Finds the minimum point of a smooth function f(x) of a single variable using both function and first derivative evaluations.
Synopsis
minUnconDeriv (fcn, grad, a, b)
Required Arguments

- float fcn (float x) (Input/Output)
  User-supplied function to compute the value of the function to be minimized, where x is the point at which the function is evaluated and fcn is the computed function value at the point x.
- float grad (float x) (Input/Output)
  User-supplied function to compute the first derivative of the function, where x is the point at which the derivative is evaluated and grad is the computed value of the derivative at the point x.
- float a (Input)
  The lower endpoint of the interval in which the minimum point of fcn is to be located.
- float b (Input)
  The upper endpoint of the interval in which the minimum point of fcn is to be located.
Return Value

The point at which a minimum value of fcn is found. If no value can be computed, NaN is returned.
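Because a failure is reported through the return value rather than an exception, callers may want to guard on NaN before using the result. A minimal sketch, assuming a hypothetical wrapper (safe_minimize is not part of the pyimsl API):

```python
import math

def safe_minimize(minimizer, fcn, grad, a, b):
    # `minimizer` is any callable with the minUnconDeriv signature that
    # returns NaN when no minimum point can be computed.
    x = minimizer(fcn, grad, a, b)
    if math.isnan(x):
        raise ArithmeticError("no minimum point could be computed")
    return x
```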
Optional Arguments¶
xguess
, float (Input)An initial guess of the minimum point of
fcn
.Default:
xguess
= (a
+b
)/
2errRel
, float (Input)The required relative accuracy in the final value of
x
. This is the first stopping criterion. On a normal return, the solutionx
is in an interval that contains a local minimum and is less than or equal to maxerr_rel
. When the givenerrRel
is less than zero,\sqrt{\varepsilon}is used as
errRel
where ɛ is the machine precision.Default:
errRel
= \sqrt{\varepsilon}gradTol
, float (Input)The derivative tolerance used to decide if the current point is a local minimum. This is the second stopping criterion.
x
is returned as a solution whengrad
is less than or equal togradTol
.gradTol
should be nonnegative; otherwise, zero would be used.Default:
gradTol
= \sqrt{\varepsilon} where ɛ is the machine precisionmaxFcn
, int (Input)Maximum number of function evaluations allowed.
Default:
maxFcn
= 1000fvalue
(Output)- The function value at point
x
. gvalue
(Output)- The derivative value at point
x
.
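For reference, the \sqrt{\varepsilon} default used by errRel and gradTol can be reproduced with NumPy; a sketch under the assumption of IEEE double precision (the routine computes this internally):

```python
import numpy as np

# Machine precision for IEEE double precision and the sqrt(eps) value
# that serves as the default for both errRel and gradTol.
eps = np.finfo(float).eps   # about 2.22e-16
default_tol = np.sqrt(eps)  # about 1.49e-8
```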
Description

The function minUnconDeriv uses a descent method with either the secant method or cubic interpolation to find a minimum point of a univariate function. It starts with an initial guess and two endpoints. If any of the three points is a local minimum point and has the least function value, the function terminates with a solution. Otherwise, the point with the least function value is used as the starting point.
From the starting point, say x_c, the function value f_c=f(x_c), the derivative value g_c=g(x_c), and a new point x_n defined by x_n=x_c-g_c are computed. The function value f_n=f(x_n) and the derivative g_n=g(x_n) are then evaluated. If either f_n\geq f_c or g_n has the opposite sign of g_c, then a minimum point lies between x_c and x_n, and an initial interval is obtained. Otherwise, since x_c is kept as the point that has the lowest function value, an interchange between x_n and x_c is performed. The secant method is then used to get a new point

x_s=x_n-g_n\frac{x_n-x_c}{g_n-g_c}

Let x_n=x_s, and repeat this process until an interval containing a minimum is found or one of the convergence criteria is satisfied. The convergence criteria are as follows:
Criterion 1: |x_c-x_n|\leq\varepsilon _c
Criterion 2: |g_c|\leq\varepsilon _g
where \varepsilon_c=\max\{1.0,|x_c|\}\,\varepsilon, ɛ is the error tolerance (errRel), and \varepsilon_g is the gradient tolerance (gradTol).
When convergence is not achieved, a cubic interpolation is performed to obtain a new point. The function and derivative are then evaluated at that point, and accordingly a smaller interval that contains a minimum point is chosen. A safeguarded method is used to ensure that the interval is reduced by at least a fraction of the previous interval. Another cubic interpolation is then performed, and this process is repeated until one of the stopping criteria is met.
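The descent/secant phase described above can be sketched in a few lines of Python. This is a simplified illustration, not the pyimsl implementation: it omits the bracketing test, the safeguarded cubic-interpolation phase, and the function-value bookkeeping, and the name descent_secant is hypothetical.

```python
import math

def descent_secant(g, x0, err_rel=1e-10, grad_tol=1e-10, max_iter=200):
    # Simplified sketch of the descent step x_n = x_c - g_c followed by a
    # secant update toward a zero of the derivative g.
    x_c = x0
    for _ in range(max_iter):
        g_c = g(x_c)
        if abs(g_c) <= grad_tol:                            # Criterion 2
            return x_c
        x_n = x_c - g_c                                     # descent step
        if abs(x_c - x_n) <= max(1.0, abs(x_c)) * err_rel:  # Criterion 1
            return x_n
        g_n = g(x_n)
        if g_n != g_c:
            x_n = x_n - g_n * (x_n - x_c) / (g_n - g_c)     # secant update
        x_c = x_n
    return x_c

# For f(x) = e^x - 5x the derivative is g(x) = e^x - 5, whose zero is ln 5.
x = descent_secant(lambda t: math.exp(t) - 5.0, 1.0)
```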
Examples

Example 1
In this example, a minimum point of f(x)=e^x-5x is found.
from __future__ import print_function
from numpy import *
from pyimsl.math.minUnconDeriv import minUnconDeriv
def fcn(x):
    return exp(x) - 5.0 * x

def deriv(x):
    return exp(x) - 5.0
a = -10.0
b = 10.0
x = minUnconDeriv(fcn, deriv, a, b)
fx = fcn(x)
gx = deriv(x)
print("The solution is: %7.3f" % (x))
print("The function evaluated at the solution is: %9.3f" % (fx))
print("The derivative evaluated at the solution is: %7.3f" % (gx))
Output
The solution is: 1.609
The function evaluated at the solution is: -3.047
The derivative evaluated at the solution is: 0.000
Example 2
A minimum point of f(x)=x(x^3-1)+10 is found with an initial guess x_0=3.
from __future__ import print_function
from numpy import *
from pyimsl.math.minUnconDeriv import minUnconDeriv
def fcn(x):
    return x * (pow(x, 3) - 1.0) + 10.0

def deriv(x):
    return 4.0 * pow(x, 3) - 1.0
max_fcn = 50
a = -10.0
b = 10.0
xguess = 3.0
fx = []
gx = []
x = minUnconDeriv(fcn, deriv, a, b,
                  xguess=xguess,
                  maxFcn=max_fcn,
                  fvalue=fx,
                  gvalue=gx)
print("The solution is: %7.3f" % (x))
print("The function evaluated at the solution is: %7.3f" % (fx[0]))
print("The derivative evaluated at the solution is: %7.3f" % (gx[0]))
Output
The solution is: 0.630
The function evaluated at the solution is: 9.528
The derivative evaluated at the solution is:  -0.000
Warning Errors

- IMSL_MIN_AT_LOWERBOUND
  The final value of x is at the lower bound.
- IMSL_MIN_AT_UPPERBOUND
  The final value of x is at the upper bound.
- IMSL_TOO_MANY_FCN_EVAL
  Maximum number of function evaluations exceeded.
Fatal Errors

- IMSL_STOP_USER_FCN
  Request from user-supplied function to stop algorithm. User flag = “#”.