zeroUnivariate

Finds a zero of a real univariate function.

Synopsis

zeroUnivariate (fcn, a, b)
Required Arguments

- float fcn (x) (Input/Output)
  User-supplied function to compute the value of the function of which the zero will be found.
- float a (Input/Output)
  See b.
- float b (Input/Output)
  Two points at which the user-supplied function can be evaluated. On input, if fcn(a) and fcn(b) are of opposite sign, the zero will be found in the interval [a, b], and on output b will contain the value of x at which fcn(x) = 0. If fcn(a) × fcn(b) > 0 and a ≠ b, then a search along the x number line is initiated for a point at which there is a sign change, and |b − a| will be used in setting the step size for the initial search. If a = b on entry, the search is started as described in the Description section below. (A short sketch of the no-sign-change case follows this list.)
  On output, b is the abscissa at which |fcn(x)| had the smallest value. If fcn(b) ≠ 0 on output, a will contain the abscissa nearest to the output b at which fcn(x) was evaluated and found to have the opposite sign from fcn(b).
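As in the Example later in this section, a and b are passed as one-element lists that are updated in place. A minimal sketch of the no-sign-change case, using an illustrative function g whose starting points do not bracket the zero, might look like this:

from pyimsl.math.zeroUnivariate import zeroUnivariate

def g(x):
    # illustrative function; its zeros are at x = -sqrt(3) and x = sqrt(3)
    return x * x - 3.0

a = [4.0]   # g(4) > 0
b = [5.0]   # g(5) > 0, so fcn(a) * fcn(b) > 0 and a search along the x axis is initiated
zeroUnivariate(g, a, b)
print(b[0])  # on output, b holds the abscissa at which |g(x)| was smallest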
Optional Arguments

- errTol, float (Input)
  Error tolerance. If errTol > 0.0, the zero is to be isolated to an interval of length less than errTol. If errTol < 0.0, an x is desired for which |fcn(x)| is ≤ |errTol|. If errTol = 0.0, the iteration continues until the zero of fcn(x) is isolated as accurately as possible.
  Default: errTol = 0.0
- maxEvals, int (Input)
  An upper bound on the number of function evaluations required for convergence. Set maxEvals to 0 if the number of function evaluations is to be unbounded (a usage sketch combining the optional arguments follows this list).
  Default: maxEvals = 0
- nEvals (Output)
  The actual number of function evaluations used.
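Putting the optional arguments together, a call that asks for the zero to be isolated to an interval shorter than 1.0e-6 and caps the work at 100 evaluations might look like the following sketch. It assumes the optional arguments are passed as keywords, as nEvals is in the Example below; the function and starting points are the same illustrative ones used there.

from pyimsl.math.zeroUnivariate import zeroUnivariate

def fcn(x):
    return x * x + x - 2.0   # zero at x = -2 inside [-10, 0]

a = [-10.0]
b = [0.0]
nevals = []
zeroUnivariate(fcn, a, b, errTol=1.0e-6, maxEvals=100, nEvals=nevals)
print("zero near %g found in %d evaluations" % (b[0], nevals[0]))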
Description

The function zeroUnivariate is based on the JPL Library routine SZERO. The algorithm used is attributed to Dr. Fred T. Krogh, JPL, 1972.

Tests have shown zeroUnivariate to require fewer function evaluations, on average, than a number of other algorithms for finding a zero of a continuous function. Let f be a continuous univariate function. zeroUnivariate will accept any two points a and b and, when there is no sign difference between f(a) and f(b), initiate a search on the number line for an x such that \(f(x)=0\). In either case, b is updated with a new value on each successive iteration. The algorithm description follows.
When f(a) × f(b) > 0 at the initial point, iterates for x are generated according to the formula \(x=x_{min}+(x_{min}-x_{max})\times\rho\), where the subscript “min” is associated with the \((f,x)\) pair that has the smallest value for \(|f|\), the subscript “max” is associated with the \((f,x)\) pair that has the largest value for \(|f|\), and ρ is 8 if \(r=f_{min}/(f_{max}-f_{min})\geq 8\), else \(\rho=\max(\kappa/4,r)\), where κ is a count of the number of iterations that have been taken without finding f’s with opposite signs. If a and b have the same value initially, then the next x is a distance \(0.008+|x_{min}|/4\) from \(x_{min}\), taken toward 0. (If a = b = 0, the next x is −0.008.)
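A rough, non-authoritative rendering of this search rule in plain Python (not the SZERO source; the helper names are illustrative) is:

def next_search_point(x_min, f_min, x_max, f_max, kappa):
    # One iterate of the sign-change search: (x_min, f_min) is the pair with
    # the smaller |f|, (x_max, f_max) the pair with the larger |f|, and kappa
    # counts the iterations taken without finding f values of opposite sign.
    r = f_min / (f_max - f_min)
    rho = 8.0 if r >= 8.0 else max(kappa / 4.0, r)
    return x_min + (x_min - x_max) * rho

def first_point_when_equal(x):
    # Starting step when a == b: move a distance 0.008 + |x|/4 from x
    # toward 0; when x == 0 the next point is -0.008.
    if x == 0.0:
        return -0.008
    step = 0.008 + abs(x) / 4.0
    return x - step if x > 0.0 else x + step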
Let \(x_1\) and \(x_2\) denote the first two x values that give f
values with different signs. Let \(\alpha<\beta\) be the two values of
x that bracket the zero as tightly as is known. Thus \(\alpha=x_1\) or
\(\alpha=x_2\) and β is the other when computing \(x_3\). The next
point, \(x_3\), is generated by treating x as the linear function
\(q(f)\) that interpolates the points \((f(x_1),x_1)\) and
\((f(x_2),x_2)\), and then computing \(x_3=q(0)\), subject to the
condition that \(\alpha+\varepsilon\leq x_3\leq\beta -\varepsilon\),
where ɛ = 0.875 × max(errTol
, machine precision). (This condition on
\(x_3\) with updated values for α and β is also applied to future
iterates.)
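The computation of \(x_3\) can be read as inverse linear interpolation followed by clamping into the bracket. A sketch with illustrative names (not the library's internals) is:

def clamped_secant_step(x1, f1, x2, f2, alpha, beta, err_tol, machine_eps):
    # q(f) is the linear function with q(f(x1)) = x1 and q(f(x2)) = x2;
    # x3 = q(0) is then forced to lie at least epsilon inside [alpha, beta].
    x3 = x1 - f1 * (x2 - x1) / (f2 - f1)
    epsilon = 0.875 * max(err_tol, machine_eps)
    return min(max(x3, alpha + epsilon), beta - epsilon)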
Let \(x_4,x_5,\ldots,x_m\) denote the abscissae on the following iterations. Let \(a=x_m\), \(b=x_{m-1}\), and \(c= x_{m-2}\). Either α or β (defined as above) will coincide with a, and β will frequently coincide with either b or c. Let \(p(x)\) be the quadratic polynomial in x that passes through the values of f evaluated at a, b, and c. Let \(q(f)\) be the quadratic polynomial in f that passes through the points \((f(a),a)\), \((f(b),b)\), and \((f(c), c)\).
Let \(\zeta=\alpha\text{ or } \beta\), selected so that \(\zeta \neq a\). If the sign of f has changed in the last 4 iterations and \(p'(a)\times q'(f(a))\) and \(p'(\zeta)\times q'(f(\zeta))\) are both in the interval \(\left[1/4,4\right]\), then x is set to \(q(0)\). (Note that if p is replaced by f and q is replaced by x, then both products have the value 1.) Otherwise x is set to \(a-(a-\zeta)(\phi/(1+\phi))\), where ϕ is selected based on past behavior and is such that \(0<\phi\). If the sign of \(f()\) does not change for an extended period, ϕ gets large.
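When the test on the derivative products passes, x is taken as q(0) for the quadratic inverse interpolant; otherwise the ϕ-based step is used. A sketch of both pieces follows (Lagrange form for q(0); the selection of ϕ itself is omitted here because it depends on past behavior not spelled out above, and the function names are illustrative):

def inverse_quadratic_q0(a, fa, b, fb, c, fc):
    # q(0): the quadratic in f through (f(a), a), (f(b), b), (f(c), c),
    # evaluated at f = 0 (Lagrange form; fa, fb, fc must be pairwise distinct).
    return (a * fb * fc / ((fa - fb) * (fa - fc))
            + b * fa * fc / ((fb - fa) * (fb - fc))
            + c * fa * fb / ((fc - fa) * (fc - fb)))

def phi_step(a, zeta, phi):
    # Fallback step x = a - (a - zeta) * (phi / (1 + phi)), with phi > 0
    # chosen from past behavior; a large phi pushes x toward zeta.
    return a - (a - zeta) * (phi / (1.0 + phi))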
Example

This example finds a zero of the function \(f(x)=x^2+x-2\) in the interval [−10.0, 0.0].
from __future__ import print_function
from numpy import *
from pyimsl.math.zeroUnivariate import zeroUnivariate

def fcn(x):
    # f(x) = x^2 + x - 2 has zeros at x = -2 and x = 1
    return x * x + x - 2.0

a = [-10.0]
b = [0.0]
nevals = []

x = zeroUnivariate(fcn, a, b, nEvals=nevals)

print("The best approximation to the zero of f is equal to %6.3f" % b[0])
print("The number of function evaluations required was %d" % nevals[0])
Output
The best approximation to the zero of f is equal to -2.000
The number of function evaluations required was 12
Fatal Errors

- IMSL_ERROR_TOL_NOT_SATISFIED: The error tolerance criteria was not satisfied. “b” contains the abscissa at which “|fcn(x)|” had the smallest value.
- IMSL_DISCONTINUITY_IDENTIFIED: Apparently “fcn” has a discontinuity between “a” = # and “b” = #. No zero has been identified.
- IMSL_ZERO_NOT_FOUND: “fcn(a) * fcn(b)” > 0 where “a” = # and “b” = #, but the algorithm is unable to identify function values with opposite signs.
- IMSL_MAX_FCN_EVAL_EXCEEDED: The maximum number of function evaluations, “maxEvals” = #, has been exceeded.
- IMSL_STOP_USER_FCN: Request from user supplied function to stop algorithm. User flag = “#”.