boundedLeastSquares


Solves a nonlinear least-squares problem subject to bounds on the variables using a modified Levenberg-Marquardt algorithm.

Synopsis

boundedLeastSquares (fcn, m, n, ibtype, xlb, xub)

Required Arguments

void fcn (m, n, x[], f[]) (Input/Output)
User-supplied function to evaluate the function that defines the least-squares problem, where x is a vector of length n at which the function is evaluated and f is a vector of length m containing the function values at the point x.
int m (Input)
Number of functions.
int n (Input)
Number of variables, where n ≤ m.
int ibtype (Input)
Scalar indicating the types of bounds on the variables.
ibtype  Action
0       User supplies all the bounds.
1       All variables are nonnegative.
2       All variables are nonpositive.
3       User supplies only the bounds on the first variable; all other variables have the same bounds.
float xlb[] (Input, Output, or Input/Output)

Array with n components containing the lower bounds on the variables. (Input, if ibtype = 0; Output, if ibtype = 1 or 2; Input/Output, if ibtype = 3)

If there is no lower bound on a variable, then the corresponding xlb value should be set to \(-10^6\).

float xub[] (Input, Output, or Input/Output)

Array with n components containing the upper bounds on the variables. (Input, if ibtype = 0; Output, if ibtype = 1 or 2; Input/Output, if ibtype = 3)

If there is no upper bound on a variable, then the corresponding xub value should be set to \(10^6\).
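The interplay of ibtype, xlb, and xub can be sketched in plain Python. This is an illustration of the documented semantics only (the helper `effective_bounds` is hypothetical, not part of the library):

```python
# Hypothetical sketch of how ibtype determines the effective bounds.
# NO_BOUND is the documented "no bound" sentinel of magnitude 1.0e6.
NO_BOUND = 1.0e6

def effective_bounds(ibtype, n, xlb, xub):
    """Return (lower, upper) bound lists of length n for a given ibtype."""
    if ibtype == 0:          # user supplies every bound
        return list(xlb), list(xub)
    if ibtype == 1:          # all variables nonnegative
        return [0.0] * n, [NO_BOUND] * n
    if ibtype == 2:          # all variables nonpositive
        return [-NO_BOUND] * n, [0.0] * n
    if ibtype == 3:          # bounds on the first variable replicated to all
        return [xlb[0]] * n, [xub[0]] * n
    raise ValueError("ibtype must be 0, 1, 2, or 3")

print(effective_bounds(1, 3, None, None))
print(effective_bounds(3, 3, [-2.0], [0.5]))
```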

Return Value

The solution x of the nonlinear least-squares problem. If no solution can be computed, then None is returned.

Optional Arguments

xguess, float[] (Input)

Array with n components containing an initial guess.

Default: xguess = 0

jacobian, void jacobian (m, n, x[], fjac[]) (Input)
User-supplied function to compute the Jacobian, where x is a vector of length n at which the Jacobian is evaluated and fjac is the computed m × n Jacobian at the point x. Note that each partial derivative \(\partial f_i/\partial x_j\) should be returned in fjac[(i-1)*n+(j-1)].
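For instance, the row-major layout fjac[(i-1)*n+(j-1)] can be filled as sketched below, using the Rosenbrock residuals from the examples later in this page (the helper name is illustrative only):

```python
# Sketch: filling the flattened m x n Jacobian in the documented row-major
# layout, fjac[(i-1)*n + (j-1)] holding df_i/dx_j (1-based i, j).
# Residuals: f_1 = 10*(x_2 - x_1^2),  f_2 = 1 - x_1.
def fill_jacobian(m, n, x, fjac):
    fjac[0 * n + 0] = -20.0 * x[0]   # df1/dx1
    fjac[0 * n + 1] = 10.0           # df1/dx2
    fjac[1 * n + 0] = -1.0           # df2/dx1
    fjac[1 * n + 1] = 0.0            # df2/dx2

fjac = [0.0] * 4
fill_jacobian(2, 2, [-1.2, 1.0], fjac)
print(fjac)   # [24.0, 10.0, -1.0, 0.0]
```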
xscale, float[] (Input)

Array with n components containing the scaling vector for the variables. Argument xscale is used mainly in scaling the gradient and the distance between two points. See keywords gradTol and stepTol for more details.

Default: xscale[] = 1

fscale, float[] (Input)

Array with m components containing the diagonal scaling matrix for the functions. The i-th component of fscale is a positive scalar specifying the reciprocal magnitude of the i-th component function of the problem.

Default: fscale[] = 1

gradTol, float (Input)

Scaled gradient tolerance.

The second boundedLeastSquares stopping criterion occurs when \(\max\nolimits_{i=1}^{n} \left[g_{si}(x)\right] \leq\) gradTol

where \(g_{si}(x)\) is component i of the scaled gradient of F at x, defined as:

\[g_{si}(x) = \frac {\left|g_i(x)\right| * \max\left(\left|x_i\right|,1/s_i\right)} {\tfrac{1}{2} \left\|F(x)\right\|_2^2}\]
\[g_i(x) = f_{si}^2 * \nabla_i \left[\tfrac{1}{2} \|F(x)\|_2^2\right] = f_{si}^2 * \left(J^T F\right)_i\]
\[\|F(x)\|_2^2 = \sum_{j=1}^{m} f_j(x)^2\]

and where J is the Jacobian matrix for the m-component function vector \(F(x)\) with n‑component argument x, with \(J_{ji}= \nabla_i f_j(x)\), \(s_i\) = xscale[i-1], and \(f_{si}\) = fscale[i-1].

Default: gradTol = \(\sqrt{\varepsilon}\) (\(\sqrt[3]{\varepsilon}\) in double), where ɛ is the machine precision.
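As an illustration, the scaled-gradient test can be sketched in NumPy. This is an approximation of the documented criterion, not the library's internal code; the scaling vectors follow the indexing in the formulas above, and the helper name is hypothetical:

```python
import numpy as np

# Sketch of the scaled-gradient stopping test, with s = xscale and
# f_s = fscale following the formulas in the documentation.
def scaled_gradient_test(F, J, x, xscale, fscale, grad_tol):
    F, J, x = np.asarray(F), np.asarray(J), np.asarray(x)
    s, fs = np.asarray(xscale), np.asarray(fscale)
    half_norm2 = 0.5 * np.dot(F, F)            # (1/2) ||F(x)||_2^2
    g = fs ** 2 * (J.T @ F)                    # g_i(x) = f_si^2 (J^T F)_i
    g_s = np.abs(g) * np.maximum(np.abs(x), 1.0 / s) / half_norm2
    return g_s.max() <= grad_tol

# Rosenbrock residuals at the starting point (-1.2, 1.0): far from optimal,
# so the test fails (returns False).
F = [10.0 * (1.0 - (-1.2) ** 2), 1.0 - (-1.2)]        # [-4.4, 2.2]
J = [[24.0, 10.0], [-1.0, 0.0]]
print(scaled_gradient_test(F, J, [-1.2, 1.0], [1.0, 1.0], [1.0, 1.0], 1e-5))
```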

stepTol, float (Input)

Scaled step tolerance.

The third boundedLeastSquares stopping criterion occurs when \(\max\nolimits_{i=1}^{n} \left[\Delta x_i\right] \leq\) stepTol, where:

\[\Delta x_i = \frac{\left|x_i - y_i\right|}{\max\left(\left|x_i\right|, 1/s_i\right)}\]

for the last two points x and y, and s = xscale.

Default: stepTol = \(\varepsilon^{2/3}\), where ɛ is the machine precision
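A NumPy sketch of this scaled-step test (illustrative helper only, not the library's internal code):

```python
import numpy as np

# Sketch of the scaled-step stopping test for the last two iterates x and y,
# with s = xscale, per the formula in the documentation.
def scaled_step_test(x, y, xscale, step_tol):
    x, y, s = np.asarray(x), np.asarray(y), np.asarray(xscale)
    delta = np.abs(x - y) / np.maximum(np.abs(x), 1.0 / s)   # Delta x_i
    return delta.max() <= step_tol

# Two nearly identical iterates: the scaled step is tiny, so the test passes.
print(scaled_step_test([0.5, 0.25], [0.5, 0.25 + 1e-9], [1.0, 1.0], 1e-8))
```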

relFcnTol, float (Input)

Relative function tolerance.

Default: relFcnTol = \(\max(10^{-10},\varepsilon^{2/3})\) (\(\max(10^{-20},\varepsilon^{2/3})\) in double), where ɛ is the machine precision

absFcnTol, float (Input)

Absolute function tolerance.

The first boundedLeastSquares stopping criterion occurs when the objective function \(\tfrac{1}{2} \|F(x)\|_2^2 \leq\) absFcnTol.

Default: absFcnTol = \(\max(10^{-20},\varepsilon^2)\) (\(\max(10^{-40},\varepsilon^2)\) in double), where ɛ is the machine precision

maxStep, float (Input)

Maximum allowable step size.

Default: maxStep = \(1000 \max(\varepsilon_1,\varepsilon_2)\), where

\[\varepsilon_1 = \left(\sum\nolimits_{i=1}^{n}\left(s_i t_i\right)^2\right)^{1/2}, \varepsilon_2 = \|s\|_2\]

for s = xscale and t = xguess.
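The default maximum step can be evaluated directly from the formulas above; a NumPy sketch (the helper name is illustrative only):

```python
import numpy as np

# Sketch: default maxStep = 1000 * max(eps1, eps2), computed from
# s = xscale and t = xguess per the formulas in the documentation.
def default_max_step(xscale, xguess):
    s = np.asarray(xscale, dtype=float)
    t = np.asarray(xguess, dtype=float)
    eps1 = np.sqrt(np.sum((s * t) ** 2))   # ( sum (s_i t_i)^2 )^(1/2)
    eps2 = np.linalg.norm(s)               # ||s||_2
    return 1000.0 * max(eps1, eps2)

# With xscale = (1, 1) and the example's xguess = (-1.2, 1.0):
print(default_max_step([1.0, 1.0], [-1.2, 1.0]))
```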

initTrustRegion, float (Input)
Size of initial trust region radius. The default is based on the initial scaled Cauchy step.
goodDigit, int (Input)

Number of good digits in the function.

Default: machine dependent

maxItn, int (Input)

Maximum number of iterations.

Default: maxItn = 100

maxFcn, int (Input)

Maximum number of function evaluations.

Default: maxFcn = 400

maxJacobian, int (Input)

Maximum number of Jacobian evaluations.

Default: maxJacobian = 400

internScale
Internal variable scaling option. With this option, the values for xscale are set internally.
fvec (Output)
A real array of length m containing the residuals at the approximate solution.
fjac (Output)
An array of size m × n containing the Jacobian at the approximate solution.

Description

The function boundedLeastSquares uses a modified Levenberg-Marquardt method and an active set strategy to solve nonlinear least-squares problems subject to simple bounds on the variables. The problem is stated as follows:

\[\min \tfrac{1}{2} F(x)^T F(x) = \tfrac{1}{2} \sum_{i=1}^{m} f_i(x)^2\]
\[\text{subject to } l\leq x\leq u\]

where \(m\geq n\), \(F : \Re^n\rightarrow\Re ^m\), and \(f_i(x)\) is the i-th component function of \(F(x)\). From a given starting point, an active set IA, which contains the indices of the variables at their bounds, is built. A variable is called a “free variable” if it is not in the active set. The routine then computes the search direction for the free variables according to the formula

\[d = -\left(J^T J + \mu I\right)^{-1} J^T F\]

where μ is the Levenberg-Marquardt parameter, \(F=F(x)\), and J is the Jacobian with respect to the free variables. The search direction for the variables in IA is set to zero. The trust region approach discussed by Dennis and Schnabel (1983) is used to find the new point. Finally, the optimality conditions are checked. The conditions are

\[\left\|g\left(x_i\right)\right\| \leq \varepsilon, \quad l_i < x_i < u_i\]
\[g\left(x_i\right) < 0, \quad x_i = u_i\]
\[g\left(x_i\right) > 0, \quad x_i = l_i\]

where ɛ is a gradient tolerance. This process is repeated until the optimality criterion is achieved.
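A minimal NumPy sketch of the search-direction computation above, restricted to the free variables (a hypothetical helper under stated assumptions, not the library's actual implementation):

```python
import numpy as np

# Sketch: one Levenberg-Marquardt step d = -(J^T J + mu*I)^{-1} J^T F
# over the free variables; the direction is zero on the active set IA.
def lm_step(J, F, mu, active):
    J, F = np.asarray(J, dtype=float), np.asarray(F, dtype=float)
    n = J.shape[1]
    free = [i for i in range(n) if i not in active]
    Jf = J[:, free]                                 # Jacobian w.r.t. free vars
    A = Jf.T @ Jf + mu * np.eye(len(free))          # regularized normal matrix
    d = np.zeros(n)
    d[free] = -np.linalg.solve(A, Jf.T @ F)
    return d

# With J = I and mu = 0 this reduces to the Gauss-Newton step -F;
# fixing variable 1 in the active set zeroes its component.
print(lm_step(np.eye(2), [1.0, 2.0], 0.0, set()))
print(lm_step(np.eye(2), [1.0, 2.0], 0.0, {1}))
```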

The active set is changed only when a free variable hits its bounds during an iteration or the optimality condition is met for the free variables but not for all variables in IA, the active set. In the latter case, a variable that violates the optimality condition will be dropped out of IA. For more detail on the Levenberg-Marquardt method, see Levenberg (1944) or Marquardt (1963). For more detail on the active set strategy, see Gill and Murray (1976).

The first stopping criterion for boundedLeastSquares occurs when the norm of the function is less than the absolute function tolerance. The second stopping criterion occurs when the norm of the scaled gradient is less than the scaled gradient tolerance. The third stopping criterion occurs when the scaled distance between the last two steps is less than the step tolerance. See options absFcnTol, gradTol, and stepTol for details.

Since a finite-difference method is used to estimate the Jacobian for some single-precision calculations, an inaccurate estimate of the Jacobian may cause the algorithm to terminate at a noncritical point. In such cases, high-precision arithmetic is recommended. Also, whenever the exact Jacobian can be easily provided, the option jacobian should be used.

On some platforms, boundedLeastSquares can evaluate the user-supplied functions fcn and jacobian in parallel. This is done only if the function ompOptions is called to flag user-defined functions as thread-safe. A function is thread-safe if there are no dependencies between calls. Such dependencies are usually the result of writing to global or static variables.

Examples

Example 1

In this example, the nonlinear least-squares problem

\[\begin{split}\begin{array}{l} \min \tfrac{1}{2} \sum\limits_{i=0}^{1} f_i(x)^2 \\ -2 \leq x_0 \leq 0.5 \\ -1 \leq x_1 \leq 2 \\ \end{array}\end{split}\]

where

\[f_0(x) = 10 \left(x_1 - x_0^2\right) \text{ and } f_1(x) = \left(1 - x_0\right)\]

is solved with an initial guess (-1.2, 1.0).

from __future__ import print_function
from pyimsl.math.boundedLeastSquares import boundedLeastSquares


def rosbck(m, n, x, f):
    f[0] = 10.0 * (x[1] - x[0] * x[0])
    f[1] = 1.0 - x[0]


m = 2
n = 2
ibtype = 0
xlb = [-2.0, -1.0]
xub = [0.5, 2.0]

x = boundedLeastSquares(rosbck, m, n, ibtype, xlb, xub)

print("x[0] = %f" % (x[0]))
print("x[1] = %f" % (x[1]))

Output

x[0] = 0.500000
x[1] = 0.250000

Example 2

This example solves the nonlinear least-squares problem

\[\begin{split}\begin{array}{l} \min \tfrac{1}{2} \sum\limits_{i=0}^{1} f_i(x)^2 \\ -2 \leq x_0 \leq 0.5 \\ -1 \leq x_1 \leq 2 \\ \end{array}\end{split}\]

where

\[f_0(x) = 10 \left(x_1 - x_0^2\right) \text{ and } f_1(x) = \left(1 - x_0\right)\]

This time, an initial guess (-1.2, 1.0) is supplied, as well as the analytic Jacobian. The residual at the approximate solution is returned.

from __future__ import print_function
from pyimsl.math.boundedLeastSquares import boundedLeastSquares


def rosbck(m, n, x, f):
    f[0] = 10.0 * (x[1] - x[0] * x[0])
    f[1] = 1.0 - x[0]


def jacobian(m, n, x, fjac):
    fjac[0] = -20.0 * x[0]
    fjac[1] = 10.0
    fjac[2] = -1.0
    fjac[3] = 0.0


m = 2
n = 2
ibtype = 0
xlb = [-2.0, -1.0]
xub = [0.5, 2.0]
xguess = [-1.2, 1.0]
fvec = []

x = boundedLeastSquares(rosbck, m, n, ibtype, xlb, xub,
                        jacobian=jacobian,
                        xguess=xguess,
                        fvec=fvec)

print("x[0] = %f" % (x[0]))
print("x[1] = %f" % (x[1]))
print("")
print("fvec[0] = %f" % (fvec[0]))
print("fvec[1] = %f" % (fvec[1]))

Output

x[0] = 0.500000
x[1] = 0.250000

fvec[0] = 0.000000
fvec[1] = 0.500000

Fatal Errors

IMSL_STOP_USER_FCN

Request from user-supplied function to stop algorithm.

User flag = “#”.