minConGenLin¶
Minimizes a general objective function subject to linear equality/inequality constraints.
Synopsis¶
minConGenLin (fcn, neq, a, b, xlb, xub)
Required Arguments¶
- void fcn (n, x[], f) (Input/Output) - User-supplied function to evaluate the function to be minimized. Argument x is a vector of length n at which point the function is evaluated, and f contains the function value at x.
- int neq (Input) - Number of linear equality constraints.
- float a[[]] (Input) - Array of size ncon × nvar containing the equality constraint gradients in the first neq rows, followed by the inequality constraint gradients.
- float b[] (Input) - Array of size ncon containing the right-hand sides of the linear constraints. Specifically, the constraints on the variables \(x_i\), i = 0, …, nvar - 1, are \(a_{k,0} x_0+\ldots+a_{k,nvar-1} x_{nvar-1}=b_k\), k = 0, …, neq - 1, and \(a_{k,0} x_0+\ldots+a_{k,nvar-1} x_{nvar-1}\leq b_k\), k = neq, …, ncon - 1. Note that the data that define the equality constraints come before the data of the inequalities; a setup sketch follows this list.
- float xlb[] (Input) - Array of length nvar containing the lower bounds on the variables; choose a very large negative value if a component should be unbounded below, or set xlb[i] = xub[i] to freeze the i-th variable. Specifically, these simple bounds are xlb[i] ≤ \(x_i\), for i = 0, …, nvar - 1.
- float xub[] (Input) - Array of length nvar containing the upper bounds on the variables; choose a very large positive value if a component should be unbounded above. Specifically, these simple bounds are \(x_i\) ≤ xub[i], for i = 0, …, nvar - 1.
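Because the equality rows of a must precede the inequality rows, setting up the constraint data looks like the following sketch (the constraint values are illustrative only, not from the examples below):

# Sketch: one equality and one inequality constraint in three variables.
neq = 1
a = [[1.0, 1.0, 1.0],     # row 0 (equality):   x0 + x1 + x2  = 3
     [1.0, -1.0, 0.0]]    # row 1 (inequality): x0 - x1      <= 2
b = [3.0, 2.0]            # right-hand sides, in the same row order as a
xlb = [0.0, 0.0, 0.0]     # simple lower bounds
xub = [10.0, 10.0, 10.0]  # simple upper bounds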
Return Value¶
The solution x. If no solution can be computed, then None
is returned.
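Because None signals failure, callers may wish to guard the result before using it:

# Sketch: check for the None return before using the solution.
x = minConGenLin(fcn, neq, a, b, xlb, xub)
if x is None:
    raise RuntimeError("minConGenLin could not compute a solution")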
Optional Arguments¶
- xguess, float[] (Input) - Array with n components containing an initial guess. Default: xguess = 0.
- gradient, void gradient (n, x[], g[]) (Input) - User-supplied function to compute the gradient at the point x, where x is a vector of length n, and g is the vector of length n containing the values of the gradient of the objective function.
- maxFcn, int (Input) - Maximum number of function evaluations. Default: maxFcn = 400.
- numberActiveConstraints (Output) - Final number of active constraints.
- activeConstraints (Output) - A vector containing the numberActiveConstraints indices of the final active constraints.
- lagrangeMultipliers (Output) - An array containing the Lagrange multiplier estimates of the final active constraints in the first numberActiveConstraints locations.
- tolerance, float (Input) - The nonnegative tolerance on the first order conditions at the calculated solution. Default: tolerance = \(\sqrt{\varepsilon}\), where ɛ is machine epsilon.
- obj (Output) - The value of the objective function. A sketch of retrieving the output arguments follows this list.
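In pyimsl, output-type optional arguments are passed as empty lists that the function fills in, as Example 2 below does with obj. A minimal sketch of retrieving the active-set outputs, assuming the same list-container convention applies to them:

# Sketch: empty lists act as containers for the Output optional arguments
# (the convention follows Example 2's obj=[]; its use here is an assumption).
nact = []    # will receive numberActiveConstraints
iact = []    # will receive activeConstraints
lam = []     # will receive lagrangeMultipliers
x = minConGenLin(fcn, neq, a, b, xlb, xub,
                 numberActiveConstraints=nact,
                 activeConstraints=iact,
                 lagrangeMultipliers=lam)
print("active constraints:", iact[0])
print("multiplier estimates:", lam[0][:nact[0]])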
Description¶
The function minConGenLin
is based on M.J.D. Powell’s TOLMIN, which
solves linearly constrained optimization problems, i.e., problems of the
form

\[\min_{x \in \mathbb{R}^n} f(x)\]

subject to

\[A_1 x = b_1,\]
\[A_2 x \leq b_2,\]
\[x_l \leq x \leq x_u,\]

given the vectors \(b_1\), \(b_2\), \(x_l\), and \(x_u\) and the matrices \(A_1\) and \(A_2\).
The algorithm starts by checking the equality constraints for inconsistency and redundancy. If the equality constraints are consistent, the method will revise \(x^0\), the initial guess, to satisfy

\[A_1 x^0 = b_1\]
Next, \(x^0\) is adjusted to satisfy the simple bounds and inequality constraints. This is done by solving a sequence of quadratic programming subproblems to minimize the sum of the constraint or bound violations.
Now, for each iteration with a feasible \(x^k\), let \(J_k\) be the set of indices of inequality constraints that have small residuals. Here, the simple bounds are treated as inequality constraints. Let \(I_k\) be the set of indices of active constraints. The following quadratic programming problem

\[\min_{d \in \mathbb{R}^n} f(x^k) + d^T \nabla f(x^k) + \tfrac{1}{2} d^T B^k d\]

subject to

\[a_j d = 0, \quad j \in I_k, \qquad a_j d \leq 0, \quad j \in J_k \setminus I_k,\]

is solved to get (\(d^k\), \(\lambda^k\)), where \(a_j\) is a row vector representing either a constraint in \(A_1\) or \(A_2\) or a bound constraint on x. In the latter case, \(a_j=e_i\) for the bound constraint \(x_i\leq(x_u)_i\) and \(a_j=-e_i\) for the constraint \(-x_i\leq-(x_l)_i\). Here, \(e_i\) is a vector with 1 as the i-th component, and zeros elsewhere. Variables \(\lambda^k\) are the Lagrange multipliers, and \(B^k\) is a positive definite approximation to the second derivative \(\nabla^2 f(x^k)\).
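To make the subproblem concrete, the equality-constrained part can be solved directly through its KKT system; a hedged NumPy sketch (assuming only the constraints in \(I_k\) are imposed, with their rows stacked in A; this is an illustration, not the TOLMIN implementation):

import numpy as np

def qp_equality_step(B, g, A):
    # Solve: min 0.5*d'Bd + g'd  subject to  A d = 0
    # via the KKT system  [B A'; A 0][d; lam] = [-g; 0].
    n, m = B.shape[0], A.shape[0]
    K = np.block([[B, A.T],
                  [A, np.zeros((m, m))]])
    sol = np.linalg.solve(K, np.concatenate([-g, np.zeros(m)]))
    return sol[:n], sol[n:]   # step d and multiplier estimates lam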
After the search direction \(d^k\) is obtained, a line search is performed to locate a better point. The new point \(x^{k+1}=x^k+\alpha^k d^k\) has to satisfy the conditions

\[f(x^k+\alpha^k d^k) \leq f(x^k) + 0.1\,\alpha^k (d^k)^T \nabla f(x^k)\]

and

\[(d^k)^T \nabla f(x^k+\alpha^k d^k) \geq 0.7\,(d^k)^T \nabla f(x^k).\]
The main idea in forming the set \(J_k\) is that, if any of the inequality constraints restricts the step-length \(\alpha^k\), then its index is not in \(J_k\). Therefore, small steps are likely to be avoided.
Finally, the second derivative approximation \(B^k\) is updated by the BFGS formula if the condition

\[(d^k)^T\left(\nabla f(x^k+\alpha^k d^k)-\nabla f(x^k)\right) > 0\]

holds. Let \(x^k\leftarrow x^{k+1}\), and start another iteration.
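This curvature condition is the standard safeguard that keeps a BFGS approximation positive definite. As a small illustration (not part of the pyimsl API; gradients assumed available as NumPy arrays):

import numpy as np

def bfgs_update_allowed(d, grad_new, grad_old):
    # The BFGS update preserves positive definiteness only when
    # d^T (grad(x^{k+1}) - grad(x^k)) > 0, the condition stated above.
    return float(np.dot(d, grad_new - grad_old)) > 0.0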
The iteration repeats until the stopping criterion

\[\left\|\nabla f(x^k) - \sum_{j \in I_k} \lambda_j^k\, a_j^T\right\| \leq \tau\]

is satisfied. Here \(\tau\) is the supplied tolerance. For more details, see Powell (1988, 1989).
Since a finite difference method is used to approximate the gradient for
some single precision calculations, an inaccurate estimate of the gradient
may cause the algorithm to terminate at a noncritical point. In such cases,
high precision arithmetic is recommended. Also, if the gradient can be
easily provided, the option gradient
should be used.
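When supplying gradient, a quick consistency check against central finite differences can catch coding errors before optimization. A minimal sketch (the helper check_gradient is illustrative, not part of pyimsl):

import numpy as np

def check_gradient(fcn, grad, x, h=1.0e-6):
    # Compare a user-supplied gradient with central finite differences
    # of the user-supplied objective at the point x.
    n = len(x)
    f_plus, f_minus, g = [0.0], [0.0], np.zeros(n)
    grad(n, np.array(x, dtype=float), g)
    for i in range(n):
        xp = np.array(x, dtype=float)
        xm = np.array(x, dtype=float)
        xp[i] += h
        xm[i] -= h
        fcn(n, xp, f_plus)
        fcn(n, xm, f_minus)
        fd = (f_plus[0] - f_minus[0]) / (2.0 * h)
        if abs(fd - g[i]) > 1.0e-4 * max(1.0, abs(g[i])):
            print("component %d: analytic %g vs. numeric %g" % (i, g[i], fd))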
Examples¶
Example 1¶
In this example, the problem

\[\min f(x) = x_0^2 + x_1^2 + x_2^2 + x_3^2 + x_4^2 - 2x_1 x_2 - 2x_3 x_4 - 2x_0\]

subject to

\[x_0 + x_1 + x_2 + x_3 + x_4 = 5,\]
\[x_2 - 2x_3 - 2x_4 = -3,\]
\[0 \leq x_i \leq 10, \quad i = 0, \ldots, 4,\]

is solved.
from numpy import *
from pyimsl.math.minConGenLin import minConGenLin
from pyimsl.math.writeMatrix import writeMatrix

def fcn(n, x, f):
    # The objective value is returned through the one-element container f.
    f[0] = x[0] * x[0] + x[1] * x[1] + x[2] * x[2] + x[3] * x[3] + x[4] * x[4] \
        - 2.0 * x[1] * x[2] - 2.0 * x[3] * x[4] - 2.0 * x[0]

neq = 2     # both linear constraints are equalities
ncon = 2
nvar = 5
a = [[1.0, 1.0, 1.0, 1.0, 1.0],   # equality rows come first
     [0.0, 0.0, 1.0, -2.0, -2.0]]
xlb = [0.0, 0.0, 0.0, 0.0, 0.0]
xub = [10.0, 10.0, 10.0, 10.0, 10.0]
b = [5.0, -3.0]
x = minConGenLin(fcn, neq, a, b, xlb, xub)
writeMatrix("Solution", x)
Output¶
Solution
1 2 3 4 5
1 1 1 1 1
Example 2¶
In this example, the problem from Schittkowski (1987)

\[\min f(x) = -x_0 x_1 x_2\]

subject to

\[-x_0 - 2x_1 - 2x_2 \leq 0,\]
\[x_0 + 2x_1 + 2x_2 \leq 72,\]
\[0 \leq x_0 \leq 20, \quad 0 \leq x_1 \leq 11, \quad 0 \leq x_2 \leq 42,\]

is solved with an initial guess of \(x_0=10\), \(x_1=10\), and \(x_2=10\).
from __future__ import print_function
from numpy import *
from pyimsl.math.minConGenLin import minConGenLin
from pyimsl.math.writeMatrix import writeMatrix

def fcn(n, x, f):
    # Maximize x0*x1*x2 by minimizing its negative.
    f[0] = -x[0] * x[1] * x[2]

def grad(n, x, g):
    # Analytic gradient of the objective, filled in place.
    g[0] = -x[1] * x[2]
    g[1] = -x[0] * x[2]
    g[2] = -x[0] * x[1]

neq = 0     # both linear constraints are inequalities
ncon = 2
nvar = 3
lda = 2
obj = []    # container for the obj output argument
a = [[-1.0, -2.0, -2.0],
     [1.0, 2.0, 2.0]]
xlb = [0.0, 0.0, 0.0]
xub = [20.0, 11.0, 42.0]
xguess = [10.0, 10.0, 10.0]
b = [0.0, 72.0]
x = minConGenLin(fcn, neq, a, b, xlb, xub,
                 gradient=grad,
                 xguess=xguess,
                 obj=obj)
writeMatrix("Solution", x)
print("")
print("Objective value = %f" % (obj[0]))
Output¶
Objective value = -3300.000000
Solution
1 2 3
20 11 15
Fatal Errors¶
IMSL_STOP_USER_FCN: Request from user-supplied function to stop algorithm. User flag = “#”.