minConGenLin¶

Minimizes a general objective function subject to linear equality/inequality constraints.
Synopsis¶
minConGenLin (fcn, neq, a, b, xlb, xub)
Required Arguments¶
- void fcn (n, x[], f) (Input/Output)
  User-supplied function to evaluate the function to be minimized. Argument x is a vector of length n at which point the function is evaluated, and f contains the function value at x.
- int neq (Input)
  Number of linear equality constraints.
- float a[[]] (Input)
  Array of size ncon × nvar containing the equality constraint gradients in the first neq rows, followed by the inequality constraint gradients.
- float b[] (Input)
  Array of size ncon containing the right-hand sides of the linear constraints. Specifically, the constraints on the variables x_i, i = 0, …, nvar − 1, are

  $$a_{k,0}x_0 + \dots + a_{k,\mathrm{nvar}-1}x_{\mathrm{nvar}-1} = b_k, \quad k = 0, \dots, \mathrm{neq}-1,$$

  and

  $$a_{k,0}x_0 + \dots + a_{k,\mathrm{nvar}-1}x_{\mathrm{nvar}-1} \le b_k, \quad k = \mathrm{neq}, \dots, \mathrm{ncon}-1.$$

  Note that the data that define the equality constraints come before the data of the inequalities.
- float xlb[] (Input)
  Array of length nvar containing the lower bounds on the variables; choose a very large negative value if a component should be unbounded below, or set xlb[i] = xub[i] to freeze the i-th variable. Specifically, these simple bounds are xlb[i] ≤ x_i for i = 0, …, nvar − 1.
- float xub[] (Input)
  Array of length nvar containing the upper bounds on the variables; choose a very large positive value if a component should be unbounded above. Specifically, these simple bounds are x_i ≤ xub[i] for i = 0, …, nvar − 1.
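As an illustration of how `a`, `b`, and `neq` together encode the constraint system, the sketch below checks a candidate point against a hypothetical problem with one equality and one inequality constraint (the helper `is_feasible` is illustrative only and is not part of pyimsl):

```python
import numpy as np

# Hypothetical system in 2 variables:
#   x0 + x1  = 1     (equality, occupies row 0 since neq = 1)
#   x0 - x1 <= 0.5   (inequality, rows neq .. ncon-1)
neq = 1
a = np.array([[1.0, 1.0],    # equality constraint gradient comes first
              [1.0, -1.0]])  # inequality constraint gradient follows
b = np.array([1.0, 0.5])

def is_feasible(x, a, b, neq, tol=1e-10):
    """Check a point against the constraints encoded in a and b."""
    r = a @ x - b
    return bool(np.all(np.abs(r[:neq]) <= tol) and np.all(r[neq:] <= tol))

print(is_feasible(np.array([0.5, 0.5]), a, b, neq))  # True
print(is_feasible(np.array([1.0, 0.0]), a, b, neq))  # False: violates inequality
```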
Return Value¶
The solution x. If no solution can be computed, then None
is returned.
Optional Arguments¶
- xguess, float[] (Input)
  Array with n components containing an initial guess.
  Default: xguess = 0
- gradient, void gradient (n, x[], g[]) (Input)
  User-supplied function to compute the gradient at the point x, where x is a vector of length n, and g is the vector of length n containing the values of the gradient of the objective function.
- maxFcn, int (Input)
  Maximum number of function evaluations.
  Default: maxFcn = 400
- numberActiveConstraints (Output)
  Final number of active constraints.
- activeConstraints (Output)
  A vector containing the numberActiveConstraints indices of the final active constraints.
- lagrangeMultipliers (Output)
  An array containing the Lagrange multiplier estimates of the final active constraints in the first numberActiveConstraints locations.
- tolerance, float (Input)
  The nonnegative tolerance on the first-order conditions at the calculated solution.
  Default: tolerance = √ε, where ε is machine epsilon
- obj (Output)
  The value of the objective function.
Description¶
The function minConGenLin is based on M.J.D. Powell’s TOLMIN, which solves linearly constrained optimization problems, i.e., problems of the form

$$\min_{x \in \mathbb{R}^n} f(x)$$

subject to

$$A_1 x = b_1, \quad A_2 x \le b_2, \quad x_l \le x \le x_u,$$

given the vectors $b_1$, $b_2$, $x_l$, and $x_u$ and the matrices $A_1$ and $A_2$.
The algorithm starts by checking the equality constraints for inconsistency and redundancy. If the equality constraints are consistent, the method will revise $x_0$, the initial guess, to satisfy

$$A_1 x_0 = b_1.$$

Next, $x_0$ is adjusted to satisfy the simple bounds and inequality constraints. This is done by solving a sequence of quadratic programming subproblems to minimize the sum of the constraint or bound violations.
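The equality-revision step can be pictured with a minimal sketch (not the library’s internal algorithm): a minimum-norm least-squares correction that moves a guess onto the equality constraints $A_1 x_0 = b_1$:

```python
import numpy as np

# Sketch: revise an initial guess x0 so that it satisfies A1 x0 = b1,
# using the minimum-norm correction returned by least squares.
def revise_to_equalities(x0, a1, b1):
    correction, *_ = np.linalg.lstsq(a1, b1 - a1 @ x0, rcond=None)
    return x0 + correction

a1 = np.array([[1.0, 1.0, 1.0]])   # single equality: x0 + x1 + x2 = 3
b1 = np.array([3.0])
x = revise_to_equalities(np.zeros(3), a1, b1)
print(x)            # minimum-norm feasible point [1. 1. 1.]
print(a1 @ x - b1)  # equality residual ~ 0
```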
Now, for each iteration with a feasible $x_k$, let $J_k$ be the set of indices of inequality constraints that have small residuals. Here, the simple bounds are treated as inequality constraints. Let $I_k$ be the set of indices of active constraints. The following quadratic programming problem

$$\min_{d \in \mathbb{R}^n} \; f(x_k) + d^T \nabla f(x_k) + \tfrac{1}{2}\, d^T B_k d$$

subject to

$$a_j d = 0, \; j \in I_k, \qquad a_j d \le 0, \; j \in J_k,$$

is solved to get $(d_k, \lambda_k)$, where $a_j$ is a row vector representing either a constraint in $A_1$ or $A_2$ or a bound constraint on $x$. In the latter case, $a_j = e_i$ for the bound constraint $x_i \le (x_u)_i$ and $a_j = -e_i$ for the constraint $-x_i \le -(x_l)_i$. Here, $e_i$ is a vector with 1 as the $i$-th component, and zeros elsewhere. Variables $\lambda_k$ are the Lagrange multipliers, and $B_k$ is a positive definite approximation to the second derivative $\nabla^2 f(x_k)$.
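For the equality-constrained core of this subproblem, one textbook way to obtain the pair $(d_k, \lambda_k)$ is to solve the KKT system directly; the sketch below does that with dense linear algebra (an illustration of the idea, not TOLMIN’s actual factorization):

```python
import numpy as np

# Sketch: solve the equality-constrained QP  min g^T d + (1/2) d^T B d,
# s.t. A d = 0, via its KKT system
#   [B  A^T] [d  ]   [-g]
#   [A   0 ] [lam] = [ 0]
def qp_step(B, g, A):
    m = A.shape[0]
    K = np.block([[B, A.T], [A, np.zeros((m, m))]])
    rhs = np.concatenate([-g, np.zeros(m)])
    sol = np.linalg.solve(K, rhs)
    return sol[:B.shape[0]], sol[B.shape[0]:]   # (d_k, lambda_k)

B = np.eye(2)
g = np.array([1.0, -1.0])
A = np.array([[1.0, 1.0]])   # one active constraint: keep a_j d = 0
d, lam = qp_step(B, g, A)
print(np.allclose(A @ d, 0.0))  # True: d stays in the constraint's null space
```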
After the search direction $d_k$ is obtained, a line search is performed to locate a better point. The new point $x_{k+1} = x_k + \alpha_k d_k$ has to satisfy the conditions

$$f(x_k + \alpha_k d_k) \le f(x_k) + 0.1\,\alpha_k\, d_k^T \nabla f(x_k)$$

and

$$d_k^T \nabla f(x_k + \alpha_k d_k) \ge 0.7\, d_k^T \nabla f(x_k).$$
The main idea in forming the set $J_k$ is that, if any of the inequality constraints restricts the step-length $\alpha_k$, then its index is not in $J_k$. Therefore, small steps are likely to be avoided.
Finally, the second derivative approximation $B_k$ is updated by the BFGS formula, if the condition

$$d_k^T \nabla f(x_k + \alpha_k d_k) > d_k^T \nabla f(x_k)$$

holds. Let $x_k \leftarrow x_{k+1}$, and start another iteration.
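A minimal sketch of such a guarded BFGS update (illustrative, not the library’s internal code); the condition above is equivalent to $s^T y > 0$ with $s = x_{k+1} - x_k$ and $y = \nabla f(x_{k+1}) - \nabla f(x_k)$, which is what preserves positive definiteness:

```python
import numpy as np

# BFGS update of a positive definite Hessian approximation B.
# The update is skipped when the curvature condition s^T y > 0 fails.
def bfgs_update(B, s, y):
    if s @ y <= 0.0:   # curvature condition fails: keep B unchanged
        return B
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (s @ y)

B = np.eye(2)
s = np.array([1.0, 0.0])   # step taken
y = np.array([2.0, 0.5])   # gradient difference
B1 = bfgs_update(B, s, y)
# B1 remains symmetric positive definite
print(np.allclose(B1, B1.T), np.all(np.linalg.eigvalsh(B1) > 0))  # True True
```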
The iteration repeats until the stopping criterion

$$\left\| \nabla f(x_k) - \sum_{j \in I_k} (\lambda_k)_j\, a_j^T \right\| \le \tau$$

is satisfied. Here $\tau$ is the supplied tolerance. For more details, see Powell (1988, 1989).
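The stopping test can be pictured as a check on the norm of the Lagrangian gradient. The helper below is a hypothetical illustration, with the active constraint rows stacked in a matrix:

```python
import numpy as np

# Sketch of the first-order stopping test: at a candidate solution,
# grad f(x) minus the multiplier-weighted active-constraint gradients
# should have norm at most the tolerance tau.
def kkt_residual(grad, active_rows, multipliers):
    return np.linalg.norm(grad - active_rows.T @ multipliers)

grad = np.array([2.0, 2.0])
active = np.array([[1.0, 1.0]])   # one active constraint row a_j = (1, 1)
lam = np.array([2.0])             # its Lagrange multiplier estimate
tau = np.sqrt(np.finfo(float).eps)  # the default tolerance, sqrt(eps)
print(kkt_residual(grad, active, lam) <= tau)  # True: first-order point
```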
Since a finite difference method is used to approximate the gradient for
some single precision calculations, an inaccurate estimate of the gradient
may cause the algorithm to terminate at a noncritical point. In such cases,
high precision arithmetic is recommended. Also, if the gradient can be
easily provided, the option gradient
should be used.
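A forward-difference scheme of the kind alluded to here looks roughly as follows (an illustration, not the routine the library actually uses); its O(h) truncation error is why an analytic gradient, when it is cheap to write, gives a more reliable termination test:

```python
import numpy as np

# Forward-difference gradient approximation with step h ~ sqrt(eps).
# Each component costs one extra function evaluation and carries O(h) error.
def fd_gradient(f, x, h=None):
    h = h or np.sqrt(np.finfo(float).eps)
    g = np.empty_like(x)
    f0 = f(x)
    for i in range(x.size):
        xp = x.copy()
        xp[i] += h
        g[i] = (f(xp) - f0) / h
    return g

f = lambda x: x[0] ** 2 + 3.0 * x[1]       # gradient is (2*x0, 3)
g = fd_gradient(f, np.array([1.0, 2.0]))
print(np.allclose(g, [2.0, 3.0], atol=1e-5))  # True
```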
Examples¶
Example 1¶
In this example, the problem

$$\min f(x) = x_0^2 + x_1^2 + x_2^2 + x_3^2 + x_4^2 - 2x_1 x_2 - 2x_3 x_4 - 2x_0$$

subject to

$$x_0 + x_1 + x_2 + x_3 + x_4 = 5, \quad x_2 - 2x_3 - 2x_4 = -3, \quad 0 \le x_i \le 10, \; i = 0, \dots, 4,$$

is solved.
from numpy import *
from pyimsl.math.minConGenLin import minConGenLin
from pyimsl.math.writeMatrix import writeMatrix


def fcn(n, x, f):
    f[0] = x[0] * x[0] + x[1] * x[1] + x[2] * x[2] + x[3] * x[3] \
        + x[4] * x[4] - 2.0 * x[1] * x[2] - 2.0 * x[3] * x[4] - 2.0 * x[0]


neq = 2
ncon = 2
nvar = 5
a = [[1.0, 1.0, 1.0, 1.0, 1.0],
     [0.0, 0.0, 1.0, -2.0, -2.0]]
xlb = [0.0, 0.0, 0.0, 0.0, 0.0]
xub = [10.0, 10.0, 10.0, 10.0, 10.0]
b = [5.0, -3.0]

x = minConGenLin(fcn, neq, a, b, xlb, xub)
writeMatrix("Solution", x)
Output¶

             Solution
     1      2      3      4      5
     1      1      1      1      1
Example 2¶
In this example, the problem from Schittkowski (1987)

$$\min f(x) = -x_0 x_1 x_2$$

subject to

$$-x_0 - 2x_1 - 2x_2 \le 0, \quad x_0 + 2x_1 + 2x_2 \le 72, \quad 0 \le x_0 \le 20, \; 0 \le x_1 \le 11, \; 0 \le x_2 \le 42,$$

is solved with an initial guess of $x_0 = 10$, $x_1 = 10$, and $x_2 = 10$.
from __future__ import print_function
from numpy import *
from pyimsl.math.minConGenLin import minConGenLin
from pyimsl.math.writeMatrix import writeMatrix


def fcn(n, x, f):
    f[0] = -x[0] * x[1] * x[2]


def grad(n, x, g):
    g[0] = -x[1] * x[2]
    g[1] = -x[0] * x[2]
    g[2] = -x[0] * x[1]


neq = 0
ncon = 2
nvar = 3
obj = []
a = [[-1.0, -2.0, -2.0],
     [1.0, 2.0, 2.0]]
xlb = [0.0, 0.0, 0.0]
xub = [20.0, 11.0, 42.0]
xguess = [10.0, 10.0, 10.0]
b = [0.0, 72.0]

x = minConGenLin(fcn, neq, a, b, xlb, xub,
                 gradient=grad,
                 xguess=xguess,
                 obj=obj)
writeMatrix("Solution", x)
print("")
print("Objective value = %f" % (obj[0]))
Output¶

Objective value = -3300.000000

       Solution
     1      2      3
    20     11     15
Fatal Errors¶
IMSL_STOP_USER_FCN
Request from user-supplied function to stop algorithm. User flag = “#”.