MinConGenLin Class
Minimizes a general objective function subject to linear equality/inequality constraints.
Inheritance Hierarchy
System.Object
  Imsl.Math.MinConGenLin

Namespace: Imsl.Math
Assembly: ImslCS (in ImslCS.dll) Version: 6.5.2.0
Syntax
[SerializableAttribute]
public class MinConGenLin

The MinConGenLin type exposes the following members.

Constructors

  MinConGenLin (public)
    Constructor for MinConGenLin.
Methods

  Equals (public)
    Determines whether the specified object is equal to the current object. (Inherited from Object.)
  Finalize (protected)
    Allows an object to try to free resources and perform other cleanup operations before it is reclaimed by garbage collection. (Inherited from Object.)
  GetFinalActiveConstraints (public)
    Returns the indices of the final active constraints.
  GetHashCode (public)
    Serves as a hash function for a particular type. (Inherited from Object.)
  GetLagrangeMultiplierEstimate (public)
    Returns the Lagrange multiplier estimates of the final active constraints.
  GetSolution (public)
    Returns the computed solution.
  GetType (public)
    Gets the Type of the current instance. (Inherited from Object.)
  MemberwiseClone (protected)
    Creates a shallow copy of the current Object. (Inherited from Object.)
  SetGuess (public)
    Sets an initial guess of the solution.
  Solve (public)
    Minimizes a general objective function subject to linear equality/inequality constraints.
  ToString (public)
    Returns a string that represents the current object. (Inherited from Object.)
Properties

  FinalActiveConstraintsNum (public)
    Returns the final number of active constraints.
  NumberOfProcessors (public)
    The maximum number of processors that may be used for the parallel calculations.
  ObjectiveValue (public)
    Returns the value of the objective function.
  Parallel (public)
    Enable or disable performing MinConGenLin.IFunction.F in parallel.
  Tolerance (public)
    The nonnegative tolerance on the first-order conditions at the calculated solution.
Remarks

The class MinConGenLin is based on M.J.D. Powell's TOLMIN, which solves linearly constrained optimization problems, i.e., problems of the form

\min f (x)

subject to

A_1x = b_1

A_2x\le b_2

x_l \le x\le x_u

given the vectors b_1, b_2, x_l, and x_u and the matrices A_1 and A_2.
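
As a rough illustration of how this problem form maps onto the class, the sketch below minimizes f(x) = x_0^2 + x_1^2 subject to x_0 + x_1 = 1 and 0 \le x \le 2. The IFunction interface and the SetGuess, Solve, GetSolution, and ObjectiveValue members appear elsewhere on this page; the signature of F and the constructor argument list (variable count, constraint counts, constraint matrix, right-hand sides, bounds) are assumptions and should be checked against the constructor and interface documentation.

using System;
using Imsl.Math;

// Objective f(x) = x0^2 + x1^2, supplied through MinConGenLin.IFunction.
// The exact signature of F is assumed here.
public class Objective : MinConGenLin.IFunction
{
    public double F(double[] x)
    {
        return x[0] * x[0] + x[1] * x[1];
    }
}

public class MinConGenLinExample
{
    public static void Main(string[] args)
    {
        // Hypothetical problem: minimize f subject to x0 + x1 = 1, 0 <= x <= 2.
        // The constructor argument list below is an assumed mapping of the
        // problem data described in the Remarks; check the constructor
        // reference for the actual order and types.
        int nvar = 2;                     // number of variables
        int ncon = 1;                     // total number of linear constraints
        int neq = 1;                      // number of equality constraints
        double[] a = { 1.0, 1.0 };        // constraint matrix A_1, row-major
        double[] b = { 1.0 };             // right-hand side b_1
        double[] xlb = { 0.0, 0.0 };      // lower bounds x_l
        double[] xub = { 2.0, 2.0 };      // upper bounds x_u

        MinConGenLin solver =
            new MinConGenLin(new Objective(), nvar, ncon, neq, a, b, xlb, xub);
        solver.SetGuess(new double[] { 0.5, 0.5 });  // documented method

        solver.Solve();                              // documented method

        double[] x = solver.GetSolution();           // documented method
        Console.WriteLine("x = (" + x[0] + ", " + x[1] + "), f(x) = "
            + solver.ObjectiveValue);                // documented property
    }
}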

The algorithm starts by checking the equality constraints for inconsistency and redundancy. If the equality constraints are consistent, the method will revise x^0, the initial guess, to satisfy

A_1x =b_1

Next, x^0 is adjusted to satisfy the simple bounds and inequality constraints. This is done by solving a sequence of quadratic programming subproblems to minimize the sum of the constraint or bound violations.

Now, for each iteration with a feasible x^k, let J_k be the set of indices of inequality constraints that have small residuals. Here, the simple bounds are treated as inequality constraints. Let I_k be the set of indices of active constraints. The following quadratic programming problem

\min f\left(x^k\right) + d^T \nabla f\left(x^k\right) + \frac{1}{2} d^T B^k d

subject to

a_jd = 0, \,j \in I_k

a_jd  \le 0, \,j \in J_k

is solved to get (d^k, \lambda^k), where a_j is a row vector representing either a constraint in A_1 or A_2 or a bound constraint on x. In the latter case, a_j = e_i for the bound constraint x_i \le (x_u)_i and a_j = -e_i for the constraint -x_i \le -(x_l)_i. Here, e_i is a vector with 1 as the i-th component and zeros elsewhere. The variables \lambda^k are the Lagrange multipliers, and B^k is a positive definite approximation to the second derivative \nabla^2 f(x^k).
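
The selection of J_k can be pictured with the following sketch, which collects the indices of inequality constraints whose residuals at x^k fall below a small threshold; the threshold eps and all names are illustrative, and MinConGenLin makes this choice internally.

using System.Collections.Generic;

// Illustrative only: collect the index set J_k of inequality constraints
// (bounds included, rewritten as inequalities a_j x <= b_j) whose residuals
// at x^k are small, i.e. that are nearly active.
static class ActiveSetSketch
{
    public static List<int> SmallResidualIndices(double[][] a, double[] b,
                                                 double[] x, double eps)
    {
        var jk = new List<int>();
        for (int j = 0; j < b.Length; j++)
        {
            double ax = 0.0;
            for (int i = 0; i < x.Length; i++) ax += a[j][i] * x[i];
            double residual = b[j] - ax;      // nonnegative at a feasible x
            if (residual <= eps) jk.Add(j);   // nearly active: enforce a_j d <= 0
        }
        return jk;
    }
}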

After the search direction d^k is obtained, a line search is performed to locate a better point. The new point x^{k+1}= x^k +\alpha^kd^k has to satisfy the conditions

f(x^k + \alpha^k d^k) \le f(x^k) + 0.1\, \alpha^k (d^k)^T \nabla f(x^k)

and

(d^k)^T \nabla f(x^k + \alpha^k d^k) \ge 0.7\, (d^k)^T \nabla f(x^k)
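
These two acceptance tests translate directly into code; the sketch below checks them for a trial step length, using the 0.1 and 0.7 constants from the conditions above. The delegate parameters and helper names are illustrative and not part of the MinConGenLin API.

using System;

// Minimal sketch of the two line-search acceptance tests for a trial step
// x^k + alpha * d^k.
static class LineSearchSketch
{
    static double Dot(double[] u, double[] v)
    {
        double s = 0.0;
        for (int i = 0; i < u.Length; i++) s += u[i] * v[i];
        return s;
    }

    public static bool Accept(Func<double[], double> f,
                              Func<double[], double[]> grad,
                              double[] x, double[] d, double alpha)
    {
        double[] xNew = new double[x.Length];
        for (int i = 0; i < x.Length; i++) xNew[i] = x[i] + alpha * d[i];

        double slope0 = Dot(d, grad(x));                // (d^k)^T grad f(x^k)
        bool sufficientDecrease =
            f(xNew) <= f(x) + 0.1 * alpha * slope0;     // first condition
        bool curvature =
            Dot(d, grad(xNew)) >= 0.7 * slope0;         // second condition
        return sufficientDecrease && curvature;
    }
}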

The main idea in forming the set J_k is that, if any of the equality constraints restricts the step-length \alpha^k, then its index is not in J_k. Therefore, small steps are likely to be avoided.

Finally, the second derivative approximation, B^k, is updated by the BFGS formula, if the condition

\left(d^k\right)^T \left[\nabla f\left(x^k + \alpha^k d^k\right) - \nabla f\left(x^k\right)\right] > 0

holds. Let x^k  \leftarrow x^{k + 1}, and start another iteration.
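
The following sketch shows a standard BFGS update of B^k with s = \alpha^k d^k and y = \nabla f(x^k + \alpha^k d^k) - \nabla f(x^k), applied only when the curvature condition above holds; it illustrates the formula rather than reproducing MinConGenLin's internal code.

// Standard BFGS update of the Hessian approximation B, skipped when the
// curvature condition s'y > 0 fails, so B stays positive definite.
static class BfgsSketch
{
    public static void Update(double[,] B, double[] s, double[] y)
    {
        int n = s.Length;

        double sy = 0.0;                  // s'y (the curvature condition)
        double[] Bs = new double[n];      // B s
        for (int i = 0; i < n; i++)
        {
            sy += s[i] * y[i];
            for (int j = 0; j < n; j++) Bs[i] += B[i, j] * s[j];
        }
        if (sy <= 0.0) return;            // condition fails: skip the update

        double sBs = 0.0;                 // s' B s
        for (int i = 0; i < n; i++) sBs += s[i] * Bs[i];

        // B <- B - (B s)(B s)' / (s' B s) + y y' / (s' y)
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                B[i, j] += -Bs[i] * Bs[j] / sBs + y[i] * y[j] / sy;
    }
}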

The iteration repeats until the stopping criterion

\left\| \nabla f(x^k) - A^k \lambda^k \right\|_2 \le \tau

is satisfied. Here \tau is the supplied tolerance. For more details, see Powell (1988, 1989).
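
The stopping test amounts to a check on the norm of the gradient of the Lagrangian; the sketch below assumes the active constraint normals a_j are supplied as rows of an array and that lambda holds the corresponding multiplier estimates (compare the Tolerance property and GetLagrangeMultiplierEstimate above). Names here are illustrative only.

// Sketch of the first-order stopping test: || grad f(x^k) - sum_j lambda_j a_j ||_2 <= tau.
static class FirstOrderCheckSketch
{
    public static bool Converged(double[] gradF, double[][] a, double[] lambda,
                                 double tau)
    {
        double norm2 = 0.0;
        for (int i = 0; i < gradF.Length; i++)
        {
            double r = gradF[i];
            for (int j = 0; j < lambda.Length; j++) r -= lambda[j] * a[j][i];
            norm2 += r * r;
        }
        return System.Math.Sqrt(norm2) <= tau;
    }
}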
