sparseLinProg¶
Solves a sparse linear programming problem by an infeasible primal-dual interior-point method.
Synopsis¶
sparseLinProg(a, b, c)
Required Arguments¶
a[[]] (Input)
An array of length nza containing the location and value of each nonzero coefficient in the constraint matrix A. Each entry of a is a triple (row index, column index, value), with the value of type float.
b[] (Input)
An array of length m containing the right-hand side of the constraints; if there are limits on both sides of the constraints, then b contains the lower limits of the constraints.
c[] (Input)
An array of length n containing the coefficients of the objective function.
Return Value¶
An array of length n containing the solution x of the linear programming problem. If no solution can be computed, then None is returned.
Optional Arguments¶
constrType, int[] (Input)
An array of length m containing the types of general constraints in the matrix A. Let \(r_i=a_{i1} x_1+\ldots+a_{in} x_n\). Then, the value of constrType[i] signifies the following:

constrType[i]   Constraint
0               \(r_i=b_i\)
1               \(r_i\leq b_i\)
2               \(r_i\geq b_i\)
3               \(b_i\leq r_i \leq upperLimit_i\)
4               Ignore this constraint

Note that constrType[i] = 3 should only be used for constraints i with both a finite lower and a finite upper bound. For one-sided constraints, use constrType[i] = 1 or constrType[i] = 2. For free constraints, use constrType[i] = 4.
Default: constrType = 0

upperLimit, float[] (Input)
An array of length m containing the upper limits of the constraints that have both a lower and an upper bound. If such a constraint exists, then the optional argument constrType must be used to define the types of the constraints. If constrType[i] ≠ 3, i.e. if constraint i is not two-sided, then the corresponding entry in upperLimit, upperLimit[i], is ignored.
Default: None of the constraints has an upper bound.

lowerBound, float[] (Input)
An array of length n containing the lower bounds on the variables. If there is no lower bound on a variable, then \(-10^{30}\) should be set as the lower bound.
Default: lowerBound = 0.

upperBound, float[] (Input)
An array of length n containing the upper bounds on the variables. If there is no upper bound on a variable, then \(10^{30}\) should be set as the upper bound.
Default: None of the variables has an upper bound.

objConstant, float (Input)
Value of the constant term in the objective function.
Default: objConstant = 0.

preordering, int (Input)
The variant of the Minimum Degree Ordering (MDO) algorithm used in the preordering of the normal equations or augmented system matrix:

preordering   Method
0             A variant of the MDO algorithm using pivotal cliques.
1             A variant of George and Liu's Quotient Minimum Degree algorithm.

Default: preordering = 0.

maxIterations, int (Input)
The maximum number of iterations allowed for the primal-dual solver.
Default: maxIterations = 200.

optTol, float (Input)
The relative optimality tolerance.
Default: optTol = 1.0e-10.

prinfTol, float (Input)
The primal infeasibility tolerance.
Default: prinfTol = 1.0e-8.

dlinfTol, float (Input)
The dual infeasibility tolerance.
Default: dlinfTol = 1.0e-8.

t_print, int (Input)
Printing option.

t_print   Action
0         No printing is performed.
1         Prints statistics on the LP problem and the solution process.

Default: t_print = 0.

presolve, int (Input)
Presolve the LP problem in order to reduce the problem size or to detect infeasibility or unboundedness of the problem. Depending on the number of presolve techniques used, different presolve levels can be chosen:

presolve   Description
0          No presolving.
1          Eliminate singleton rows.
2          In addition to 1, eliminate redundant (and forcing) rows.
3          In addition to 2, eliminate dominated variables.
4          In addition to 3, eliminate singleton columns.
5          In addition to 4, eliminate doubleton rows.
6          In addition to 5, eliminate aggregate columns.

Default: presolve = 0.

cscFormat, int aColptr[], int aRowind[], float aValues[] (Input)
Accept the constraint matrix A in Harwell-Boeing format. See Compressed Sparse Column (CSC) Format in the Introduction to this manual for a discussion of this storage scheme. If this optional argument is used, then required argument a is ignored.

terminationStatus (Output)
The termination status for the problem.

terminationStatus   Description
0                   Optimal solution found.
1                   The problem is primal infeasible (or dual unbounded).
2                   The problem is primal unbounded (or dual infeasible).
3                   Suboptimal solution found (accuracy problems).
4                   Iteration limit maxIterations exceeded.
5                   An error occurred outside of the solution phase of the algorithm, e.g., a user input or a memory allocation error.

obj (Output)
Optimal value of the objective function.

iterationCount (Output)
The number of iterations required by the primal-dual solver.

dual (Output)
An internally allocated array of length m containing the dual solution.

primalInfeas (Output)
The violation of the primal constraints, described by err_b, the primal infeasibility of the solution, and by err_u, the violation of the variable bounds.

dualInfeas (Output)
The violation of the dual constraints, described by err_c, the dual infeasibility of the solution.

cpRatioSmallest (Output)
The ratio of the smallest complementarity product to the average.

cpRatioLargest (Output)
The ratio of the largest complementarity product to the average.
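The CSC layout accepted by cscFormat can be illustrated by converting the triplet form of required argument a (here the constraint matrix from Example 1 below) into the three CSC arrays. The helper is not part of pyimsl; it is a minimal sketch of the storage scheme.

```python
# Convert (row, column, value) triplets, as used for required argument `a`,
# into the three CSC arrays expected by cscFormat. Illustrative helper only.

def triplets_to_csc(triplets, ncols):
    """Return (colptr, rowind, values) in compressed sparse column format."""
    # Sort entries by column, then by row, so each column's entries are contiguous.
    entries = sorted(triplets, key=lambda t: (t[1], t[0]))
    colptr = [0] * (ncols + 1)
    rowind = []
    values = []
    for row, col, val in entries:
        colptr[col + 1] += 1     # count entries per column
        rowind.append(row)
        values.append(val)
    for j in range(ncols):
        colptr[j + 1] += colptr[j]  # turn counts into cumulative offsets
    return colptr, rowind, values

# The 3 x 3 constraint matrix of Example 1 in triplet form.
a = [[0, 0, 1.0], [0, 1, 3.0], [1, 1, 2.0], [1, 2, 3.0],
     [2, 0, 1.0], [2, 1, 1.0], [2, 2, 1.0]]
colptr, rowind, values = triplets_to_csc(a, 3)
print(colptr)   # column j occupies entries colptr[j]:colptr[j+1]
print(rowind)
print(values)
```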
Description¶
The function sparseLinProg uses an infeasible primal-dual interior-point method to solve linear programming problems, i.e., problems of the form

\[
\min_{x \in \mathbb{R}^n} c^T x \quad \text{subject to} \quad b_l \leq Ax \leq b_u, \quad x_l \leq x \leq x_u,
\]

where c is the objective coefficient vector, A is the coefficient matrix, and the vectors \(b_l\), \(b_u\), \(x_l\), and \(x_u\) are the lower and upper bounds on the constraints and the variables, respectively.
Internally, sparseLinProg transforms the problem given by the user into a simpler form that is computationally more tractable. After redefining the notation, the new form reads

\[
\min c^T x \quad \text{subject to} \quad Ax = b, \quad x_i + s_i = u_i,\; x_i, s_i \geq 0 \;\text{for}\; i \in I_u, \quad x_j \geq 0 \;\text{for}\; j \in I_s.
\]

Here, \(I_u\cup I_s=\{ 1,\ldots,n \}\) is a partition of the index set \(\{ 1,\ldots,n \}\) into upper bounded and standard variables.
In order to simplify the description, it is assumed in the following that the problem above contains only variables with upper bounds, i.e. it is of the form

\[
(P)\quad \min c^T x \quad \text{subject to} \quad Ax = b,\; x + s = u,\; x, s \geq 0.
\]

The corresponding dual problem is then

\[
(D)\quad \max b^T y - u^T w \quad \text{subject to} \quad A^T y + z - w = c,\; z, w \geq 0.
\]

The Karush-Kuhn-Tucker (KKT) optimality conditions for (P) and (D) are

\[
\begin{aligned}
Ax &= b, &(1.1)\\
x + s &= u, &(1.2)\\
A^T y + z - w &= c, &(1.3)\\
XZe &= 0, &(1.4)\\
SWe &= 0, &(1.5)\\
x, s, z, w &\geq 0,
\end{aligned}
\]

where \(X=\mathit{diag}(x),Z=\mathit{diag}(z),S=\mathit{diag}(s),W=\mathit{diag}(w)\) are diagonal matrices and \(e=(1,\ldots,1)^T\) is a vector of ones.
Function sparseLinProg, like all infeasible interior point methods, generates a sequence

\[
\left( x_k, s_k, y_k, z_k, w_k \right), \quad k = 0, 1, \ldots,
\]

of iterates that satisfy \(\left( x_k,s_k,y_k,z_k,w_k \right)>0\) for all \(k\), but are in general not feasible, i.e. the linear constraints (1.1)-(1.3) are only satisfied in the limiting case \(k\to\infty\).

The barrier parameter \(\mu\), defined by

\[
\mu = \frac{x^T z + s^T w}{2n},
\]

measures how well the complementarity conditions (1.4), (1.5) are satisfied.
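As a small illustration of this definition, the barrier parameter can be computed directly for a made-up strictly positive iterate (sample numbers, not solver output):

```python
import numpy as np

# Barrier parameter mu = (x^T z + s^T w) / (2n) for a problem with n = 3
# variables; the iterate below is illustrative sample data.
x = np.array([1.0, 2.0, 0.5])
z = np.array([0.2, 0.1, 0.4])
s = np.array([4.0, 5.0, 8.5])
w = np.array([0.1, 0.3, 0.2])

n = x.size
mu = (x @ z + s @ w) / (2 * n)
print(mu)
```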
Mehrotra’s predictor-corrector algorithm is a variant of Newton’s method applied to the KKT conditions (1.1)-(1.5). Function sparseLinProg uses a modified version of this algorithm to compute the iterates \(\left( x_k,s_k,y_k,z_k,w_k \right)\). In every step of the algorithm, the search direction vector

\[
\mathit{\Delta} := \left( \mathit{\Delta}x, \mathit{\Delta}s, \mathit{\Delta}y, \mathit{\Delta}z, \mathit{\Delta}w \right)
\]

is decomposed into two parts, \(\mathit{\Delta}=\mathit{\Delta}_a+\mathit{\Delta}_c^\omega\), where \(\mathit{\Delta}_a\) and \(\Delta_c^\omega\) denote the affine-scaling and a weighted centering component, respectively. Here,

\[
\mathit{\Delta}_c^\omega = \left( \omega_P \mathit{\Delta}x_c,\; \omega_P \mathit{\Delta}s_c,\; \omega_D \mathit{\Delta}y_c,\; \omega_D \mathit{\Delta}z_c,\; \omega_D \mathit{\Delta}w_c \right),
\]

where \(\omega_P\) and \(\omega_D\) denote the primal and dual corrector weights, respectively.

The vectors \(\mathit{\Delta}_a\) and \(\mathit{\Delta}_c :=\left( \mathit{\Delta}x_c,\mathit{\Delta}s_c,\mathit{\Delta}y_c,\mathit{\Delta}z_c,\mathit{\Delta}w_c \right)\) are determined by solving the linear system

\[
\begin{bmatrix}
A & 0 & 0 & 0 & 0\\
I & I & 0 & 0 & 0\\
0 & 0 & A^T & I & -I\\
Z & 0 & 0 & X & 0\\
0 & W & 0 & 0 & S
\end{bmatrix}
\begin{bmatrix}
\mathit{\Delta}x\\ \mathit{\Delta}s\\ \mathit{\Delta}y\\ \mathit{\Delta}z\\ \mathit{\Delta}w
\end{bmatrix}
=
\begin{bmatrix}
r_b\\ r_u\\ r_c\\ r_{xz}\\ r_{ws}
\end{bmatrix}
\quad (2)
\]

for two different right-hand sides.
For \(\mathit{\Delta}_a\), the right-hand side is defined as

\[
\left( r_b, r_u, r_c, r_{xz}, r_{ws} \right) = \left( b - Ax,\; u - x - s,\; c - A^T y - z + w,\; -XZe,\; -SWe \right).
\]

Here, \(r_b\) and \(r_u\) are the violations of the primal constraints and \(r_c\) defines the violations of the dual constraints.
The resulting direction \(\mathit{\Delta}_a\) is the pure Newton step applied to the system (1.1)-(1.5).
In order to obtain the corrector direction \(\mathit{\Delta}_c\), the maximum stepsizes \(\alpha_{Pa}\) in the primal and \(\alpha_{Da}\) in the dual space preserving nonnegativity of \((x,s)\) and \((z,w)\), respectively, are determined, and the predicted complementarity gap

\[
g_a = \left( x + \alpha_{Pa} \mathit{\Delta}x_a \right)^T \left( z + \alpha_{Da} \mathit{\Delta}z_a \right) + \left( s + \alpha_{Pa} \mathit{\Delta}s_a \right)^T \left( w + \alpha_{Da} \mathit{\Delta}w_a \right)
\]

is computed. It is then used to determine the barrier parameter

\[
\hat{\mu} = \left( \frac{g_a}{g} \right)^2 \frac{g_a}{2n},
\]

where \(g=x^Tz+s^Tw\) denotes the current complementarity gap.

The direction \(\mathit{\Delta}_c\) is then computed by choosing

\[
\left( r_b, r_u, r_c, r_{xz}, r_{ws} \right) = \left( 0,\; 0,\; 0,\; \hat{\mu}e - \mathit{\Delta}X_a \mathit{\Delta}Z_a e,\; \hat{\mu}e - \mathit{\Delta}S_a \mathit{\Delta}W_a e \right)
\]

as the right-hand side in the linear system (2).
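The predictor bookkeeping above can be sketched in a few lines. All vectors are illustrative sample data, and max_step is a helper written for this sketch, not a pyimsl routine:

```python
import numpy as np

# Maximal stepsizes preserving nonnegativity along the affine direction,
# the predicted gap g_a, and the barrier parameter derived from it.

def max_step(v, dv):
    """Largest alpha in [0, 1] with v + alpha * dv >= 0 (assumes v > 0)."""
    mask = dv < 0
    if not mask.any():
        return 1.0
    return min(1.0, float(np.min(-v[mask] / dv[mask])))

# Made-up strictly positive iterate and affine-scaling direction (n = 2).
x = np.array([1.0, 2.0])
dx = np.array([-2.0, -1.0])
s = np.array([3.0, 1.0])
ds = np.array([-1.0, -0.5])
z = np.array([0.4, 0.2])
dz = np.array([-0.2, -0.1])
w = np.array([0.3, 0.6])
dw = np.array([-0.1, -0.2])

alpha_pa = min(max_step(x, dx), max_step(s, ds))   # primal stepsize
alpha_da = min(max_step(z, dz), max_step(w, dw))   # dual stepsize

g = x @ z + s @ w                                  # current complementarity gap
g_a = (x + alpha_pa * dx) @ (z + alpha_da * dz) \
    + (s + alpha_pa * ds) @ (w + alpha_da * dw)    # predicted complementarity gap
n = x.size
mu_hat = (g_a / g) ** 2 * (g_a / (2 * n))          # Mehrotra barrier parameter
print(alpha_pa, alpha_da, g_a, mu_hat)
```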
Function sparseLinProg now uses a linesearch to find the optimal weight \(\hat{\omega}=\left( \hat{\omega}_P,\hat{\omega}_D \right)\) that maximizes the stepsizes \(\left( \alpha_P,\alpha_D \right)\) in the primal and dual directions of \(\mathit{\Delta}=\mathit{\Delta}_a+\mathit{\Delta}_c^{\omega}\), respectively.

A new iterate is then computed using a step reduction factor \(\alpha_0=0.99995\):

\[
\left( x_{k+1}, s_{k+1}, y_{k+1}, z_{k+1}, w_{k+1} \right) = \left( x_k, s_k, y_k, z_k, w_k \right) + \alpha_0 \left( \alpha_P \mathit{\Delta}x,\; \alpha_P \mathit{\Delta}s,\; \alpha_D \mathit{\Delta}y,\; \alpha_D \mathit{\Delta}z,\; \alpha_D \mathit{\Delta}w \right).
\]
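A minimal sketch of why the damping factor matters: the full maximal step can land a component of x exactly on the boundary, while the reduced step keeps the iterate strictly positive, as the interior-point iteration requires (sample data only):

```python
import numpy as np

alpha_0 = 0.99995                    # step reduction factor used by the solver
x = np.array([1.0, 2.0])             # strictly positive iterate (sample data)
dx = np.array([-2.0, 1.0])           # search direction (sample data)

# Maximal primal stepsize preserving x + alpha * dx >= 0.
neg = dx < 0
alpha_p = min(1.0, float(np.min(-x[neg] / dx[neg])))

x_full = x + alpha_p * dx            # touches the boundary: x_full[0] == 0
x_new = x + alpha_0 * alpha_p * dx   # damped step stays strictly positive

print(x_full[0], x_new[0])
```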
In addition to the weighted Mehrotra predictor-corrector, sparseLinProg
also uses multiple centrality correctors to enlarge the primal-dual
stepsizes per iteration step and to reduce the overall number of iterations
required to solve an LP problem. The maximum number of centrality
corrections depends on the ratio of the factorization and solve efforts for
system (2) and is therefore problem dependent. For a detailed description of
multiple centrality correctors, refer to Gondzio (1994).
The linear system (2) can be reduced to more compact forms, the augmented system (AS)

\[
\begin{bmatrix}
-\Theta^{-1} & A^T\\
A & 0
\end{bmatrix}
\begin{bmatrix}
\mathit{\Delta}x\\ \mathit{\Delta}y
\end{bmatrix}
=
\begin{bmatrix}
r\\ h
\end{bmatrix}
\quad (3)
\]

or further, by elimination of \(\mathit{\Delta}x\), to the normal equations (NE) system

\[
A \Theta A^T \mathit{\Delta}y = A \Theta r + h,
\]

where

\[
\Theta = \left( X^{-1}Z + S^{-1}W \right)^{-1}
\]

and \(r\) and \(h\) are right-hand sides obtained from the residuals in (2).

The matrix on the left-hand side of (3), which is symmetric indefinite, can be transformed into a symmetric quasidefinite matrix by regularization. Since these types of matrices allow for a Cholesky-like factorization, the resulting linear system can be solved easily for \(( \mathit{\Delta} x,\mathit{\Delta} y )\) by triangular substitutions. For more information on the regularization technique, see Altman and Gondzio (1998). For the NE system, matrix \(A \Theta A^T\) is positive definite, and therefore a sparse Cholesky algorithm can be used to factor \(A \Theta A^T\) and solve the system for \(\mathit{\Delta}y\) by triangular substitutions with the Cholesky factor \(L\).
In function sparseLinProg
, both approaches are implemented. The AS
approach is chosen if \(A\) contains dense columns, if there is a
considerable number of columns in \(A\) that are much denser than the
remaining ones or if there are many more rows than columns in the structural
part of \(A\). Otherwise, the NE approach is selected.
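The NE approach can be sketched densely for a tiny example. In the solver, A is sparse and a sparse Cholesky factorization is used, but the algebra is the same; all inputs below are illustrative sample data, using numpy only:

```python
import numpy as np

# Form Theta = (X^{-1}Z + S^{-1}W)^{-1}, factor A Theta A^T by Cholesky, and
# recover dy by two triangular solves. Sample data, dense for readability.
A = np.array([[1.0, 3.0, 0.0],
              [0.0, 2.0, 3.0],
              [1.0, 1.0, 1.0]])
x = np.array([1.0, 2.0, 0.5])
z = np.array([0.2, 0.1, 0.4])
s = np.array([4.0, 5.0, 8.0])
w = np.array([0.1, 0.3, 0.2])
rhs = np.array([1.0, 2.0, 3.0])       # stands in for A Theta r + h

theta = 1.0 / (z / x + w / s)         # diagonal of Theta (all entries > 0)
M = (A * theta) @ A.T                 # A Theta A^T, symmetric positive definite
L = np.linalg.cholesky(M)             # M = L L^T
t = np.linalg.solve(L, rhs)           # solve L t = rhs
dy = np.linalg.solve(L.T, t)          # solve L^T dy = t

print(np.allclose(M @ dy, rhs))
```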
Function sparseLinProg stops with optimal termination status if the current iterate satisfies the following three conditions:

\[
\frac{\| b - Ax \|}{1 + \| b \|} \leq \mathrm{prinfTol}, \qquad
\frac{\| c - A^T y - z + w \|}{1 + \| c \|} \leq \mathrm{dlinfTol}, \qquad
\frac{\left| c^T x - \left( b^T y - u^T w \right) \right|}{1 + \left| c^T x \right|} \leq \mathrm{optTol},
\]

where prinfTol, dlinfTol and optTol are the primal infeasibility, dual infeasibility and optimality tolerances, respectively. The default value is 1.0e-10 for optTol and 1.0e-8 for the two other tolerances.
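These three tests can be evaluated directly. The sketch below constructs an exact primal-dual optimum of a tiny problem in the form (P), so all residuals vanish and the default tolerances are trivially met (illustrative data only):

```python
import numpy as np

# Tiny problem in form (P): min c^T x  s.t.  Ax = b, x + s = u, x, s >= 0.
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 2.0])
u = np.array([2.0, 2.0])

x = np.array([1.0, 0.0])   # primal optimum: put all weight on the cheap variable
s = u - x                  # slacks of the upper bounds
y = np.array([1.0])        # dual optimum of (D)
z = c - A.T @ y            # reduced costs, here (0, 1) >= 0
w = np.zeros(2)            # upper bounds inactive

prinf = np.linalg.norm(b - A @ x) / (1 + np.linalg.norm(b))
dlinf = np.linalg.norm(c - A.T @ y - z + w) / (1 + np.linalg.norm(c))
gap = abs(c @ x - (b @ y - u @ w)) / (1 + abs(c @ x))

print(prinf <= 1.0e-8, dlinf <= 1.0e-8, gap <= 1.0e-10)
```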
Function sparseLinProg is based on the code HOPDM developed by Jacek Gondzio et al.; see the HOPDM User’s Guide (1995).
Examples¶
Example 1¶
The linear programming problem

\[
\begin{aligned}
\min \; & 2x_1 - 8x_2 + 3x_3\\
\text{subject to} \quad & x_1 + 3x_2 \leq 3,\\
& 2x_2 + 3x_3 \leq 6,\\
& x_1 + x_2 + x_3 \geq 2,\\
& -1 \leq x_1 \leq 5, \quad 0 \leq x_2 \leq 7, \quad 0 \leq x_3 \leq 9
\end{aligned}
\]

is solved.
from __future__ import print_function
from numpy import *
from pyimsl.math.sparseLinProg import sparseLinProg
from pyimsl.math.writeMatrix import writeMatrix
m = 2
n = 3
a = [[0, 0, 1.0],
     [0, 1, 3.0],
     [1, 1, 2.0],
     [1, 2, 3.0],
     [2, 0, 1.0],
     [2, 1, 1.0],
     [2, 2, 1.0]]
b = [3.0, 6.0, 2.0]
c = [2.0, -8.0, 3.0]
xlb = [-1.0, 0.0, 0.0]
xub = [5.0, 7.0, 9.0]
irtype = [1, 1, 2]
obj = []
x = sparseLinProg(a, b, c, constrType=irtype,
                  lowerBound=xlb, upperBound=xub,
                  obj=obj)
writeMatrix("x", x)
print("\nObjective: %f" % obj[0])
Output¶
Objective: -6.000000
x
1 2 3
-0.375 1.125 1.250
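The reported solution can be verified independently with numpy: it satisfies all three constraints and the variable bounds, and it reproduces the objective value -6:

```python
import numpy as np

# Independent check of the Example 1 result.
A = np.array([[1.0, 3.0, 0.0],
              [0.0, 2.0, 3.0],
              [1.0, 1.0, 1.0]])
b = np.array([3.0, 6.0, 2.0])
c = np.array([2.0, -8.0, 3.0])
x = np.array([-0.375, 1.125, 1.25])   # solution printed above

r = A @ x
assert r[0] <= b[0] + 1e-12           # constraint type 1: r_1 <= b_1
assert r[1] <= b[1] + 1e-12           # constraint type 1: r_2 <= b_2
assert r[2] >= b[2] - 1e-12           # constraint type 2: r_3 >= b_3
assert np.all(x >= np.array([-1.0, 0.0, 0.0]))   # lower bounds
assert np.all(x <= np.array([5.0, 7.0, 9.0]))    # upper bounds
print(c @ x)
```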
Example 2¶
This example demonstrates how the function readMps
can be used with sparseLinProg
to solve a linear programming problem
defined in an MPS file. The MPS file used in this example is an
uncompressed version of the file ‘afiro
’, available from
http://www.netlib.org/lp/data/.
from __future__ import print_function
from numpy import *
from pyimsl.math.sparseLinProg import sparseLinProg
from pyimsl.math.readMps import readMps
from pyimsl.math.writeMatrix import writeMatrix

# Read the MPS file.
problem = readMps("afiro.mps")
m = problem[0].nrows
n = problem[0].ncolumns

# Set up the constraint matrix.
a = []
nza = problem[0].nonzeros
for k in range(nza):
    a.append([problem[0].constraint[k].row,
              problem[0].constraint[k].col,
              problem[0].constraint[k].val])

# Set up constraint bounds and constraint type array.
irtype = empty(m, dtype=int)
bl = empty(m, dtype=double)
bu = empty(m, dtype=double)
for i in range(m):
    if ((problem[0].lower_range[i] == problem[0].negative_infinity)
            and (problem[0].upper_range[i] == problem[0].positive_infinity)):
        bl[i] = problem[0].negative_infinity
        bu[i] = problem[0].positive_infinity
        irtype[i] = 4
    elif (problem[0].lower_range[i] == problem[0].negative_infinity):
        bl[i] = problem[0].upper_range[i]
        bu[i] = problem[0].positive_infinity
        irtype[i] = 1
    elif (problem[0].upper_range[i] == problem[0].positive_infinity):
        bl[i] = problem[0].lower_range[i]
        bu[i] = problem[0].positive_infinity
        irtype[i] = 2
    else:
        if (problem[0].lower_range[i] == problem[0].upper_range[i]):
            bl[i] = problem[0].lower_range[i]
            bu[i] = problem[0].positive_infinity
            irtype[i] = 0
        else:
            bl[i] = problem[0].lower_range[i]
            bu[i] = problem[0].upper_range[i]
            irtype[i] = 3

# Set up variable bounds. Be sure to account for
# how unbounded variables should be set.
xlb = empty(n, dtype=double)
xub = empty(n, dtype=double)
c = empty(n, dtype=double)
for i in range(n):
    if problem[0].lower_bound[i] == problem[0].negative_infinity:
        xlb[i] = -1.0e30
    else:
        xlb[i] = problem[0].lower_bound[i]
    if problem[0].upper_bound[i] == problem[0].positive_infinity:
        xub[i] = 1.0e30
    else:
        xub[i] = problem[0].upper_bound[i]
    c[i] = problem[0].objective[i]

# Solve the LP problem.
obj = []
x = sparseLinProg(a, bl, c,
                  upperLimit=bu,
                  constrType=irtype,
                  lowerBound=xlb, upperBound=xub,
                  obj=obj, presolve=6)

print("Problem Name: ", problem[0].name.decode("utf-8"))
print("objective : %15.10e" % obj[0])
writeMatrix("Solution", x)
Output¶
Problem Name: AFIRO
objective : -4.6475314284e+02
Solution
1 2 3 4 5 6
80.0 25.5 54.5 84.8 65.4 0.0
7 8 9 10 11 12
0.0 0.0 0.0 0.0 0.0 0.0
13 14 15 16 17 18
18.2 47.2 69.4 500.0 475.9 24.1
19 20 21 22 23 24
0.0 215.0 141.7 0.0 0.0 0.0
25 26 27 28 29 30
0.0 0.0 0.0 0.0 339.9 242.3
31 32
60.9 0.0
Warning Errors¶
IMSL_ALL_FEAS_SOLS_OPTIMAL
The coefficients of the objective function are all equal to zero. Every feasible solution is also optimal.
IMSL_SUBOPTIMAL_SOL_FOUND
A suboptimal solution was found after # iterations.
IMSL_MAX_ITERATIONS_REACHED_1
The maximum number of iterations was reached. The best answer will be returned. "#" = # was used; a larger value may help complete the algorithm.
Fatal Errors¶
IMSL_PRIMAL_UNBOUNDED
The primal problem is unbounded.
IMSL_PRIMAL_INFEASIBLE
The primal problem is infeasible.
IMSL_DUAL_INFEASIBLE
The dual problem is infeasible.
IMSL_INIT_SOL_INFEASIBLE
The initial solution for the one-row linear program is infeasible.
IMSL_PROB_UNBOUNDED
The problem is unbounded.
IMSL_DIAG_WEIGHT_TOO_SMALL
The diagonal element #[#] = # of the diagonal weight matrix # is too small.
IMSL_CHOL_FAC_ACCURACY
The Cholesky factorization failed because of accuracy problems.