linSolGenCoordinateComplex¶
Solves a system of linear equations \(Ax = b\), with sparse complex coefficient matrix A. Using optional arguments, any of several related computations can be performed. These extra tasks include returning the LU factorization of A, computing the solution of \(Ax = b\) given an LU factorization, setting drop tolerances, and controlling iterative refinement.
Synopsis¶
linSolGenCoordinateComplex (a, b)
Required Arguments¶
a[[]] (Input)
    Vector of length nz containing the location and value of each nonzero entry in the matrix.
complex b (Input)
    Vector of length n containing the right-hand side.
Return Value¶
The solution x of the sparse linear system \(Ax = b\). If no solution was computed, then None is returned.
Optional Arguments¶
returnSparseLuFactor (Output)
    A structure. The contents are initialized to point to the LU factorization by linSolGenCoordinateComplex.
supplySparseLuFactor, structure (Input)
    A structure containing the LU factorization of the input matrix computed by linSolGenCoordinateComplex with the returnSparseLuFactor option.
freeSparseLuFactor
    Before returning, free the linked-list data structure containing the LU factorization of A. Use this option only if the factors are no longer required.
returnSparseLuInCoord, luCoordinate[[]], rowPivots, colPivots (Output)
    The LU factorization is returned in coordinate form. The disadvantage is that during a solveOnly call, the internal representation of the factor must be reconstructed. If, however, the factor is to be stored after the program exits and loaded again in a subsequent run, the combination of returnSparseLuInCoord and supplySparseLuInCoord is probably the best choice, since the factors are in a format that is simple to store and read.
supplySparseLuInCoord, luCoordinate[[]], rowPivots, colPivots (Input)
    Supply the LU factorization in coordinate form. See returnSparseLuInCoord for a description.
factorOnly
    Compute the LU factorization of the input matrix and return. The argument b is ignored.
solveOnly
    Solve \(Ax = b\) given the LU factorization of A. This option requires the use of option supplySparseLuFactor or supplySparseLuInCoord.
transpose
    Solve the problem \(A^Tx = b\). This option can be used in conjunction with either of the options that supply the factorization.
condition (Output)
    Estimate the \(L_1\) condition number of A and return it in the variable condition.
pivotingStrategy, int (Input)
    Select the pivoting strategy by setting pivotingStrategy to one of the following: IMSL_ROW_MARKOWITZ, IMSL_COLUMN_MARKOWITZ, or IMSL_SYMMETRIC_MARKOWITZ.
    Default: IMSL_SYMMETRIC_MARKOWITZ.
numberOfSearchRows, int (Input)
    The number of rows that have the least number of nonzero elements that will be searched for a pivot element.
    Default: numberOfSearchRows = 3
iterativeRefinement
    Select this option if iterative refinement is desired.
dropTolerance, float (Input)
    Possible fill-in is checked against this tolerance. If the absolute value of the new element is less than dropTolerance, it will be discarded.
    Default: dropTolerance = 0.0
hybridFactorization (Input)
    A list with two elements, hybridFactorization[0] = density and hybridFactorization[1] = orderBound. Enable the code to switch to a dense factorization method when the density of the active submatrix reaches 0.0 ≤ density ≤ 1.0 and the order of the active submatrix is less than or equal to orderBound.
growthFactorLimit, float (Input)
    The computation stops if the growth factor exceeds growthFactorLimit.
    Default: growthFactorLimit = 1.e16
growthFactor (Output)
    growthFactor is calculated as the largest element in absolute value at any stage of the Gaussian elimination divided by the largest element in absolute value in A.
smallestPivot (Output)
    The value of the pivot element of smallest magnitude.
numNonzerosInFactor (Output)
    A scalar containing the total number of nonzeros in the factor.
cscFormat, int colPtr, int rowInd, complex values (Input)
    Accept the coefficient matrix in Compressed Sparse Column (CSC) format. See the main Introduction chapter at the beginning of this manual for a discussion of this storage scheme.
memoryBlockSize, int (Input)
    If space must be allocated for fill-in, allocate enough space for memoryBlockSize new nonzero elements.
    Default: memoryBlockSize = nz
Description¶
The function linSolGenCoordinateComplex solves a system of linear equations
\(Ax = b\), where A is sparse and complex. In its default use, it solves the
so-called one-off problem by first performing an LU factorization of A
using the improved generalized symmetric Markowitz pivoting scheme. The
factor L is not stored explicitly because the saxpy
operations
performed during the elimination are extended to the right-hand side, along
with any row interchanges. Thus, the system \(Ly = b\) is solved
implicitly. The factor U is then passed to a triangular solver which
computes the solution x from \(Ux = y\).
If a sequence of systems \(Ax = b\) are to be solved where A is
unchanged, it is usually more efficient to compute the factorization once,
and perform multiple forward and back solves with the various right-hand
sides. In this case the factor L is explicitly stored and a record of all
row as well as column interchanges is made. The solve step then solves the
two triangular systems \(Ly = b\) and \(Ux = y\). The user specifies
either the returnSparseLuFactor
or the returnSparseLuInCoord
option to
retrieve the factorization, then calls the function subsequently with
different right-hand sides, passing the factorization back in using either
supplySparseLuFactor
or supplySparseLuInCoord
in conjunction with
solveOnly
. If returnSparseLuFactor
is used, the final call to
linSolGenCoordinateComplex
should include freeSparseLuFactor
to release the
heap used to store L and U.
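For instance, a minimal sketch of this factor-once, solve-many pattern (a is a coordinate-form matrix and b1, b2 are two placeholder right-hand sides; the calling conventions mirror Example 2 below):
from pyimsl.math.linSolGenCoordinateComplex import linSolGenCoordinateComplex

# Factor A once; the LU factorization is returned through lu_factor
lu_factor = []
x1 = linSolGenCoordinateComplex(a, b1, returnSparseLuFactor=lu_factor)

# Reuse the factorization for another right-hand side; on this final
# solve, also release the heap used to store L and U
x2 = linSolGenCoordinateComplex(a, b2,
                                supplySparseLuFactor=lu_factor[0],
                                freeSparseLuFactor=True,
                                solveOnly=True)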
If the solution to \(A^Tx = b\) is required, specify the option
transpose
. This keyword only alters the forward elimination and back
substitution so that the operations \(U^Ty = b\) and \(L^Tx = y\) are
performed to obtain the solution. So, with one call to produce the
factorization, solutions to both \(Ax = b\) and \(A^Tx = b\) can be
obtained.
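As a sketch (continuing from the factorization sketch above, with lu_factor already computed), both systems can be solved from the single factorization:
# Solve Ax = b using the stored factorization
x = linSolGenCoordinateComplex(a, b,
                               supplySparseLuFactor=lu_factor[0],
                               solveOnly=True)

# Solve the transposed system (A^T)x = b with the same factorization
xt = linSolGenCoordinateComplex(a, b,
                                supplySparseLuFactor=lu_factor[0],
                                solveOnly=True,
                                transpose=True)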
The option condition
is used to calculate and return an estimation of the
\(L_1\) condition number of A. The algorithm used is due to Higham.
Specification of condition
causes a complete L to be computed and
stored, even if a one-off problem is being solved. This is because
Higham's method requires solutions of problems of the form \(Az = r\)
and \(A^Tz = r\).
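A brief sketch, assuming the estimate is returned through an empty list passed as condition, in the same way the other output-style optional arguments here are returned:
# Solve Ax = b and request an estimate of the L1 condition number of A
cond = []
x = linSolGenCoordinateComplex(a, b, condition=cond)
print("estimated L1 condition number: %e" % cond[0])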
The default pivoting strategy is symmetric Markowitz. If a row or column
oriented problem is encountered, there may be some reduction in fill-in by
selecting either IMSL_ROW_MARKOWITZ
or IMSL_COLUMN_MARKOWITZ
. The
Markowitz strategy will search a preselected number of rows or columns for
pivot candidates. The default number is three, but this can be changed by
using numberOfSearchRows
.
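For example, a sketch that widens the pivot search (the value 5 is arbitrary and used only for illustration):
# Search the five rows/columns with the fewest nonzeros for pivot
# candidates instead of the default three
x = linSolGenCoordinateComplex(a, b, numberOfSearchRows=5)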
The option dropTolerance
can be used to set a tolerance which can reduce
fill-in. This works by preventing any new fill element which has magnitude
less than the specified drop tolerance from being added to the
factorization. Since this can introduce substantial error into the
factorization, it is recommended that iterativeRefinement
be used to
recover more accuracy in the final solution. The trade-off is between space
savings from the drop tolerance and the extra time needed in repeated solve
steps needed for refinement.
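A sketch of this combination (the drop tolerance 1.0e-3 is a placeholder value, not a recommendation):
# Discard fill-in whose magnitude falls below the drop tolerance, then
# apply iterative refinement to recover accuracy in the solution
x = linSolGenCoordinateComplex(a, b,
                               dropTolerance=1.0e-3,
                               iterativeRefinement=True)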
The function linSolGenCoordinateComplex
provides the option of switching to a
dense factorization method at some point during the decomposition. This
option is enabled by choosing hybridFactorization
. One of the two values
required by this option, density
, specifies a minimum density for the
active submatrix before a format switch will occur. A density of 1.0
indicates complete fill-in. The other value, orderBound
, places an upper
bound on the order of the active submatrix which will be converted to dense
format. This is used to prevent a switch from occurring too early, possibly
when the \(\mathcal{O}(n^3)\) nature of the dense factorization will cause
performance degradation. Note that this option can significantly increase
heap storage requirements.
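A sketch of enabling the switch (the density 0.4 and order bound 500 are illustrative values only):
# Switch to dense factorization once the active submatrix is at least
# 40% dense and its order is 500 or less
x = linSolGenCoordinateComplex(a, b, hybridFactorization=[0.4, 500])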
Examples¶
Example 1¶
As an example, consider the following matrix:

\[A = \begin{bmatrix}
10+7i & 0 & 0 & 0 & 0 & 0 \\
0 & 3+2i & -3 & -1+2i & 0 & 0 \\
0 & 0 & 4+2i & 0 & 0 & 0 \\
-2-4i & 0 & 0 & 1+6i & -1+3i & 0 \\
-5+4i & 0 & 0 & -5 & 12+2i & -7+7i \\
-1+12i & -2+8i & 0 & 0 & 0 & 3+7i
\end{bmatrix}\]

Let

\[x^T = (1+i, 2+2i, 3+3i, 4+4i, 5+5i, 6+6i)\]

so that

\[Ax = (3+17i, -19+5i, 6+18i, -38+32i, -63+49i, -57+83i)^T\]
from __future__ import print_function
from numpy import *
from pyimsl.math.linSolGenCoordinateComplex import linSolGenCoordinateComplex
from pyimsl.math.writeMatrixComplex import writeMatrixComplex
a = [[0, 0, (10.0 + 7.0j)],
[1, 1, (3.0 + 2.0j)],
[1, 2, (-3.0 + 0.0j)],
[1, 3, (-1.0 + 2.0j)],
[2, 2, (4.0 + 2.0j)],
[3, 0, (-2.0 - 4.0j)],
[3, 3, (1.0 + 6.0j)],
[3, 4, (-1.0 + 3.0j)],
[4, 0, (-5.0 + 4.0j)],
[4, 3, (-5.0 + 0.0j)],
[4, 4, (12.0 + 2.0j)],
[4, 5, (-7.0 + 7.0j)],
[5, 0, (-1.0 + 12.0j)],
[5, 1, (-2.0 + 8.0j)],
[5, 5, (3.0 + 7.0j)]]
b = array([(3.0 + 17.0j), (-19.0 + 5.0j), (6.0 + 18.0j),
(-38.0 + 32.0j), (-63.0 + 49.0j), (-57.0 + 83.0j)])
x = linSolGenCoordinateComplex(a, b)
print(x)
writeMatrixComplex("solution", x, column=True)
Output¶
[1.+1.j 2.+2.j 3.+3.j 4.+4.j 5.+5.j 6.+6.j]
solution
1 ( 1, 1)
2 ( 2, 2)
3 ( 3, 3)
4 ( 4, 4)
5 ( 5, 5)
6 ( 6, 6)
Example 2¶
This example sets \(A = E(1000, 10)\). A linear system is solved and the LU factorization returned. Then a second linear system is solved using the same coefficient matrix A just factored. Maximum absolute errors and execution time ratios are printed showing that forward and back solves take a small percentage of the computation time of a factor and solve. This ratio can vary greatly, depending on the order of the coefficient matrix, the initial number of nonzeros, and especially on the amount of fill-in produced during the elimination. Be aware that timing results are highly machine dependent.
from __future__ import print_function
from numpy import *
from pyimsl.math.ctime import ctime
from pyimsl.math.generateTestCoordinateComplex import generateTestCoordinateComplex
from pyimsl.math.linSolGenCoordinateComplex import linSolGenCoordinateComplex
from pyimsl.math.matMulRectCoordinateComplex import matMulRectCoordinateComplex
from pyimsl.math.vectorNormComplex import vectorNormComplex
n = 1000
c = 10
# Get the coefficient matrix
a = generateTestCoordinateComplex(n, c)
# Set two different predetermined solutions
mod_five = empty(n, dtype=complex)
mod_ten = empty(n, dtype=complex)
for i in range(0, n, 1):
    mod_five[i] = ((i % 5) + 0.0j)
    mod_ten[i] = ((i % 10) + 0.0j)
# Choose b so that x will approximate mod_five
b = matMulRectCoordinateComplex("A*x",
aMatrix={'nrowa': n, 'ncola': n, 'a': a},
xVector=mod_five)
# Time the factor/solve
time_factor_solve = ctime()
lu_factor = []
x = linSolGenCoordinateComplex(a, b,
returnSparseLuFactor=lu_factor)
time_factor_solve = ctime() - time_factor_solve
# Compute max absolute error
index = []
error_factor_solve = vectorNormComplex(x,
secondVector=mod_five,
infNorm=index)
# Get new right hand side -- b = A * mod_ten
b = matMulRectCoordinateComplex("A*x",
aMatrix={'nrowa': n, 'ncola': n, 'a': a},
xVector=mod_ten)
# Use the previously computed factorization
# to solve Ax = b
time_solve = ctime()
x = linSolGenCoordinateComplex(a, b,
supplySparseLuFactor=lu_factor[0],
freeSparseLuFactor=True,
solveOnly=True)
time_solve = ctime() - time_solve
error_solve = vectorNormComplex(x,
secondVector=mod_ten,
infNorm=index)
# Print errors and ratio of execution times
print("absolute error (factor/solve) = %e" % error_factor_solve)
print("absolute error (solve) = %e" % error_solve)
print("time_solve/time_factor_solve = %f" % (time_solve / time_factor_solve))
Output¶
absolute error (factor/solve) = 5.330972e-15
absolute error (solve) = 1.332271e-14
time_solve/time_factor_solve = 0.164138