Using MPI Routines
Users of the IMSL Fortran Numerical Library benefit from a standard Message Passing Interface (MPI) environment, which is needed to accomplish parallel computing within parts of the Library. Either of the icons above alerts the reader when this is the case. If parallel computing is not required, the IMSL Library suite of dummy MPI routines can be substituted for the standard MPI routines; every MPI routine called by the IMSL Library has a counterpart in this dummy suite. Warning messages appear if a code or example requires more than one process to execute. Typically, users need not be aware of the parallel codes.
NOTE: A standard MPI environment is not part of the IMSL Fortran Numerical Library. The standard includes a library of MPI Fortran and C routines, MPI “include” files, usage documentation, and other run-time utilities.
Details on linking to the appropriate libraries are explained in the online README file of the product distribution.
There are three situations of MPI usage in the IMSL Fortran Numerical Library:
1. Some computations performed with the ‘box’ data type benefit from parallel processing. For computations involving a single array or a single problem, the IMSL Library makes no use of parallel processing or MPI codes. The box data type implies that several problems of the same size and type are to be computed and solved; each rack of the box is an independent problem, so each problem could potentially be solved in parallel. By default, a box data type calculation is performed by a single processor, which solves the problems one after the other. If this is acceptable, there need be no further concern about which version of the libraries is used for linking. If the problems are to be solved in parallel, the user must link with a working version of an MPI Library and the appropriate IMSL Library. Examples demonstrating the use of box type data may be found in Chapter 10, “Linear Algebra Operators and Generic Functions”; a brief sketch also follows the note below.
NOTE: Box data type routines are marked with the MPI Capable icon.
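The following is a minimal sketch of a box computation, assuming the Chapter 10 `linear_operators` module and its `.ix.` (inverse matrix times) operator accept rank-3 (box) arguments as described there; the problem sizes and the use of `random_number` for test data are illustrative assumptions, not part of the Library documentation.

```fortran
! Sketch: several same-size linear systems stored as a box (rank-3) array.
program box_sketch
  use linear_operators   ! IMSL Chapter 10 operator module (assumed available)
  implicit none
  integer, parameter :: n = 32, nracks = 8
  real(kind(1e0)) :: A(n,n,nracks), b(n,1,nracks), x(n,1,nracks)

  call random_number(A)  ! illustrative test data only
  call random_number(b)

  ! Each rack A(:,:,k) is an independent problem.  Linked with the dummy
  ! MPI suite, the racks are solved one after another on one processor;
  ! linked with a working MPI Library, they may be solved in parallel.
  x = A .ix. b           ! solve A(:,:,k) x(:,:,k) = b(:,:,k) for every rack k
end program box_sketch
```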
2. Various routines in Chapter 1, “Linear Systems” allow the user to interface with the ScaLAPACK Library routines. If the user chooses to run on only one processor, these routines utilize either IMSL Library code or LAPACK Library code, depending on the libraries the user chooses during linking. If the user chooses to run on multiple processors, working versions of MPI, ScaLAPACK, PBLAS, and BLACS must be present. These routines are marked with the MPI Capable icon; the sketch after this item shows the standard MPI startup such a run presupposes.
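For reference, the sketch below shows the standard MPI startup and shutdown that any multi-processor run presupposes; only standard MPI calls appear, and the comments mark where the IMSL/ScaLAPACK interface work would go. This is an illustrative sketch, not an IMSL-prescribed template.

```fortran
! Sketch: standard MPI startup/shutdown around a multi-processor run.
program mpi_startup_sketch
  use mpi
  implicit none
  integer :: ierr, rank, nprocs

  call MPI_INIT(ierr)
  call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

  ! With nprocs == 1, the Chapter 1 interface routines fall back to IMSL
  ! or LAPACK code; with nprocs > 1, working versions of ScaLAPACK,
  ! PBLAS, and BLACS must also be present.
  ! ... set up the process grid and call the interface routines here ...

  call MPI_FINALIZE(ierr)
end program mpi_startup_sketch
```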
3. Some routines or operators in the Library require a working MPI Library in order to run; examples are the large-scale parallel solvers and the ScaLAPACK utilities. Routines of this type are marked with the MPI Required icon. For these routines, the user must link with a working version of an MPI Library and the appropriate IMSL Library; a run-time check of this precondition is sketched below.
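As a hedged illustration, the standard MPI_INITIALIZED call can confirm at run time that an MPI environment is actually active before an MPI Required routine is invoked; the subroutine name and stop message here are illustrative choices, not part of the Library.

```fortran
! Sketch: confirm MPI is running before calling an MPI Required routine
! (for example, a large-scale parallel solver).
subroutine require_mpi()
  use mpi
  implicit none
  logical :: running
  integer :: ierr

  call MPI_INITIALIZED(running, ierr)
  if (.not. running) then
    stop 'MPI Required routine called without a working MPI library.'
  end if
end subroutine require_mpi
```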
In all cases described above, it is the user’s responsibility to supply working versions of the aforementioned third-party libraries when those libraries are required.
Table 1 below lists, by chapter, the IMSL routines that call MPI routines or the replacement non-parallel package.
Table 1 — IMSL Routines Calling MPI Routines or Replacement Non-Parallel Package
| Chapter Name and Number | Routine with MPI Utilized |
|---|---|
| Linear Systems, 1 | PARALLEL_NONNEGATIVE_LSQ |
| Linear Systems, 1 | PARALLEL_BOUNDED_LSQ |
| Linear Systems, 1 | Those routines which utilize ScaLAPACK, listed in Table D below |
| Linear Algebra Operators and Generic Functions, 10 | |
| Utilities, 11 | ScaLAPACK_SETUP |
| Utilities, 11 | ScaLAPACK_GETDIM |
| Utilities, 11 | ScaLAPACK_READ |
| Utilities, 11 | ScaLAPACK_WRITE |
| Utilities, 11 | ScaLAPACK_MAP |
| Utilities, 11 | ScaLAPACK_UNMAP |
| Utilities, 11 | ScaLAPACK_EXIT |
| Reference Material | Entire Error Processor Package for IMSL Library, if MPI is utilized |