Dense Matrix Parallelism Using MPI
General Remarks
The central theme we use for the computing functions of the box data type is that of delivering results to a distinguished node of the machine. One of the design goals was to shield much of the complexity of distributed computing from the user.
The nodes are numbered by their “ranks.” Each node has rank value MP_RANK. There are MP_NPROCS nodes, so MP_RANK = 0, 1,...,MP_NPROCS-1. The root node has MP_RANK = 0. Most of the elementary MPI material is found in Gropp, Lusk, and Skjellum (1994) and Snir, Otto, Huss-Lederman, Walker, and Dongarra (1996). Although IMSL Fortran Numerical Library users are for the most part shielded from the complexity of MPI, it is desirable for some users to learn this important topic. Users should become familiar with any referenced MPI routines and the documentation of their usage. MPI routines are not discussed here, because that is best found in the above references.
The IMSL Fortran Numerical Library algorithm for allocating the racks of the box to the processors consists of creating a schedule for the processors, followed by communication and execution of this schedule. The efficiency may be improved by using the nodes according to a specific priority order. This order can reflect information such as a machine on the network more powerful than the user's workstation, or even transient network behavior. The IMSL Fortran Numerical Library allows users to define this order, but a default order is provided. A setup function establishes an order based on timing matrix products of a size given by the user. See Parallel Example 4 for an illustration of this usage.
Getting Started with Modules MPI_setup_int and MPI_node_int
The MPI_setup_int and MPI_node_int modules are part of the IMSL Fortran Numerical Library and not part of MPI itself. Following a call to the function MP_SETUP(), the module MPI_node_int will contain information about the number of processors, the rank of a processor, the communicator for IMSL Fortran Numerical Library, and the usage priority order of the node machines. Since MPI_node_int is used by MPI_setup_int, it is not necessary to explicitly use this module. If neither MP_SETUP() nor MPI_Init() is called, then the box data type will compute entirely on one node. No routine from MPI will be called.
MODULE MPI_NODE_INT
INTEGER, ALLOCATABLE :: MPI_NODE_PRIORITY(:)   ! node usage priority order
INTEGER, SAVE :: MP_LIBRARY_WORLD = huge(1)    ! handle of the library communicator
LOGICAL, SAVE :: MPI_ROOT_WORKS = .TRUE.       ! does the root share in the computation?
INTEGER, SAVE :: MP_RANK = 0, MP_NPROCS = 1    ! this node's rank; number of nodes
END MODULE
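The skeleton below is an illustrative sketch of where the setup call belongs in a program unit; it assumes that the integer result of MP_SETUP() is the number of nodes and may be assigned to the module variable MP_NPROCS, and that the 'Final' call described later in this section is the matching shutdown step.

PROGRAM SKELETON
USE MPI_SETUP_INT               ! MPI_NODE_INT is used by this module
IMPLICIT NONE
MP_NPROCS = MP_SETUP()          ! initialize MPI, MP_LIBRARY_WORLD, MP_RANK, MP_NPROCS
IF (MP_RANK == 0) WRITE (*,*) 'Number of nodes:', MP_NPROCS
!  Box data type computations, executed by every node, go here.
MP_NPROCS = MP_SETUP('Final')   ! print pending error messages and finalize MPI
END PROGRAM SKELETON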
When the function MP_SETUP() is called with no arguments, the following events occur:
If MPI has not been initialized, it is first initialized. This step uses the routines MPI_Initialized() and possibly MPI_Init(). Users who choose not to call MP_SETUP() must make the required initialization call before using any IMSL Fortran Numerical Library code that relies on MPI for its execution. If the user's code calls an IMSL Fortran Numerical Library function utilizing the box data type and MPI has not been initialized, then the computations are performed on the root node. The only MPI routine always called in this context is MPI_Initialized(). The name MP_SETUP is pushed onto the subprogram or call stack.
If MP_LIBRARY_WORLD equals its initial value (= huge(1)), then MPI_COMM_WORLD, the default MPI communicator, is duplicated and the duplicate becomes the handle stored in MP_LIBRARY_WORLD. This uses the routine MPI_Comm_dup(). Users can change the handle of MP_LIBRARY_WORLD as required by their application code, but often this issue can be ignored.
The integers MP_RANK and MP_NPROCS are, respectively, the node's rank and the number of nodes in the communicator MP_LIBRARY_WORLD. Their values are obtained with the routines MPI_Comm_rank() and MPI_Comm_size(). The default values are important when MPI is not initialized and a box data type is computed. In that case the root node is the only node and it does all the work. No MPI communication routines are called while computing the box data type functions when MP_NPROCS = 1. A program can temporarily assign this value to force box data type computation entirely at the root node. This is desirable for problems where using many nodes would be less efficient than using the root node exclusively.
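The following illustrative sketch forces a small problem onto the root node by temporarily assigning MP_NPROCS = 1; saving and restoring the value is shown as reasonable practice rather than a documented requirement.

PROGRAM ROOT_ONLY_SKETCH
USE MPI_SETUP_INT
IMPLICIT NONE
INTEGER :: SAVED_NPROCS
MP_NPROCS = MP_SETUP()
SAVED_NPROCS = MP_NPROCS
MP_NPROCS = 1                   ! box data type functions now run entirely at the root
!  A small box data type problem would be computed here, with no MPI communication.
MP_NPROCS = SAVED_NPROCS        ! restore parallel operation
MP_NPROCS = MP_SETUP('Final')
END PROGRAM ROOT_ONLY_SKETCH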
The array MPI_NODE_PRIORITY(:) is unallocated unless the user allocates it. If it is allocated, the IMSL Fortran Numerical Library codes use this array for assigning tasks to processors. If it is not allocated, the default priority of the nodes is (0, 1, ..., MP_NPROCS-1). Use of the function call MP_SETUP(N) allocates the array, as explained below. Once the array is allocated, its size is MP_NPROCS. The contents of the array are a permutation of the integers 0, ..., MP_NPROCS-1. Nodes appearing at the start of the list are used first for parallel computing. A node other than the root can avoid any computing, except receiving the schedule, by setting MPI_NODE_PRIORITY(I) < 0. This means that node |MPI_NODE_PRIORITY(I)| will be sent the task schedule but will not perform any significant work as part of box data type function evaluations.
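The next sketch, also illustrative only, allocates MPI_NODE_PRIORITY(:) directly rather than through MP_SETUP(N) and excludes node 1 from significant work by negating its entry; the choice of node 1 and the use of the default ordering are assumptions made for the example.

PROGRAM PRIORITY_SKETCH
USE MPI_SETUP_INT
IMPLICIT NONE
INTEGER :: I
MP_NPROCS = MP_SETUP()
IF (.NOT. ALLOCATED(MPI_NODE_PRIORITY)) ALLOCATE (MPI_NODE_PRIORITY(MP_NPROCS))
MPI_NODE_PRIORITY = (/ (I, I = 0, MP_NPROCS-1) /)   ! default order 0,...,MP_NPROCS-1
IF (MP_NPROCS > 1) MPI_NODE_PRIORITY(2) = -1        ! node 1 gets the schedule but no work
!  The array would normally be broadcast so that all nodes agree on the priority order.
!  Box data type computations go here.
MP_NPROCS = MP_SETUP('Final')
END PROGRAM PRIORITY_SKETCH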
The LOGICAL flag MPI_ROOT_WORKS designates whether or not the root node participates in the major computation of the tasks. The root node communicates with the other nodes to complete the tasks but can be designated to do no other work. Since there may be only one processor, this flag has the default value .TRUE., assuring that at least one node exists to do work. When more than one processor is available, users can consider assigning MPI_ROOT_WORKS = .FALSE.. This is desirable when the alternate nodes have computational resources equal to or greater than those of the root node. Parallel Example 4 illustrates this usage: a single problem, stored as a box data type with one rack, is computed at the non-root node with the highest priority. This example requires more than one processor, since the root does no work.
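A minimal sketch of idling the root follows; the test of MP_NPROCS guards against the single-processor case, in which the default .TRUE. must stand.

PROGRAM ROOT_IDLE_SKETCH
USE MPI_SETUP_INT
IMPLICIT NONE
MP_NPROCS = MP_SETUP()
IF (MP_NPROCS > 1) MPI_ROOT_WORKS = .FALSE.   ! alternate nodes do the significant work
!  Box data type computations go here; the root only communicates.
MP_NPROCS = MP_SETUP('Final')
END PROGRAM ROOT_IDLE_SKETCH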
When the generic function MP_SETUP(N) is called, where N is a positive integer, a call to MP_SETUP() is first made, using no argument. Use just one of these calls to MP_SETUP(). This initializes the MPI system and the other parameters described above. The array MPI_NODE_PRIORITY(:) is allocated with size MP_NPROCS. Then DOUBLE PRECISION matrix products C = AB, where A and B are N by N matrices, are computed at each node and the elapsed time is recorded. These elapsed times are sorted and the contents of MPI_NODE_PRIORITY(:) are permuted in accordance with the shortest times yielding the highest priority. All the nodes in the communicator MP_LIBRARY_WORLD are timed. The array MPI_NODE_PRIORITY(:) is then broadcast from the root to the remaining nodes of MP_LIBRARY_WORLD using the routine MPI_Bcast(). Timing matrix products to define the node priority is relevant because the effort to compute C is comparable to that of many linear algebra computations of similar size. Users are free to define their own node priority and broadcast the array MPI_NODE_PRIORITY(:) to the alternate nodes in the communicator.
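A sketch of the timed setup call is shown below; the matrix order 500 is an arbitrary illustrative value that trades timing accuracy against setup cost.

PROGRAM TIMED_SETUP_SKETCH
USE MPI_SETUP_INT
IMPLICIT NONE
!  Time 500 by 500 matrix products on every node of MP_LIBRARY_WORLD and
!  order MPI_NODE_PRIORITY(:) so that the fastest nodes are used first.
MP_NPROCS = MP_SETUP(500)
!  Box data type computations go here.
MP_NPROCS = MP_SETUP('Final')
END PROGRAM TIMED_SETUP_SKETCH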
To print any IMSL Fortran Numerical Library error messages that have occurred at any node, and to finalize MPI, use the function call MP_SETUP('Final'). The case of the string 'Final' is not important. Any pending error messages are printed on the root node and then discarded. This is triggered by popping the name 'MP_SETUP' from the subprogram stack or by returning to Level 1 in the stack. Users can obtain error messages by popping the stack to Level 1 and still continue with MPI calls. This requires executing call e1pop ('MP_SETUP'). To continue after summarizing the errors, execute call e1psh ('MP_SETUP'). More details about the error processor are found in the Reference Material chapter of this manual.
Messages are printed by the nodes in order of decreasing rank, ending with the root node. MP_SETUP('Final') calls the routine MPI_Finalize(), which shuts down MPI. After MPI_Finalize() is called, the value of MP_NPROCS is 0. This flags that MPI has been initialized and then terminated; it cannot be initialized again in the same program execution. No MPI routine is defined when MP_NPROCS has this value.
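The sketch below combines the two possibilities just described: errors are summarized part-way through the run with the e1pop/e1psh pair, and the final call shuts MPI down; the test of MP_NPROCS afterwards uses the value 0 mentioned above.

PROGRAM FINAL_SKETCH
USE MPI_SETUP_INT
IMPLICIT NONE
MP_NPROCS = MP_SETUP()
!  First phase of parallel box data type work goes here.
CALL E1POP ('MP_SETUP')         ! return to Level 1; pending error messages are printed
CALL E1PSH ('MP_SETUP')         ! resume; further MPI calls remain legal
!  Second phase of parallel box data type work goes here.
MP_NPROCS = MP_SETUP('Final')   ! print remaining messages, call MPI_Finalize()
IF (MP_NPROCS == 0) WRITE (*,*) 'MPI has been finalized; no further MPI calls are allowed.'
END PROGRAM FINAL_SKETCH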
Using Processors
There are certain pitfalls to avoid when using the IMSL Fortran Numerical Library box data types as implemented with MPI. A fundamental requirement is to allow all processors to participate in the parts of the program where their presence is needed for correctness. It is incorrect for a program unit to prevent some nodes from executing a block of code that is required when computing with the box data type. On the other hand, it is appropriate to restrict computations with rank-2 arrays to the root node. This is not required, but the results computed at the alternate nodes are normally discarded, and restricting the work to the root avoids gratuitous error messages that might otherwise appear at the alternate nodes.
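The fragment below sketches this pattern inside a program unit that has already executed the setup call; the box data type evaluation is indicated only by a comment, since the point is the placement of the rank test.

!  Every node must reach the box data type function evaluation, so it is
!  not placed inside any rank test.
!  (box data type function evaluated here, by all nodes)
IF (MP_RANK == 0) THEN
   !  Rank-2 array computations on the root's copy of the result go here.
END IF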
Observe that only the root has a correct result for a box data type function; the alternate nodes have the constant value one as their result. The reason is that during the computation of the functions, sub-problems are allocated to the alternate nodes by the root, but only the root utilizes the results. If a user needs a value at the other nodes, then the root must send it to them. See Parallel Example 3 for an illustration of this usage: convergence information is computed at the root node and broadcast to the others. Without this step some nodes would not terminate the loop even when the corrections at the root become small, and the program would be incorrect.
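A sketch of such a broadcast is given below; the subroutine and its argument names are purely illustrative, and an MPI installation providing the mpif.h include file is assumed.

SUBROUTINE CHECK_CONVERGENCE (ERR_EST, TOL, DONE)
USE MPI_SETUP_INT
IMPLICIT NONE
INCLUDE 'mpif.h'
DOUBLE PRECISION, INTENT(IN) :: ERR_EST, TOL
LOGICAL, INTENT(OUT) :: DONE
INTEGER :: IERR
!  Only the root's error estimate is meaningful, so the root decides.
IF (MP_RANK == 0) DONE = (ERR_EST <= TOL)
!  Broadcast the decision so that every node leaves the iteration together.
CALL MPI_BCAST (DONE, 1, MPI_LOGICAL, 0, MP_LIBRARY_WORLD, IERR)
END SUBROUTINE CHECK_CONVERGENCE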