OpenMP Usage
Thread safety of the IMSL C Numerical Library is based on OpenMP. Users of the Library can also leverage shared-memory parallelism through native support for the OpenMP API specification within parts of the Library. Those parts are flagged by the OpenMP icon shown below.
Parallelism in OpenMP is implemented by means of threads. The OpenMP programming model assumes that memory is shared among threads, as on multi-core machines. These threads are spawned by OpenMP in response to directives embedded in the source code.
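As a minimal illustration of how such directives work (this sketch is generic OpenMP, not part of the Library), the directive below asks OpenMP to spawn a team of threads that divide the loop iterations among themselves while sharing memory:

#include <stdio.h>
#include <omp.h>

int main(void)
{
    int i;
    double sum = 0.0;

    /* The directive spawns a team of threads that split the loop
       iterations; all threads share memory, and the partial sums
       are combined by the reduction clause. */
#pragma omp parallel for reduction(+:sum)
    for (i = 0; i < 1000; i++)
        sum += 1.0 / (i + 1);

    printf("sum = %f (computed with up to %d threads)\n",
           sum, omp_get_max_threads());
    return 0;
}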
The Library’s use of OpenMP is largely transparent to the user. Codes that have been enhanced with OpenMP directives will still work properly in serial execution environments. Error handling routines have been extended so that the most severe error during a parallel run will be returned to the user.
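For example, after any Library call the user can query the reported error in the usual way; the sketch below assumes the standard imsl_error_code utility, and the commented placeholder stands for whichever Library function is being called:

#include <imsl.h>
#include <stdio.h>

int main(void)
{
    /* ... call a Library function here, e.g. one of the
       OpenMP-enhanced routines ... */

    /* Whether the call ran serially or in parallel, the error
       reported to the user is the most severe one encountered. */
    if (imsl_error_code() != 0L)
        printf("IMSL error code: %ld\n", imsl_error_code());
    return 0;
}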
OpenMP is used by the Library in these main ways:
1. To implement thread safety within the C Numerical Library.
2. To speed up computationally intensive functions by exploiting data parallelism in their processing.
3. To give users more control over scheduling by using the "schedule(runtime)" clause for the parallelized for-loops. The scheduling option chosen, set through the OMP_SCHEDULE environment variable, can significantly affect the performance of the user's program, depending on the workload of the system during execution. If OMP_SCHEDULE is not set, the default behavior depends on the implementation. Please refer to the OpenMP specifications for details on schedule types and chunk sizes.
4. To set and control the number of threads used for parallel regions and nested parallel regions by means of the OMP_NUM_THREADS and OMP_NESTED environment variables. If OMP_NUM_THREADS and OMP_NESTED are not set, the default behavior depends on the implementation; all available computing resources may then be used, affecting the performance of other applications on the system. Please refer to the OpenMP specifications for more information. (A sketch showing how a program can inspect these settings at run time follows this list.)
5. To parallelize the evaluation of user-supplied functions in routines that use them, e.g. in numerical integration routines.
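Regarding items 3 and 4, the following sketch (generic OpenMP, not a Library interface) shows how a program can inspect the schedule and thread count that result from OMP_SCHEDULE and OMP_NUM_THREADS; it might, for example, be run with OMP_SCHEDULE="dynamic,4" and OMP_NUM_THREADS=8 set in the environment:

#include <stdio.h>
#include <omp.h>

int main(void)
{
    omp_sched_t kind;
    int chunk;

    /* Report the runtime schedule (as influenced by OMP_SCHEDULE)
       and the maximum number of threads (as influenced by
       OMP_NUM_THREADS and OMP_NESTED). */
    omp_get_schedule(&kind, &chunk);
    printf("schedule kind = %d, chunk size = %d\n", (int) kind, chunk);
    printf("maximum threads = %d\n", omp_get_max_threads());
    printf("nested parallelism enabled = %d\n", omp_get_nested());
    return 0;
}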
In the fifth case, the user must explicitly signal to the Library that the user-supplied functions are themselves thread-safe; otherwise, by default, the user's functions will not be evaluated in parallel. The utility imsl_omp_options allows the user to assert that all routines passed to the Library are thread-safe.
Thread safety implies that functions may be executed simultaneously by multiple threads and still produce correct results. Requiring that user-supplied functions be thread-safe is crucial because the different threads spawned by OpenMP may call user-supplied functions simultaneously, in an arbitrary order, and with differing inputs. Care must therefore be taken to ensure that the parallelized algorithm behaves in the same way as its serial “ancestor”. Functions whose results depend on the order in which they are executed are not thread-safe and are thus not good candidates for parallelization; neither are functions that access and modify global data.
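The sketch below illustrates this usage. The call to imsl_omp_options follows the description above; the optional-argument name IMSL_SET_FUNCTIONS_THREAD_SAFE and the quadrature routine imsl_d_int_fcn are used here as assumed examples, so consult the corresponding reference pages before relying on them:

#include <imsl.h>
#include <math.h>
#include <stdio.h>

/* A thread-safe integrand: it reads and writes no global or static
   data, so simultaneous calls from different threads are safe. */
static double fcn(double x)
{
    return exp(-x * x);
}

int main(void)
{
    double result;

    /* Assert that all user-supplied functions passed to the Library
       are thread-safe, permitting their parallel evaluation
       (assumed optional-argument name). */
    imsl_omp_options(IMSL_SET_FUNCTIONS_THREAD_SAFE, 1, 0);

    /* A quadrature routine (assumed name) may now evaluate fcn
       from several threads at once. */
    result = imsl_d_int_fcn(fcn, 0.0, 1.0, 0);
    printf("integral = %f\n", result);
    return 0;
}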
The OpenMP specifications are available at http://www.openmp.org/specifications/.