Compiling MPI Software on Sol
Overview
The Phoenix and Sol supercomputers both use high-speed interconnects that support MPI. On Sol, the interconnect is InfiniBand.
Software that uses MPI must be built with parallelization-aware compilers, which are available through the module system.
Choosing an appropriate compiler and MPI implementation is an important decision at the outset of working with an application. Mixing and matching compilers between applications and their dependencies can produce non-reproducible results, or may fail to produce a usable build at all.
When building software for MPI, the two critical choices for the user are:
a) the compiler suite (e.g., gcc, aocc, intel, nvhpc, oneapi) and
b) the MPI implementation (e.g., openmpi, mpich, mvapich, intel-mpi).
Your application's documentation will likely indicate which compiler to use; if a preferred MPI implementation is not given, it is reasonable to choose from the list above in order of appearance (OpenMPI being the most general, broadly compatible choice).
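Before deciding, it can help to survey what is installed. A minimal sketch, assuming the standard module commands on Sol (the search strings are illustrative; exact module names and versions may differ):
module avail gcc       # list available GCC compiler modules
module avail openmpi   # list available Open MPI modules
module avail mpich     # list available MPICH modules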
Building Software
Instructions vary greatly depending on the software being built.
1. Identify a preferred compiler from the developer documentation.
2. Start an interactive session with at least 4 cores.
3. Search for the compiler among the available modules using the command:
   module avail
4. Load the compiler module file, e.g.:
   module load openmpi/4.1.5
5. Change to your build directory and run make (or follow the instructions provided with the software).
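A worked sketch of this sequence, assuming a Slurm scheduler and a make-based project (the interactive-session request, module version, and directory path are illustrative and may differ for your case):
salloc -N 1 -c 4 -t 01:00:00   # request an interactive session with 4 cores for 1 hour
module load openmpi/4.1.5      # load the MPI compiler module
cd ~/builds/my-application     # change to your build directory (placeholder path)
make -j 4                      # compile using the 4 requested cores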
Recommended & Supported Compilers
October 2023 MPI UPDATE: Since the scheduled maintenance, the interconnect software stack has been updated to the most recent recommended software and drivers. As a result, only MPI builds made against the updated stack are expected to perform well; older builds may run poorly or not at all.
We now provide and support these six MPI compilers on Sol:
OpenMPI, MPICH, MVAPICH, Intel MPI, Platform MPI, NVHPC
Many other compiler variants are still present on Sol but are in the process of being removed; in the meantime, they are not recommended.
MPI Modules
The supported MPI modules are listed below; each can be loaded with the normal module load syntax:
Index | MPI module name |
1 | openmpi/4.1.5 |
2 | mpich/4.2.2 |
3 | mpich/4.1.2 |
4 | platform-mpi/09.01.04.03 |
5 | intel/parallel-studio-2020.4 |
6 | intel-oneapi-mpi-2021.8.0-gcc-12.1.0 |
7 | intel-oneapi-mpi-2021.9.0-gcc-12.1.0 |
8 | intel-oneapi-mpi-2021.10.0-gcc-12.1.0 |
9 | nvhpc/24.7-cuda12 |
10 | nvhpc/22.7 |
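After loading one of these modules, it is worth confirming which MPI wrapper is active. A minimal sketch using mpich/4.2.2 from the table (any listed module can be checked the same way):
module load mpich/4.2.2
which mpicc    # the path should point into the mpich/4.2.2 installation
mpicc -show    # print the underlying compiler command the wrapper will invoke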
Loading Related Modules
Sometimes you will want to load additional modules to work in concert with these compilers. The notes below describe what, if anything, each MPI module should be paired with:
openmpi/4.1.5 (recommended)
openmpi/4.1.5 is compatible with module load gcc-11.2.0-gcc-11.2.0.

intel/parallel-studio-2020.4
Intel Parallel Studio is wholly self-contained. It is not recommended to load other compiler-related modules in addition to this module.

intel/intel-oneapi-2021.10.0
Intel oneAPI is also wholly self-contained. It is not recommended to load other compiler-related modules in addition to this module.

mpich/4.1.2
MPICH is built with the system-provided gcc-12.1.0-gcc-11.2.0. This means that mpich/4.1.2 does not need any additional module loads to compile non-MPI software. One caveat: no compiler libraries are installed on the login nodes, so ensure you are on a compute node for this module to work.

platform-mpi/09.01.04.03
Platform MPI is also wholly self-contained. It is not recommended to load other compiler-related modules in addition to this module. This is a niche compiler, and in most circumstances other MPI modules will be preferred.

nvhpc/22.7
The NVIDIA HPC Compiler Collection is self-contained. This is a rebranded version of what used to be known as the PGI (Portland Group) compilers. As with Platform MPI, most software will be compatible with the more commonly known and supported compilers above; nvhpc fits a narrow use case.

mvapich/3.0b (not supported on Sol at the moment)
MVAPICH is built with the system-provided GNU GCC 8.5.0. This means that mvapich/3.0b does not need any additional module loads to compile non-MPI software. One caveat: no compiler libraries are installed on the login nodes, so ensure you are on a compute node for this module to work.
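For the recommended pairing noted above, a minimal sketch of loading the compiler and MPI modules together (module names are taken from this page; verify them with module avail in your own session):
module load gcc-11.2.0-gcc-11.2.0   # compiler suite that openmpi/4.1.5 expects
module load openmpi/4.1.5           # recommended MPI module
module list                         # confirm both modules are loaded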
Choosing a Compiler
It is almost always preferable to select the compiler recommended in the software's documentation. However, there are reasons a different compiler may be considered:
Processor architecture mismatch: Agave compute nodes are almost exclusively Intel processors, whereas Sol nodes are exclusively AMD EPYCs.
The preferred compiler is unavailable.
The compiler is incompatible with dependencies (interconnect or otherwise).
The gcc compiler suite is the most universal option and often the most-supported compiler.
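If you are unsure which compiler suite a loaded MPI module was built against, the wrapper itself can usually report it. A minimal sketch (this pass-through behavior is assumed for common wrappers such as Open MPI's and MPICH's mpicc):
mpicc --version   # most wrappers pass this to the underlying compiler and report, e.g., the gcc version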
Choosing an Interconnect
OpenMPI is the best-supported and most successful general-purpose option for InfiniBand. Unless the software's documentation advises otherwise, openmpi will almost certainly be the most straightforward way to build an application with MPI support.
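A minimal build-and-run sketch with Open MPI under Slurm (hello.c stands in for your own MPI source file; the rank count and srun options are illustrative):
module load openmpi/4.1.5
mpicc -O2 -o hello hello.c   # compile with the Open MPI compiler wrapper
srun -n 4 ./hello            # launch 4 MPI ranks through the scheduler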
Be sure to reach out to the Research Computing admins for assistance when in doubt about how to build software; in many cases, the software can be built with our HPC software manager, Spack, alleviating the need to build it yourself.
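If Spack is available in your environment (an assumption; its availability and configuration vary), you can quickly check whether a package has already been installed centrally:
spack find openmpi   # list any Spack-installed Open MPI packages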
Additional Help