Running MPI Software on Phx

The procedure for running MPI software depends on the MPI stack used to build the application. Below is a concise guide to the available MPI stacks and the module and run commands to use with each.


openmpi 5.0.6

Load with: module load openmpi/5.0.6

Run with: mpirun -np $SLURM_NTASKS "./my-simulation"

Built with gcc-12.3.0, with support for both InfiniBand and OmniPath fabrics

Provides: OpenMPI 5.0.6 compiler wrappers

Example script: /packages/public/phx-sbatch-templates/templates/example-mpi-with-openmpi5-job/main.sh

Note: OpenMPI 5 prefers mpirun over srun; see the official documentation.
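
For reference, a minimal sbatch script for this stack might look like the sketch below. The job name, task count, and walltime are illustrative placeholders, and ./my-simulation stands in for your own executable; the site-maintained template listed above remains the authoritative example.

#!/bin/bash
#SBATCH --job-name=openmpi5-demo   # placeholder job name
#SBATCH --ntasks=8                 # total MPI ranks; adjust for your job
#SBATCH --time=00:10:00            # placeholder walltime

module load openmpi/5.0.6

# OpenMPI 5 prefers mpirun; SLURM_NTASKS matches --ntasks above
mpirun -np $SLURM_NTASKS "./my-simulation"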

openmpi 4.1.5 / hpcx 2.17.1

Load with: module load openmpi/4.1.5 or module load hpcx/2.17.1

Run with: srun --export=ALL --mpi=pmix "./my-simulation"

Built with gcc-12.3.0

Provides: OpenMPI 4.1.5 compiler wrappers

Example script: /packages/public/phx-sbatch-templates/templates/example-mpi-with-hpcx-job/main.sh
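
A minimal sbatch sketch for this stack is shown below; resource values are placeholders and ./my-simulation stands in for your executable. The site-maintained template above is the authoritative example.

#!/bin/bash
#SBATCH --job-name=hpcx-demo   # placeholder job name
#SBATCH --ntasks=8             # total MPI ranks; adjust for your job
#SBATCH --time=00:10:00        # placeholder walltime

module load openmpi/4.1.5      # or: module load hpcx/2.17.1

# srun starts one MPI rank per Slurm task over the PMIx interface
srun --export=ALL --mpi=pmix "./my-simulation"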

mpich 4.1.2

Load with: module load mpich/4.1.2

Run with: mpirun -launcher slurm "./my-simulation"

Built with gcc-12.3.0

Provides: MPICH 4.1.2 compiler wrappers

Example script: /packages/public/phx-sbatch-templates/templates/example-mpi-with-mpich-job/main.sh

Note: If using the Hydra Process Manager, you may need to add export HYDRA_LAUNCHER_EXTRA_ARGS="--export=ALL" to your job script.
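
Combining the commands above, a minimal sbatch sketch for MPICH might look like the following. Resource values are placeholders, and the HYDRA_LAUNCHER_EXTRA_ARGS line is only needed in the Hydra case from the note above.

#!/bin/bash
#SBATCH --job-name=mpich-demo   # placeholder job name
#SBATCH --ntasks=8              # total MPI ranks; adjust for your job
#SBATCH --time=00:10:00         # placeholder walltime

module load mpich/4.1.2

# Only needed when the Hydra process manager drops the environment (see note above)
export HYDRA_LAUNCHER_EXTRA_ARGS="--export=ALL"

# -launcher slurm tells Hydra to start the ranks through Slurm
mpirun -launcher slurm "./my-simulation"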

intel oneapi 2022.1.0

Load with: module load intel/oneapi

Run with: mpiexec.hydra -genvall "./my-simulation"

Built with gcc-12.3.0

Packaged with: mpiicx

Provides: Intel MPI Compilers

Example script: /packages/public/phx-sbatch-templates/templates/example-mpi-with-intel-job/main.sh

Note: In a few cases, users have needed to add the following to their job scripts:

export HYDRA_LAUNCHER_EXTRA_ARGS="--export=ALL"

export FI_PROVIDER=verbs # Multi-node jobs only

export FI_PROVIDER=shm # Single-node jobs only
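
Putting the above together, a minimal sbatch sketch for Intel MPI is shown below. Resource values are placeholders, and the workaround exports from the note are left commented out so they can be enabled only if needed.

#!/bin/bash
#SBATCH --job-name=intelmpi-demo   # placeholder job name
#SBATCH --nodes=2                  # placeholder node count
#SBATCH --ntasks=8                 # total MPI ranks; adjust for your job
#SBATCH --time=00:10:00            # placeholder walltime

module load intel/oneapi

# Uncomment only if your job hits the issues described in the note above
# export HYDRA_LAUNCHER_EXTRA_ARGS="--export=ALL"
# export FI_PROVIDER=verbs   # Multi-node jobs only
# export FI_PROVIDER=shm     # Single-node jobs only

# -genvall exports the full environment to every MPI rank
mpiexec.hydra -genvall "./my-simulation"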