Building WRF on Sol (fully manual)

There are two pages for using WRF on Sol: this page, which covers a fully manual build, and a companion page that uses pre-compiled libraries.

The process outlined in this page uses fully manual building of all libraries and binaries: the libraries are built by the user and placed in user storage (e.g., scratch).

The benefit of the fully-manual approach is:

  1. these instructions can be ported to any supercomputer/workstation

If this is not a particularly important feature, consider using the pre-compiled library approach instead.

WRF can be compiled on Sol without any special permissions. You can build it entirely, start to finish, as an unprivileged supercomputer user.

The steps outlined in this page compile WRF 4.2.2 & WPS 4.2, though the scripts are not limited to only these versions. The scripts have also been successfully deployed for WRF 4.3.3 & WPS 4.3. However, changing either the WRF or WPS version (older or newer), or switching WRF to a different source tree altogether, may warrant changes this tutorial cannot anticipate.

It is recommended to complete this tutorial with the unchanged files to familiarize yourself with the process and the steps involved.

Setting up the Compilation Process

We will start by copying the wrf-building scripts to our own scratch space. This space is designated as /scratch/$USER, such as /scratch/wdizon/. First, move to a compute node, then copy the scripts:

wdizon@login01 $ interactive -c 20
wdizon@c001 $ cp -R /packages/apps/letsbuildwrf /scratch/$USER/

-c 20 is chosen because 20 is the maximum number of cores the WRF build will use for parallel compilation. This is a separate limit from how many cores the built binaries can run on; the completed binary will not be limited by the number chosen here.
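If you request a different core count, you can optionally confirm how many cores your session actually received before setting MAKE_PROCS in compiler_variables (shown below). This sketch assumes the standard coreutils nproc command:

wdizon@c001 $ nproc
20

Keep MAKE_PROCS at or below the number nproc reports.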

Let’s review the files we have copied:

$ cd /scratch/$USER/letsbuildwrf
$ ls
build_deps*  build_wrf*  compiler_variables  src/  tarballs/
$ cat compiler_variables
#!/bin/bash

export WRF_VER=4.2.2
export WPS_VER=4.2
export WRF_SRC=/scratch/$USER/letsbuildwrf
export WRF_INSTALL=/scratch/$USER/wrf_compiles
export WRF_TARBALL_DIRNAME=WRF-$WRF_VER
export WRF_PREFERRED_DIRNAME=WRF-4.2.2-gcc

# COMPILER SETUP
export MAKE_PROCS=20
export CC=gcc
export CXX=g++
export FC=gfortran
export FCFLAGS=-m64
export F77=gfortran
export FFLAGS=-m64

The file compiler_variables is generally the only file that will require any user editing. That said, the defaults in this file are known to work, properly compiling the following versions using MPICH and the GCC compiler without any modifications at all:

  • WRF 4.2.2 & WPS 4.2

  • WRF 4.3.3 & WPS 4.3
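For example, to target WRF 4.3.3 & WPS 4.3, only the version-related variables would need to change. This is a sketch, and it assumes the matching source tarballs are present under tarballs/ (e.g., tarballs/4.3.3):

export WRF_VER=4.3.3
export WPS_VER=4.3
export WRF_PREFERRED_DIRNAME=WRF-4.3.3-gcc

WRF_TARBALL_DIRNAME derives from WRF_VER automatically, so it normally needs no edit.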

Extracting and Compiling Libraries

Execute the following lines:

cd /scratch/$USER/letsbuildwrf
./build_deps

If there are no issues (no conflicting module loads or other environment problems), the last few lines of output should indicate SUCCESS twice. All known builds of WRF on Sol have used the standard system compiler (gcc 8.5.0), which means no additional module loads are needed or desired.

If you see SUCCESS twice, as above, this means at least the following:

  1. All files have been extracted from their tarballs (/scratch/$USER/letsbuildwrf/tarballs/4.x.x into /scratch/$USER/letsbuildwrf/src)

  2. The GCC compiler successfully compiled both a C and a Fortran test program, showing readiness to continue.

If you do not see output matching the above, do not continue.

If necessary, start a new terminal session and ensure no conflicting modules are loaded (module purge).
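A quick way to verify a clean environment before retrying (a sketch using the standard module and GCC commands):

module purge
module list     # expect no modules loaded
gcc --version   # expect the system compiler, gcc 8.5.0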

What’s happened so far

At this point in the process, MPICH, NETCDF-C, NETCDF-FORTRAN, JASPER, ZLIB, and LIBPNG have been successfully built. They are stored in /scratch/$USER/wrf_compiles/libraries. Should you choose to have multiple WRF builds, these libraries can be reused to save the time of recompiling them.
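You can confirm the libraries are in place with a quick listing; the subdirectory names below are illustrative assumptions, not guaranteed output:

$ ls /scratch/$USER/wrf_compiles/libraries
jasper/  libpng/  mpich/  netcdf-c/  netcdf-fortran/  zlib/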

Compiling WRF and WPS

The remaining step is to compile WRF and WPS; both are consolidated into a single script. You can start the process with the following line:

./build_wrf

Upon running this command, you will be asked which compiler to use and whether to use nesting. This interactive step cannot be automated, so it is key to ensure proper input here:

In this example, we will select 35 (GNU (gfortran/gcc)) and 1 (basic nesting).

Compiling WPS

After some time has passed, you will be prompted again, this time for WPS:

Select 1, the gfortran serial option. Whichever compiler you use, choose a serial variant.

At the end of this step, you will have a working WRF compilation built with MPICH and GCC located at:

WRF => /scratch/$USER/wrf_compiles/WRF-4.2.2 ***

WPS => /scratch/$USER/wrf_compiles/WRF-4.2.2/WPS-4.2

These match the path set in compiler_variables under the name WRF_INSTALL. WPS is saved within the WRF directory so that any number of WRF installs can coexist in parallel within the same wrf_compiles directory.

*** If WRF_TARBALL_DIRNAME is modified by the user, the directory name will match that value.
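For illustration, a wrf_compiles directory holding two such builds alongside the shared libraries might look like this (an assumed listing based on the defaults above):

$ ls /scratch/$USER/wrf_compiles
WRF-4.2.2/  WRF-4.3.3/  libraries/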

Indication of Success

At the end of the script, you should see Compilation Complete and you will be returned to the prompt.

Usage Notes

Alternate Modules

These steps build MPICH manually and do not use any system modules (e.g., from module load). Using these binaries will therefore often necessitate full paths to the compiled binaries in your SBATCH scripts and interactive sessions. Example:

mpiexec or mpiexec.hydra might be your preferred MPI launcher, but you must invoke it by its full path:

/scratch/$USER/wrf_compiles/WRF-4.2.2/libraries/mpich/bin/mpiexec
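Alternatively, for interactive use you can prepend the MPICH bin directory to your PATH so the bare command name resolves to this build; a convenience sketch assuming the path shown above:

export PATH=/scratch/$USER/wrf_compiles/WRF-4.2.2/libraries/mpich/bin:$PATH
which mpiexec    # should now print the full path above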

$USER variable

The $USER variable expands to your login username, which matches your ASURITE ID. It is used here to simplify copy/paste operations, rather than expecting you to type in, for example, /scratch/wdizon, which is a completely permissible/workable alternative.
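You can see the expansion for yourself (output shown for the example user wdizon):

$ echo $USER
wdizon
$ echo /scratch/$USER
/scratch/wdizon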

Repeated Builds

If your WRF source code changes but your libraries/dependencies remain constant, you can speed up your testing by following these steps (full overview; a condensed shell sketch of the cycle follows the list):

  1. Identify the tarball needed for the WRF source code and place it in /scratch/$USER/letsbuildwrf/tarballs/4.x.x

  2. Run ./build_deps to completion

  3. Identify the newly created directory name in /scratch/$USER/letsbuildwrf/src
    In this example, the extracted tarball created a dir called WRF_2_THERMAL_URBCOL_CBC, where standard source files might have created WRF-4.2.2.

  4. In compiler_variables, update WRF_TARBALL_DIRNAME to reflect the directory name from step 3, e.g., export WRF_TARBALL_DIRNAME=WRF_2_THERMAL_URBCOL_CBC

  5. Run ./build_wrf

  6. Test WRF, use WRF, and when you need to make changes:

    1. Make changes to source in /scratch/$USER/letsbuildwrf/src/<dir>

    2. ./build_wrf

    3. Repeat step 6 as needed
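Condensed into shell form, one iteration of that edit/rebuild cycle looks roughly like this (the source directory name is the example from step 3):

cd /scratch/$USER/letsbuildwrf
# 6a. edit source files under src/WRF_2_THERMAL_URBCOL_CBC
# 6b. rebuild; the libraries from build_deps are reused as-is
./build_wrf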

Now that it is built:

WRF’s files are located at /scratch/$USER/wrf_compiles/WRF-X.Y.Z/run.

WRF is built using MPICH; launch it with mpiexec.hydra, e.g., /scratch/$USER/wrf_compiles/libraries/mpich/bin/mpiexec.hydra -np 12 ./wrf.exe

If you run this interactively, be sure to choose -c <num cores> to match -np <num cores>. If you are submitting this as a batch job, make sure your #SBATCH -c <num cores> matches.
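Putting it together, a minimal batch script might look like the following; this is a sketch, with the job time as a placeholder and the mpiexec.hydra path assuming the layout above:

#!/bin/bash
#SBATCH -N 1
#SBATCH -c 12
#SBATCH -t 04:00:00

# run from the WRF run directory, which contains wrf.exe and the model inputs
cd /scratch/$USER/wrf_compiles/WRF-4.2.2/run

# -np matches the #SBATCH -c value above
/scratch/$USER/wrf_compiles/libraries/mpich/bin/mpiexec.hydra -np 12 ./wrf.exe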