Building WRF on Sol (fully manual)

There are two pages for using WRF on Sol: this page, which uses a fully manual build, and a companion page that uses pre-compiled libraries.

The process outlined on this page builds all libraries and binaries manually: the libraries are compiled by the user and placed in user storage (e.g., scratch).

The main benefit of the fully manual approach is:

  1. these instructions can be ported to any supercomputer/workstation

If portability is not particularly important to you, consider using the pre-compiled library approach instead.

WRF can be compiled on Sol without any special permissions. You can build it entirely start-to-finish as your own unprivileged supercomputer user.

The steps outlined in this page compile WRF 4.2.2 & WPS 4.2, though the scripts are not limited to these versions; they have also been deployed successfully for WRF 4.3.3 & WPS 4.3. However, changing either the WRF or WPS version (older or newer), or switching to a different WRF source tree altogether, may require changes this tutorial cannot anticipate.

It is recommended to complete this tutorial with the unchanged files to familiarize yourself with the process and the steps involved.

Setting up the Compilation Process

We will start by copying over the wrf-building scripts to our own scratch space. This space is designated as /scratch/$USER, such as /scratch/wdizon/. Let’s start by moving to a compute node to copy the scripts:

wdizon@login01 $ interactive -c 20
wdizon@c001 $ cp -R /packages/apps/letsbuildwrf /scratch/$USER/

-c 20 is chosen because 20 is the maximum number of cores the WRF build will use for parallel compilation. This is separate from how many cores the built binaries can run on; the completed binary is not limited by the number chosen here.

Let’s review the files we have copied:

$ cd /scratch/$USER/letsbuildwrf
$ ls
build_deps*  build_wrf*  compiler_variables  src/  tarballs/
$ cat compiler_variables
#!/bin/bash

export WRF_VER=4.2.2
export WPS_VER=4.2
export WRF_SRC=/scratch/$USER/letsbuildwrf
export WRF_INSTALL=/scratch/$USER/wrf_compiles
export WRF_TARBALL_DIRNAME=WRF-$WRF_VER
export WRF_PREFERRED_DIRNAME=WRF-4.2.2-gcc

# COMPILER SETUP
export MAKE_PROCS=20
export CC=gcc
export CXX=g++
export FC=gfortran
export FCFLAGS=-m64
export F77=gfortran
export FFLAGS=-m64

The file compiler_variables is generally the only file that will require any user editing. That said, the defaults in this file are known to work and to properly compile the following versions using MPICH and the GCC compiler without any modifications at all:

  • WRF 4.2.2 & WPS 4.2

  • WRF 4.3.3 & WPS 4.3

Any of the following changes may warrant edits to compiler_variables or even the build scripts themselves:

  1. change in WRF version (upgrading or downgrading from 4.2.2)

  2. change in WRF source tree (using a different source tree than the WRF-4.2.2.tgz included)

  3. using a compiler other than GCC

  4. using an MPI implementation other than MPICH to utilize Sol’s InfiniBand

If any of these apply, the build instructions could vary greatly, rendering these scripts (and this tutorial) obsolete. Even so, the steps remain useful as a general outline of the start-to-finish process.

Extracting and Compiling Libraries

Execute the following lines:

cd /scratch/$USER/letsbuildwrf
./build_deps

[produces a lot of compilation log info...]

testing library dependency sanity
C function called by Fortran
Values are xx = 2.00 and ii = 1
SUCCESS test 1 fortran + c + netcdf
C function called by Fortran
Values are xx = 2.00 and ii = 1
status = 2
SUCCESS test 2 fortran + c + netcdf + mpi
$

If there are no issues (no conflicting module loads or other environment problems), the last few lines should indicate SUCCESS twice. All known builds of WRF on Sol have been built with the standard system compiler (gcc 8.5.0), which means no additional module loads were needed or desired.

If you see SUCCESS twice, as above, this means at least the following:

  1. All files have been extracted from their tarballs (/scratch/$USER/letsbuildwrf/tarballs/4.x.x into /scratch/$USER/letsbuildwrf/src)

  2. The GCC compiler successfully compiled C and Fortran test code, showing readiness to continue.

If you do not see output matching above, do not continue.

If necessary, start a new terminal session and ensure no conflicting modules are loaded (module purge).
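A minimal pre-build sanity check might look like the following (a sketch that assumes the stock Sol environment with the system GCC; module is the standard environment-modules command):

module purge        # clear any loaded modules that could interfere
module list         # should report that no modules are loaded
gcc --version       # expect the system compiler, gcc 8.5.0
gfortran --version  # the matching GNU Fortran compiler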

What’s happened so far

At this point in the process, MPICH, NETCDF-C, NETCDF-FORTRAN, JASPER, ZLIB, and LIBPNG have been built successfully. They are stored in /scratch/$USER/wrf_compiles/libraries. Should you choose to have multiple WRF builds, these libraries can be reused to save the time of recompiling them.

These can be thought of as standalone, reusable libraries: in addition to working with any WRF builds produced by these scripts, they can equally support a WRF build compiled manually from a different source, a different version, etc.
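As an illustration only, a manually configured build could be pointed at these libraries with environment variables along the following lines; WRF_LIBS is a hypothetical shorthand, and the exact variables your build expects (e.g., NETCDF, JASPERLIB, JASPERINC) should be confirmed against the WRF/WPS documentation:

export WRF_LIBS=/scratch/$USER/wrf_compiles/libraries   # libraries built by build_deps
export PATH=$WRF_LIBS/mpich/bin:$PATH                   # use the matching mpicc/mpif90 wrappers
export NETCDF=$WRF_LIBS/netcdf                          # NetCDF C + Fortran install
export JASPERLIB=$WRF_LIBS/grib2/lib                    # JasPer/zlib/libpng for GRIB2
export JASPERINC=$WRF_LIBS/grib2/include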

Compiling WRF and WPS

The remaining step is to compile WRF and WPS. These are consolidated into a single script. You can start the process with the following line:

./build_wrf

Upon running this command, you will be asked which compiler to use and whether to enable nesting. This interactive step cannot be automated, so it is important to provide the correct input here:

[s:0] [wdizon@c001:/scratch/wdizon/letsbuildwrf]$ ./build_wrf
setting up compiler vars...
initiating configure wrf steps (interactive)...
[omitting extra loglines]
------------------------------------------------------------------------
Please select from among the following Linux x86_64 options:

  1. (serial)   2. (smpar)   3. (dmpar)   4. (dm+sm)   PGI (pgf90/gcc)
  5. (serial)   6. (smpar)   7. (dmpar)   8. (dm+sm)   PGI (pgf90/pgcc): SGI MPT
  9. (serial)  10. (smpar)  11. (dmpar)  12. (dm+sm)   PGI (pgf90/gcc): PGI accelerator
 13. (serial)  14. (smpar)  15. (dmpar)  16. (dm+sm)   INTEL (ifort/icc)
                                         17. (dm+sm)   INTEL (ifort/icc): Xeon Phi (MIC architecture)
 18. (serial)  19. (smpar)  20. (dmpar)  21. (dm+sm)   INTEL (ifort/icc): Xeon (SNB with AVX mods)
 22. (serial)  23. (smpar)  24. (dmpar)  25. (dm+sm)   INTEL (ifort/icc): SGI MPT
 26. (serial)  27. (smpar)  28. (dmpar)  29. (dm+sm)   INTEL (ifort/icc): IBM POE
 30. (serial)               31. (dmpar)                PATHSCALE (pathf90/pathcc)
 32. (serial)  33. (smpar)  34. (dmpar)  35. (dm+sm)   GNU (gfortran/gcc)
 36. (serial)  37. (smpar)  38. (dmpar)  39. (dm+sm)   IBM (xlf90_r/cc_r)
 40. (serial)  41. (smpar)  42. (dmpar)  43. (dm+sm)   PGI (ftn/gcc): Cray XC CLE
 44. (serial)  45. (smpar)  46. (dmpar)  47. (dm+sm)   CRAY CCE (ftn $(NOOMP)/cc): Cray XE and XC
 48. (serial)  49. (smpar)  50. (dmpar)  51. (dm+sm)   INTEL (ftn/icc): Cray XC
 52. (serial)  53. (smpar)  54. (dmpar)  55. (dm+sm)   PGI (pgf90/pgcc)
 56. (serial)  57. (smpar)  58. (dmpar)  59. (dm+sm)   PGI (pgf90/gcc): -f90=pgf90
 60. (serial)  61. (smpar)  62. (dmpar)  63. (dm+sm)   PGI (pgf90/pgcc): -f90=pgf90
 64. (serial)  65. (smpar)  66. (dmpar)  67. (dm+sm)   INTEL (ifort/icc): HSW/BDW
 68. (serial)  69. (smpar)  70. (dmpar)  71. (dm+sm)   INTEL (ifort/icc): KNL MIC
 72. (serial)  73. (smpar)  74. (dmpar)  75. (dm+sm)   FUJITSU (frtpx/fccpx): FX10/FX100 SPARC64 IXfx/Xlfx

Enter selection [1-75] : 35
------------------------------------------------------------------------
Compile for nesting? (1=basic, 2=preset moves, 3=vortex following) [default 1]: 1

In this example, we select 35 (GNU (gfortran/gcc)) and 1 (basic nesting).

While it is possible to use alternative compilers, do not do so unless you have made the appropriate changes to compiler_variables and possibly the build scripts themselves; the configuration in compiler_variables specifies gfortran for $FC and gcc for $CC. Other compilers, such as the Intel or AOCC suites, use different names entirely, so selecting a non-GNU compiler at this step is not, by itself, enough to switch compilers.
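For illustration only, switching compiler_variables to the Intel suite would start with edits like these (the compiler names come from the configure menu above; the build scripts themselves may need further changes, as noted):

export CC=icc      # Intel C compiler
export CXX=icpc    # Intel C++ compiler
export FC=ifort    # Intel Fortran compiler
export F77=ifort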

Compiling WPS

After some time has passed, you will be prompted again, this time for WPS:

Will use NETCDF in dir: /scratch/wdizon/wrf_compiles/libraries/netcdf
Using WRF I/O library in WRF build identified by $WRF_DIR: /scratch/wdizon/wrf_compiles/WRF-4.2.2
Found Jasper environment variables for GRIB2 support...
  $JASPERLIB = /scratch/wdizon/wrf_compiles/libraries/grib2/lib
  $JASPERINC = /scratch/wdizon/wrf_compiles/libraries/grib2/include
------------------------------------------------------------------------
Please select from among the following supported platforms.

   1.  Linux x86_64, gfortran    (serial)
   2.  Linux x86_64, gfortran    (serial_NO_GRIB2)
   3.  Linux x86_64, gfortran    (dmpar)
[omit extra listings]
  38.  Cray XC CLE/Linux x86_64, Intel compiler   (serial_NO_GRIB2)
  39.  Cray XC CLE/Linux x86_64, Intel compiler   (dmpar)
  40.  Cray XC CLE/Linux x86_64, Intel compiler   (dmpar_NO_GRIB2)

Enter selection [1-40] : 1

Select 1, the gfortran serial option. Whichever compiler you use, choose a serial variant.

At the end of this step, you will have a working WRF compilation built with MPICH and GCC located at:

WRF => /scratch/$USER/wrf_compiles/WRF-4.2.2 ***

WPS => /scratch/$USER/wrf_compiles/WRF-4.2.2/WPS-4.2

These match the path set in compiler_variables as WRF_INSTALL. WPS is saved within the WRF directory so that any number of WRF installs can co-exist in parallel within the same wrf_compiles directory.

*** If WRF_TARBALL_DIRNAME is modified by the user, the directory name will match that value.

Indication of Success

At the end of the script, you should see Compilation Complete and you will be returned to the prompt.

Compilation Complete
[s:0] [wdizon@c001:/scratch/wdizon/letsbuildwrf]$

Usage Notes

Alternate Modules

These steps build MPICH manually and do not use any system modules (e.g., from module load). Using these binaries will often require full paths to the compiled binaries in your SBATCH scripts and interactive sessions. Example:

mpiexec or mpiexec.hydra might be your preferred MPI launcher, but you must invoke it with:

/scratch/$USER/wrf_compiles/libraries/mpich/bin/mpiexec

Omitting the full path means the shell may pick up another mpiexec implementation found elsewhere in your $PATH; since WRF was not built against that MPI, it may run poorly or not at all.
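A quick way to check what would otherwise be picked up is to compare the two launchers side by side (a sketch; --version prints the MPICH/HYDRA build details):

$ which mpiexec        # whatever other MPI happens to be first in $PATH, if any
$ /scratch/$USER/wrf_compiles/libraries/mpich/bin/mpiexec --version   # the MPICH built by build_deps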

$USER variable

The $USER variable expands to your login username, which matches your ASURITE ID. It is used here to simplify copy/paste operations rather than expecting you to type your own path explicitly, e.g., /scratch/wdizon, which is an equally workable alternative.
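For example, for the hypothetical user wdizon:

$ echo /scratch/$USER
/scratch/wdizon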

Repeated Builds

If your WRF code changes but your libraries/dependencies remain constant, you can speed up your testing by following these steps (a condensed command sketch follows the list):

  1. Identify the tarball needed for the WRF source code and place it in /scratch/$USER/letsbuildwrf/tarballs/4.x.x

  2. ./build_deps to completion

  3. Identify the newly created directory name in /scratch/$USER/letsbuildwrf/src
    In this example, the extracted tarball created a dir called WRF_2_THERMAL_URBCOL_CBC, where standard source files might have created WRF-4.2.2.

  4. In compiler_variables, make the changes to reflect the directory name from step 3:

    export WRF_TARBALL_DIRNAME=WRF_2_THERMAL_URBCOL_CBC
    export WRF_PREFERRED_DIRNAME=WRF_2_THERMAL_URBCOL_CBC
  5. ./build_wrf

  6. Test WRF, use WRF, and whenever you need to make changes:

    1. Make changes to source in /scratch/$USER/letsbuildwrf/src/<dir>

    2. ./build_wrf

    3. Repeat #6
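Put together, one cycle of the repeat-build workflow might look like the following sketch (the tarball name matches the example above; its source location ~/ is hypothetical):

cd /scratch/$USER/letsbuildwrf
cp ~/WRF_2_THERMAL_URBCOL_CBC.tgz tarballs/4.x.x/   # step 1: stage the source tarball
./build_deps                                        # step 2: extract sources, verify libraries
ls src/                                             # step 3: note the newly extracted directory name
vi compiler_variables                               # step 4: set WRF_TARBALL_DIRNAME / WRF_PREFERRED_DIRNAME
./build_wrf                                         # step 5: build WRF and WPS
# step 6: edit src/WRF_2_THERMAL_URBCOL_CBC/..., then re-run ./build_wrf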

Now that it is built:

WRF’s files are located at /scratch/$USER/wrf_compiles/WRF-X.Y.Z/run.

WRF is built using MPICH; launch it with mpiexec.hydra, e.g., /scratch/$USER/wrf_compiles/libraries/mpich/bin/mpiexec.hydra -np 12 ./wrf.exe

If you run this interactively, be sure to choose -c <num cores> to match -np <num cores>. If you are submitting this as a batch job, make sure your #SBATCH -c <num cores> matches (see the batch sketch after the example below).

$ interactive -c 12
$ cd /scratch/$USER/wrf_compiles/WRF-X.Y.Z/run
$ /scratch/$USER/wrf_compiles/libraries/mpich/bin/mpiexec.hydra -np 12 ./wrf.exe
starting wrf task 0 of 12
starting wrf task 1 of 12
starting wrf task 2 of 12
starting wrf task 3 of 12
starting wrf task 4 of 12
starting wrf task 5 of 12
starting wrf task 6 of 12
starting wrf task 7 of 12
starting wrf task 9 of 12
starting wrf task 11 of 12
starting wrf task 10 of 12
starting wrf task 8 of 12
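For a batch submission, a minimal SBATCH sketch under the same assumptions (single node, 12 cores; the job name and wall time are placeholders to adjust for your own runs) could be:

#!/bin/bash
#SBATCH -J wrf_run            # placeholder job name
#SBATCH -N 1                  # single node
#SBATCH -c 12                 # must match the -np value below
#SBATCH -t 04:00:00           # placeholder wall time

cd /scratch/$USER/wrf_compiles/WRF-X.Y.Z/run
/scratch/$USER/wrf_compiles/libraries/mpich/bin/mpiexec.hydra -np 12 ./wrf.exe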