GAMESS can be compiled on Sol without any special permissions. You can build it entirely start-to-finish as your own unprivileged supercomputer user.
The steps outlined on this page compile GAMESS 2022r2, though the scripts are not limited to this version. However, building a different GAMESS version (older or newer) may require changes this tutorial cannot anticipate.
It is recommended to complete this tutorial with the files unchanged first, to familiarize yourself with the process and the steps involved.
Setting up the Compilation Process
We first need to configure GAMESS to use the desired compiler and functionality. Running ./config initiates the configuration process. As the instructions state, it is valuable to have two terminals open: one for the configuration walkthrough and another to discover and validate file paths to compilers, math libraries, and MPI.
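As a sketch of what this looks like in practice (assuming the GAMESS 2022r2 source is already unpacked at the path used throughout this page, and that the Intel compiler and MPI environments are already initialized in your shell):

# Terminal 1: walk through the interactive configuration
$ cd /packages/apps/gamess/2022r2
$ ./config

# Terminal 2: look up the values the prompts ask about
$ which ifort && ifort --version   # Fortran compiler location and major version
$ echo $MKLROOT                    # MKL location (set by the Intel environment)
$ which mpirun                     # Intel MPI location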
At the end of the process, install.info will be generated. This file can be used directly (in place of re-running the configuration) or as a reference for answering each of the prompts in the configuration itself. Here is a working configuration for Sol using the Intel compilers.
At a minimum, the following paths should be updated to reflect the location of your GAMESS source tree, which you might place in your $HOME or scratch directory.
setenv GMS_PATH /packages/apps/gamess/2022r2
setenv GMS_BUILD_DIR /packages/apps/gamess/2022r2
$ cat install.info
#!/bin/csh -f
# Compilation configuration for GAMESS
# Generated on c010.sol.rc.asu.edu
# Generated at Tue Mar 12 07:31:04 MST 2024

#               GAMESS Paths                #
setenv GMS_PATH /packages/apps/gamess/2022r2
setenv GMS_BUILD_DIR /packages/apps/gamess/2022r2

#               Machine Type                #
setenv GMS_TARGET linux64
setenv GMS_HPC_SYSTEM_TARGET generic

#          FORTRAN Compiler Setup           #
setenv GMS_FORTRAN ifort
setenv GMS_IFORT_VERNO 19

#        Mathematical Library Setup         #
setenv GMS_MATHLIB mkl
setenv GMS_MATHLIB_PATH /packages/apps/intel/compilers_and_libraries_2020.4.304/linux/mkl/lib/intel64
setenv GMS_MKL_VERNO 12
setenv GMS_LAPACK_LINK_LINE ""

#    parallel message passing model setup
setenv GMS_DDI_COMM mpi
setenv GMS_MPI_LIB impi
setenv GMS_MPI_PATH /packages/apps/intel/compilers_and_libraries_2020.4.304/linux/mpi/intel64

#  Michigan State University Coupled Cluster  #
setenv GMS_MSUCC false

#      LIBCCHEM CPU/GPU Code Interface      #
setenv GMS_LIBCCHEM false

#     Intel Xeon Phi Build: none/knc/knl    #
setenv GMS_PHI none

#      Shared Memory Type: sysv/posix       #
setenv GMS_SHMTYPE sysv

#    GAMESS OpenMP support: true/false      #
setenv GMS_OPENMP false

#     GAMESS LibXC library: true/false      #
setenv GMS_LIBXC true

#      GAMESS MDI library: true/false       #
setenv GMS_MDI false

#         VM2 library: true/false           #
setenv GMS_VM2 false

#           Tinker: true/false              #
setenv TINKER false

#           VB2000: true/false              #
setenv VB2000 false

#            XMVB: true/false               #
setenv XMVB false

#             NEO: true/false               #
setenv NEO false

#             NBO: true/false               #
setenv NBO false

####################################################
# Added any additional environmental variables or  #
# module loads below if needed.                    #
####################################################

# Capture floating-point exceptions #
#setenv GMS_FPE_FLAGS '-fpe0'
setenv GMS_FPE_FLAGS ''
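With install.info in place at the top of the GAMESS source tree, the compilation itself follows the standard GAMESS build sequence. The commands below are only a sketch of the classic csh build scripts that ship with GAMESS; consult the installation notes bundled with your version for the authoritative steps. The 00 version number matches what is used later on this page.

$ cd /packages/apps/gamess/2022r2       # your GMS_PATH
$ cd ddi && ./compddi >& compddi.log    # build the DDI message-passing layer
$ cd ..
$ ./compall >& compall.log              # compile all GAMESS source files
$ ./lked gamess 00 >& lked.log          # link the executable gamess.00.x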
Modifying rungms for Use
rungms contains all the run-time pieces for engaging MPI and the other components that ensure GAMESS efficiently uses all the resources allocated to it. The stock rungms file contains logic to handle every interconnect, alternative options such as launching over SSH, and other highly hardware-dependent functionality.
For simplicity, the following rungms has been stripped down to Intel MPI only.
As before, the required configuration changes are found at the top of the file:
set TARGET=mpi
set SCR=/scratch/$USER/gamess/scr
set USERSCR=/scratch/$USER/gamess/restart
set GMSPATH=/packages/apps/gamess/2022r2
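Note that the stripped-down script shown below does not create the SCR and USERSCR directories for you, so create them once before your first run (the paths match the settings above):

$ mkdir -p /scratch/$USER/gamess/scr
$ mkdir -p /scratch/$USER/gamess/restart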
$ cat rungms #!/bin/csh -f # # last update = 17 Aug 2016 # # This is a C-shell script to execute GAMESS, by typing # rungms JOB VERNO NCPUS PPN LOGN >& JOB.log & # JOB is the name of the 'JOB.inp' file to be executed, # VERNO is the number of the executable you chose at 'lked' time, # NCPUS is the number of processors to be used, or the name of # a host list file (see an example below, starting from "node1 4". # PPN processors (actually core?) per node (MPI and cray-xt) # LOGN logical node size (how many cores per logical node), # as used by GDDI runs. # For MPI, LOGN should be between 1 and PPN. # For sockets, LOGN is only used when run on 1 multicore node. # For other cases, prepare a node file passed as NCPUS; # In this node file, repeat node names several times, for # example, for 2 physical nodes with 12 cores split into # 6 logical nodes with 4 cores, use a node file like this: # node1 4 # node1 4 # node1 4 # node2 4 # node2 4 # node2 4 # A physical node can be split into an integer number of # logical nodes. # For 8 cores, meaningful values of LOGN are 1, 2 and 4. # Simple guide: you may like to define NGROUP logical nodes, # where NGROUP is a parameter in $GDDI. # FSAVE extra files to save, example: F10, F06 etc. # Multiple choices are allowed, separated by commas, as in # rungms exam01 00 1 1 0 F10,F40 # Pertinent files with rank extensions are saved from all nodes. # If you are not sure what PPN and/or LOGN may be, set them to 0 # as place holders if you want to define only FSAVE. # # Unfortunately execution is harder to standardize than compiling, # so you have to do a bit more than name your machine type here: # # a) choose the target for execution from the following list: # sockets, mpi, ga, altix, cray-xt, ibm64-sp, sgi64, serial # IBM Blue Gene uses separate execution files: ~/gamess/machines/ibm-bg # # choose "sockets" if your compile time target was any of these: # ibm64, mac64 # as all of these systems use TCP/IP sockets. Do not name your # specific compile time target, instead choose "sockets". # # If your target was 'linux64', you may chose "sockets" or "mpi", # or "serial", according to how you chose to compile. The MPI # example below should be carefully matched against info found # in 'readme.ddi'! # # Choose 'ga' if and only if you did a 'linux64' build linked # to the LIBCCHEM software for CPU/GPU computations. # # Search on the words typed in capital letters just below # in order to find the right place to choose each one: # b) choose a directory SCR where large temporary files can reside. # This should be the fastest possible disk access, very spacious, # and almost certainly a local disk. # Translation: do not put these files on a slow network file system! # c) choose a directory USERSCR on the file server where small ASCII # supplementary output files should be directed. # Translation: it is OK to put this on a network file system! # d) name the location GMSPATH of your GAMESS binary. # e) change the the VERNO default to the version number you chose when # running "lked" as the VERNO default, and maybe NCPUS' default. # f) make sure that the ERICFMT file name and MCPPATH pathname point to # your file server's GAMESS tree, so that all runs can find them. # Again, a network file system is quite OK for these two. # g) customize the execution section for your target below, # each has its own list of further requirements. # h) it is unwise to have every user take a copy of this script, as you # can *NEVER* update all the copies later on. 
Instead, it is better # to ask other users to create an alias to point to a common script, # such as this in their C-shell .login file, # alias gms '/u1/mike/gamess/rungms' # i) it is entirely possible to make 'rungms' run in a batch queue, # be it PBS, DQS, et cetera. This is so installation dependent # that we leave it to up to you, although we give examples. # See ~/gamess/tools, where there are two examples of "front-end" # scripts which can use this file as the "back-end" actual job. # We use the front-end "gms" on local Infiniband clusters using # both Sun Grid Engine (SGE), and Portable Batch System (PBS). # See also a very old LoadLeveler "ll-gms" for some IBM systems. # set TARGET=mpi set SCR=/scratch/$USER/gamess/scr set USERSCR=/scratch/$USER/gamess/restart set GMSPATH=/packages/apps/gamess/2022r2 # # Get any MDI-related options and remove them from the argument list # set newargv=() set iarg=0 setenv GAMESS_MDI_OPTIONS "" set mdi_next=false while ( "$iarg" != "$#argv" ) @ iarg++ echo "$iarg : $argv[$iarg]" if ($mdi_next == true) then setenv GAMESS_MDI_OPTIONS "$argv[$iarg]" set mdi_next=false continue endif if ("$argv[$iarg]" == "-mdi") then set mdi_next=true continue endif set newargv=( $newargv $argv[$iarg] ) end set argv=( $newargv ) # # Catch unallowed characters in input file name if ( "$1" =~ *[^A-Za-z0-9\-_./\\]* ) then echo "Your input file name contains non-allowed symbols." >> /dev/stderr echo "Allowed symbols are the following: '0-9A-Za-z-_./\'" >> /dev/stderr echo "Please, rename input file and restart your calculation." >> /dev/stderr exit 4 endif # # Check VERNO name if ( "$2" =~ *[^A-Za-z0-9\-_.]* ) then echo "GAMESS executable's name contains non-allowed symbols." >> /dev/stderr echo "Allowed symbols are the following: '0-9A-Za-z-_.'" >> /dev/stderr echo "Please, rename GAMESS executable and restart your calculation." >> /dev/stderr exit 4 endif # # Check number of CPUs, allow for argument 3 to be a file name if ( ! -e "$3" && "$3" =~ *[^0-9]* ) then echo "Number of CPUs is not integer or hostfile does not exist." >> /dev/stderr echo "Please, specify proper number of CPUs or hostsfile path." >> /dev/stderr exit 4 endif # # Check processors per node if ( "$4" =~ *[^0-9]* ) then echo "Number of processors per node is not integer." >> /dev/stderr echo "Please, specify proper number of processors per node." >> /dev/stderr exit 4 endif if ( "$5" =~ *[^0-9]* ) then echo "Logical node size is not integer." >> /dev/stderr echo "Please, specify a proper number or remove the 5th argument." >> /dev/stderr exit 4 endif # set JOB=$1 # name of the input file xxx.inp, give only the xxx part set VERNO=$2 # revision number of the executable created by 'lked' step set NCPUS=$3 # number of compute processes to be run # $4 is treated below. set LOGN=$5 # number of cores per logical node set FSAVE="$6" # extra files to save, example: F10, F06 etc. # # provide defaults if last two arguments are not given to this script if (null$VERNO == null) set VERNO=00 if (null$NCPUS == null) set NCPUS=1 if (null$LOGN == null) set LOGN=0 # # ---- the top third of the script is input and other file assignments ---- # echo "----- GAMESS execution script 'rungms' -----" set master=`hostname` echo This job is running on host $master echo under operating system `uname` at `date` # # Batch scheduler, if any, should provide its own working directory, # on every assigned node (if not, modify scheduler's prolog script). 
# The SCHED variable, and scheduler assigned work space, is used # below only in the MPI section. See that part for more info. set SCHED=none # echo "Available scratch disk space (Kbyte units) at beginning of the job is" df -k $SCR echo "GAMESS temporary binary files will be written to $SCR" echo "GAMESS supplementary output files will be written to $USERSCR" # this added as experiment, February 2007, as 8 MBytes # increased to 32 MB in October 2013 for the VB2000 code. # its intent is to detect large arrays allocated off the stack limit stacksize 32768 # Grab a copy of the input file. # In the case of examNN jobs, file is in tests/standard subdirectory. # In the case of exam-vbNN jobs, file is in vb2000's tests subdirectory. set FULL_PATH=`readlink -f $JOB` set JOB_PATH=`dirname $FULL_PATH` if ($JOB:r.inp == $JOB) set JOB=$JOB:r # strip off possible .inp set JOB=`basename $JOB` echo "Copying input file $JOB_PATH/$JOB.inp to your run's scratch directory..." if (-e $JOB_PATH/$JOB.inp) then set echo cp $JOB_PATH/$JOB.inp $SCR/$JOB.F05 unset echo else if (-e tests/standard/$JOB.inp) then set echo cp tests/standard/$JOB.inp $SCR/$JOB.F05 unset echo else if (-e tests/$JOB.inp) then set echo cp tests/$JOB.inp $SCR/$JOB.F05 unset echo else echo "Input file $JOB.inp does not exist." >> /dev/stderr echo "This job expected the input file to be in directory `pwd`" >> /dev/stderr echo "Please fix your file name problem, and resubmit." >> /dev/stderr exit 4 endif endif endif # define many environment variables setting up file names. # anything can be overridden by a user's own choice, read 2nd. # source $GMSPATH/gms-files.csh if (-e $HOME/.gmsrc) then echo "reading your own $HOME/.gmsrc" source $HOME/.gmsrc endif # # In case GAMESS has been interfaced to the Natural Bond Orbital # analysis program (http://www.chem.wisc.edu/~nbo6), you must # specify the full path name to the NBO binary. # This value is ignored if NBO has not been linked to GAMESS. # setenv NBOEXE /u1/mike/nbo6/bin/nbo6.i8.exe # # choose remote shell execution program. # Parallel run do initial launch of GAMESS on remote nodes by the # following program. Note that the authentication keys for ssh # must have been set up correctly. # If you wish, choose 'rsh/rcp' using .rhosts authentication instead. setenv DDI_RSH ssh setenv DDI_RCP scp # # If a $GDDI input group is present, the calculation will be using # subgroups within DDI (the input NGROUP=0 means this isn't GDDI). # # The master within each group must have a copy of INPUT, which is # dealt with below (prior to execution), once we know something about # the host names where INPUT is required. The INPUT does not have # the global rank appended to its name, unlike all other files. # # OUTPUT and PUNCH (and perhaps many other files) are opened on all # processes (not just the master in each subgroup), but unique names # will be generated by appending the global ranks. Note that OUTPUT # is not opened by the master in the first group, but is used by all # other groups. Typically, the OUTPUT from the first group's master # is the only one worth saving, unless perhaps if runs crash out. # # The other files that GDDI runs might use are already defined above. 
# set ngddi=`grep -i '^ \$GDDI' $SCR/$JOB.F05 | grep -iv 'NGROUP=0 ' | wc -l` if ($ngddi > 0) then set GDDIjob=true echo "This is a GDDI run, keeping various output files on local disks" set echo setenv OUTPUT $SCR/$JOB.F06 setenv PUNCH $SCR/$JOB.F07 unset echo else set GDDIjob=false endif # replica-exchange molecular dynamics (REMD) # option is active iff runtyp=md as well as mremd=1 or 2. # It utilizes multiple replicas, one per subgroup. # Although REMD is indeed a GDDI kind of run, it handles its own # input file manipulations, but should do the GDDI file defs above. set runmd=`grep -i runtyp=md $SCR/$JOB.F05 | wc -l` set mremd=`grep -i mremd= $SCR/$JOB.F05 | grep -iv 'mremd=0 ' | wc -l` if (($mremd > 0) && ($runmd > 0) && ($ngddi > 0)) then set GDDIjob=false set REMDjob=true echo "This is a REMD run, keeping various output files on local disks" set echo setenv TRAJECT $SCR/$JOB.F04 setenv RESTART $USERSCR/$JOB.rst setenv REMD $USERSCR/$JOB.remd unset echo set GDDIinp=(`grep -i '^ \$GDDI' $SCR/$JOB.F05`) set numkwd=$#GDDIinp @ g = 2 @ gmax = $numkwd - 1 while ($g <= $gmax) set keypair=$GDDIinp[$g] set keyword=`echo $keypair | awk '{split($1,a,"="); print a[1]}'` if (($keyword == ngroup) || ($keyword == NGROUP)) then set nREMDreplica=`echo $keypair | awk '{split($1,a,"="); print a[2]}'` @ g = $gmax endif @ g++ end unset g unset gmax unset keypair unset keyword else set REMDjob=false endif # data left over from a previous run might be precious, stop if found. if ( (-e $CASINO) || (-e $CIMDMN) || (-e $CIMFILE) || (-e $COSDATA) \ || (-e $COSPOT) || (-e $GAMMA) || (-e $MAKEFP) \ || (-e $MDDIP) || (-e $OPTHES1) || (-e $OPTHES2) || (-e $PUNCH) \ || (-e $QMWAVE) || (-e $RESTART) || (-e $TRAJECT) ) then echo "Please save, rename, or erase these files from a previous run:" >> /dev/stderr echo " $CASINO," >> /dev/stderr echo " $CIMDMN," >> /dev/stderr echo " $CIMFILE," >> /dev/stderr echo " $COSDATA," >> /dev/stderr echo " $COSPOT," >> /dev/stderr echo " $GAMMA," >> /dev/stderr echo " $MAKEFP," >> /dev/stderr echo " $MDDIP," >> /dev/stderr echo " $OPTHES1," >> /dev/stderr echo " $OPTHES2," >> /dev/stderr echo " $PUNCH," >> /dev/stderr echo " $QMWAVE," >> /dev/stderr echo " $RESTART, and/or" >> /dev/stderr echo " $TRAJECT," >> /dev/stderr echo "and then resubmit this computation." >> /dev/stderr exit 4 endif # ---- the middle third of the script is to execute GAMESS ---- # # we show execution sections that should work for # sockets, mpi, altix, cray-xt, serial # which are not mentioned at the top of this file, as they are quite stale. # # Most workstations run DDI over TCP/IP sockets, and therefore execute # according to the following clause. The installer must # a) Set the path to point to the DDIKICK and GAMESS executables. # b) Build the HOSTLIST variable as a word separated string, i.e. ()'s. # There should be one host name for every compute process that is # to be run. DDIKICK will automatically generate a set of data # server processes (if required) on the same hosts. # An extended explanation of the arguments to ddikick.x can be found # in the file gamess/ddi/readme.ddi, if you have any trouble executing. # # - a typical MPI example - # # This section is customized to two possible MPI libraries: # Intel MPI or MVAPICH2 (choose below). # We do not know tunings to use openMPI correctly!!! 
# This section is customized to two possible batch schedulers: # Sun Grid Engine (SGE), or Portable Batch System (PBS) # # See ~/gamess/tools/gms, which is a front-end script to submit # this file 'rungms' as a back-end script, to either scheduler. # # if you are using some other MPI: # See ~/gamess/ddi/readme.ddi for information about launching # processes using other MPI libraries (each may be different). # Again: we do not know how to run openMPI effectively. # # if you are using some other batch scheduler: # Illustrating other batch scheduler's way's of providing the # hostname list is considered beyond the scope of this script. # Suffice it to say that # a) you will be given hostnames at run time # b) a typical way is a disk file, named by an environment # variable, containing the names in some format. # c) another typical way is an blank separated list in some # environment variable. # Either way, whatever the batch scheduler gives you must be # sliced-and-diced into the format required by your MPI kickoff. # if ($TARGET == mpi) then # # Besides the usual three arguments to 'rungms' (see top), # we'll pass in a "processers per node" value, that is, # all nodes are presumed to have equal numbers of cores. # set PPN=$4 # # Allow for compute process and data servers (one pair per core) # note that NCPUS = #cores, and NPROCS = #MPI processes # @ NPROCS = $NCPUS + $NCPUS # # User customization required here: # 1. specify your MPI choice: impi/mpich/mpich2/mvapich2/openmpi # Note that openMPI will probably run at only half the speed # of the other MPI choices, so openmpi should not be used! # 2. specify your MPI library's top level path just below, # this will have directories like include/lib/bin below it. # 3. a bit lower, perhaps specify your ifort path information. # set DDI_MPI_CHOICE=impi # # ISU's various clusters have various iMPI paths, in this order: # dynamo/chemphys2011/exalted/bolt/CyEnce/CJ if ($DDI_MPI_CHOICE == impi) then set DDI_MPI_ROOT=/packages/apps/intel/compilers_and_libraries_2020.4.304/linux/mpi/intel64 endif # # ISU's various clusters have various MVAPICH2 paths, in this order: # dynamo/exalted/bolt/thebunny/CJ # pre-pend our MPI choice to the library and execution paths. switch ($DDI_MPI_CHOICE) case impi: if ($?LD_LIBRARY_PATH) then setenv LD_LIBRARY_PATH $DDI_MPI_ROOT/lib:$LD_LIBRARY_PATH else setenv LD_LIBRARY_PATH $DDI_MPI_ROOT/lib endif set path=($DDI_MPI_ROOT/bin $path) rehash breaksw default: breaksw endsw # # you probably don't need to modify the kickoff style (see below). # if ($DDI_MPI_CHOICE == impi) set MPI_KICKOFF_STYLE=hydra # # Argonne's MPICH2, offers two possible kick-off procedures, # guided by two disk files (A and B below). # Other MPI implementations are often derived from Argonne's, # and so usually offer these same two styles. # For example, iMPI and MVAPICH2 can choose either "3steps" or "hydra", # but openMPI uses its own Open Run Time Environment, "orte". # # Kickoff procedure #1 uses mpd demons, which potentially collide # if the same user runs multiple jobs that end up on the same nodes. # This is called "3steps" here because three commands (mpdboot, # mpiexec, mpdallexit) are needed to run. # # Kickoff procedure #2 is little faster, easier to use, and involves # only one command (mpiexec.hydra). It is called "hydra" here. # # Kickoff procedure #3 is probably unique to openMPI, "orte". # # A. 
build HOSTFILE, # This file is explicitly used only by "3steps" initiation, # but it is always used below during file cleaning, # and while creating the PROCFILE at step B, # so we always make it. # setenv HOSTFILE $SCR/$JOB.nodes.mpd if (-f "$HOSTFILE" && -w "$HOSTFILE") rm "$HOSTFILE" touch $HOSTFILE # if ($NCPUS == 1) then # Serial run must be on this node itself! echo `hostname` >> $HOSTFILE set NNODES=1 else # Parallel run gets node names from scheduler's assigned list: set NNODES=1 if ($SCHED == SGE) then uniq $TMPDIR/machines $HOSTFILE set NNODES=`wc -l $HOSTFILE` set NNODES=$NNODES[1] endif if ($SCHED == PBS) then uniq $PBS_NODEFILE $HOSTFILE set NNODES=`wc -l $HOSTFILE` set NNODES=$NNODES[1] endif endif # uncomment next lines if you need to debug host configuration. #--echo '-----debug----' #--echo HOSTFILE $HOSTFILE contains #--cat $HOSTFILE #--echo '--------------' # # B. the next file forces explicit "which process on what node" rules. # The contents depend on the kickoff style. This file is how # we tell MPI to double-book the cores with two processes, # thus accounting for both compute processes and data servers. # setenv PROCFILE $SCR/$JOB.processes.mpd if (-f "$PROCFILE" && -w "$PROCFILE") rm "$PROCFILE" touch $PROCFILE switch ($MPI_KICKOFF_STYLE) case hydra: if (! $?PPN || $PPN == "") then echo "PPN is unset or empty, initializing to 1" set PPN = 1 endif if ($NNODES == 1) then # when all processes are inside a single node, it is simple! # all MPI processes, whether compute processes or data servers, # are just in this node. (note: NPROCS = 2*NCPUS!) @ PPN2 = $PPN + $PPN echo "`hostname`:$NPROCS" > $PROCFILE else # For more than one node, we want PPN compute processes on # each node, and of course, PPN data servers on each. # Hence, PPN2 is doubled up. # Front end script 'gms' is responsible to ensure that NCPUS # is a multiple of PPN, and that PPN is less than or equals # the actual number of cores in the node. @ PPN2 = $PPN + $PPN @ n=1 while ($n <= $NNODES) set host=`sed -n -e "$n p" $HOSTFILE` set host=$host[1] echo "${host}:$PPN2" >> $PROCFILE @ n++ end endif breaksw endsw # uncomment next lines if you need to debug host configuration. #--echo '-----debug----' #--echo PROCFILE $PROCFILE contains #--cat $PROCFILE #--echo '--------------' # # ==== values that influence the MPI operation ==== # # tunings below are specific to Intel MPI 3.2 and/or 4.0: # a very important option avoids polling for incoming messages # which allows us to compile DDI in pure "mpi" mode, # and get sleeping data servers if the run is SCF level. # trial and error showed process pinning slows down GAMESS runs, # set debug option to 5 to see messages while kicking off, # set debug option to 200 to see even more messages than that, # set statistics option to 1 or 2 to collect messaging info, # iMPI 4.0 on up defaults fabric to shm,dapl: dapl only is faster. # if ($DDI_MPI_CHOICE == impi) then set echo #seemed to cause trouble #setenv I_MPI_WAIT_MODE enable setenv I_MPI_PIN disable setenv I_MPI_DEBUG 0 setenv I_MPI_STATS 0 # next two select highest speed mode of an Infiniband setenv I_MPI_FABRICS ofi # Force use of "shared memory copy" large message transfer mechanism # The "direct" mechanism was introduced and made default for IPS 2017, # and makes GAMESS hang when DD_GSum() is called. See IPS 2017 release notes # for more details. setenv I_MPI_SHM_LMT shm # next two select TCP/IP, a slower way to use Infiniband. # The device could be eth0 if IP over IB is not enabled. unset echo endif # # ... 
thus ends setting up the process initiation, # tunings, pathnames, library paths, for the MPI. # # # Compiler library setup (ifort) # just ignore this (or comment out) if you're using gfortran. # ISU's various clusters have various compiler paths, in this order: # dynamo/chemphys2011/exalted/bolt/CyEnce/thebunny/CJ # setenv LD_LIBRARY_PATH /packages/apps/intel/compilers_and_libraries_2020.4.304/linux/compiler/lib/intel64:$LD_LIBRARY_PATH # the next two setups are GAMESS-related # # Set up Fragment MO runs (or other runs exploiting subgroups). # One way to be sure that the master node of each subgroup # has its necessary copy of the input file is to stuff a # copy of the input file onto every single node right here. if ($GDDIjob == true) then set nmax=`wc -l $HOSTFILE` set nmax=$nmax[1] set lasthost=$master echo GDDI has to copy your input to every node.... @ n=2 # input has already been copied into the master node. while ($n <= $nmax) set host=`sed -n -e "$n p" $HOSTFILE` set host=$host[1] if ($host != $lasthost) then echo $DDI_RCP $SCR/$JOB.F05 ${host}:$SCR/$JOB.F05 $DDI_RCP $SCR/$JOB.F05 ${host}:$SCR/$JOB.F05 set lasthost=$host endif @ n++ end # The default for the logical node size is all cores existing # in the physical node (just skip setting the value). # GDDI runs require that the number of groups should not be # less than the number of logical nodes. # For example, if you run on 2 physical nodes with 12 cores each # and you want to use 6 GDDI groups, then you would set LOGN to 4 # (12*2/6). # By doing this, you will get 6 groups with 4 cores each; # if you do not do this (and run with 2 nodes), you can only ask # for at most 2 groups. if($LOGN != 0) setenv DDI_LOGICAL_NODE_SIZE $LOGN endif if ($REMDjob == true) then source $GMSPATH/tools/remd.csh $TARGET $nREMDreplica if ($status > 0) exit $status endif # # Now, at last, we can actually kick-off the MPI processes... # echo "MPI kickoff will run GAMESS on $NCPUS cores in $NNODES nodes." echo "The binary to be executed is $GMSPATH/gamess.$VERNO.x" echo "MPI will run $NCPUS compute processes and $NCPUS data servers," echo " placing $PPN of each process type onto each node." echo "The scratch disk space on each node is $SCR, with free space" df -k $SCR # chdir $SCR # switch ($MPI_KICKOFF_STYLE) case hydra: if ($DDI_MPI_CHOICE == impi) then set echo setenv I_MPI_HYDRA_ENV all setenv I_MPI_PERHOST $PPN2 unset echo endif set echo /packages/apps/intel/compilers_and_libraries_2020.4.304/linux/mpi/intel64/bin/mpiexec.hydra -launcher slurm -np $NPROCS \ $GMSPATH/gamess.$VERNO.x unset echo breaksw case default: echo "rungms: No valid DDI-over-MPI startup procedure was chosen." >> /dev/stderr exit endsw # keep HOSTFILE, as it is passed to the file erasing step below if (-f "$PROCFILE" && -w "$PROCFILE") rm "$PROCFILE" # endif # ------ end of the MPI execution section ------- # # ---- the bottom third of the script is to clean up all disk files ---- # It is quite useful to display to users how big the disk files got to be. # echo ----- accounting info ----- # # in the case of GDDI runs, we save the first PUNCH file only. # If something goes wrong, the .F06.00x, .F07.00x, ... from the # other groups are potentially interesting to look at. if ($GDDIjob == true) cp $SCR/$JOB.F07 $USERSCR/$JOB.dat # # Clean up the master's scratch directory. # echo Files used on the master node $master were: ls -lF $SCR/$JOB.* set nonomatch # Save user specified files. 
set savelist="" if($SCR == $USERSCR) set FSAVE="" if(null$FSAVE != null) then echo Saving $FSAVE files on the master node. foreach i (`echo $FSAVE | tr "," " "`) set savelist=("$savelist" "$JOB.${i}*") mv $SCR/$JOB.${i}* $USERSCR end endif foreach file ("$SCR/$JOB.F"*) if (-f "$file" && -w "$file") rm "$file" end unset nonomatch unset file # # Clean/Rescue any files created by the VB2000 plug-in if (-e $SCR/$JOB.V84) mv $SCR/$JOB.V84 $USERSCR # New *.gpfep and *.dmat from VB2000 3.0 if (-e $SCR/$JOB.gpfep) mv $SCR/$JOB.gpfep $USERSCR if (-e $SCR/$JOB.dmat) mv $SCR/$JOB.dmat $USERSCR if (-e $SCR/$JOB.V80) then set nonomatch foreach file ("$SCR/$JOB.V"*) if (-f "$file" && -w "$file") rm "$file" end unset nonomatch unset file endif if (-e $SCR/$JOB.TEMP02) then set nonomatch foreach file ("$SCR/$JOB.TEMP"*) if (-f "$file" && -w "$file") rm "$file" end unset nonomatch unset file endif if (-e $SCR/$JOB.orb) mv $SCR/$JOB.orb $USERSCR if (-e $SCR/$JOB.vec) mv $SCR/$JOB.vec $USERSCR if (-e $SCR/$JOB.mol) mv $SCR/$JOB.mol $USERSCR if (-e $SCR/$JOB.molf) mv $SCR/$JOB.molf $USERSCR if (-e $SCR/$JOB.mkl) mv $SCR/$JOB.mkl $USERSCR if (-e $SCR/$JOB.xyz) mv $SCR/$JOB.xyz $USERSCR ls $SCR/${JOB}-*.cube > $SCR/${JOB}.lis if (! -z $SCR/${JOB}.lis) mv $SCR/${JOB}*.cube $USERSCR if (-f "$SCR/${JOB}.lis" && -w "$SCR/${JOB}.lis") rm "$SCR/${JOB}.lis" ls $SCR/${JOB}-*.grd > $SCR/${JOB}.lis if (! -z $SCR/${JOB}.lis) mv $SCR/${JOB}*.grd $USERSCR if (-f "$SCR/${JOB}.lis" && -w "$SCR/${JOB}.lis") rm "$SCR/${JOB}.lis" ls $SCR/${JOB}-*.csv > $SCR/${JOB}.lis if (! -z $SCR/${JOB}.lis) mv $SCR/${JOB}*.csv $USERSCR if (-f "$SCR/${JOB}.lis" && -w "$SCR/${JOB}.lis") rm "$SCR/${JOB}.lis" # # Clean up scratch directory of remote nodes. # # This may not be necessary, e.g. on a T3E where all files are in the # same directory, and just got cleaned out by the previous 'rm'. Many # batch queue managers provide cleaning out of scratch directories. # It still may be interesting to the user to see the sizes of files. # # The 'lasthost' business prevents multiple cleanup tries on SMP nodes. # # This particular example is for the combination iMPI, w/SGE or PBS. # We have inherited a file of unique node names from above. # There is an option to rescue the output files from group DDI runs, # such as FMO, in case you need to see the other group's outputs. if ($TARGET == mpi) then set nnodes=`wc -l $HOSTFILE` set nnodes=$nnodes[1] @ n=1 set master=`hostname` # burn off the .local suffix in our cluster's hostname set master=$master:r while ($n <= $nnodes) set host=`sed -n -e "$n p" $HOSTFILE` # in case of openMPI, unwanted stuff may follow the hostname set host=$host[1] if ($host != $master) then echo Files used on node $host were: #---------FMO rescue------ #--if ($GDDIjob == true) then #-- echo "========= OUTPUT from node $host is ==============" #-- ssh $host -l $USER "cat $SCR/$JOB.F06*" #--endif #---------FMO rescue------ ssh $host -l $USER "ls -l $SCR/$JOB.*" if ($GDDIjob == true && null$FSAVE != null) then echo Saving $FSAVE files on $host $DDI_RSH $host -l $USER -n "cd $SCR; mv $savelist $USERSCR" endif $DDI_RSH $host -l $USER "find '$SCR' -maxdepth 1 -type f -writable -name '$JOB.F*' -exec rm {} \;" endif @ n++ end # clean off the last file on the master's scratch disk. if (-f "$HOSTFILE" && -w "$HOSTFILE") rm "$HOSTFILE" # if ($?I_MPI_STATS) then if ($I_MPI_STATS > 0) mv $SCR/stats.txt ~/$JOB.$NCPUS.stats endif endif # # and this is the end # date time exit
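Before setting up production runs, it is worth validating the build with one of the bundled test inputs; as the input-copy logic above shows, rungms falls back to tests/standard/ when the named input is not found in the current directory. From an interactive session on a compute node, a quick check might look like this (core counts are illustrative):

$ cd /packages/apps/gamess/2022r2            # your GMSPATH
$ ./rungms exam01 00 4 4 >& exam01.log
$ grep -i 'terminated normally' exam01.log   # a successful run should report EXECUTION OF GAMESS TERMINATED NORMALLY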
Running GAMESS
You can now run GAMESS with the compiled binary. Unless you chose otherwise at the lked step, the executable is numbered 00 (i.e., gamess.00.x), and that number is passed to rungms as its VERNO argument.
Enter the directory containing your .inp files and invoke the script as rungms JOB VERNO NCPUS PPN, where JOB is the input file name without the .inp extension:
/path/to/rungms JOB 00 ....
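Production runs on Sol normally go through the Slurm scheduler rather than a login node; rungms itself handles the MPI launch (mpiexec.hydra with the Slurm launcher). The batch script below is only a sketch: the job name, partition, QOS, walltime, and core counts are assumptions to adapt to your own allocation, and myjob/myjob.inp stand in for your own input file.

#!/bin/bash
#SBATCH -J gamess-myjob           # job name (illustrative)
#SBATCH -N 1                      # a single node keeps this stripped-down rungms simple
#SBATCH --ntasks-per-node=16      # cores handed to GAMESS compute processes
#SBATCH -t 04:00:00
#SBATCH -p general                # partition and QOS names are assumptions; use your own
#SBATCH -q public

cd $SLURM_SUBMIT_DIR              # directory containing myjob.inp
/path/to/rungms myjob 00 $SLURM_NTASKS_PER_NODE $SLURM_NTASKS_PER_NODE > myjob.log 2>&1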