SBATCH files are, at their core, simply regular executable scripts annotated with scheduler resource information.
BASH SCRIPTS
Consider this example script, test.sh:
#!/bin/bash
#SBATCH -N 1           # number of nodes
#SBATCH -c 4           # number of cores to allocate
#SBATCH -t 0-00:02:00  # time in d-hh:mm:ss

echo "hello"
With the executable permission (+x), this script can run in an ordinary terminal session. Running the command ./test.sh causes it to be processed by the bash shell indicated in line 1. This means it will use the bash interpreter, echo hello, and exit.
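For example, after making the script executable, it behaves like any other shell script (the session below is illustrative):

$ chmod +x test.sh
$ ./test.sh
hello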
Note that the number of cores, nodes, and length of time have no meaning here: when interpreted by bash, each line that begins with # is treated as a comment and ignored.
SBATCH SCRIPTS
Consider the same script test.sh from above, now run through the supercomputer scheduler:
$ sbatch test.sh
By passing test.sh to sbatch, the file no longer needs executable permissions; sbatch will read each of these lines and run the script, but now within a contained process, that is, one confined to the amount of resources you specify.
Thus, running this command will:
Request 4 CPU cores
Ensure all cores share the same physical machine
Ensure all cores are available for 2 minutes (0-00:02:00).
Because the script finishes within a second, the job ends immediately and the remaining time (roughly 1 minute 59 seconds) is surrendered and left unused.
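By default, Slurm writes the job's standard output to a file named slurm-<jobid>.out in the directory where the job was submitted, so the submission might look like this (the job ID shown is illustrative):

$ sbatch test.sh
Submitted batch job 123456
$ cat slurm-123456.out
hello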
When using sbatch, the same options are available as with salloc (/wiki/spaces/RC/pages/1642102826), so your file can include additional, highly reproducible constraints:
#SBATCH -o slurm.%j.out  # file to save job's STDOUT (%j = JobId)
#SBATCH -e slurm.%j.err  # file to save job's STDERR (%j = JobId)
#SBATCH --export=NONE    # Purge the job-submitting shell environment
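Putting these directives together, a complete test.sh might look like the following sketch (resource values taken from the example above):

#!/bin/bash
#SBATCH -N 1             # number of nodes
#SBATCH -c 4             # number of cores to allocate
#SBATCH -t 0-00:02:00    # time in d-hh:mm:ss
#SBATCH -o slurm.%j.out  # file to save job's STDOUT (%j = JobId)
#SBATCH -e slurm.%j.err  # file to save job's STDERR (%j = JobId)
#SBATCH --export=NONE    # Purge the job-submitting shell environment

echo "hello"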