
Switches can be combined with one another to optimize the resources assigned to a job.

Each switch below is described by its command, its effect, a usage example, and notes.

-N <X>

Effect: Request X nodes for the job to be spread across. Default = 1; max = 36. By default Slurm chooses the node count itself, and will use a single node whenever the other options allow it.

Usage example:
interactive -N 2
#SBATCH -N 2

Notes: The system strictly enforces this even if your job could run on fewer nodes; for non-MPI jobs this may make your job run slower. For example, specifying -N 2 -n 2 results in one core on each of two compute nodes, even though every node has multiple cores.
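A minimal batch-script sketch of that -N 2 -n 2 combination (./mpi_hello is a hypothetical MPI binary, and the echoed task count defaults to 1 so the script also runs outside Slurm):

```shell
#!/bin/bash
#SBATCH -N 2   # spread the job across two nodes
#SBATCH -n 2   # two tasks total, so one core on each node
#SBATCH -t 0-1 # one hour of wall time

# Inside a job Slurm sets SLURM_NTASKS; default to 1 so the
# script can also be run standalone.
tasks="${SLURM_NTASKS:-1}"
echo "tasks=${tasks}"

# srun launches one copy of the program per task.
# (./mpi_hello is a placeholder for your own MPI binary.)
# srun ./mpi_hello
```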

-n <X>

Effect: Create a job with X tasks. If X is at or below the number of cores per node in the partition, -N defaults to 1; if X exceeds the cores per node, Slurm finds the fastest way to assign all requested cores.

Usage example:
interactive -n 28
#SBATCH -n 28

Notes: Usually only MPI jobs require more than one task. Multithreaded/multicore jobs are often best requested with -c.

-c <X>

Effect: Request X cores per task. Default = 1; max = 1008. Similar to -n, but the request is multiplied by the number of tasks.

Usage example:
interactive -c 4
#SBATCH -c 4
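A sketch of the usual pattern for multithreaded jobs: request cores with -c, then size the thread pool from SLURM_CPUS_PER_TASK (defaulted to 1 here so the script also runs outside Slurm):

```shell
#!/bin/bash
#SBATCH -n 1   # one task
#SBATCH -c 4   # four cores for that task

# Slurm sets SLURM_CPUS_PER_TASK inside the job; default to 1
# so the script can also be run standalone.
export OMP_NUM_THREADS="${SLURM_CPUS_PER_TASK:-1}"
echo "OMP_NUM_THREADS=${OMP_NUM_THREADS}"
```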

-t days-hours, -t minutes, or -t DD-HH:MM:SS

Effect: Requested amount of wall time for the job. Default = 1 day (1-0); maximum = 7 days. The job will be ended after the specified amount of time if jobs in the queue need the resources.

Usage example:
interactive -t 15

Notes: Requesting less time can reduce how long a job waits before starting, at the risk of the job ending before it completes.
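The three accepted time formats, shown side by side as directives (a real script would keep only one -t line):

```shell
#SBATCH -t 30          # minutes: 30 minutes
#SBATCH -t 2-12        # days-hours: 2 days, 12 hours
#SBATCH -t 0-04:00:00  # DD-HH:MM:SS: 4 hours
```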

-p <partition>

Effect: Specify which partition to use. The default depends on the rest of the request:
  -t <= 0-4 (4 hours): htc/serial/parallel
  -t > 0-4: serial/parallel
  -n in the range 28-52: parallel
  -N > 1: parallel
  -N <= 1 with -n <= 28: serial/parallel

Usage example:
interactive -p publicgpu
#SBATCH -p publicgpu

Notes: In most cases Slurm can determine the partition itself.

-q <qos>

Effect: Specify the QOS (quality of service) to use, e.g. a queue. Default = "normal".

Usage example:
interactive -p gpu -q wildfire
#SBATCH -p gpu
#SBATCH -q wildfire

Notes: A QOS other than the default, normal, must be specified together with a partition.

--gres

Effect: Specify a "generic resource" to request with the job; commonly used to request GPUs. Default = none. See our GPU documentation for details.

Usage example:
interactive -p gpu -q wildfire --gres=gpu:V100:1
#SBATCH -p gpu
#SBATCH -q wildfire
#SBATCH --gres=gpu:V100:1

Notes: Generally only used for GPUs, which are only available with the wildfire QOS.
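A GPU job sketch combining -p, -q, and --gres; the device check defaults to "none" so the script also runs on machines without GPUs:

```shell
#!/bin/bash
#SBATCH -p gpu
#SBATCH -q wildfire
#SBATCH --gres=gpu:V100:1  # one V100 GPU
#SBATCH -t 0-4

# Inside a GPU job Slurm sets CUDA_VISIBLE_DEVICES to the allocated
# device index; default to "none" so the script runs standalone too.
gpus="${CUDA_VISIBLE_DEVICES:-none}"
echo "gpus=${gpus}"
```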

-o <filename>

Effect: Specifies a filename to capture all standard output. By itself it also captures standard error, making it equivalent to 'mycommand > filename.foo 2>&1'. Default = none.

Usage example:
interactive -o %j.out
#SBATCH -o %j.out

Notes: Not recommended for interactive jobs. %j in the example expands to the job number, giving a unique filename.

-e <filename>

Effect: Specifies a filename to capture all standard error messages. This is equivalent to 'mycommand 2> filename.foo'. Default = none.

Usage example:
interactive -e %j.err
#SBATCH -e %j.err

Notes: Not recommended for interactive jobs. %j in the example expands to the job number, giving a unique filename.
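Using -o and -e together keeps the two streams apart, sketched here as a directive fragment:

```shell
#!/bin/bash
#SBATCH -o slurm.%j.out  # standard output -> slurm.<jobid>.out
#SBATCH -e slurm.%j.err  # standard error  -> slurm.<jobid>.err

# With both directives, stdout and stderr of every command in the
# script are captured in separate files; %j expands to the job number.
```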

--mail-type=<$VAR>

Effect: Specify when to receive an e-mail update about a job event, such as starting, finishing, or failing. Default = none.

Usage example:
interactive: N/A
#SBATCH --mail-type=ALL

Notes: Not recommended for interactive jobs; useful for status updates on sbatch jobs.

--mail-user=%u@asu.edu

Effect: Email address to send notifications to. %u automatically expands to your username.

Usage example:
interactive: N/A
#SBATCH --mail-user=%u@asu.edu

Notes: Not recommended for interactive jobs; useful for status updates on sbatch jobs.
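Putting the switches together, an end-to-end template sketch (the partition, time, and program name are illustrative choices to adapt, and ./myprogram is a placeholder):

```shell
#!/bin/bash
#SBATCH -N 1                     # one node
#SBATCH -n 1                     # one task
#SBATCH -c 4                     # four cores for the task
#SBATCH -t 0-4                   # four hours of wall time
#SBATCH -p serial                # example partition
#SBATCH -o slurm.%j.out          # stdout file, %j = job number
#SBATCH -e slurm.%j.err          # stderr file
#SBATCH --mail-type=ALL          # mail on all job events
#SBATCH --mail-user=%u@asu.edu   # %u expands to your username

# Size the thread pool from the allocation; defaults to 1 outside Slurm.
export OMP_NUM_THREADS="${SLURM_CPUS_PER_TASK:-1}"
echo "threads=${OMP_NUM_THREADS}"

# ./myprogram is a placeholder for your own executable.
# ./myprogram
```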
