...
Info
There is privately owned hardware that may have slightly different specs. See the Sol Status Page for the full features of every node.
Note
Requesting too many resources can lead to a long job queueing time. Using too many resources also costs a large amount of fairshare points, which in turn leads to longer queueing times for later jobs. Checking the efficiency of a completed test job can help determine an appropriate amount of resources to request.
Requesting Resources
Requesting CPUs
To request a given number of CPUs sharing the same node, you can use the following in your job script:
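For example, a minimal set of directives (the core count here matches the description below):

Code Block
#SBATCH -N 1    # one node
#SBATCH -c 5    # 5 cores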
This will create a job with 5 CPU cores on one node. To request a given number of CPUs spread across multiple nodes, you can use the following:
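One way to write this, assuming 10 tasks of 5 cores each (50 cores total):

Code Block
#SBATCH -N 2-4   # spread across 2 to 4 nodes
#SBATCH -n 10    # 10 tasks
#SBATCH -c 5     # 5 cores per task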
The above example will allocate a total of 50 cores spread across as few as 2 nodes or as many as 4 nodes. Take note of the inclusion or omission of the node count (-N); the same request can also be written without it:
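A sketch of the reduced form, with the node count omitted:

Code Block
#SBATCH -n 10    # 10 tasks
#SBATCH -c 5     # 5 cores per task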
This reduced example will still allocate 50 cores, 5 cores per task, on any number of available nodes. Note that unless you are using MPI-aware software, you will likely prefer to keep the job on a single node (for example with -N 1), since non-MPI programs cannot use cores spread across nodes. -c and -n have similar effects in Slurm in allocating cores, but -n is the number of tasks, and -c is the number of cores per task. MPI processes bind to a task, so the general rule of thumb is for MPI jobs to allocate tasks, while serial jobs allocate cores, and hybrid jobs allocate both. See the official Slurm documentation for more information: https://slurm.schedmd.com/sbatch.html

Requesting Memory
Cores and memory are de-coupled: if you need only a single CPU core but ample memory, you can do so like this:
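For example (the 64G figure is illustrative; request whatever your job actually needs):

Code Block
#SBATCH -c 1         # a single core
#SBATCH --mem=64G    # 64 GiB of memory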
If you do not specify --mem, you will be allocated 2 GiB per CPU core or 24 GiB per GPU. To request more than 512 GiB of memory, you will need to use the highmem partition.
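A sketch of a large-memory request (the 1000G figure is illustrative):

Code Block
#SBATCH -p highmem
#SBATCH --mem=1000G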
To request all available memory on a node:
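Slurm treats a memory request of 0 as "all of the memory on the node":

Code Block
#SBATCH --mem=0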
Requesting GPUs
To request a GPU, you can specify the -G option within your job request. This will allocate the first available GPU that fits your job request:
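For a single GPU of any type:

Code Block
#SBATCH -G 1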
To request multiple GPUs, specify a number greater than 1:
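For example, four GPUs (the count is illustrative):

Code Block
#SBATCH -G 4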
To request a specific number of GPUs per node when running multi-node:
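A sketch assuming two nodes with two GPUs each (both counts are illustrative):

Code Block
#SBATCH -N 2
#SBATCH --gpus-per-node=2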
To request a specific type of GPU (a100 for example):
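The -G option accepts a type:count pair, for example:

Code Block
#SBATCH -G a100:1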
GPU Varieties Available
Below is a table demonstrating the available GPU instance sizes you can allocate:
The a100s can come in two varieties, as seen above. To guarantee an 80GB a100, include this feature:
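A sketch using Slurm's -C (constraint) option; the feature string a100_80 is an assumption here, so check the Sol Status Page for the exact feature name advertised on the 80GB nodes:

Code Block
#SBATCH -G a100:1
# a100_80 is a placeholder feature name; confirm the exact string
#SBATCH -C a100_80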
Requesting FPGAs
Sol has two nodes with a Field Programmable Gate Array (FPGA) accelerator. One is an Intel-based node with a BittWare 520N-MX FPGA, the other is an AMD-based node with a Xilinx U280. Because there is only one FPGA per node, it is recommended to allocate the entire node.
...
Note: there should not be a space between "-L" and the FPGA name on the web portal.
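As a rough sketch, an exclusive FPGA allocation might look like the following; the license name u280 is a placeholder, so confirm the exact -L string for each FPGA on the Sol Status Page or web portal:

Code Block
#SBATCH --exclusive
# u280 is a placeholder; substitute the FPGA name Sol advertises for -L
#SBATCH -L u280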
Requesting the Grace Hopper ARM
The Grace Hopper is a specialized unit running the ARM architecture (aarch64), which is separate from and incompatible with x86_64 applications. While this node is frequently idle, unless your application is designed for this less-common architecture, you should expect compiled applications to fail on execution.
Requesting this node requires doing so exclusively.
Code Block
#SBATCH --exclusive
#SBATCH -p highmem
#SBATCH -L gracehopper
#SBATCH -G 1

or

interactive --exclusive -L gracehopper -G 1 -p highmem
Additional Help