Phoenix Hardware - How to Request
Overview
Phoenix is a heterogeneous supercomputer. Heterogeneous supercomputers feature processors and interconnects that are of different types, brands, and architectures. This can complicate system management and optimization but offers a wider range of available hardware. This page describes the publicly available hardware within Phoenix for reference.
Node Type | CPU | Memory | Accelerator |
---|---|---|---|
Standard Compute | 28 Cores (2x Intel Broadwell) | 128 GiB | N/A |
High Memory | 56 Cores (2x Intel Skylake Xeon Gold 6132 @ 2.6GHz) | 1500 GiB | N/A |
GPU V100 | 40 Cores (2x Intel Skylake Xeon Gold 6148 @ 2.40GHz) | 360 GiB | 4x NVIDIA V100 32GiB |
Intel Phi | 256 Cores (2x Intel Knights Landing Phi) | 128 GiB | N/A |
There is also privately owned hardware with very different specs. See the Phx Status Page for the full features of every node.
Requesting Resources
Requesting CPUs
To request a given number of CPUs sharing the same node, you can use the following in your SBATCH script:
#SBATCH -N 1 # Number of nodes
#SBATCH -c 5 # Number of cores per task
or
interactive -N 1 -c 5
This will create a job with 5 CPU cores on one node.
To request a given number of CPUs spread across multiple nodes, you can use the following:
#SBATCH -N 2-4 # number of nodes to allow tasks to spread across (MIN & MAX)
#SBATCH -n 10 # number of TASKS
#SBATCH -c 5 # CPUs per TASK
or
interactive -N 2-4 -n 10 -c 5
The above example will allocate a total of 50 cores spread across as few as 2 nodes or as many as 4 nodes.
Take note of the inclusion or omission of -N:
#SBATCH -c 5 # CPUs per TASK
#SBATCH -n 10 # number of TASKS
or
interactive -n 10 -c 5
This reduced example will still allocate 50 cores (5 cores for each of 10 tasks) across any number of available nodes. Note that unless you are using MPI-aware software, you will likely prefer to always add -N, to ensure that each job worker has sufficient connectivity.
The -c and -n flags have similar effects in Slurm in that both allocate cores, but -n sets the number of tasks while -c sets the number of cores per task. MPI processes bind to tasks, so the general rule of thumb is: MPI jobs allocate tasks, serial jobs allocate cores, and hybrid jobs allocate both.
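Putting the rule of thumb into practice, a hybrid MPI+OpenMP submission might look like the sketch below (the application name is hypothetical; `srun` and `OMP_NUM_THREADS` are standard Slurm/OpenMP mechanisms):

```shell
#!/bin/bash
#SBATCH -n 4   # 4 MPI tasks (ranks)
#SBATCH -c 6   # 6 cores per task, for OpenMP threads

# One OpenMP thread per allocated core
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

# srun launches one process per task (4 MPI ranks, 6 threads each)
srun ./my_hybrid_app   # hypothetical application name
```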
See the official Slurm documentation for more information: Slurm Workload Manager - sbatch
Requesting Memory
Cores and memory are de-coupled: if you need only a single CPU core but ample memory, you can request memory independently with the --mem flag.
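A minimal sketch (the 64 GiB figure is illustrative; `--mem` is a standard Slurm option):

```shell
#SBATCH -N 1
#SBATCH -n 1        # a single task...
#SBATCH --mem=64G   # ...with 64 GiB of memory on its node
```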
If you do not specify --mem, you will be allocated 2 GiB per CPU core OR 24 GiB per GPU by default.
To request more than 512 GiB of memory, you will need to use the highmem partition.
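A sketch of a large-memory request, assuming the partition is selected with Slurm's standard `-p`/`--partition` flag (the 700 GiB figure is illustrative):

```shell
#SBATCH -p highmem    # highmem partition, per the note above
#SBATCH --mem=700G    # more than the 512 GiB available otherwise
```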
To request all available memory on a node:
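In standard Slurm, `--mem=0` requests all of the memory on the assigned node:

```shell
#SBATCH -N 1
#SBATCH --mem=0   # 0 is a special value meaning "all memory on the node"
```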
This will allocate all of the node's memory (up to 2 TiB, depending on the node) to your job and will prevent any other jobs from landing on that node. Only use this if you truly need that much memory.
Requesting GPUs
To request a GPU, you can specify the -G option within your job request:
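For example, using Slurm's standard `-G`/`--gpus` option:

```shell
#SBATCH -G 1   # one GPU of any available type
```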
This will allocate the first available GPU that fits your job request.
To request multiple GPUs, specify a number greater than 1:
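For example:

```shell
#SBATCH -G 2   # two GPUs, which may land on one node or several
```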
To request a specific number of GPUs per node when running a multi-node job:
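A sketch using Slurm's `--gpus-per-node` option (node and GPU counts illustrative):

```shell
#SBATCH -N 2                # two nodes...
#SBATCH --gpus-per-node=2   # ...with 2 GPUs on each (4 GPUs total)
```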
To request a specific type of GPU (a100, for example):
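In standard Slurm the type is prefixed to the count (`--gpus=[type:]count`); the exact type strings are site-configured, so check the cluster's GRES names:

```shell
#SBATCH -G a100:1   # one A100 GPU (type name assumed to match site config)
```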
CPU Micro-Architectures
The Phoenix cluster includes CPUs of different micro-architectures, such as Cascade Lake and Broadwell. These micro-architectures represent different generations of Intel processors, with variations in performance, instruction sets, and optimization capabilities. Software may perform differently depending on the CPU architecture it was compiled for or is optimized to run on.
To specify a particular CPU architecture for your job, use the --constraint flag (-C).
For example, to request a cascadelake CPU:
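A sketch, assuming `cascadelake` is the feature name configured on the cluster:

```shell
#SBATCH -C cascadelake   # constrain the job to Cascade Lake nodes
```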
To request any ‘newer’ CPU that supports the AVX512 instruction set:
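A sketch, assuming the cluster exposes an `avx512` node feature (feature names are site-defined):

```shell
#SBATCH -C avx512   # any node advertising the avx512 feature
```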