
Overview

Phoenix is a heterogeneous supercomputer: its processors and interconnects come in different types, brands, and architectures. This can complicate system management and optimization, but it offers a wider range of available hardware. This page describes the publicly available hardware within Phoenix for reference.

Node Type        | CPU                                                   | Memory   | Accelerator
Standard Compute | 28 cores (2x Intel Broadwell)                         | 128 GiB  | N/A
High Memory      | 56 cores (2x Intel Skylake Xeon Gold 6132 @ 2.60 GHz) | 1500 GiB | N/A
GPU V100         | 40 cores (2x Intel Skylake Xeon Gold 6148 @ 2.40 GHz) | 360 GiB  | 4x NVIDIA V100 (32 GiB each)
Intel Phi        | 256 cores (2x Intel Knights Landing Phi)              | 128 GiB  | N/A

There is also privately owned hardware with very different specifications. See the Phx Status Page for the full features of every node.

Requesting Resources

Requesting CPUs

To request a given number of CPUs sharing the same node, you can use the following #SBATCH directives in your batch script:

#SBATCH -N 1    # number of nodes
#SBATCH -c 5    # number of cores per task
or
interactive -N 1 -c 5

This will create a job with 5 CPU cores on one node.
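
For a complete job, these directives go at the top of a batch script that is submitted with sbatch. Below is a minimal sketch; the job name, walltime, and program name (./my_program) are placeholders to replace with your own.

#!/bin/bash
#SBATCH -J cpu-example      # job name (placeholder)
#SBATCH -N 1                # one node
#SBATCH -c 5                # five cores for the single task
#SBATCH -t 00:30:00         # walltime of 30 minutes
#SBATCH -o %x-%j.out        # write output to <jobname>-<jobid>.out

cd $SLURM_SUBMIT_DIR        # directory the job was submitted from (Slurm's default working directory)
./my_program                # placeholder for your executable

Submit it with sbatch, e.g. sbatch my_job.sbatch.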

To request a given number of CPUs spread across multiple nodes, you can use the following:

#SBATCH -N 2-4    # number of nodes to allow tasks to spread across (MIN & MAX)
#SBATCH -n 10    # number of TASKS
#SBATCH -c 5     # CPUs per TASK
or
interactive -N 2-4 -n 10 -c 5

The above example will allocate a total of 50 cores spread across as few as 2 nodes or as many as 4 nodes.

Take note of the inclusion or omission of -N:

#SBATCH -c 5     # CPUs per TASK
#SBATCH -n 10    # number of TASKS
or
interactive -n 10 -c 5

This reduced example will still allocate 50 cores (5 cores for each of 10 tasks) on any number of available nodes. Note that unless you are using MPI-aware software, you will likely want to always add -N: without it, your cores may be spread across nodes that a non-MPI program cannot use together.

-c and -n both affect how many cores Slurm allocates, but -n is the number of tasks and -c is the number of cores per task. MPI processes bind to tasks, so the general rule of thumb is that MPI jobs allocate tasks, serial jobs allocate cores, and hybrid jobs allocate both.
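
As a concrete illustration, a hybrid MPI + OpenMP job requests both tasks and cores per task, then launches one process per task. This is only a sketch; the executable name is a placeholder, and it assumes your MPI installation launches cleanly under srun.

#SBATCH -N 2                                   # spread across two nodes
#SBATCH -n 8                                   # 8 MPI ranks (tasks)
#SBATCH -c 4                                   # 4 cores per rank for OpenMP threads

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK    # match thread count to the allocated cores per task
srun ./my_hybrid_program                       # srun starts one process per task (8 total)

A purely serial program would instead use -N 1 -n 1 with -c set to however many cores it can use, and a plain MPI program would use -n alone with the default of one core per task.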

See the official Slurm documentation for more information: https://slurm.schedmd.com/sbatch.html

Requesting Memory

Cores and memory are de-coupled: if you need only a single CPU core but a large amount of memory, you can request it like this:

#SBATCH -c 1
#SBATCH -N 1
#SBATCH --mem=120G
or
interactive -N 1 -c 1 --mem=120G

If you do not specify --mem, you will be allocated 2 GiB per CPU core or 24 GiB per GPU.
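
If you would rather scale memory with the number of cores instead of fixing a per-node total, Slurm also provides --mem-per-cpu (and --mem-per-gpu for GPU jobs); note that --mem, --mem-per-cpu, and --mem-per-gpu are mutually exclusive. The values below are only illustrative.

#SBATCH -c 8
#SBATCH --mem-per-cpu=4G    # 8 cores x 4 GiB each = 32 GiB total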

To request more than 512 GiB of memory, you will need to use the highmem partition:

#SBATCH -p highmem
#SBATCH --mem=1400G

To request all available memory on a node:

This will allocate all of a node's CPU cores and memory (up to 2 TiB, depending on the node) to your job and will prevent any other jobs from landing on that node. Only use this if you truly need that much memory.

#SBATCH --exclusive 
#SBATCH --mem=0
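
To check after the fact whether a job really needed that much memory, Slurm's accounting tools can compare what was requested with the peak that was actually used; the job ID below is a placeholder.

sacct -j 1234567 --format=JobID,ReqMem,MaxRSS,Elapsed    # compare requested memory (ReqMem) to peak usage (MaxRSS)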

Requesting GPUs

To request a GPU, you can specify the -G option within your job request:

This will allocate the first available GPU that fits your job request.

#SBATCH -G 1
or 
interactive -G 1

To request multiple GPUs, specify a number greater than 1:

#SBATCH -G 4
or 
interactive -G 4

To request a specific number of GPUs per node when running a multi-node job:

#SBATCH -N 2                 # request two nodes
#SBATCH --gpus-per-node=2    # four GPUs in total, two per node
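
Putting this together into a runnable multi-node GPU script might look like the sketch below; the task layout (one task per GPU) and the executable name are assumptions to adapt to your own workflow.

#SBATCH -N 2                      # two nodes
#SBATCH --gpus-per-node=2         # two GPUs per node, four total
#SBATCH --ntasks-per-node=2       # one task per GPU on each node
#SBATCH -c 6                      # CPU cores per task

srun ./my_gpu_program             # placeholder executable; srun launches one process per task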

To request a specific type of GPU (a100 for example):

#SBATCH -G a100:1
or
interactive -G a100:1
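
Once a GPU job is running, you can confirm which device(s) you were given from inside the job; nvidia-smi is the standard NVIDIA tool, and Slurm typically sets CUDA_VISIBLE_DEVICES for GPU allocations.

nvidia-smi --query-gpu=name,memory.total --format=csv    # report the model and memory of each allocated GPU
echo $CUDA_VISIBLE_DEVICES                                # GPU indices visible to this job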

Additional Help

If you require further assistance on this topic, please contact the Research Computing Team. To create a support ticket, review our RTO Request Help page. For quick inquiries, reach out via our #rc-support Slack channel or attend our office hours for live assistance.

We also offer a series of Educational Opportunities and Workshops.
