Overview

Sol is a homogeneous supercomputer. Homogeneous supercomputers feature processors and interconnects of the same type, brand, and architecture; this uniformity simplifies system management and optimization. This page describes the hardware within Sol for reference.

Node Type        | CPU                               | Memory   | Accelerator
Standard Compute | 128 Cores (2x AMD EPYC 7713 Zen3) | 512 GiB  | N/A
High Memory      | 128 Cores (2x AMD EPYC 7713 Zen3) | 2048 GiB | N/A
GPU A100         | 48 Cores (2x AMD EPYC 7413 Zen3)  | 512 GiB  | 4x NVIDIA A100 80GiB
GPU A30          | 48 Cores (2x AMD EPYC 7413 Zen3)  | 512 GiB  | 3x NVIDIA A30 24GiB
GPU MIG          | 48 Cores (2x AMD EPYC 7413 Zen3)  | 512 GiB  | NVIDIA A100s sliced into 28x 10GiB MIG segments
Xilinx FPGA      | 48 Cores (2x AMD EPYC 7443 Zen3)  | 256 GiB  | 1x Xilinx U280
BittWare FPGA    | 52 Cores (Intel Xeon Gold 6230R)  | 376 GiB  | 1x BittWare 520N-MX

There is also privately owned hardware that may have slightly different specs. See the Sol Status Page for the full features of every node.
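Node hardware can also be inspected directly through Slurm from a login session. The following is a minimal sketch using standard sinfo/scontrol options; the node name is a placeholder, not a specific Sol hostname:

# List every node with its CPU count, memory (in MiB), and generic resources (GPUs/FPGAs)
sinfo -N -o "%N %c %m %G"

# Show the full record for a single node, including its feature tags
scontrol show node <nodename>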

Requesting Resources

Requesting CPUs

By default, Slurm will attempt to schedule jobs as locally as possible, so requesting -c 5 cores will try to place all 5 cores on one node (-N 1) unless otherwise specified.

To request a given number of CPUs sharing the same node, you can use the following in your SBATCH:

#SBATCH -c 5
#SBATCH -N 1
or
interactive -c 5 -N 1

To request a given number of CPUs spread across multiple nodes, you can use the following:

#SBATCH -c 5     # CPUs per TASK
#SBATCH -n 10    # number of TASKS
#SBATCH -N 10    # number of nodes to allow tasks to spread across (MIN & MAX)
or
interactive -N 10 -n 10 -c 5

The above example will allocate 50 cores in total: 10 tasks with 5 cores each, spread across 10 independent nodes.
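Inside a running job you can confirm what was actually allocated. This is a minimal sketch using standard Slurm output environment variables (SLURM_CPUS_PER_TASK is only set when -c is given):

# Print the allocation details from within the job script
echo "Nodes:     $SLURM_JOB_NODELIST"
echo "Tasks:     $SLURM_NTASKS"
echo "CPUs/task: $SLURM_CPUS_PER_TASK"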

Take note of the inclusion or omission of -N:

#SBATCH -c 5     # CPUs per TASK
#SBATCH -n 10    # number of TASKS

interactive -n 10 -c 5

This reduced example will still allocate 50 cores, 5 cores per task, but across any number of available nodes. Note that unless you are using MPI-aware software, you will likely prefer to always add -N 1 to ensure that all of your job's workers land on the same node and can communicate with each other. A complete example script is sketched below.
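Putting the pieces together, a complete batch script might look like the following. This is only a sketch: the job name, time limit, and ./my_program executable are placeholders, not Sol-specific requirements.

#!/bin/bash
#SBATCH -J cpu-example      # job name (placeholder)
#SBATCH -N 1                # keep all cores on a single node
#SBATCH -n 4                # 4 tasks
#SBATCH -c 5                # 5 CPUs per task (20 cores total)
#SBATCH -t 0-01:00:00       # time limit (placeholder)

# srun launches one copy of the program per task within the allocation
srun ./my_program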

Requesting Memory

On Sol, cores and memory are de-coupled: if you need only a single CPU core but ample memory, you can request that like this:

#SBATCH -c 1
#SBATCH -N 1
#SBATCH --mem=120G

interactive -N 1 -c 1 --mem=120G

If you do not specify --mem, you will be allocated 2 GiB of memory per CPU core.
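If you would rather scale memory with your core count instead of requesting a fixed total, Slurm's standard --mem-per-cpu option can be used. The 8G value below is purely illustrative:

#SBATCH -c 4
#SBATCH --mem-per-cpu=8G   # 4 cores x 8 GiB = 32 GiB total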

To request more than 512 GiB of memory, you will need to use the highmem partition:

#SBATCH -p highmem
#SBATCH --mem=1400G
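If you prefer an interactive session, the same flags should work with the interactive wrapper shown in the earlier examples, assuming it passes these options through to Slurm:

interactive -p highmem --mem=1400G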

To request all available memory on a node, set --mem=0.

This will allocate all of the node's CPU cores and memory (between 512 GiB and 2 TiB depending on the node) to your job, preventing any other jobs from landing on that node. Only use this if you truly need that much memory.

#SBATCH --exclusive 
#SBATCH --mem=0
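To see how much memory a given node actually has before requesting all of it, a standard sinfo query like the one below can help; the output format string is just one reasonable choice:

# Show each node's name, partition, configured memory (MiB), and state
sinfo -N -o "%N %P %m %T"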

Additional Help

If you require further assistance on this topic, please don't hesitate to contact the Research Computing team. To create a support ticket, send an email to rtshelp@asu.edu. For quick inquiries, you're welcome to reach out via our #rc-support Slack channel or attend our office hours for live assistance.

We also offer a series of workshops. More information here: Educational Opportunities and Workshops
