...
Node Type | CPU | Memory | Accelerator
---|---|---|---
Standard Compute | 28 Cores (2x Intel Broadwell) | 128 GiB | N/A
High Memory | 56 Cores (2x Intel Skylake Xeon Gold 6132 @ 2.60 GHz) | 1500 GiB | N/A
GPU V100 | 40 Cores (2x Intel Skylake Xeon Gold 6148 @ 2.40 GHz) | 360 GiB | 4x NVIDIA V100 32 GiB
Intel Phi | 256 Cores (2x Intel Knights Landing Phi) | 128 GiB | N/A
> **Info:** There is privately owned hardware that has very different specs. See the Phx Status Page for the full features of every node.
...
## Requesting CPUs

To request a given number of CPUs sharing the same node, you can use the following in your job script:
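A minimal sketch using standard sbatch directives (the core count and program name are illustrative):

```bash
#!/bin/bash
#SBATCH --nodes=1          # keep every core on the same node
#SBATCH --ntasks=1         # a single task...
#SBATCH --cpus-per-task=5  # ...with 5 CPU cores

srun ./my_program          # my_program is a hypothetical executable
```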
This will create a job with 5 CPU cores on one node. To request a given number of CPUs spread across multiple nodes, you can use the following:
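One way to express this, with illustrative task and core counts:

```bash
#SBATCH --ntasks=10        # 10 tasks...
#SBATCH --cpus-per-task=5  # ...of 5 cores each: 50 cores in total
#SBATCH --nodes=2-4        # spread across at least 2 and at most 4 nodes
```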
The above example will allocate a total of 50 cores, spread across as few as 2 or as many as 4 nodes. Take note of the inclusion or omission of `--nodes` in the reduced example below.
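The same sketch with the `--nodes` line omitted:

```bash
#SBATCH --ntasks=10        # 10 tasks...
#SBATCH --cpus-per-task=5  # ...of 5 cores each: still 50 cores in total
# without --nodes, Slurm may place the tasks on any number of nodes
```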
This reduced example will still allocate 50 cores, 5 cores per task, on any number of available nodes. Note that unless you are using MPI-aware software, you will likely prefer to always add `--nodes=1` so that all of your cores land on a single node.

See the official Slurm documentation for more information: https://slurm.schedmd.com/sbatch.html

## Requesting Memory

Cores and memory are de-coupled: if you need only a single CPU core but ample memory, you can request that like this:
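A sketch using the standard `--mem` directive (the amount is illustrative):

```bash
#SBATCH --ntasks=1  # a single CPU core...
#SBATCH --mem=100G  # ...paired with 100 GiB of memory
```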
If you do not specify `--mem`, your job receives the scheduler's default memory allocation. To request more than 512 GiB of memory, you will need to use the highmem partition:
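For example (the partition name comes from this page; the amount is illustrative):

```bash
#SBATCH --partition=highmem  # high-memory nodes (up to 1500 GiB, per the table above)
#SBATCH --ntasks=1
#SBATCH --mem=1000G
```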
To request all available memory on a node:
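In standard Slurm, a memory request of zero is treated as a request for all of a node's memory:

```bash
#SBATCH --mem=0  # grants the job all available memory on the node
```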
## Requesting GPUs

To request a GPU, you can specify the `--gres` option; this will allocate the first available GPU that fits your job request:
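For example, using standard gres syntax:

```bash
#SBATCH --gres=gpu:1  # one GPU of any available type
```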
To request multiple GPUs, specify a number greater than 1:
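For instance (the count is illustrative):

```bash
#SBATCH --gres=gpu:2  # two GPUs on the same node
```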
To request a specific number of GPUs per node when running a multi-node job:
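One way to express this is with `--gpus-per-node` (node and GPU counts are illustrative):

```bash
#SBATCH --nodes=2
#SBATCH --gpus-per-node=2  # two GPUs on each node, four in total
```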
To request a specific type of GPU (a100 for example):
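Using gres type syntax (the a100 label follows the page's example; available type names are cluster-defined):

```bash
#SBATCH --gres=gpu:a100:1  # one GPU of type a100
```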
...