...

Both Sol and Phoenix use the same partition and QoS options. Use this page to help select the best options for your job.

Info

If not explicitly specified, jobs default to the htc partition and the public QoS.
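
For illustration, requesting these defaults explicitly in a batch script has the same effect as omitting the directives entirely:

Code Block
# Same effect as omitting these lines
#SBATCH -p htc
#SBATCH -q public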

Partitions

general

The general-use partition comprises all Research Computing-owned nodes and has a wall-time limit of 7 days. CPU-only, GPU-accelerated, and FPGA-accelerated jobs typically use this partition.
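
As a sketch, a batch job targeting the general partition might look like the following; the core, memory, and GPU values are placeholders, not recommendations:

Code Block
# Illustrative values only; adjust for your workload
#SBATCH -p general
#SBATCH -q public
#SBATCH -t 7-00:00:00
#SBATCH -c 16
#SBATCH --mem=64G
# Request a GPU only for GPU-accelerated jobs:
#SBATCH -G 1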

...

The lightwork partition is aimed at jobs that need less computing power than typical supercomputing jobs and may sit idle for longer periods of time. Good examples are creating mamba environments, compiling software, running VSCode tunnels, or basic software tests. The aux-interactive command automatically allocates on the lightwork partition.

Code Block
#SBATCH -p lightwork
#SBATCH -t 1-00:00:00

interactive -p lightwork --mem=10G

The maximum job time is one day, and the maximum number of CPU cores per node is 8:

Code Block
[spock@sg008:~]$ scontrol show partition lightwork
PartitionName=lightwork
   AllowGroups=ALL AllowAccounts=ALL AllowQos=public,debug
   AllocNodes=ALL Default=NO QoS=public
   DefaultTime=04:00:00 DisableRootJobs=NO ExclusiveUser=NO GraceTime=0 Hidden=NO
   MaxNodes=UNLIMITED MaxTime=1-00:00:00 MinNodes=0 LLN=NO MaxCPUsPerNode=8 MaxCPUsPerSocket=UNLIMITED
   Nodes=sc[001-002]

Info

Jobs that utilize cores to their full potential are better suited to the htc or general partitions, where cores are not shared or oversubscribed. Jobs that drive full cores above 99% utilization for an extended period, or that request excessive resources and prevent other users from using this partition, are subject to cancellation. Repeated misuse of this partition will result in loss of access to lightwork going forward.

...

This is a special-case QoS that is not available by default; it is granted on a user-by-user basis. If you are interested in using this QoS, please be ready to share a job ID demonstrating the need for it and effective use of your existing core allocations. If you have any questions, feel free to ask staff, and also explore the Slurm EFFiciency reporter: /wiki/spaces/RC/pages/395083777
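
Standard Slurm accounting tools can summarize how efficiently a completed job used its allocation before you make a request. The seff utility is a Slurm contributed tool and may or may not be installed on your cluster; the job ID below is a placeholder:

Code Block
# Replace 1234567 with a real job ID
seff 1234567
sacct -j 1234567 --format=JobID,Elapsed,TotalCPU,ReqMem,MaxRSS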

class

The class QoS is intended for users who have access to Sol as part of an academic course. It carries additional resource limits to ensure that jobs start sooner and resources are utilized effectively. These limits are:

  • Job Resource Limits:

    • Maximum of 32 CPU cores, 320 GB memory, and 4 GPUs per job

    • Maximum wall time of 24 hours per job

  • User-Level Limits:

    • Maximum of 2 jobs running concurrently per user

    • Maximum of 10 jobs in the queue per user

    • Maximum of 960 GPU running minutes per user (equivalent to 1 GPU for 16 hours or 4 GPUs for 4 hours, shared across running jobs)

Code Block
#SBATCH -p general
#SBATCH -q class

interactive -p general -q class
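
As a sketch, the directives below stay within the per-job and per-user limits above; the values are illustrative, not recommendations:

Code Block
# Illustrative values chosen to fit the class QoS limits
#SBATCH -p general
#SBATCH -q class
#SBATCH -c 32
#SBATCH --mem=320G
#SBATCH -G 4
#SBATCH -t 4:00:00
# 4 GPUs x 4 hours = 960 GPU-minutes, the per-user running-GPU limit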

Users who have access to Sol through both an academic course and a research account may need to specify which account a job should be submitted under. This can be done with the -A flag.

Code Block
interactive -p general -q class -A class_asu101spring2025
interactive -p general -q public -A grp_mylab

#SBATCH -p general 
#SBATCH -q class
#SBATCH -A class_asu101spring2025

#SBATCH -p general
#SBATCH -q public
#SBATCH -A grp_mylab

To see which accounts you have, run the myfairshare command:

Code Block
myfairshare 
Account                 User      RawUsage_CHE  RawFairShare  TargetFairShare  RealFairShare
class_asu101spring2025  jeburks2  0.0           1.000000      1.0000000        1.0000000
grp_mylab               jeburks2  61.1          0.043506      0.9957724        0.0435060

Additional Help

For further assistance, please contact Research Computing support.