Accessing CXFEL nodes on Sol

  1. Access Eligibility
    Users in the grp_cxfel group have exclusive access to the lab-owned hardware. Jobs submitted under the grp_cxfel QoS do not impact public fairshare calculations.

  2. Partition and QoS Setup
    Use the grp_cxfel QoS together with the appropriate partition: highmem for high-memory nodes, or general for CPU and GPU nodes. The commands below can help verify your access before submitting.
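
These are standard Linux and Slurm tools; the exact output format may vary by site:

# Confirm membership in the grp_cxfel group
groups | tr ' ' '\n' | grep grp_cxfel

# List the QoS values and partitions associated with your account
sacctmgr show assoc user=$USER format=User,Account,Partition,QOS -P

# Confirm the highmem and general partitions are visible
sinfo -p highmem,general -o "%P %a %l %D"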

Requesting High-Memory Nodes

  1. The highmem partition is designed for memory-intensive jobs.

Example SBATCH Script:

#!/bin/bash
#SBATCH -N 1                # number of nodes
#SBATCH -c 4                # number of cores to allocate
#SBATCH -p highmem          # Partition
#SBATCH -q grp_cxfel        # QoS
#SBATCH --mem=1000G         # Request 1000 GB memory
#SBATCH -t 2-00:00:00       # Walltime: 2 days

module load <software>
command to run software or script
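
Assuming the script is saved as highmem_job.sh (the filename here is arbitrary), it can be submitted and checked with standard Slurm commands; seff is a common Slurm contrib tool that may or may not be installed on the cluster:

sbatch highmem_job.sh    # submit the batch script
squeue -u $USER          # check the job's state in the queue
seff <jobid>             # after completion, report actual memory/CPU efficiency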

Example Interactive Job:

interactive -p highmem -q grp_cxfel --mem=500G -t 2-00:00:00

Requesting GPU Nodes

  1. GPU-accelerated jobs should use the general partition with the grp_cxfel QoS. These jobs will be allocated to the GPU node scg020, which has 8x H100 GPUs.

Example SBATCH Script:

#!/bin/bash
#SBATCH -N 1               # number of nodes
#SBATCH -c 4               # number of cores to allocate
#SBATCH -p general         # Partition
#SBATCH -q grp_cxfel       # QoS
#SBATCH --gres=gpu:2       # Request 2 GPUs
#SBATCH -t 1-00:00:00      # Walltime: 1 day

module load <software>
command to run software or script
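
Once the job starts, standard NVIDIA and shell tools can confirm the allocation from inside the job:

echo $CUDA_VISIBLE_DEVICES   # indices of the GPUs assigned to this job
nvidia-smi                   # show the allocated H100s and their utilization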

Example Interactive Job:

interactive -p general -q grp_cxfel --gres=gpu:2 -t 1-00:00:00

Adjust --gres=gpu:X to match your job's GPU requirements.

Requesting Compute Nodes

  1. CPU-focused jobs that don’t require high memory or GPU resources can use the general partition with the grp_cxfel QoS.

Example SBATCH Script:
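
A minimal sketch following the pattern of the scripts above; the core count and walltime are placeholders to adjust:

#!/bin/bash
#SBATCH -N 1               # number of nodes
#SBATCH -c 4               # number of cores to allocate
#SBATCH -p general         # Partition
#SBATCH -q grp_cxfel       # QoS
#SBATCH -t 1-00:00:00      # Walltime: 1 day

module load <software>
command to run software or script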

Example Interactive Job:
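
Following the highmem example above, an equivalent interactive request (the walltime is a placeholder):

interactive -p general -q grp_cxfel -t 1-00:00:00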