Accessing CXFEL nodes on Sol
Access Eligibility
Users in the grp_cxfel group have exclusive access to lab-owned hardware resources. Jobs using this QoS do not impact public fairshare calculations.
Partition and QoS Setup
Use the grp_cxfel QoS in combination with the appropriate partition: highmem for high-memory nodes, or general for CPU and GPU nodes.
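To confirm that your account is associated with the grp_cxfel QoS before submitting, you can query Slurm's accounting database (a quick check using standard Slurm tooling; the exact columns shown depend on site configuration):

```shell
# List the account/QoS associations for your user.
# grp_cxfel should appear in the QOS column if you have access.
sacctmgr show assoc user=$USER format=account%20,partition%12,qos%40
```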
Requesting High-Memory Nodes
The highmem partition is designed for memory-intensive jobs.
Example SBATCH Script:
#!/bin/bash
#SBATCH -N 1 # number of nodes
#SBATCH -c 4 # number of cores to allocate
#SBATCH -p highmem # Partition
#SBATCH -q grp_cxfel # QoS
#SBATCH --mem=1000G # Request 1000 GB memory
#SBATCH -t 2-00:00:00 # Walltime: 2 days
module load <software>
<command to run your software or script>
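After saving the script (for example as highmem_job.sh, a filename chosen here for illustration), submit it with sbatch and note the job ID that Slurm returns:

```shell
sbatch highmem_job.sh
# Slurm responds with a line like: Submitted batch job <jobid>
```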
Example Interactive Job:
interactive -p highmem -q grp_cxfel --mem=500G -t 2-00:00:00
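Before choosing a --mem value, it can help to check how much memory the highmem nodes actually offer (a sketch using standard sinfo format options; %m reports memory in MB):

```shell
# Show node names, total memory (MB), and state for the highmem partition.
sinfo -p highmem -o "%N %m %T"
```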
Requesting GPU Nodes
GPU-accelerated jobs should use the general partition with the grp_cxfel QoS. These jobs are allocated to the GPU node scg020, which has 8x H100 GPUs.
Example SBATCH Script:
#!/bin/bash
#SBATCH -N 1 # number of nodes
#SBATCH -c 4 # number of cores to allocate
#SBATCH -p general # Partition
#SBATCH -q grp_cxfel # QoS
#SBATCH --gres=gpu:2 # Request 2 GPUs
#SBATCH -t 1-00:00:00 # Walltime: 1 day
module load <software>
<command to run your software or script>
Example Interactive Job:
interactive -p general -q grp_cxfel --gres=gpu:2 -t 1-00:00:00
Adjust --gres=gpu:X to match your job's GPU requirements.
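Once a GPU job starts, you may want to verify that Slurm granted the GPUs you requested. A minimal check, assuming the NVIDIA driver tools are available on the node:

```shell
# Run inside the job (batch script or interactive shell):
echo "CUDA_VISIBLE_DEVICES=$CUDA_VISIBLE_DEVICES"  # GPU indices assigned by Slurm
nvidia-smi -L                                      # lists each visible GPU by name and UUID
```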
Requesting Compute Nodes
CPU-focused jobs that don't require high memory or GPU resources can use the general partition with the grp_cxfel QoS.
Example SBATCH Script:
#!/bin/bash
#SBATCH -N 1 # number of nodes
#SBATCH -c 4 # number of cores to allocate
#SBATCH -p general # Partition
#SBATCH -q grp_cxfel # QoS
#SBATCH -t 1-00:00:00 # Walltime: 1 day
module load <software>
<command to run your software or script>
Example Interactive Job:
interactive -p general -q grp_cxfel -c 4 -t 1-00:00:00
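Whichever node type you use, you can track queued and running jobs, and review resource usage after completion (standard Slurm commands; the sacct fields shown are common accounting fields):

```shell
# Jobs you currently have queued or running
squeue -u $USER

# After a job finishes, check elapsed time and peak memory (replace <jobid>)
sacct -j <jobid> --format=JobID,JobName,Partition,Elapsed,MaxRSS,State
```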