New User Guide for Sol Compute Resources

Your first time using a supercomputer like Sol can be intimidating, but it does not have to be. This guide covers the basics to get you started. If you run into problems or need additional help, we hold weekly office hours.

This document assumes you have an account on the Sol supercomputer. Accounts can be requested at https://links.asu.edu/getHPC

Quick Start

For users who have never used a supercomputer before, we recommend reading through the “Detailed Start” section of this document.

For those who wish to get started quickly, here is the general overview:

  1. Connect to the ASU Cisco AnyConnect VPN

  2. Login with your ASURITE & password

  3. Choose a connection method (terminal / web portal)

  4. Transfer needed files

  5. Run an interactive session or create an SBATCH script
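Assuming SSH access, the steps above can be sketched as shell commands. The hostname below is a placeholder, not the actual address; use the login hostname given on our wiki.

```shell
# Sketch of the Quick Start steps; <sol-login-host> and <asurite> are placeholders.
ssh <asurite>@<sol-login-host>                 # steps 2-3: log in over SSH
scp mydata.csv <asurite>@<sol-login-host>:~/   # step 4: transfer a needed file
sbatch myjob.sh                                # step 5: submit a batch script
```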

Important Terms

  • HPC: Short for “High Performance Computing”, it refers to a group (or cluster) of interconnected computers designed to run work in parallel across many machines at once. Publicly, these are often called “supercomputers”.

  • Node: A single machine in a supercomputer. This will be either a physical machine or a virtual machine. 

  • Login Node: A node intended as a launching point to compute nodes. Login nodes have minimal resources and should not be used for any application that consumes a lot of CPU or memory. This is also known as a “head node”.

  • Compute Node: Nodes intended for heavy compute. This is where all heavy processing should be done.

  • Job: Work assigned to be done on a compute node. Any time compute resources are allocated, a job is created.

  • Memory (RAM): Short for “Random-Access Memory”. This is the working memory that each calculation or computation requires in order to execute and complete successfully; the term “memory” is never used for disk space. Memory is another main component that defines a node.

  • CPU: Short for “Central Processing Unit”, also called a core. This is one of the main components that defines a computing device, such as a node.

  • GPU: Short for “Graphics Processing Unit”. This is a specialized piece of hardware that can enable and accelerate certain computational research.

  • Scheduler: The application on our end that manages and assigns (allocates) compute resources for jobs. The scheduler used on the ASU Supercomputers is called Slurm.
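The scheduler can be inspected from the command line with standard Slurm tools. These are stock Slurm commands, not Sol-specific; `<jobID>` is a placeholder.

```shell
sinfo                      # list partitions and the state of their nodes
squeue -u $USER            # show your own queued and running jobs
scontrol show job <jobID>  # detailed information about a single job
```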

Detailed Start

Connect through the Cisco VPN

All Research Computing resources require the user to be connected to the ASU Cisco AnyConnect VPN. This is encouraged for all users, regardless of whether they are on campus or off campus.

Be sure to connect to sslvpn.asu.edu/2fa. If prompted for a “second password,” enter push to receive a DUO push request, phone to authenticate via a phone call, or sms to authenticate via a text message.

[Screenshot] The ASU Cisco AnyConnect VPN client. Be sure to connect to sslvpn.asu.edu/2fa.

[Screenshot] The ASU Cisco AnyConnect VPN login prompt. To log into the VPN, you will need your ASURITE, ASU password, and your DUO authentication method (push, sms, or phone).

For additional details or to install the software, please navigate to this page.

PLEASE NOTE: If you are having trouble connecting to the ASU VPN, you will need to contact ASU Enterprise Technology. Research Computing cannot assist with issues related to the VPN.

Choosing a Connection Method

Research Computing provides two methods for connecting to the supercomputer. Each has its advantages and disadvantages.


  1. The web portal has become the standard for new users. It provides a file system viewer and editor, a job submission tool, the ability to view the job queue, and a zoo of interactive applications including a virtual desktop, Jupyter Lab, and RStudio. In the file manager, uploading files is as easy as dragging-and-dropping through the interface! This web portal is accessible through sol.asu.edu.

    The virtual desktop provided by sol.asu.edu is the best way to use graphical applications on the supercomputer. However, please try to avoid using graphical sessions unless you are first learning how to work with the supercomputer or you’re working with software that is only accessible through a graphical user interface. The goal of any interactive session on the supercomputer should be to develop a working scheduling batch (SBATCH) script so that you may properly begin to take advantage of what supercomputing offers.


  2. SSH is the most versatile method. It is ideal for submitting jobs at scale: you can create custom workflows, submit many jobs simultaneously through job arrays, and chain jobs with dependencies to avoid data loss. However, it tends to be slower with interactive graphical applications. If you intend to use MATLAB graphically (as opposed to the MATLAB command line only), the screen draw will be very slow. For graphical applications, we recommend our web portal instead.
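As a sketch, an SSH connection looks like the following. The hostname is a placeholder; the actual address is listed on our wiki.

```shell
ssh <asurite>@<sol-login-host>      # standard connection to the login node
ssh -X <asurite>@<sol-login-host>   # with X11 forwarding for graphical apps (slow)
```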

Login to Sol

You are now ready to reach the login node! The login node is intended as a launching point to allocate compute nodes for your job. You only need to provide your ASURITE and password, if prompted.

Transfer Needed Files

This is optional. However, most research requires data sets or other files to be imported. For details, please see our tutorials on transferring files.
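Assuming SSH access, two common command-line transfer tools are scp and rsync; the hostname below is a placeholder for the address on our wiki.

```shell
# Copy a file from your machine into your Sol home directory:
scp dataset.tar.gz <asurite>@<sol-login-host>:/home/<asurite>/
# rsync shows progress and can resume interrupted transfers:
rsync -avP results/ <asurite>@<sol-login-host>:/scratch/<asurite>/results/
```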

Run an Interactive Session or Create an SBATCH Script

If you are using an interactive app provided in the web portal, this section can be skipped. If you are using a personally installed version of RStudio or Jupyter, please continue reading this section.

There are three ways to use resources on the supercomputer:

  1. Creating an interactive session in the web portal using an interactive app, such as Jupyter, RStudio, or MATLAB. This will assign a compute node to your interactive session in an interactive app of your choice. This is a great option for users to become familiar with using the supercomputer as well as to develop, test, and debug code.

  2. Requesting an interactive session in the shell. This will assign a compute node and connect your command prompt to it. This is good when working by hand to establish the commands needed to run your work. When your session disconnects, the interactive session also closes, and any unsaved work will be lost.

  3. Submitting an SBATCH script. This is a method of telling the scheduler you want an unattended (non-interactive) job to run. When an SBATCH script is submitted, the job will run until it completes, fails, or runs out of time. These scripts can be submitted through the shell or through the “Job Composer” in the web portal.
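A minimal SBATCH script might look like the following sketch. The partition, QOS, and module names are placeholders; check our wiki for the actual values available on Sol.

```shell
#!/bin/bash
#SBATCH -c 1               # number of cores
#SBATCH --mem=4G           # memory for the job
#SBATCH -t 0-01:00:00      # time limit (D-HH:MM:SS)
#SBATCH -o slurm.%j.out    # stdout log (%j expands to the job ID)
#SBATCH -p <partition>     # placeholder: see the wiki for partition names
#SBATCH -q <qos>           # placeholder: see the wiki for QOS names

module load <software/version>   # placeholder module name
python my_script.py
```

Submit the script with `sbatch myjob.sh` and monitor it with `squeue -u $USER`.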

This tutorial covered the basic steps of getting started on the supercomputer. Here’s a little more reading that may help you get fully started.

Modules and Software

Research Computing already has many software packages available, often in multiple versions. They can be accessed using modules.

Users can also install software to their home directory so long as it does not require a license. Users can also request a software install if they prefer to have a module available and the module is not already present. Software that is free for ASU but requires a license is acceptable for modules. Paid licenses are not covered by Research Computing.
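Module usage follows the standard Lmod/environment-modules pattern; the software name below is a placeholder.

```shell
module avail                    # list all available modules
module spider <software>        # search for a package by name (Lmod clusters)
module load <software/version>  # load a specific version
module list                     # show what is currently loaded
```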

The FairShare Score

Computational resources on the supercomputer are free for ASU faculty, students, and collaborators. To keep things fair, computational jobs are prioritized based on computational usage through a priority multiplier called FairShare, which ranges from 0 (lowest priority) to 1 (highest priority). Usage is “forgotten” via exponential decay with a half-life of one week, e.g., if a researcher instantaneously consumed 10,000 core-hour equivalents (CHE), then after one week the system would only “remember” 5,000 core hours of usage. See more on the dynamics here. CHE are tracked based on a linear combination of different hardware allocations, i.e.,

CHE = (core-hour equivalents) = ( (number of cores) + (total RAM allocated) / (4 GiB) + 3 * (number of Multi-Instance GPU slices) + 20 * (number of A30 GPUs) + 25 * (number of A100 GPUs) ) * (wall hours)

Thus, using one core with four GiB of RAM and one A100 GPU allocated for four hours would be tracked as (1 + 1 + 25) × 4 = 108 CHE. Researchers who are more careful with their hardware allocations will see lower impacts on their FairShare as a result of the CHE system. Currently, the system dynamically determines the impact of CHE on FairShare as a function of total system utilization (10,000 CHE might halve your FairShare one month, but cost only a quarter of it the next). As the system approaches full utilization, the impact becomes more stable.
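The worked example above can be checked with simple shell arithmetic. This is just the formula from this guide written out, not an official accounting tool.

```shell
# 1 core, 4 GiB RAM, 1 A100 GPU, 4 wall hours -- the example from the text.
cores=1; ram_gib=4; mig_slices=0; a30_gpus=0; a100_gpus=1; wall_hours=4
che=$(( (cores + ram_gib / 4 + 3*mig_slices + 20*a30_gpus + 25*a100_gpus) * wall_hours ))
echo "$che"   # 108
```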

All jobs will eventually run; however, researchers with higher utilization of the system may have to wait longer for their new jobs to start.

Using GPUs

Scientific research increasingly takes advantage of the power of GPUs. See our page on using GPUs.

Command-line Switches

The interactive and sbatch commands accept command-line switches that greatly affect the resources a job is assigned.

See our wiki page for a brief list of commonly used switches, as well as a list of partitions and QOSes.
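For illustration, commonly used switches look like this; the partition and QOS names are placeholders, and the wiki has Sol's actual values.

```shell
interactive -c 4 --mem=16G -t 0-02:00                    # 4 cores, 16 GiB, 2 hours
sbatch -p <partition> -q <qos> -c 8 -t 1-00:00 myjob.sh  # batch submission
```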

XDMoD (Job Statistics)

You can see day-to-day system utilization details at https://xdmod.sol.rc.asu.edu/

Sol Node Status

See the supercomputer’s node-level status here.

File Systems

There are two primary file systems, referred to as home and scratch, accessed at the paths /home/<username> and /scratch/<username>. Home provides 100 GB of storage by default. Scratch is provided for compute jobs: only actively computed data may reside on the scratch file system.
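In a shell on Sol, the two file systems are reached at the paths above; for example:

```shell
cd /home/$USER       # persistent storage, 100 GB default quota
cd /scratch/$USER    # working space for active compute jobs only
du -sh /home/$USER   # check how much home space you are using
```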

ASU provides cloud storage through an enterprise license for Google Drive, which may be used for archiving data.

Additional details are provided on our file systems wiki page.

Additional Help

Once you have gone through this document, if you still require additional assistance, you can submit a ticket.

If your job is failing, including the job ID helps us significantly, as we can use it to pull detailed information about the job.

For a great reference on building proficiency with command-line tools, we recommend the course materials from MIT CSAIL.