Table of Contents
Your first time using a supercomputer like Sol can be intimidating, but it does not have to be. This guide will get you started with the basics. If you run into problems or need additional help, we hold regular weekly office hours.
...
Connect to the ASU Cisco AnyConnect VPN
Login with your ASURITE & password
Choose a connection method (terminal / web portal)
Transfer needed files
Run an interactive session or create an SBATCH script
Access mamba for Python environments, other public software modules, and /wiki/spaces/RC/pages/1754857495
Important Terms
HPC: Short for “High Performance Computing”, it refers to a group (or cluster) of interconnected computers designed to run work in parallel across many machines at once. Publicly, these are often called “supercomputers”.
Node: A single machine in a supercomputer. This will be either a physical machine or a virtual machine.
Login Node: A node intended as a launching point to compute nodes. Login nodes have minimal resources and should not be used for any application that consumes a lot of CPU or memory. This is also known as a “head node”.
Warning: Using the login nodes for computational work will result in temporary penalties on the account. For example, do NOT install Python packages or connect to VS Code on a login node.
Compute Node: Nodes intended for heavy computing. This is where all heavy processing should be done.
Job: Work assigned to be done on a compute node. Whenever the scheduler assigns compute resources to your work, a job is created.
Memory (RAM): Short for “Random-Access Memory”. This refers to the working memory that a calculation or computation requires in order to execute and complete successfully; the term “memory” is not used for disk space. Memory is another main component that defines a node.
CPU: Short for “Central Processing Unit”, also called a core. This is one of the main components that defines a computing device, such as a node.
GPU: Short for “Graphics Processing Unit”. This is a specialized piece of hardware that can enable and accelerate certain computational research.
Scheduler: The application on our end that manages and assigns (allocates) compute resources for jobs. The scheduler used on the ASU Supercomputers is called Slurm.
Fairshare: Running jobs spends the user’s fairshare points, and the lower your fairshare is, the longer your jobs will wait in the queue. Please spend it wisely. Here is more info about it.
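The terms above come together on the command line. As a minimal sketch, these standard Slurm commands (available on Sol; partition, account, and script names here are placeholders) show how you typically interact with the scheduler:

```shell
# List partitions and node availability
sinfo

# Show your own queued and running jobs
# (--me requires a recent Slurm; on older versions use: squeue -u $USER)
squeue --me

# Show your fairshare standing
sshare

# Submit a batch script (myjob.sh is a placeholder) to the scheduler
sbatch myjob.sh
```

These commands only work on the cluster itself, i.e., after you have connected as described in the next section.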
...
Research Computing provides two methods for connecting to the supercomputer. Each has its advantages and disadvantages.
/wiki/spaces/RC/pages/677478401 Connecting to the Supercomputer with the Web Portal
The web portal has become the standard for new users. It provides a file system viewer and editor, a job submission tool, the ability to view the job queue, and a zoo of interactive applications including a virtual desktop, Jupyter Lab, and RStudio. In the file manager, uploading files is as easy as dragging and dropping through the interface! This web portal is accessible through sol.asu.edu.

The virtual desktop provided by sol.asu.edu is the best way to use graphical applications on the supercomputer. However, please avoid graphical sessions unless you are first learning how to work with the supercomputer or you are working with software that is only accessible through a graphical user interface. The goal of any interactive session on the supercomputer should be to develop a working /wiki/spaces/RC/pages/1643905055 scheduling batch (SBATCH) script so that you may properly begin to take advantage of what supercomputing offers.
/wiki/spaces/RC/pages/1643905025 Connecting to the Supercomputer with SSH
SSH is the most versatile method. It is ideal for submitting jobs at scale by allowing you to create custom workflows, submit multiple jobs simultaneously through job arrays, and explore options to avoid data loss through dependencies. However, it tends to be slower with interactive graphical applications. If you intend to use MATLAB graphically (as opposed to MATLAB command line only) the screen draw will be very slow. For graphical applications, we recommend our web portal instead.
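As a sketch, a terminal connection looks like the following. The hostname used here is an assumption, so check the linked SSH guide for the current address; remember that the ASU Cisco AnyConnect VPN must be connected first.

```shell
# <asurite> is your ASURITE ID; hostname is an assumption -- see the SSH guide.
# You will be prompted for your ASURITE password.
ssh <asurite>@login.sol.rc.asu.edu
```

Once connected, you land on a login node: use it only to transfer files, edit scripts, and submit jobs, never for computation.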
...
This is optional. However, most research requires data sets or other files to be imported. For details, please see these tutorials on Transferring Files to and from the Supercomputer or using Google Drive & Globus.
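For command-line transfers, the standard tools are scp and rsync; a minimal sketch, where the hostname is an assumption (see the transfer tutorial for the current address) and the file names are placeholders:

```shell
# Copy a local file to your home directory on the cluster
scp dataset.csv <asurite>@login.sol.rc.asu.edu:~/

# rsync can resume interrupted transfers and only copies what changed:
# -a preserves attributes, -v is verbose, -P shows progress and keeps partials
rsync -avP results/ <asurite>@login.sol.rc.asu.edu:~/results/
```

For large data sets, the Google Drive & Globus tutorial linked above is usually the better option.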
...
There are three ways to use resources on the supercomputer:
Creating an interactive session in the web portal using an interactive app, such as Jupyter, RStudio, or MATLAB. This will assign a compute node to your interactive session in an interactive app of your choice. This is a great option for users to become familiar with using the supercomputer as well as to develop, test, and debug code.
/wiki/spaces/RC/pages/1643839520 Starting an Interactive Session in the shell. This will assign a compute node and connect your command prompt to it. This is good when working by hand to establish the commands needed to run your work. When your session disconnects, the interactive session also closes. Any unsaved work will be lost.
/wiki/spaces/RC/pages/1643905055 Scheduling Batch Scripts (Example). This is a method of telling the scheduler you want an unattended (or non-interactive) job to run. When an sbatch script is submitted, the job will run until it either completes, fails, or runs out of time. These sbatch scripts can be submitted through the shell or through the “Job Composer“ in the web portal.
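A minimal sbatch script might look like the following sketch. The partition and module names are assumptions for illustration; check `sinfo` and the linked example page for the values that apply to your account.

```shell
#!/bin/bash
#SBATCH -N 1                 # one node
#SBATCH -c 4                 # four CPU cores
#SBATCH --mem=8G             # memory for the whole job
#SBATCH -t 0-01:00:00        # time limit (D-HH:MM:SS)
#SBATCH -p general           # partition name is an assumption; check sinfo
#SBATCH -o slurm.%j.out      # stdout file; %j expands to the job ID

# Module name is an assumption -- see the Working with Python guide
module load mamba/latest
python my_script.py          # my_script.py is a placeholder
```

Submit it with `sbatch myjob.sh`. For option 2 above, the same resource flags can instead be passed to `salloc` (or `srun --pty bash`) to get an interactive shell on a compute node.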
...
Using Python on supercomputers is a little different from using it on workstations or local computers. Please use the system-provided mamba instead of conda or pip, and follow our guide closely for best practices with Python on the ASU supercomputers: Working with Python
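As a sketch of the workflow (run on a compute node, not a login node; the module name and `source activate` usage follow the pattern in the Working with Python guide, and the environment name is a placeholder):

```shell
# Load the system-provided mamba (module name is an assumption)
module load mamba/latest

# Create a named environment with the packages you need
mamba create -n myenv python=3.12 numpy

# Activate it and verify the install
source activate myenv
python -c "import numpy; print(numpy.__version__)"
```

Creating environments is computational work, so request an interactive session first rather than doing this on a login node.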
...
If your job is failing, including the Job ID in your support request helps us significantly, as we can pull detailed information about the job.
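You can look up a Job ID and its outcome yourself with standard Slurm tools; a sketch, where `<jobid>` is a placeholder for the number you find:

```shell
# List your recent jobs with their IDs, states, and exit codes
# (-X collapses job steps into one line per job)
sacct -X --format=JobID,JobName,State,Elapsed,ExitCode

# Detailed view of one job while it is still queued or running
scontrol show job <jobid>

# Accounting details after the job has finished, including peak memory
sacct -j <jobid> --format=JobID,State,MaxRSS,Elapsed,ExitCode
```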
If you are new to Linux, or need a refresher, Research Computing has created a guide at The Linux Shell on the Sol Supercomputer. For a great reference on building proficiency with command-line tools, we provide the following MIT link from CSAIL.
...