Your first time using a High-Performance Computing (HPC) environment like Aloe can be intimidating, but it doesn't have to be: this guide will get you started with the basics.
This article assumes a basic familiarity with the Linux command line. If you are new to Linux, or need a refresher, RC has created a guide called The Linux Shell; the instructions it provides are general enough to apply to the Aloe supercomputer.
This document also assumes you have already requested and been granted an account. If not, please see the Creating a User Account page.
Please also familiarize yourself with our Required Trainings and Acceptable Use Policy before getting started.
Choosing a connection method
| | [POWER USERS] The Shell | [RECOMMENDED] The Web Portal: https://ood.asre.rc.asu.edu/ |
|---|---|---|
| What is this? | A traditional supercomputing interface | • Well-defined options for file system and job management • Full documentation a browser tab away • Simplified access to modern interfaces like Jupyter/RStudio/MATLAB/etc. |
| Benefits | • Provides superior file system, job submission, editing, processing, and monitoring tools | |
| Disadvantages | • Requires knowledge of available commands and some level of nuance | |
Quick Start
For users who have never used any HPC environment before, we recommend reading through the detailed start.
For those who wish to get started quickly, here is the general overview:
Choose a connection method (SSH / Web Portal)
Connect to the ASU VPN
Transfer files as needed
Log in with your WINEDS username & password
Run an interactive session or create an SBATCH script
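If you choose the SSH route, the connection and file-transfer steps above look roughly like the following from a local terminal. Note that the hostname `login.aloe.rc.asu.edu` is a placeholder used for illustration; check the Aloe documentation for the actual login address.

```shell
# Connect to the ASU VPN first, then SSH to a login node.
# NOTE: the hostname below is a placeholder -- use the address from the Aloe docs.
ssh your_wineds_username@login.aloe.rc.asu.edu

# Transfer files from your local machine to your home directory on the cluster.
scp myscript.py your_wineds_username@login.aloe.rc.asu.edu:~/
# Use -r for whole directories.
scp -r my_project/ your_wineds_username@login.aloe.rc.asu.edu:~/
```

You will be prompted for your WINEDS password on each command unless you have set up SSH keys.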
Important Terms
Login Node: A node intended as a launching point to compute nodes. Login nodes have minimal resources and should not be used for any application that consumes a lot of CPU or memory. Also known as a head node.
Compute Node: Nodes intended for heavy computation. This is where all heavy processing should be done.
HPC: Short for “High-Performance Computing,” it refers to a group (cluster) of computers designed for parallelism across many machines at once. Publicly, these are often called “supercomputers.”
Cluster: A group of interconnected computers that can work cooperatively or independently.
Job: Work assigned to be done on a compute node. Any time work is assigned to a compute node, a job is created.
Scheduler: The application on our end that assigns compute resources for jobs.
Slurm: The name of our scheduler, which manages and allocates compute resources.
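To tie these terms together, here is a minimal sketch of an SBATCH script, the kind of job script mentioned in the Quick Start. The resource values, script name, and log-file pattern are illustrative assumptions, not Aloe-specific defaults; consult the Aloe documentation for appropriate values.

```shell
#!/bin/bash
#SBATCH --job-name=my_first_job    # name shown in the queue
#SBATCH --ntasks=1                 # one task (process)
#SBATCH --cpus-per-task=4          # four CPU cores for that task
#SBATCH --mem=8G                   # 8 GB of memory
#SBATCH --time=00:30:00            # wall-clock limit: 30 minutes
#SBATCH --output=%x_%j.out         # log file named jobname_jobid.out

# Everything below runs on a compute node, never on the login node.
echo "Running on $(hostname)"
python3 myscript.py                # myscript.py is a placeholder for your own work
```

Save this as, say, `myjob.sh`, submit it with `sbatch myjob.sh`, and monitor it with `squeue -u $USER`. For an interactive session instead of a batch job, a command along the lines of `srun --ntasks=1 --time=00:30:00 --pty bash` asks the scheduler for a shell on a compute node (exact flags and limits vary by site).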