Table of Contents
...
Warning
Using the login nodes for computational work will result in temporary penalties on the account; for example, do NOT install Python packages or connect to VS Code on a login node.
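Heavy work belongs on a compute node instead. As a minimal sketch using standard Slurm, the command below requests a small interactive session; the resource values are illustrative, and Sol may offer its own wrapper command for interactive use, so check the site documentation first.

```bash
# Ask Slurm for a small interactive allocation and open a shell on the compute node.
# The values below (1 task, 4 cores, 8 GB, 1 hour) are examples only.
srun --ntasks=1 --cpus-per-task=4 --mem=8G --time=01:00:00 --pty bash
```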
Compute Node: A node intended for heavy computing. This is where all heavy processing should be done.
Job: Work assigned to be done on a compute node. Any time a compute node is assigned, a job is created.
Memory (RAM): Short for “Random-Access Memory”. This refers to the amount of memory that each calculation or computation requires in order to execute and complete successfully. The term “memory” is not used for disk space. This is another main component that defines a node.
CPU: Short for “Central Processing Unit”, also called a core. This is one of the main components that defines a computing device, such as a node.
GPU: Short for “Graphics Processing Unit”. This is a specialized piece of hardware that can enable and accelerate certain computational research.
Scheduler: The application on our end that manages and assigns (allocates) compute resources for jobs. The scheduler used on the ASU Supercomputers is called Slurm; a minimal example batch script follows this glossary.
Fairshare: Running jobs costs the user fairshare points, and the lower a user’s fairshare score is, the longer their jobs will wait in the queue. Please spend it wisely. More information about fairshare is available here.
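To tie these terms together, here is a minimal sketch of a Slurm batch script. The partition name and resource amounts are illustrative assumptions, not Sol defaults; check the Sol documentation for the partitions and limits that actually apply.

```bash
#!/bin/bash
#SBATCH --job-name=example        # job name shown by the scheduler
#SBATCH --partition=general       # partition name is an assumption; use the partition your site documents
#SBATCH --ntasks=1                # one task (process)
#SBATCH --cpus-per-task=4         # four CPU cores for that task
#SBATCH --mem=8G                  # eight gigabytes of memory (RAM)
#SBATCH --time=01:00:00           # one-hour wall-clock limit
#SBATCH --output=%x_%j.out        # log file; %x = job name, %j = job ID

# Everything below runs on the compute node that Slurm allocates for this job.
echo "Running on $(hostname) with $SLURM_CPUS_PER_TASK cores"
```

Submitting the script with `sbatch example.sh` creates a job; `squeue -u $USER` then shows it queued or running, and the resources it consumes are charged against your fairshare.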
Detailed Start
Connect through the Cisco VPN
...
Using Python on the supercomputers is a little different from using it on a workstation or local computer. Please use the system-provided mamba instead of conda or pip, and follow our guide closely for best practices with Python on the ASU supercomputers: Working with Python
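As a minimal sketch of that workflow, the commands below load the system-provided mamba and build a personal environment. The module name, environment name, Python version, and package list are assumptions for illustration; confirm the details against the Working with Python guide.

```bash
# Load the system-provided mamba (confirm the exact module name with `module avail mamba`)
module load mamba/latest

# Create a personal environment instead of installing into the base installation
mamba create -n myenv python=3.11 numpy pandas

# Activate the environment and check that it works
source activate myenv
python -c "import numpy; print(numpy.__version__)"
```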
...
If your job is failing, including the Job ID helps us significantly, as we can use it to pull detailed information about the job.
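If you are not sure what the Job ID is, standard Slurm commands can recover it; the sketch below uses common options, and the fields worth reporting may vary with the problem.

```bash
# Jobs that are still queued or running, with their Job IDs
squeue -u $USER

# One summary line per recent job, including its Job ID and final state
sacct -X -u $USER

# Detailed accounting for a specific job, useful to include in a support request
sacct -j <jobid> --format=JobID,JobName,State,ExitCode,Elapsed,MaxRSS
```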
If you are new to Linux, or need a refresher, Research Computing has created a guide at The Linux Shell on the Sol Supercomputer. For a great reference on building proficiency with command-line tools, we also recommend the following link from MIT CSAIL.
...