Spring 2022

Fall 2021

Summer 2021

Spring 2021

Fall 2020

Summer 2020

  • July 6, 2020, 2pm-3pm: Intro to the ASU Compute Supercomputer. Past workshop info here

  • June 1, 2020, 2pm: Intro to the ASU Compute Supercomputer. Past workshop info here

Spring 2020

Fall 2019

Workshop Descriptions

Click on the workshop summaries below to see descriptions!

 Description: HPC1: Intro to HPC

This workshop will cover the Agave supercomputer configuration, batch and interactive access, and available software packages. Access has been greatly simplified with Open OnDemand, a browser-based portal to Agave supporting command-line shell, drag and drop file transfer, job submission, and RStudio and Jupyter interfaces. A sample of applications run on the system will demonstrate the variety of computational research Agave supports, including new GPU acceleration capability. In preparation for the workshop all attendees are encouraged to obtain an account on Agave if they do not already have one: https://cores.research.asu.edu/research-computing/get-started/create-an-account

 Description: HPC2: Scaling for HPC

This workshop covers more advanced topics for conducting research on ASU's high-performance computing supercomputer, mostly focusing on batch submission processes and benchmarking jobs through the command line. In preparation for the workshop all attendees are encouraged to obtain an account on Agave if they do not already have one: https://cores.research.asu.edu/research-computing/get-started/create-an-account

 Description: Intro to the Linux Command Line

This workshop provides training in using the Linux command-line interface. The workshop uses the Software Carpentry lesson materials on the Unix shell and emphasizes the bare minimum needed to become proficient on the supercomputer.

The Unix shell has been around longer than most of its users have been alive. It has survived so long because it’s a power tool that allows people to do complex things with just a few keystrokes. More importantly, it helps them combine existing programs in new ways and automate repetitive tasks so they aren’t typing the same things over and over again. Use of the shell is fundamental to using a wide range of other powerful tools and computing resources (including “high-performance computing” supercomputers). These lessons will start you on a path towards using these resources effectively.

In preparation for the workshop all attendees are encouraged to obtain an account on Agave if they do not already have one: https://cores.research.asu.edu/research-computing/get-started/create-an-account

 Description: Matlab for HPC & on GPU

This workshop will focus on approaches to porting Matlab applications to a supercomputer environment such as ASU's Agave supercomputer. This is not an intro to Matlab course. The intended audience has developed Matlab code that runs on a desktop machine and would now like to run that code in a parallel environment. This may be accomplished through any of the following:

  1. Batch submission of multiple single-threaded instances (e.g. parameter sweep)

  2. Multithreading an M-file using the "parfor" command

  3. Confronting large datasets using distributed arrays or tall arrays

  4. Exploiting Matlab functions ported to GPU

  5. Multithreading C code using OpenMP, or writing CUDA kernels and compiling them with the MEX compiler to be called from Matlab

  6. Introduction to features of the Matlab Parallel Server

In preparation for the workshop all attendees are encouraged to obtain an account on Agave if they do not already have one: https://cores.research.asu.edu/research-computing/get-started/create-an-account

 Description: R1: Intro to HPC -- R Intro for ASU's High Performance Computing Supercomputer

This workshop will focus on best practices, profiling, and benchmarking in R. It will also introduce how to submit R jobs to ASU's Agave supercomputer through batch submissions, parameter sweeps, and SLURM job arrays. In preparation for the workshop all attendees are encouraged to obtain an account on Agave if they do not already have one: https://cores.research.asu.edu/research-computing/get-started/create-an-account
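
Though the workshop's examples are in R, the job-array pattern it describes is language-agnostic. As a hedged sketch (written in Python rather than R, with an invented parameter grid), a worker script can select its own parameter from the index SLURM assigns to each array task:

  import os

  # SLURM sets SLURM_ARRAY_TASK_ID for every task of a job array,
  # e.g. one submitted with: sbatch --array=0-3 run_sweep.sh
  task_id = int(os.environ.get("SLURM_ARRAY_TASK_ID", 0))

  # Invented parameter grid: each array task handles one value.
  learning_rates = [0.001, 0.01, 0.1, 1.0]
  lr = learning_rates[task_id]

  print(f"array task {task_id}: running the sweep with learning rate {lr}")
  # ... the real computation for this parameter would go here ...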

 Description: R2: Scaling code for HPC -- R Intro for ASU's High Performance Computing Supercomputer

This workshop will focus on approaches to porting R applications to ASU's Agave supercomputer. This is not an intro to R course. The intended audience has developed R code that runs on a desktop machine and would now like to run that code in a parallel environment. This may be accomplished through any of the following:

  1. Multithreading R code using the "doParallel" and "foreach" packages

  2. Exploiting R functions ported to GPU

  3. Implementing batchtools

  4. Confronting large datasets

In preparation for the workshop all attendees are encouraged to obtain an account on Agave if they do not already have one: https://cores.research.asu.edu/research-computing/get-started/create-an-account

 Description: GPU1: Intro to HPC -- General GPU on ASU's Research Computing Supercomputer

This workshop will explore different low- and high-level approaches to accelerating existing or in-development research codes using Graphics Processing Units (GPUs) on the ASU High Performance Computing supercomputer.
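
As one illustration of a high-level approach (the workshop may well use different libraries or languages), the sketch below uses Python's CuPy library, which mirrors the NumPy interface while executing array math on the GPU:

  import numpy as np
  import cupy as cp  # NumPy-like arrays backed by GPU memory

  # Create data on the host, copy it to the GPU, compute there, copy the result back.
  x_host = np.random.rand(10_000_000).astype(np.float32)
  x_gpu = cp.asarray(x_host)       # host -> device transfer
  y_gpu = cp.sqrt(x_gpu) * 2.0     # elementwise math runs on the GPU
  y_host = cp.asnumpy(y_gpu)       # device -> host transfer
  print(y_host[:5])

Low-level alternatives, such as writing CUDA kernels directly, trade this convenience for finer control; see the GPU2 description below.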

In preparation for the workshop all attendees are encouraged to obtain an account on Agave if they do not already have one: https://cores.research.asu.edu/research-computing/get-started/create-an-account

 Description: GPU2: OpenACC & CUDA for HPC

Full description coming soon. This workshop will detail low-level approaches, specifically the use of OpenACC and CUDA, for accelerating existing or in-development research codes with Graphics Processing Units (GPUs) on the ASU High Performance Computing supercomputer.

In preparation for the workshop all attendees are encouraged to obtain an account on Agave if they do not already have one: https://cores.research.asu.edu/research-computing/get-started/create-an-account

 Description: Google Drive, Globus, & HPC

The ASU Research Computing Supercomputer hosts a high-speed scratch filesystem for computing results quickly and also provides 100 GB of storage in each user's home directory. When these filesystems fill up, it is detrimental to the user and to the community. Using Globus, this workshop will interactively teach users how to transfer data from their scratch or home directories to their UTO-provided unlimited Google Drive.
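
The workshop itself walks through transfers interactively in the Globus web interface; purely as an alternative illustration, the sketch below scripts the same kind of transfer with the globus-sdk Python package (the client ID, endpoint UUIDs, and paths are placeholders, not real values):

  import globus_sdk

  # Placeholders: a registered Globus app client ID plus the UUIDs of the
  # source (Agave) and destination (Google Drive) endpoints from the Globus web app.
  CLIENT_ID = "YOUR-APP-CLIENT-ID"
  SOURCE_ENDPOINT = "agave-endpoint-uuid"
  DEST_ENDPOINT = "google-drive-endpoint-uuid"

  # Native-app login: open the printed URL, log in, and paste the code back.
  auth = globus_sdk.NativeAppAuthClient(CLIENT_ID)
  auth.oauth2_start_flow()
  print("Log in at:", auth.oauth2_get_authorize_url())
  tokens = auth.oauth2_exchange_code_for_tokens(input("Auth code: "))
  transfer_token = tokens.by_resource_server["transfer.api.globus.org"]["access_token"]

  tc = globus_sdk.TransferClient(
      authorizer=globus_sdk.AccessTokenAuthorizer(transfer_token)
  )

  # Queue a recursive transfer of a scratch directory to Google Drive.
  task = globus_sdk.TransferData(tc, SOURCE_ENDPOINT, DEST_ENDPOINT, label="scratch backup")
  task.add_item("/scratch/your-asurite/results/", "/results-backup/", recursive=True)
  print("Submitted transfer task:", tc.submit_transfer(task)["task_id"])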

In preparation for the workshop all attendees are encouraged to obtain an account on Agave if they do not already have one: https://cores.research.asu.edu/research-computing/get-started/create-an-account

 Description: Python 1: Intro for HPC

Introduction to using the popular high-level language Python on the Agave supercomputer. Python has seen rapid growth in scientific computing over the last decade, largely due to the expressive power of the language. Jupyter's popular scientific notebook will be used to demonstrate how Python can be leveraged to infer, compute, and conduct research in modern HPC environments. Suggested reading 1, suggested reading 2, suggested reading 3.
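
As a small illustration of the kind of Python the workshop targets (not taken from the workshop materials), the snippet below uses NumPy to vectorize a computation that would be slow as a pure-Python loop; it runs the same way in a Jupyter notebook or in a batch job on a compute node:

  import numpy as np

  # Simulate 1,000 random walkers each taking 500 steps of +/-1, with no Python loops.
  rng = np.random.default_rng(seed=0)
  steps = rng.choice([-1, 1], size=(1000, 500))
  positions = steps.cumsum(axis=1)

  # Mean squared displacement after the final step.
  msd = (positions[:, -1] ** 2).mean()
  print(f"mean squared displacement after 500 steps: {msd:.1f}")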

Link to workshop materials.

In preparation for the workshop all attendees are encouraged to obtain an account on Agave if they do not already have one: https://cores.research.asu.edu/research-computing/get-started/create-an-account

 Description: Python 2: Data Handling for HPC

Workshop on handling numerical and general data on Agave with Python's numpy and pandas libraries. These libraries provide high-level data objects like the array and DataFrame. These powerful objects allow for fast scientific computation and filtering of data, providing a high-level framework for predicting events or identifying patterns. Demonstrations will be given utilizing Jupyter notebooks. Suggested reading.
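
To make the array and DataFrame objects described above concrete, here is a minimal sketch with a made-up dataset (the column names and values are invented, not from the workshop materials):

  import numpy as np
  import pandas as pd

  # Synthetic dataset: 1,000 temperature readings taken at three sites.
  rng = np.random.default_rng(seed=0)
  df = pd.DataFrame({
      "site": rng.choice(["A", "B", "C"], size=1000),
      "temp_c": rng.normal(loc=25.0, scale=5.0, size=1000),
  })

  # Boolean filtering: keep only readings above 30 C.
  hot = df[df["temp_c"] > 30.0]

  # Grouped aggregation: per-site mean and maximum temperature.
  summary = df.groupby("site")["temp_c"].agg(["mean", "max"])
  print(f"{len(hot)} hot readings")
  print(summary)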

Link to workshop materials.

In preparation for the workshop all attendees are encouraged to obtain an account on Agave if they do not already have one: https://cores.research.asu.edu/research-computing/get-started/create-an-account

 Description: Python 3: Machine Learning for HPC

Introduces the capabilities of Python for applying modern algorithmic techniques to understand a vast dataset. Makes use of the pandas module to filter data and train a machine learning model within Jupyter. GPU acceleration using Numba will also be demonstrated. Suggested reading.
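
To sketch the filter-then-train workflow, the example below assumes scikit-learn as the machine-learning library and invents a small synthetic dataset, since the description above does not name a specific model or dataset:

  import numpy as np
  import pandas as pd
  from sklearn.ensemble import RandomForestClassifier
  from sklearn.model_selection import train_test_split

  # Synthetic dataset: two numeric features and a binary label.
  rng = np.random.default_rng(seed=0)
  df = pd.DataFrame({
      "feature_1": rng.normal(size=5000),
      "feature_2": rng.normal(size=5000),
      "label": rng.integers(0, 2, size=5000),
  })

  # Filter with pandas, then split into training and test sets.
  clean = df[df["feature_1"].abs() < 3.0]  # drop extreme outliers
  X_train, X_test, y_train, y_test = train_test_split(
      clean[["feature_1", "feature_2"]], clean["label"], test_size=0.2, random_state=0
  )

  # Train and score a simple model.
  model = RandomForestClassifier(n_estimators=100, random_state=0)
  model.fit(X_train, y_train)
  print(f"test accuracy: {model.score(X_test, y_test):.2f}")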

Link to workshop materials.

In preparation for the workshop all attendees are encouraged to obtain an account on Agave if they do not already have one: https://cores.research.asu.edu/research-computing/get-started/create-an-account

 Description: Python 4: Deep Learning for HPC

Introduction to more advanced machine learning techniques. Demonstrates how to set up and run sophisticated neural networks on Agave GPUs using Python's PyTorch module. TensorFlow applications will also be demonstrated. Suggested reading.
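
For a sense of what running a neural network on an Agave GPU with PyTorch looks like, here is a minimal self-contained sketch (the architecture and data are invented for illustration, and it falls back to the CPU when no GPU is visible):

  import torch
  import torch.nn as nn

  # Use a GPU if one has been allocated to the job, otherwise fall back to the CPU.
  device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

  # A small fully connected classifier and some synthetic data.
  model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2)).to(device)
  x = torch.randn(256, 20, device=device)
  y = torch.randint(0, 2, (256,), device=device)

  optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
  loss_fn = nn.CrossEntropyLoss()

  # A few training steps.
  for step in range(100):
      optimizer.zero_grad()
      loss = loss_fn(model(x), y)
      loss.backward()
      optimizer.step()
  print(f"final loss on {device}: {loss.item():.3f}")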

Link to workshop materials.

In preparation for the workshop all attendees are encouraged to obtain an account on Agave if they do not already have one: https://cores.research.asu.edu/research-computing/get-started/create-an-account

 Description: Software Carpentry: Using the Shell on the ASU High Performance Computing Supercomputer

Use of the shell is fundamental to using a wide range of other powerful tools and computing resources (including "high-performance computing" supercomputers). This tutorial will start you on a path towards using these resources effectively on ASU's Agave supercomputer.

The tutorial materials are hosted here: http://links.asu.edu/shell

 Description: Software Carpentry: R

Full description coming soon.
