Workshops

Workshops for Fall 2021

Research Computing Expo 2021

Past Workshops

See a complete catalog of past workshops: Catalog of Past Workshops

Workshop Descriptions

Click on the workshop summaries below to see descriptions!

This workshop will cover the Agave cluster configuration, batch and interactive access, and available software packages. Access has been greatly simplified with Open OnDemand, a browser-based portal to Agave supporting a command-line shell, drag-and-drop file transfer, job submission, and RStudio and Jupyter interfaces. A sample of applications run on the system will demonstrate the variety of computational research Agave supports, including new GPU acceleration capability. In preparation for the workshop, all attendees are encouraged to obtain an account on Agave if they do not already have one: https://cores.research.asu.edu/research-computing/get-started/create-an-account
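As a minimal sketch of the batch workflow the workshop covers, a job script might look like the following. The partition, module, and script names here are placeholders, not Agave's actual configuration; consult the cluster documentation for real values.

```shell
#!/bin/bash
# Minimal hypothetical batch script; partition, module, and script
# names are placeholders -- check your cluster's documentation.
#SBATCH --ntasks=1             # one task
#SBATCH --time=00:10:00        # ten-minute wall-clock limit
#SBATCH --partition=serial     # placeholder partition name
module load python/3           # placeholder module name
python my_analysis.py          # placeholder application
```

A script like this would be submitted with `sbatch job.sh`; Open OnDemand offers the same job submission through the browser.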

This workshop covers more advanced topics for conducting research on ASU's high-performance computing cluster, focusing mainly on batch submission and benchmarking jobs through the command line.

This workshop provides training for using the Linux command-line interface. The workshop utilizes materials provided by Software Carpentry on Unix shells and emphasizes the bare minimum requirements to become proficient on the supercomputer.

The Unix shell has been around longer than most of its users have been alive. It has survived so long because it’s a power tool that allows people to do complex things with just a few keystrokes. More importantly, it helps them combine existing programs in new ways and automate repetitive tasks so they aren’t typing the same things over and over again. Use of the shell is fundamental to using a wide range of other powerful tools and computing resources (including “high-performance computing” supercomputers). These lessons will start you on a path towards using these resources effectively.
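As a small taste of what the lessons cover, existing programs can be chained with pipes to do something none of them does alone; this sketch reports word frequencies in a snippet of text:

```shell
# Chain small programs with pipes: report word frequencies,
# most frequent first.
printf 'to be or not to be\n' |
  tr ' ' '\n' |  # split into one word per line
  sort |         # group identical words together
  uniq -c |      # count each group
  sort -rn       # order by count, descending
```

Each stage reads the previous stage's output, so the same pieces can be recombined for other tasks without writing any new programs.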

This workshop will focus on approaches to porting Matlab applications to a cluster environment such as ASU's Agave cluster. This is not an introductory Matlab course. The intended audience has developed Matlab code that runs on a desktop machine and now wants to run it in a parallel environment. Parallelism may be achieved through one or more of the following:

  1. Batch submission of multiple single-threaded instances (e.g. parameter sweep)

  2. Multithreading M-files using the "parfor" command

  3. Confronting large datasets using distributed arrays or tall arrays

  4. Exploiting Matlab functions ported to GPU

  5. Multithreading C code using OpenMP, or writing CUDA kernels and compiling them with the MEX compiler to be called from Matlab

  6. Introduction to features of the Matlab Parallel Server
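For the multithreaded case above, a submission script along these lines requests several cores and hands the job to Matlab non-interactively. The module name and the M-file are placeholders, not Agave specifics:

```shell
#!/bin/bash
# Hypothetical SLURM script for a multithreaded (parfor) Matlab job.
#SBATCH --job-name=parfor-demo
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8      # cores available to the parallel pool
#SBATCH --time=01:00:00
module load matlab             # module name varies by site
# -batch runs Matlab non-interactively; my_sweep.m is a placeholder
# M-file that would open a pool sized to the allocation, e.g. with
# parpool(str2double(getenv('SLURM_CPUS_PER_TASK')))
matlab -batch "my_sweep"
```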

This workshop will focus on best practices, profiling, and benchmarking in R. It will also introduce how to submit R jobs to ASU's Agave supercomputer through batch submissions, parameter sweeps, and SLURM job arrays.
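A SLURM job array of the kind this workshop covers might be sketched as follows; the module name and the sweep.R script are placeholders, not site documentation:

```shell
#!/bin/bash
# Hypothetical SLURM job-array script for an R parameter sweep:
# one array task per parameter value.
#SBATCH --job-name=r-sweep
#SBATCH --array=1-10           # tasks numbered 1 through 10
#SBATCH --ntasks=1
#SBATCH --time=00:30:00
module load r                  # module name varies by site
# SLURM sets SLURM_ARRAY_TASK_ID for each task; sweep.R (a
# placeholder) reads it as an argument to pick its parameter value.
Rscript sweep.R "$SLURM_ARRAY_TASK_ID"
```

Each of the ten tasks runs the same script independently, which is often the simplest way to parallelize a sweep without changing the R code itself.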

This workshop will focus on approaches to porting R applications to ASU's Agave supercomputer. This is not an introductory R course. The intended audience has developed R code that runs on a desktop machine and now wants to run it in a parallel environment. Parallelism may be achieved through one or more of the following:

1. Multithreading R code using the "doParallel" and "foreach" packages
2. Exploiting R functions ported to the GPU
3. Using the "batchtools" package
4. Confronting large datasets

This workshop will explore different low- and high-level approaches to accelerating existing or developing research codes through the use of Graphics Processing Units (GPUs) on the ASU High Performance Computing cluster.

This workshop will detail low-level approaches, specifically the use of OpenACC and CUDA, for accelerating existing or developing research codes with Graphics Processing Units (GPUs) on the ASU High Performance Computing cluster.

The ASU Research Computing Cluster hosts a high-speed scratch filesystem for fast computation and also provides 100 GB of storage in each user's personal home directory. When these filesystems fill up, it is detrimental to the user and the community. Using Globus, this workshop will interactively teach users how to transfer data from their scratch or home directories to their UTO-provided unlimited Google Drive.
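Before transferring anything, it helps to see how much data each area holds; a quick check from the shell might look like this (the scratch path is a placeholder, since site layouts differ):

```shell
# Summarize how much space your directories use before moving data off.
du -sh "$HOME"            # total size of your home directory
# du -sh /scratch/$USER   # scratch location varies by site
```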

Introduction to using the popular high-level language Python on the Agave supercomputer. Python has seen rapid growth in scientific computing over the last decade, largely due to the expressive power of the language. Jupyter's popular scientific notebook will be used to demonstrate how Python may be leveraged to infer, compute, and conduct research in modern HPC environments. Suggested reading 1, suggested reading 2, suggested reading 3.

Link to workshop materials.

Workshop on handling numerical and general data on Agave with Python's numpy and pandas libraries. These libraries provide high-level data objects like the array and DataFrame. These powerful objects allow for fast scientific computation and filtering of data, providing a high-level framework for predicting events or identifying patterns. Demonstrations will be given utilizing Jupyter notebooks. Suggested reading.

Link to workshop materials.

Introduces the capabilities of Python for applying modern algorithmic techniques to understand a large dataset. Makes use of the pandas module to filter data and train a machine learning model within Jupyter. GPU acceleration using Numba will also be demonstrated. Suggested reading.

Link to workshop materials.

Introduction to more advanced machine learning techniques. Demonstrates the ability to set up and run sophisticated neural networks on Agave GPUs by using Python's PyTorch module. TensorFlow applications will also be demonstrated. Suggested reading.

Link to workshop materials.

Use of the shell is fundamental to using a wide range of other powerful tools and computing resources (including "high-performance computing" supercomputers). This tutorial will start you on a path towards using these resources effectively on ASU's Agave cluster.

The tutorial materials are hosted here: http://links.asu.edu/shell

Full description coming soon.