
ASU Research Computing maintains an institutionally supported advanced computing system that enables ASU researchers to pursue large-scale discovery. The increased speed and scale of these resources have made this processing power more accessible and easier to use.

In April 2024, Research Computing held a one-day, in-person GPU Day open to all students, staff, and faculty across all disciplines. The purpose of GPU Day was to educate students, staff, and faculty on how to use graphics processing units (GPUs) on the new Sol supercomputer and to showcase the important applications of these powerful resources.

We welcome your feedback and suggestions on what you would like to see next year (rtshelp@asu.edu) and invite you to join us at GPU Day in 2025! 

“Kick Off”, Link to Materials

“Researcher Showcase”, Link to Materials

Since its creation, ASU Research Computing has hosted nearly 6,000 researchers and students on its many supercomputing systems, accelerating research, academia, and science as a whole across a range of domains and outreach efforts. Research Computing invited six researchers whose work is among the most impactful and computationally intensive on its systems to showcase that work in the spirit of innovation that is only found at ASU.

“Using Generative AI Productively”, Link to Materials

In this hands-on class, learn how to transform your work routine with ChatGPT and other Generative AI technologies. We'll guide you through real-world applications, showing you how to harness AI for greater productivity in your everyday tasks.

“Using NVIDIA GPUs with Python”, Link to Materials

The Python ecosystem is rich with libraries that are both easy to use and effective. In this talk we will show how you can get the most performance out of your Python codes by porting them to run on the GPU. We start with drop-in replacements for SciPy and NumPy code through the CuPy library. Then we’ll cover NVIDIA RAPIDS, which provides GPU acceleration for end-to-end data science workloads, diving specifically into RAPIDS cuDF for zero-code-change Pandas acceleration. We will finish by discussing more involved ways to work with Python on the GPU by writing custom code with Numba. By the end of the session, you should be familiar with multiple Python tools and SDKs you can use to run your code on the GPU.
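
The linked materials cover these tools in depth; as a rough, hedged illustration of the three approaches named in the abstract (CuPy, RAPIDS cuDF, and Numba), the sketch below shows what each looks like in practice. It assumes a GPU-enabled Python environment with cupy, cudf, and numba installed; specific environment and module names on Sol are not given here.

import numpy as np
import cupy as cp                 # drop-in GPU replacement for much of NumPy/SciPy
import cudf                       # RAPIDS GPU DataFrame with a pandas-like API
from numba import cuda            # write custom CUDA kernels in Python

# 1) CuPy: the NumPy API, but the arrays live and compute on the GPU.
x_gpu = cp.random.random(1_000_000).astype(cp.float32)
print(float(cp.sqrt((x_gpu ** 2).sum())))

# 2) RAPIDS cuDF: pandas-style operations on the GPU. (For zero-code-change
#    acceleration of existing scripts, cuDF also provides a cudf.pandas mode.)
df = cudf.DataFrame({"group": ["a", "b", "a", "b"], "value": [1.0, 2.0, 3.0, 4.0]})
print(df.groupby("group")["value"].mean())

# 3) Numba: a hand-written CUDA kernel for cases the libraries do not cover.
@cuda.jit
def add_kernel(a, b, out):
    i = cuda.grid(1)              # global thread index
    if i < out.size:
        out[i] = a[i] + b[i]

n = 1_000_000
a = cuda.to_device(np.ones(n, dtype=np.float32))
b = cuda.to_device(np.ones(n, dtype=np.float32))
out = cuda.device_array(n, dtype=np.float32)
threads = 256
blocks = (n + threads - 1) // threads
add_kernel[blocks, threads](a, b, out)
print(out.copy_to_host()[:3])     # -> [2. 2. 2.]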

“Beyond Text: Harnessing the Potential of Large Language Models for Innovation”, Link to Materials

Dive into the fascinating world of Large Language Models (LLMs) – the backbone of generative artificial intelligence. These models, such as Falcon, LLaMA, Alpaca, MPT, and more, operate on the cutting edge, responding to text queries based on billions of trained weight parameters. ASU Research Computing has downloaded several open-source large language models that can be loaded and used for inference and fine-tuning on the Sol supercomputer.
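
As a hedged sketch only (not taken from the talk), the snippet below shows one common way to load a locally downloaded open-source model for inference in Python, assuming the Hugging Face transformers library (plus PyTorch and accelerate) is available on a Sol GPU node. The model path is hypothetical; the actual location of the models staged by Research Computing is not specified here.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "/path/to/local/llm"   # hypothetical directory of a downloaded model

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,      # half precision to fit larger models in GPU memory
    device_map="auto",              # place model weights on the available GPU(s)
)

prompt = "Explain what a GPU is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))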

“Accelerating Vector Search with RAPIDS cuVS”, Link to Materials

Vector search is important because it underpins many data mining and artificial intelligence applications, particularly retrieval-augmented generation (RAG) workflows. In a typical RAG pipeline, text queries are encoded into numerical embeddings. These embeddings are then searched against a collection of domain-specific embeddings (often stored in vector databases). The job of vector search is to find results that are similar to the query embeddings using nearest neighbor algorithms. K-nearest neighbor (kNN) algorithms are the most accurate, but they are also the most computationally intensive. Approximate nearest neighbor (ANN) algorithms sacrifice a little accuracy for huge performance gains. In this talk we will introduce RAPIDS cuVS, an open-source library for vector search, and show how to use its ANN algorithms on a GPU.
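
As a hedged illustration of the workflow described above, the sketch below builds a GPU ANN index and queries it through the cuVS Python bindings. It assumes the cuvs and cupy packages are installed in a RAPIDS-enabled environment; the function names follow recent cuVS releases and may differ between versions, and the random data stands in for real embeddings.

import cupy as cp
from cuvs.neighbors import cagra

n_vectors, dim, k = 100_000, 128, 10
dataset = cp.random.random((n_vectors, dim)).astype(cp.float32)   # "database" embeddings
queries = cp.random.random((8, dim)).astype(cp.float32)           # query embeddings

# Build a CAGRA graph-based ANN index on the GPU, then search it for the
# k approximate nearest neighbors of each query.
index = cagra.build(cagra.IndexParams(), dataset)
distances, neighbors = cagra.search(cagra.SearchParams(), index, queries, k)

# Results are returned as device arrays; view them with CuPy.
neighbors = cp.asarray(neighbors)
print(neighbors.shape)   # (8, 10): k neighbor ids per query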

“Powering Up: Unleashing the Potential of GPUs and AI in Software Acceleration”, Link to Materials

This workshop focuses on leveraging GPU-accelerated software within the ASU supercomputer environment. Attendees will dive into the intricacies of harnessing the combined power of GPUs and AI algorithms to maximize software acceleration specifically tailored for the ASU supercomputer architecture.
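
As a small, hedged illustration (not drawn from the workshop materials), the sketch below checks that a GPU-accelerated AI framework is actually using the GPU on a compute node and times a toy workload, assuming PyTorch is installed in the environment.

import time
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("Running on:", device)

# A toy AI-style workload: batched matrix multiplication on the chosen device.
x = torch.randn(32, 1024, 1024, device=device)
y = torch.randn(32, 1024, 1024, device=device)

start = time.time()
z = x @ y
if device.type == "cuda":
    torch.cuda.synchronize()      # GPU work is asynchronous; wait before timing
print(f"Batched matmul took {time.time() - start:.3f} s on {device}")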

“Advanced Research Acceleration with GPUs”, Link to Materials

This workshop will dive into Graphics Processing Unit (GPU) programming, focusing on OpenACC and CUDA. We will also touch on aspects of developing, benchmarking, and debugging GPU codes on the ASU supercomputer.
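
The workshop itself targets OpenACC and CUDA in compiled languages; as a hedged Python-side analogue using Numba's CUDA target, the sketch below illustrates a common benchmarking concern when developing GPU codes: kernel launches are asynchronous and the first call includes JIT compilation, so timings need a warm-up run and an explicit synchronization.

import time
import numpy as np
from numba import cuda

@cuda.jit
def scale(a, factor):
    i = cuda.grid(1)
    if i < a.size:
        a[i] *= factor

n = 10_000_000
d_a = cuda.to_device(np.ones(n, dtype=np.float32))
threads = 256
blocks = (n + threads - 1) // threads

scale[blocks, threads](d_a, 2.0)      # warm-up call; includes JIT compilation
cuda.synchronize()

start = time.time()
scale[blocks, threads](d_a, 2.0)
cuda.synchronize()                    # without this, only the kernel launch is timed
print(f"kernel time: {time.time() - start:.6f} s")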

“Closing Remarks, GPU and AI resources, and Open Office Hours”, Link to Materials
