ASU Research Computing maintains an institutionally supported advanced computing system that enables ASU researchers to pursue large-scale discovery. The increased speed and scale of these resources have made this processing power more accessible and easier to use.

In April 2024, Research Computing held a one-day, in-person GPU Day open to all students, staff, and faculty across all disciplines. The purpose of GPU Day was to educate students, staff, and faculty on how to use graphics processing units (GPUs) on the new Sol supercomputer and to showcase the important applications of these powerful resources.

We welcome your feedback and suggestions on what you would like to see next year (rtshelp@asu.edu) and invite you to join us at GPU Day in 2025! 

Materials

Kick Off Slides from Research Computing

Learn about news related to the Sol Supercomputer, Research Computing, and the Research Technology Office.

Research Showcase Slides

Since its creation, ASU Research Computing has hosted nearly 6,000 researchers and students on its supercomputing systems, accelerating research across many domains and supporting broad outreach efforts. Research Computing invited seven researchers whose work is among the most impactful and computationally intensive on its systems to showcase that work in the spirit of innovation that is only found at ASU.

Deep Learning for Large-scale Prediction of Melting Temperature and Materials Properties
Qijun Hong from the School for Engineering of Matter, Transport and Energy

GPU and Remote Sensing for Earth's Water, Land, and Air
Jiwei Li from the School of Ocean Futures

Integrating AI/ML and Database Systems: DeepMapping and VeloxML
Jia Zou from the School of Computing and Augmented Intelligence

Challenges in solving differential games with imperfect information
Yi Ren from the School for Engineering of Matter, Transport and Energy

AnoFPDM: Anomaly Segmentation with Forward Process of Diffusion Models for Brain MRI
Yiming Che on behalf of Teresa Wu from the School of Computing and Augmented Intelligence

JORA: JAX Tensor-Parallel LoRA Library for Retrieval Augmented Fine-Tuning
Anique Tahir on behalf of Huan Liu from the School of Computing and Augmented Intelligence

Parameter-Efficient Methods for Fairness
Nathan Stromberg on behalf of Lalitha Sankar from the School of Electrical, Computer and Energy Engineering

Using Generative AI Productively Slides from Geoff Pofahl

In this hands-on class, learn how to transform your work routine with ChatGPT and other Generative AI technologies. We'll guide you through real-world applications, showing you how to harness AI for greater productivity in your everyday tasks.

See more AI examples from Geoff Pofahl here.

Using NVIDIA GPUs with Python Slides from Zoe Ryan, NVIDIA

The Python ecosystem is rich with libraries that are both easy to use and effective. In this talk we will show how you can get the most performance out of your Python code by porting it to run on the GPU. We start with drop-in replacements for SciPy and NumPy code through the CuPy library. Then we'll cover NVIDIA RAPIDS, which provides GPU acceleration for end-to-end data science workloads. We will dive specifically into RAPIDS cuDF for zero-code-change pandas acceleration. We will finish by discussing more involved ways to work with Python on the GPU by writing custom code with Numba. By the end of the session, you should be familiar with multiple Python tools and SDKs you can use to run your code on the GPU.
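
As a rough illustration of the drop-in-replacement idea described above (not taken from the talk slides), the following Python sketch moves a NumPy computation onto the GPU with CuPy. It assumes a CUDA-capable GPU node and that the cupy package is available in your environment.

```python
# Minimal sketch: CuPy as a near drop-in replacement for NumPy.
# Assumes a CUDA-capable GPU and that the cupy package is installed.
import numpy as np
import cupy as cp

x_cpu = np.random.rand(10_000_000).astype(np.float32)
x_gpu = cp.asarray(x_cpu)            # copy the array into GPU memory

# Same array API as NumPy, but the work runs on the GPU.
y_gpu = cp.sqrt(x_gpu) * cp.sin(x_gpu)

y_cpu = cp.asnumpy(y_gpu)            # copy the result back to the host
print(y_cpu[:5])
```

For the pandas side, cuDF ships a pandas accelerator mode that can be enabled without code changes, for example with `%load_ext cudf.pandas` in a notebook or `python -m cudf.pandas script.py` on the command line (assuming a recent RAPIDS installation).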

Beyond Text: Harnessing the Potential of Large Language Models for Innovation Slides from Gil Speyer

Dive into the fascinating world of Large Language Models (LLMs) – the backbone of generative artificial intelligence. These models, like Falcon, LLaMA, Alpaca, MPT, and more, operate on the cutting edge, responding to text queries based on billions of trained weight parameters. ASU Research Computing has downloaded several open-source large language models that can be loaded and used for inference and fine-tuning on the Sol supercomputer.
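
As a hedged sketch (not from the talk slides) of what loading one of these locally downloaded models for inference might look like with the Hugging Face Transformers library, the example below uses a hypothetical placeholder path; the actual storage location of the models on Sol is not specified here.

```python
# Minimal sketch: running inference with a locally stored open-source LLM
# via Hugging Face Transformers. MODEL_PATH is a hypothetical placeholder,
# not the actual location of the models on Sol.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "/path/to/local/llm"    # hypothetical placeholder path

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_PATH,
    torch_dtype=torch.float16,       # half precision to fit GPU memory
    device_map="auto",               # spread layers across available GPUs
)

prompt = "What is a supercomputer?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```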

Accelerating Vector Search with RAPIDS cuVS Slides from Nathan Stephens, NVIDIA

Vector search is important because it underpins many data mining and artificial intelligence applications, particularly retrieval augmented generation (RAG) workflows. In a typical RAG pipeline, text queries are encoded into numerical embeddings. These embeddings are then searched against a collection of domain-specific embeddings (often stored in vector databases). The job of vector search is to find results that are similar to the query embeddings using nearest neighbor algorithms. K-nearest neighbor (kNN) algorithms are the most accurate, but they are also the most computationally intensive. Approximate nearest neighbor (ANN) algorithms sacrifice a little accuracy for huge performance gains. In this talk we will introduce RAPIDS cuVS, an open-source library for vector search, and show how to use its ANN algorithms on a GPU.
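
To make the nearest-neighbor idea concrete, here is a small GPU-based brute-force (exact) kNN sketch written with CuPy. This is a conceptual illustration of the accurate-but-expensive baseline described above, not the cuVS API itself; cuVS provides the ANN algorithms that trade a little accuracy for much better performance. The random arrays stand in for real document and query embeddings.

```python
# Conceptual sketch: exact (brute-force) k-nearest-neighbor search over
# embeddings on the GPU with CuPy. Random data stands in for real
# document and query embeddings; cuVS supplies faster ANN alternatives.
import cupy as cp

n_docs, n_queries, dim, k = 100_000, 8, 384, 5

docs = cp.random.rand(n_docs, dim, dtype=cp.float32)
queries = cp.random.rand(n_queries, dim, dtype=cp.float32)

# Cosine similarity: normalize rows, then take dot products.
docs /= cp.linalg.norm(docs, axis=1, keepdims=True)
queries /= cp.linalg.norm(queries, axis=1, keepdims=True)
scores = queries @ docs.T                    # shape: (n_queries, n_docs)

# Indices of the k most similar documents for each query.
top_k = cp.argsort(-scores, axis=1)[:, :k]
print(top_k)
```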

Powering Up: Unleashing the Potential of GPUs and AI in Software Acceleration Slides from Gil Speyer

This workshop focuses on leveraging GPU-accelerated software within the ASU supercomputer environment. Attendees will dive into the intricacies of harnessing the combined power of GPUs and AI algorithms to maximize software acceleration specifically tailored for the ASU supercomputer architecture.

Advanced Research Acceleration with GPUs Slides from Gil Speyer

This workshop will dive into Graphics Processing Unit (GPU) programming, focusing on OpenACC and CUDA. We will also touch on aspects of developing, benchmarking, and debugging GPU codes on the ASU supercomputer.
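
As a taste of what custom GPU kernel programming looks like, the sketch below uses Numba's CUDA JIT from Python as a stand-in for the CUDA C and OpenACC material the workshop covers; it assumes the numba package and a CUDA-capable GPU.

```python
# Minimal sketch of a custom GPU kernel, written in Python with Numba's
# CUDA JIT as a stand-in for CUDA C / OpenACC. Assumes numba and a
# CUDA-capable GPU are available.
import numpy as np
from numba import cuda

@cuda.jit
def saxpy(a, x, y, out):
    i = cuda.grid(1)                 # global thread index
    if i < out.size:                 # guard against extra threads
        out[i] = a * x[i] + y[i]

n = 1_000_000
x = np.random.rand(n).astype(np.float32)
y = np.random.rand(n).astype(np.float32)
out = np.zeros_like(x)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
# NumPy arrays are copied to and from the GPU automatically by Numba.
saxpy[blocks, threads_per_block](2.0, x, y, out)

np.testing.assert_allclose(out, 2.0 * x + y, rtol=1e-5)
```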

Closing Remarks and GPU and AI Resources Slides from Research Computing

Learn about resources and services for GPU and AI from Research Computing, ASU, and national efforts funded by the U.S. National Science Foundation (NSF).
