GPU-based fast Fourier transform (FFT) is critical to scientific computing and signal processing. However, we observe that existing FFT libraries are inefficient and lack fault tolerance against soft errors. To address these issues, we introduce TurboFFT, a new FFT prototype co-designed for high performance and online fault tolerance. For FFT, we propose an architecture-aware, padding-free, template-based prototype that maximizes hardware resource utilization, achieving performance competitive with or superior to the state-of-the-art closed-source library, cuFFT. For fault tolerance, we 1) explore algorithm-based fault tolerance (ABFT) at the thread and threadblock levels to reduce the additional memory footprint, 2) address error propagation by introducing a two-side ABFT with location encoding, and 3) further modify the threadblock-level FFT from one transaction to multiple transactions to expose more parallelism for ABFT. Our two-side strategy enables online correction without additional global memory, while our multi-transaction design amortizes the expensive threadblock-level reduction in ABFT with zero additional operations. Experimental results on an NVIDIA A100 server GPU and a Tesla T4 (Turing) GPU demonstrate that TurboFFT without fault tolerance is comparable to or up to 300% faster than cuFFT and outperforms VkFFT. TurboFFT with fault tolerance maintains an overhead of 7% to 15%, even under tens of error injections per minute for both single and double precision.
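As a rough illustration of the checksum principle underlying ABFT for a linear transform such as the FFT, the minimal NumPy sketch below shows how a single precomputed encoding vector lets a corrupted output element be detected without recomputing the transform. The all-ones encoding vector, the injected error, and the tolerance are illustrative assumptions on our part, not TurboFFT's actual thread/threadblock-level scheme.

```python
import numpy as np

# Minimal sketch of checksum-based ABFT for a linear transform (here the DFT).
# The encoding vector, injected error, and tolerance are illustrative only.
n = 8
rng = np.random.default_rng(0)
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)

F = np.fft.fft(np.eye(n))      # DFT matrix: F @ x == np.fft.fft(x)
e = np.ones(n)                 # encoding (checksum) vector
w = e @ F                      # precomputed once: w @ x predicts e @ y

y = np.fft.fft(x)              # the transform to be protected
y[3] += 1.0                    # simulate a soft error in one output element

input_checksum = w @ x         # checksum carried through the transform
output_checksum = e @ y        # checksum recomputed from the (corrupted) output

# A mismatch between the two checksums flags the fault.
assert not np.isclose(input_checksum, output_checksum, atol=1e-6)
```

A second, position-dependent encoding vector (the "location encoding" the abstract mentions) would additionally identify which element was corrupted, which is presumably what enables the online correction without extra global memory that the authors describe.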

Mon 3 Mar

Displayed time zone: Pacific Time (US & Canada)

11:20 - 12:20
Session 2: GPU I (Session Chair: Xipeng Shen), Main Conference at Acacia D
11:20
20m
Talk
RT-BarnesHut: Accelerating Barnes-Hut Using Ray-Tracing Hardware
Main Conference
Vani Nagarajan (Purdue University), Rohan Gangaraju (Purdue University), Kirshanthan Sundararajah (Virginia Tech), Artem Pelenitsyn (Purdue University), Milind Kulkarni (Purdue University)
11:40
20m
Talk
EVeREST: An Effective and Versatile Runtime Energy Saving Tool for GPUs (Distinguished Paper Award)
Main Conference
Anna Yue (University of Minnesota at Twin Cities), Pen-Chung Yew (University of Minnesota at Twin Cities), Sanyam Mehta (HPE)
12:00
20m
Talk
TurboFFT: Co-Designed High-Performance and Fault-Tolerant Fast Fourier Transform on GPUs
Main Conference
Shixun Wu, Yujia Zhai (NVIDIA Corporation), Jinyang Liu (University of California, Riverside), Jiajun Huang (University of California, Riverside), Zizhe Jian (University of California, Riverside), Huangliang Dai (University of California, Riverside), Sheng Di (Argonne National Laboratory), Franck Cappello (Argonne National Laboratory), Zizhong Chen (University of California, Riverside)