Large Language Models (LLMs) have demonstrated remarkable performance on a wide range of natural language processing tasks. However, training these models is computationally intensive and susceptible to faults, particularly in the attention mechanism, a critical component of transformer-based LLMs. In this paper, we investigate the impact of faults on LLM training through systematic fault injection experiments, focusing on INF, NaN, and near-INF values in computation results. We observe the propagation patterns of these errors, which can push the model into non-trainable states and disrupt training, forcing the procedure to reload from checkpoints. To mitigate the impact of these faults, we propose ATTNChecker, the first Algorithm-Based Fault Tolerance (ABFT) technique tailored to the attention mechanism in LLMs. ATTNChecker is designed around the fault propagation patterns of LLMs and incorporates performance optimizations that adapt to both system reliability and model vulnerability, providing lightweight protection for fast LLM training. Evaluations on four LLMs show that ATTNChecker incurs on average a 7% training overhead while detecting and correcting all extreme errors. Compared with the state-of-the-art checkpoint/restore approach, ATTNChecker reduces recovery overhead by up to 49 times.
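The abstract does not spell out how ABFT protects a matrix product, so as background: classic checksum-based ABFT (in the Huang-Abraham style, on which techniques like ATTNChecker build) predicts row and column sums of C = A x B from the inputs and compares them against sums of the computed output; an INF or NaN in C poisons the observed sums and is flagged. The sketch below is a generic illustration of that idea in PyTorch, not ATTNChecker's actual implementation; the function name and the tolerance value are assumptions for this sketch.

    import torch

    def abft_matmul_check(A: torch.Tensor, B: torch.Tensor, tol: float = 1e-3):
        # Generic Huang-Abraham-style ABFT check for C = A @ B.
        # NOTE: illustrative sketch only, not the paper's implementation.
        C = A @ B

        ones_k = torch.ones(B.shape[1], 1, dtype=A.dtype, device=A.device)
        ones_m = torch.ones(1, A.shape[0], dtype=A.dtype, device=A.device)

        # Predicted checksums, derived from the inputs at small extra cost.
        row_ck = A @ (B @ ones_k)    # expected row sums of C, shape (m, 1)
        col_ck = (ones_m @ A) @ B    # expected column sums of C, shape (1, k)

        # Observed-minus-predicted residuals; INF/NaN in C poisons these sums.
        row_res = (C @ ones_k - row_ck).abs().flatten()
        col_res = (ones_m @ C - col_ck).abs().flatten()

        # NaN compares False against everything, so flag "not within tolerance"
        # rather than "greater than tolerance".
        bad_rows = torch.nonzero(~(row_res <= tol)).flatten()
        bad_cols = torch.nonzero(~(col_res <= tol)).flatten()
        return C, bad_rows, bad_cols

In such schemes, a single corrupted element lies at the intersection of a flagged row and column and can be corrected in place from the checksum residual, which is how ABFT avoids rolling training back to a checkpoint.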

Tue 4 Mar (displayed time zone: Pacific Time, US & Canada)

10:00 - 11:00
Session 6: Large Language Models (Session Chair: Minjia Zhang), Main Conference at Acacia D

10:00 (20m, Talk)
MARLIN: Mixed-Precision Auto-Regressive Parallel Inference on Large Language Models
Elias Frantar (ISTA), Roberto López Castro (Universidade da Coruña), Jiale Chen (ISTA), Torsten Hoefler (ETH Zurich), Dan Alistarh (IST Austria)

10:20 (20m, Talk)
WeiPipe: Weight Pipeline Parallelism for Communication-Effective Long-Context Large Model Training
Junfeng Lin (Tsinghua University), Ziming Liu (National University of Singapore), Yang You (National University of Singapore), Jun Wang (CETHIK Group Co. Ltd.), Weihao Zhang (Lynxi Technologies Co. Ltd.), Rong Zhao (Tsinghua University)

10:40 (20m, Talk)
ATTNChecker: Highly-Optimized Fault Tolerant Attention for Large Language Model Training
Yuhang Liang (University of Oregon), Xinyi Li (Pacific Northwest National Laboratory (PNNL)), Jie Ren (William & Mary), Ang Li (Pacific Northwest National Laboratory), Bo Fang (Pacific Northwest National Laboratory (PNNL)), Jieyang Chen (University of Oregon)