As inference on Large Language Models (LLMs) emerges as an important workload in machine learning applications, model weight quantization has become a standard technique for efficient GPU deployment. Quantization not only reduces model size, but has also been shown to yield substantial speedups for single-user inference, thanks to reduced memory movement, with low accuracy impact. Yet, it remains a key open question whether such speedups are also achievable in batched settings with multiple parallel clients, which are highly relevant for practical serving. It is unclear whether GPU kernels can be designed to remain practically memory-bound while supporting the substantially increased compute requirements of batched workloads.

In this paper, we resolve this question positively by introducing a new design for Mixed-precision Auto-Regressive LINear kernels, called MARLIN. Concretely, given a model whose weights are compressed via quantization to, e.g., 4 bits per element, MARLIN shows that batch sizes up to 16-32 can be practically supported with close to maximum ($4\times$) quantization speedup, and larger batch sizes up to 64-128 with gradually decreasing, but still significant, acceleration. MARLIN accomplishes this via a combination of techniques, such as asynchronous memory access, complex task scheduling and pipelining, and bespoke quantization support. Our experiments show that MARLIN’s near-optimal performance on individual LLM layers across different scenarios can also lead to significant end-to-end LLM inference speedups (of up to $2.8\times$) when integrated with the popular vLLM open-source serving engine. Finally, we show that MARLIN is extensible to further compression techniques, like NVIDIA 2:4 sparsity, leading to additional speedups.
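
To see why near-$4\times$ speedups are plausible only up to moderate batch sizes, consider a back-of-envelope roofline estimate (not from the abstract; it assumes illustrative A100-class figures of roughly 312 FP16 TFLOP/s and 2 TB/s of memory bandwidth). For an $n \times n$ weight matrix stored at 4 bits and a batch of $b$ tokens, the kernel must read about $n^2/2$ bytes of weights while performing about $2bn^2$ FLOPs, giving an arithmetic intensity of roughly $4b$ FLOPs per byte (activation traffic is negligible when $b \ll n$). Against a ridge point of about $312/2 \approx 156$ FLOPs per byte, the layer stays memory-bound, and thus retains the full benefit of moving $4\times$ fewer weight bytes, only while $4b \lesssim 156$, i.e., $b \lesssim 40$; beyond that, compute gradually becomes the bottleneck, consistent with the 16-32 and 64-128 regimes described above.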

Tue 4 Mar

Displayed time zone: Pacific Time (US & Canada)

10:00 - 11:00
Session 6: Large Language Models (Session Chair: Minjia Zhang), Main Conference at Acacia D
10:00 (20m, Talk, Main Conference)
MARLIN: Mixed-Precision Auto-Regressive Parallel Inference on Large Language Models
Elias Frantar (ISTA), Roberto López Castro (Universidade da Coruña), Jiale Chen (ISTA), Torsten Hoefler (ETH Zurich), Dan Alistarh (IST Austria)
10:20 (20m, Talk, Main Conference)
WeiPipe: Weight Pipeline Parallelism for Communication-Effective Long-Context Large Model Training
Junfeng Lin (Tsinghua University), Ziming Liu (National University of Singapore), Yang You (National University of Singapore), Jun Wang (CETHIK Group Co. Ltd.), Weihao Zhang (Lynxi Technologies Co. Ltd), Rong Zhao (Tsinghua University)
10:40 (20m, Talk, Main Conference)
ATTNChecker: Highly-Optimized Fault Tolerant Attention for Large Language Model Training
Yuhang Liang (University of Oregon), Xinyi Li (Pacific Northwest National Laboratory, PNNL), Jie Ren (William & Mary), Ang Li (Pacific Northwest National Laboratory), Bo Fang (Pacific Northwest National Laboratory, PNNL), Jieyang Chen (University of Oregon)