Table of Links
- Analysis
- Experiments Results
- Practical Inference Speedup Evaluation
- A. Appendix / supplemental material
2 Related Work and Background
Efficient Inference of LLMs. Efficient LLM inference poses challenges that necessitate a synergistic combination of algorithmic and systemic approaches. From an algorithmic standpoint, researchers have explored various methods to reduce computation and memory overheads, including compressing models [40, 63, 18, 34, 61], modifying model structures [3, 21], and speculative decoding methods [32, 12, 10]. On the systemic front, there are efforts that effectively integrate the features of downstream hardware and upper-level models to maximize the efficiency of computation and memory utilization [4, 49, 16, 64], leading to the development of more efficient frameworks like vLLM [29].
Sparse activation, in particular, has emerged as a research area that demands an even tighter integration of algorithmic and systemic approaches. The selection of activation functions and the construction of activation predictors are algorithmic problems, while fully exploiting the sparse activation of LLMs on specific hardware is a systemic challenge. By leveraging sparse activation, researchers have achieved promising results in building efficient LLM inference systems [36, 56].
Mixture-of-Experts (MoE). MoE techniques induce effective sparsity in LLMs by determining which subset of subnetworks (referred to as "experts") to activate during the inference pass, often through a trained "router" subnetwork. This approach allows the model to enhance its capacity without escalating the computational expenses [31, 53].
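To make the routing mechanism concrete, below is a minimal numpy sketch of top-k expert routing as described above. The function and variable names (`moe_forward`, `router_w`, `experts`) are illustrative, not taken from any cited system; a real router is trained jointly with the model.

```python
import numpy as np

def moe_forward(x, router_w, experts, k=2):
    """Route one token through the top-k experts picked by a linear router.

    x: (d,) token hidden state; router_w: (d, n_experts) router weights;
    experts: list of callables, one per expert subnetwork.
    """
    logits = x @ router_w                      # (n_experts,) routing scores
    top = np.argsort(logits)[-k:]              # indices of the k best experts
    gates = np.exp(logits[top] - logits[top].max())
    gates /= gates.sum()                       # softmax over the selected experts only
    # Only k experts execute; the others contribute no computation.
    return sum(g * experts[i](x) for g, i in zip(gates, top))

rng = np.random.default_rng(0)
d, n = 8, 4
experts = [(lambda W: (lambda v: v @ W))(rng.standard_normal((d, d)))
           for _ in range(n)]
y = moe_forward(rng.standard_normal(d), rng.standard_normal((d, n)), experts, k=2)
```

The key property is that capacity (number of experts) grows independently of per-token compute (k expert evaluations), which is exactly the trade-off the paragraph above describes.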
Intrinsic Activation Sparsity. Intrinsic activation sparsity is known to be present in LLMs that use ReLU-family nonlinearities in their MLP blocks [68, 33]. This phenomenon has been exploited to accelerate inference and reduce memory usage [56, 36, 37]. Under this view, each neuron can be treated as an individual expert, and skipping inactive neurons reduces computation overhead.
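A small numpy sketch of why ReLU-induced sparsity saves computation: any neuron zeroed by ReLU contributes nothing to the down-projection, so the corresponding rows of the down matrix can be skipped entirely. Weight names here are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
d, h = 16, 64                           # hidden size, MLP intermediate size
x = rng.standard_normal(d)
W_up = rng.standard_normal((d, h))
W_down = rng.standard_normal((h, d))

a = np.maximum(x @ W_up, 0.0)           # ReLU: a large fraction of neurons are exactly zero
active = np.nonzero(a)[0]               # indices of the active neurons

# Dense vs. sparse down-projection: only the active rows of W_down matter.
dense = a @ W_down
sparse = a[active] @ W_down[active]
```

`dense` and `sparse` are identical, but the sparse path multiplies only `active.size` rows instead of all `h`; this is the saving that activation predictors in systems like [56, 36] try to realize ahead of time.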
Gated-MLP Blocks. We now delve into the components of LLMs that our study analyzes: the widely used Gated-MLP blocks. A Gated-MLP block consists of three fully connected layers and performs the following computation:
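The equation itself is cut off in this excerpt. In the standard Gated-MLP formulation, the three layers combine as y = (σ(x·W_gate) ⊙ (x·W_up))·W_down, where σ is the nonlinearity and ⊙ is elementwise product. The numpy sketch below implements this standard form; the weight names are conventional, not the paper's notation.

```python
import numpy as np

def gated_mlp(x, W_gate, W_up, W_down, act=lambda z: np.maximum(z, 0.0)):
    """Standard Gated-MLP: the activated gate branch elementwise-scales the
    linear up branch, and the product is projected back down."""
    return (act(x @ W_gate) * (x @ W_up)) @ W_down

rng = np.random.default_rng(0)
d, h = 16, 64
out = gated_mlp(rng.standard_normal(d),
                rng.standard_normal((d, h)),   # W_gate
                rng.standard_normal((d, h)),   # W_up
                rng.standard_normal((h, d)))   # W_down
```

Note that with a ReLU-family σ, any neuron zeroed in the gate branch zeroes the whole product for that neuron, which is the source of the intrinsic activation sparsity discussed above.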
Authors:
(1) Yixin Song, Institute of Parallel and Distributed Systems (IPADS), Shanghai Jiao Tong University;
(2) Haotong Xie, Institute of Parallel and Distributed Systems (IPADS), Shanghai Jiao Tong University;
(3) Zhengyan Zhang, Department of Computer Science and Technology, Tsinghua University;
(4) Bo Wen, Institute of Parallel and Distributed Systems (IPADS), Shanghai Jiao Tong University;
(5) Li Ma, Shanghai Artificial Intelligence Laboratory;
(6) Zeyu Mi, Institute of Parallel and Distributed Systems (IPADS), Shanghai Jiao Tong University ([email protected]);
(7) Haibo Chen, Institute of Parallel and Distributed Systems (IPADS), Shanghai Jiao Tong University.
