Table of Links
- Analysis
- Experiments Results
- Practical Inference Speedup Evaluation
- A. Appendix / supplemental material
3 Analysis
3.1 Limitations of Existing ReLUfication
We first evaluate the sparsity of ReLULlama-7B [59] and the original Llama-2-7B [60], as shown in Table 1. The results show that existing ReLUfication methods only raise sparsity from 40% to 67%, indicating their limited effectiveness in substantially enhancing model sparsity.
To investigate the underlying reasons for this limitation, we profile the activation distribution of the gate and up projection components separately in ReLULlama-7B and Llama-2-7B, as illustrated in Figure 3. The figure shows that after ReLUfication, the combined activation becomes more concentrated around 0, with the sparsity increasing to 67%. This can be attributed to the ReLU activation function applied after the gate weight, which masks all negative activations to zero.
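The effect described above can be sketched numerically. The snippet below (a toy illustration with random weights and small dimensions, not the paper's profiling setup) builds a Gated-MLP intermediate output, once with the original SiLU gate and once with a ReLU gate, and measures the resulting activation sparsity:

```python
import numpy as np

rng = np.random.default_rng(0)

def silu(x):
    # SiLU (swish), the gate activation in the original Llama-2 Gated-MLP.
    return x / (1.0 + np.exp(-x))

def activation_sparsity(acts, threshold=1e-6):
    # Fraction of activations whose magnitude is (near) zero.
    return float(np.mean(np.abs(acts) < threshold))

# Toy sizes; Llama-2-7B actually uses hidden=4096, intermediate=11008.
hidden, inter, tokens = 64, 256, 32
x = rng.standard_normal((tokens, hidden))
W_gate = rng.standard_normal((hidden, inter)) / np.sqrt(hidden)
W_up = rng.standard_normal((hidden, inter)) / np.sqrt(hidden)

# Gated-MLP intermediate: act(x @ W_gate) * (x @ W_up)
silu_act = silu(x @ W_gate) * (x @ W_up)            # original SiLU gate
relu_act = np.maximum(x @ W_gate, 0) * (x @ W_up)   # after ReLUfication

print(activation_sparsity(silu_act), activation_sparsity(relu_act))
```

With roughly zero-mean gate pre-activations, ReLU masks about half of them to exact zero, so the ReLUfied output is far sparser than the SiLU one; real models reach higher sparsity because trained activations concentrate near zero more than this random toy does.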
To push sparsity further, shifted-ReLU [42] has been proposed, which raises the threshold of the ReLU function to mask out more activations in the gate projection. However, the improvements brought by this method are limited. Another line of work applies progressive sparsity regularization to the intermediate output to induce more zero activations [55]. However, this method carries the risk of performance degradation.
Existing ReLUfication methods primarily focus on modifying the gate component. In contrast to previous work, we find that existing ReLUfication does not alter the activation distribution of the up projection component, as shown in Figure 3(c) and (f). According to the definition of Gated-MLP (Equation 1), the gate and up projection components jointly determine the sparsity of neuron activations in parallel. However, a significant number of activation values in the up projection component remain negative. This suggests that masking outputs of the up and gate matrices that are less than 0 as inactive could introduce stronger sparsity without sacrificing non-linear capability. This observation motivates us to explore further enhancing model sparsity by modifying the up projection.
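The observation above can be sketched with the same toy Gated-MLP (random weights, toy dimensions, not the paper's trained models): masking negative up-projection outputs in addition to negative gate outputs makes a neuron active only when both projections are positive, which for roughly independent zero-mean pre-activations zeroes about 75% of outputs instead of 50%.

```python
import numpy as np

rng = np.random.default_rng(2)
hidden, inter, tokens = 64, 256, 512  # toy sizes, not Llama-2-7B's
x = rng.standard_normal((tokens, hidden))
W_gate = rng.standard_normal((hidden, inter)) / np.sqrt(hidden)
W_up = rng.standard_normal((hidden, inter)) / np.sqrt(hidden)

g, u = x @ W_gate, x @ W_up

# ReLUfication of the gate only: output is zero wherever g < 0 (~50%).
gate_only = np.maximum(g, 0) * u

# Also masking negative up-projection outputs: a neuron is active only
# when BOTH g > 0 and u > 0, so roughly 75% of outputs become zero for
# these independent zero-mean toy pre-activations.
both_masked = np.maximum(g, 0) * np.maximum(u, 0)

def sparsity(a):
    # Fraction of exactly-zero outputs.
    return float(np.mean(a == 0))

print(sparsity(gate_only), sparsity(both_masked))
```

This is only a statistical sketch; the paper's argument is that because Figure 3 shows the up projection's distribution is untouched by existing ReLUfication, this extra masking headroom is left unexploited.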
Authors:
(1) Yixin Song, Institute of Parallel and Distributed Systems (IPADS), Shanghai Jiao Tong University;
(2) Haotong Xie, Institute of Parallel and Distributed Systems (IPADS), Shanghai Jiao Tong University;
(3) Zhengyan Zhang, Department of Computer Science and Technology, Tsinghua University;
(4) Bo Wen, Institute of Parallel and Distributed Systems (IPADS), Shanghai Jiao Tong University;
(5) Li Ma, Shanghai Artificial Intelligence Laboratory;
(6) Zeyu Mi, Institute of Parallel and Distributed Systems (IPADS), Shanghai Jiao Tong University ([email protected]);
(7) Haibo Chen, Institute of Parallel and Distributed Systems (IPADS), Shanghai Jiao Tong University.