Table of Links
- Analysis
- Experiments Results
- Practical Inference Speedup Evaluation
- A. Appendix / supplemental material
4 Are Neurons in Experts Still Sparsely Activated?
Previous work has shown that dense LLMs with different activation functions (ReLU, SwiGLU, etc.) exhibit sparse activation [69, 36, 30]. However, that analysis is limited to dense models. One might intuitively assume that partitioning an FFN into separate experts in an MoE model would make activations within each expert denser, but it remains unclear whether sparsity actually persists in MoE models. In this section, we select representative MoE models and commonly used downstream tasks to investigate whether this sparsity phenomenon still exists in MoE models. We use the same method as in Section 3 to control the sparsity within each expert.
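The paper does not spell out the sparsification mechanism here; a minimal sketch, assuming the Section 3 method zeroes the smallest-magnitude fraction of each expert's FFN activations to hit a target sparsity ratio (function name and thresholding details are illustrative, not from the paper):

```python
import numpy as np

def sparsify_activations(acts: np.ndarray, sparsity_ratio: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction of FFN neuron activations.

    acts: activations of shape (..., num_neurons), e.g. one expert's
          intermediate FFN outputs for a batch of tokens.
    sparsity_ratio: fraction of neurons forced to zero per token
          (e.g. 0.5 keeps only the top-50% activations by magnitude).
    """
    if sparsity_ratio <= 0.0:
        return acts.copy()
    k = int(acts.shape[-1] * sparsity_ratio)
    # Per-token threshold: the k-th smallest |activation| in each row.
    thresholds = np.sort(np.abs(acts), axis=-1)[..., k - 1:k]
    # Keep only activations strictly above the threshold.
    return np.where(np.abs(acts) > thresholds, acts, 0.0)
```

Sweeping `sparsity_ratio` from 0 to 1 and re-running the downstream tasks yields the performance-vs-sparsity curves discussed below.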
Models. We select Deepseek-MoE [15], Qwen1.5-MoE [5] and Mixtral [25] for our experiments. We also include Llama-2-7B for comparison.
We first study performance as a function of the sparsity ratio, as shown in Figure 5 (a)[2]. Notably, performance drops by only about 1%-2% at a sparsity ratio of 0.5. This trend suggests that MoE models exhibit sparsity similar to that of dense models.
Further, we profile the activation patterns of Mistral and Mixtral, a popular pair of dense and MoE LLMs, as shown in Figure 5 (b). Both models show a similar pattern in which activations are concentrated around 0, consistent with previous analyses of dense LLMs. The sparsity within experts also implies that neurons in the same expert serve distinct functions. This finding holds across all layers and experts, as detailed in Appendix A.2. We report this interesting observation and leave further analysis to future work.
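A simple statistic for this kind of profiling is the fraction of activations whose magnitude falls below a small threshold; the following sketch computes it (the function name and the `eps` cutoff are illustrative assumptions, not taken from the paper's profiling setup):

```python
import numpy as np

def near_zero_fraction(acts: np.ndarray, eps: float = 0.1) -> float:
    """Fraction of FFN activations with magnitude below `eps`.

    A high fraction indicates the near-zero concentration described in
    the text. Computing this per expert (for an MoE model) and per FFN
    (for a dense model) makes their sparsity patterns directly
    comparable, as in the Mistral-vs-Mixtral comparison.
    """
    return float(np.mean(np.abs(acts) < eps))
```

In practice one would collect `acts` with forward hooks on each expert's activation function while running a calibration corpus, then histogram the values as in Figure 5 (b).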
Inspired by these findings, we are convinced that ReLUfication can be extended to MoE models rather than being restricted to dense models. Since FFN weights make up a larger proportion of the parameters in MoE models, the FLOP reduction achieved through ReLUfication will be even more pronounced.
Authors:
(1) Yixin Song, Institute of Parallel and Distributed Systems (IPADS), Shanghai Jiao Tong University;
(2) Haotong Xie, Institute of Parallel and Distributed Systems (IPADS), Shanghai Jiao Tong University;
(3) Zhengyan Zhang, Department of Computer Science and Technology, Tsinghua University;
(4) Bo Wen, Institute of Parallel and Distributed Systems (IPADS), Shanghai Jiao Tong University;
(5) Li Ma, Shanghai Artificial Intelligence Laboratory;
(6) Zeyu Mi, Institute of Parallel and Distributed Systems (IPADS), Shanghai Jiao Tong University ([email protected]);
(7) Haibo Chen, Institute of Parallel and Distributed Systems (IPADS), Shanghai Jiao Tong University.
