TurboSparse: Democratizing AI via Efficient dReLU Sparsification

Written by languagemodels | Published 2026/03/06
Tech Story Tags: language-models | sustainable-ai-development | drelu-sparsification-impact | resource-efficient-llms | green-computing-in-ai | accessible-machine-learning | computational-cost-reduction | broad-impact-ai-research

TL;DR: Discover how dReLU sparsification lowers AI energy consumption and computational costs. TurboSparse democratizes access to LLMs for researchers and smaller organizations.

Abstract and 1. Introduction

  2. Related Work and Background

  3. Analysis

    3.1 Limitations of Existing ReLUfication

    3.2 dReLU

  4. Are Neurons in Experts still Sparsely Activated?

  5. dReLU Sparsification

  6. Experiments Results

    6.1 Downstream Tasks Performance

    6.2 Sparsity of Sparsified Models

  7. Practical Inference Speedup Evaluation

    7.1 Experiments Setting

    7.2 Pure CPU Inference and 7.3 Hybrid GPU-CPU Inference

    7.4 Deploy LLMs on mobile phones

  8. Conclusion and References

A. Appendix / supplemental material

B. Limitation

C. Broader Impact

C Broader Impact

The paper introduces a dReLU-based sparsification method and verifies its effectiveness on both dense and MoE LLMs. The approach significantly reduces computational demands, lowers energy consumption and thereby addresses environmental concerns, and helps democratize access to advanced AI technologies. We believe this work makes LLMs more accessible to smaller organizations, educational institutions, and researchers who previously faced barriers due to resource limitations.
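For context, here is a minimal sketch of the idea behind dReLU, not the authors' implementation: in a standard gated FFN block, an activation is applied only to the gate branch, whereas dReLU applies ReLU to both the gate and up projections, so a neuron contributes only when both branches are positive. The class name and layer sizes below are illustrative placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DReLUFFN(nn.Module):
    """Gated feed-forward block using dReLU: ReLU is applied to BOTH
    the gate and up projections (layer sizes here are illustrative)."""

    def __init__(self, hidden: int = 256, intermediate: int = 1024):
        super().__init__()
        self.gate_proj = nn.Linear(hidden, intermediate, bias=False)
        self.up_proj = nn.Linear(hidden, intermediate, bias=False)
        self.down_proj = nn.Linear(intermediate, hidden, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # A neuron contributes only when both ReLU branches are positive,
        # so most intermediate activations are exactly zero.
        h = F.relu(self.gate_proj(x)) * F.relu(self.up_proj(x))
        return self.down_proj(h)

ffn = DReLUFFN()
x = torch.randn(1, 256)
h = F.relu(ffn.gate_proj(x)) * F.relu(ffn.up_proj(x))
print(f"inactive neurons: {(h == 0).float().mean().item():.1%}")
```

Even with random weights, roughly three quarters of the intermediate neurons are inactive on a given input; a sparsity-aware inference engine can skip the corresponding weight rows entirely, which is where the computational and energy savings described above come from.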

Authors:

(1) Yixin Song, Institute of Parallel and Distributed Systems (IPADS), Shanghai Jiao Tong University;

(2) Haotong Xie, Institute of Parallel and Distributed Systems (IPADS), Shanghai Jiao Tong University;

(3) Zhengyan Zhang, Department of Computer Science and Technology, Tsinghua University;

(4) Bo Wen, Institute of Parallel and Distributed Systems (IPADS), Shanghai Jiao Tong University;

(5) Li Ma, Shanghai Artificial Intelligence Laboratory;

(6) Zeyu Mi, Institute of Parallel and Distributed Systems (IPADS), Shanghai Jiao Tong University ([email protected]);

(7) Haibo Chen, Institute of Parallel and Distributed Systems (IPADS), Shanghai Jiao Tong University.


This paper is available on arXiv under a CC BY 4.0 license.

