
PolyThrottle: Energy-efficient Neural Network Inference on Edge Devices: Arithmetic Intensity

by Bayesian Inference, April 2nd, 2024

This paper is available on arXiv under the CC BY-NC-ND 4.0 DEED license.

**Authors:**

(1) Minghao Yan, University of Wisconsin-Madison;

(2) Hongyi Wang, Carnegie Mellon University;

(3) Shivaram Venkataraman, University of Wisconsin-Madison.

- Abstract & Introduction
- Motivation
- Opportunities
- Architecture Overview
- Problem Formulation: Two-Phase Tuning
- Modeling Workload Interference
- Experiments
- Conclusion & References
- A. Hardware Details
- B. Experimental Results
- C. Arithmetic Intensity
- D. Predictor Analysis

The arithmetic intensity of a 2D convolution layer can be computed by the following equation:
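As a sketch (the symbols below are our assumptions and may differ from the notation in table 8): for a convolution with batch size $B$, input feature map $H_i \times W_i$ with $C_{in}$ channels, output feature map $H_o \times W_o$ with $C_{out}$ channels, kernels of size $K_h \times K_w$, and $s$ bytes per element, the arithmetic intensity $I$ (FLOPs per byte moved) is commonly written as:

```latex
I = \frac{2\, B\, H_o W_o\, C_{out} C_{in}\, K_h K_w}
         {s \left( B\, H_i W_i\, C_{in} + C_{in} C_{out}\, K_h K_w + B\, H_o W_o\, C_{out} \right)}
```

The numerator counts two FLOPs per multiply-accumulate; the denominator sums the bytes of the input activations, the weights, and the output activations, each assumed to be read or written once.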

The notations used in equation 1 can be found in table 8.

The FLOPs term captures the total computation of each workload, while the arithmetic intensity term captures the degree to which compute throughput versus memory bandwidth bounds the final performance. Combining these features with an intercept term, which captures the fixed overhead of neural network inference, we can build a model that predicts inference latency when the hardware operating frequency is stable.
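The predictor described above can be sketched as a linear model over the two features plus an intercept. The feature extraction and the least-squares fit below are a minimal illustration under the assumptions stated in the comments, not the paper's actual implementation; all function names are hypothetical.

```python
import numpy as np


def conv2d_features(B, H_o, W_o, C_in, C_out, K_h, K_w, H_i, W_i, dtype_bytes=2):
    """Hypothetical feature extraction for one 2D-conv layer.

    Assumes 2 FLOPs per multiply-accumulate, and that input activations,
    weights, and output activations are each moved once.
    Returns (FLOPs, arithmetic intensity in FLOPs/byte).
    """
    flops = 2 * B * H_o * W_o * C_out * C_in * K_h * K_w
    bytes_moved = dtype_bytes * (
        B * H_i * W_i * C_in          # input activations
        + C_in * C_out * K_h * K_w    # weights
        + B * H_o * W_o * C_out       # output activations
    )
    return flops, flops / bytes_moved


def fit_latency_model(features, latencies):
    """Least-squares fit of latency ~ a*FLOPs + b*intensity + c.

    The intercept c models the fixed per-inference overhead.
    """
    X = np.column_stack([features, np.ones(len(features))])
    coef, *_ = np.linalg.lstsq(X, np.asarray(latencies), rcond=None)
    return coef  # [a, b, c]


def predict_latency(coef, feats):
    """Predict latency for one (FLOPs, intensity) feature pair."""
    return float(np.dot(coef, [*feats, 1.0]))
```

Note that such a linear fit is only meaningful at a fixed, stable operating frequency, as the text states; a new frequency changes both the compute and bandwidth terms and requires refitting.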
