Model Quantization in Deep Neural Networks
Too Long; Didn't Read
Quantization is the process of converting values from a continuous range to a smaller set of discrete values, and it is commonly used in deep neural networks to speed up inference on a variety of devices. In practice this means mapping a high-precision format such as float32 to a lower-precision format such as int8. The mapping can be uniform (linear) or non-uniform (non-linear). In symmetric quantization, zero in the input range maps to zero in the output range, whereas asymmetric quantization shifts this mapping by an offset. The two key parameters of the mapping, the scale factor and the zero point, are determined through calibration. There are two main quantization modes: Post-Training Quantization (PTQ) and Quantization-Aware Training (QAT). QAT generally yields better model accuracy because the model is fine-tuned with quantization in the loop; it inserts fake quantizers so that the quantization operation remains compatible with the differentiability required for fine-tuning.
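To make the scale factor and zero point concrete, below is a minimal NumPy sketch (not taken from this article; function names such as asymmetric_params are illustrative) that derives both parameters from a tensor's observed range, quantizes to int8, and dequantizes back, comparing the symmetric and asymmetric variants.

```python
# Minimal sketch of uniform int8 quantization: scale, zero point,
# and the symmetric vs. asymmetric variants. Names are illustrative.
import numpy as np

def asymmetric_params(x, qmin=-128, qmax=127):
    """Derive scale and zero point so [x.min(), x.max()] maps onto [qmin, qmax]."""
    x_min, x_max = float(x.min()), float(x.max())
    scale = (x_max - x_min) / (qmax - qmin)
    zero_point = int(round(qmin - x_min / scale))
    return scale, zero_point

def symmetric_params(x, qmax=127):
    """Symmetric variant: zero maps to zero, so the zero point is fixed at 0."""
    scale = float(np.abs(x).max()) / qmax
    return scale, 0

def quantize(x, scale, zero_point, qmin=-128, qmax=127):
    """Map float values to int8 codes: q = clip(round(x / scale) + zero_point)."""
    q = np.round(x / scale) + zero_point
    return np.clip(q, qmin, qmax).astype(np.int8)

def dequantize(q, scale, zero_point):
    """Recover approximate float values: x_hat = scale * (q - zero_point)."""
    return scale * (q.astype(np.float32) - zero_point)

# Example: quantize a float32 tensor and measure the round-trip error.
x = (np.random.randn(1000) * 3.0 + 1.5).astype(np.float32)  # not centered at zero
for name, (scale, zp) in {
    "asymmetric": asymmetric_params(x),
    "symmetric": symmetric_params(x),
}.items():
    q = quantize(x, scale, zp)
    x_hat = dequantize(q, scale, zp)
    print(f"{name}: scale={scale:.4f} zero_point={zp} "
          f"max abs error={np.abs(x - x_hat).max():.4f}")
```

On a distribution that is not centered at zero, the asymmetric variant typically shows a smaller round-trip error, since the symmetric variant leaves part of its integer range unused.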