Decision Boundary-Aware Distillation: Extending the Decision Boundary Without Forgetting in Instance-Incremental Learning

Written by instancing | Published 2025/11/05
Tech Story Tags: instance-incremental-learning | knowledge-distillation | inter-class-interference | catastrophic-forgetting | sage-net | decision-boundary | db-center | dbd

TL;DR: This article introduces Decision Boundary-Aware Distillation, a novel method for Instance-Incremental Learning that preserves and extends the decision boundary using fused labels.

Abstract and 1 Introduction

  2. Related works

  3. Problem setting

  4. Methodology

    4.1. Decision boundary-aware distillation

    4.2. Knowledge consolidation

  5. Experimental results and 5.1. Experiment Setup

    5.2. Comparison with SOTA methods

    5.3. Ablation study

  6. Conclusion and future work and References

Supplementary Material

  1. Details of the theoretical analysis on KCEMA mechanism in IIL
  2. Algorithm overview
  3. Dataset details
  4. Implementation details
  5. Visualization of dusted input images
  6. More experimental results

4.1. Decision boundary-aware distillation

The decision boundary (DB), which reflects the inter-class relationship and the intra-class distribution, is among the most valuable knowledge stored in a well-trained model. It can be defined by distinguishing between inner samples (correctly classified) and outer samples (misclassified), as illustrated in Fig. 2 (a). In the new IIL setting, promoting the model’s performance on new data without forgetting amounts to extending the existing DB to enclose the new outer samples while retaining the DB everywhere else. However, to learn from new data, existing methods take the annotated one-hot labels for granted as the optimal learning target. We argue that one-hot labels ignore the relationship between the target class and the other classes. Naively learning with one-hot labels tends to push outer samples towards the DB center, which can interfere with the learning of other classes, especially when the data is insufficient to rectify such interference in IIL.
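To make the inner/outer distinction concrete, here is a minimal PyTorch sketch; the `split_inner_outer` helper is hypothetical and only illustrates the definition above, marking each sample in a batch as inner or outer according to whether the current model classifies it correctly.

```python
import torch

@torch.no_grad()
def split_inner_outer(model, images, labels):
    """Split a batch into inner samples (correctly classified by the current
    model, i.e. already enclosed by the existing decision boundary) and
    outer samples (misclassified, i.e. lying outside it)."""
    model.eval()
    preds = model(images).argmax(dim=1)
    inner_mask = preds.eq(labels)   # DB already encloses these samples
    outer_mask = ~inner_mask        # DB must be extended to enclose these
    return inner_mask, outer_mask
```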

To address this inter-class interference, we propose to learn the new data by fusing the annotated one-hot labels with the predictions of the existing model, as shown in Eq. (1). For new outer samples, learning with fused labels retains the knowledge on non-target classes and extends the decision boundary more moderately to enclose them. Knowledge of non-target classes is crucial for retaining learned knowledge, as also reported by Mittal et al. [18] in their use of super-class labels. For new inner samples, keeping the DB around them is the safer choice, i.e., using the existing predicted scores as the learning target. However, we propose to push the DB away from new peripheral inner samples by sharpening the teacher’s prediction score with the one-hot label, which in essence enlarges the inter-class distance while retaining the DB. Hence, although motivated differently, training on new outer samples and new inner samples is unified through the fused label.
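Eq. (1) is not reproduced in this excerpt, so the following PyTorch sketch only illustrates one plausible form of the idea: a convex fusion of the one-hot label with the teacher's softmax, used as a soft target for the student. The mixing weight `alpha` and the `fused_labels` / `dbd_loss` helpers are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def fused_labels(teacher_logits, labels, num_classes, alpha=0.5):
    """Convex fusion of the annotated one-hot label with the old (teacher)
    model's predicted distribution. `alpha` is an assumed mixing weight."""
    one_hot = F.one_hot(labels, num_classes=num_classes).float()
    teacher_probs = F.softmax(teacher_logits, dim=1)
    return alpha * one_hot + (1.0 - alpha) * teacher_probs

def dbd_loss(student_logits, teacher_logits, labels, alpha=0.5):
    """Cross-entropy of the student against the fused soft target
    (a sketch of decision boundary-aware distillation, not Eq. (1) itself)."""
    target = fused_labels(teacher_logits, labels, student_logits.size(1), alpha)
    log_probs = F.log_softmax(student_logits, dim=1)
    return -(target * log_probs).sum(dim=1).mean()
```

Under this sketch, the fused target keeps probability mass on non-target classes for outer samples, so the gradient pulls them across the DB without collapsing them onto the class center, while for inner samples the one-hot component sharpens the teacher's score toward the target class, matching the two cases described above.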

Altering the learning target with fused labels unifies knowledge learning and retention on new samples in a simple manner. Although this helps retain some learned knowledge, it does not address the preservation of the DB in directions that are supported only by the old data.

Decision boundary-aware distillation enables the student network to learn new knowledge with awareness of the existing knowledge.

Authors:

(1) Qiang Nie, Hong Kong University of Science and Technology (Guangzhou);

(2) Weifu Fu, Tencent Youtu Lab;

(3) Yuhuan Lin, Tencent Youtu Lab;

(4) Jialin Li, Tencent Youtu Lab;

(5) Yifeng Zhou, Tencent Youtu Lab;

(6) Yong Liu, Tencent Youtu Lab;

(7) Chengjie Wang, Tencent Youtu Lab.


This paper is available on arxiv under CC BY-NC-ND 4.0 Deed (Attribution-Noncommercial-Noderivs 4.0 International) license.

