Table of Links

- Related work
- Our method
4.2. Batched Streaming Generative Active Learning
While we have proposed an ideal estimation of contribution in the preceding section, it is not directly transferable to a real segmentation training process. The specific reasons are as follows:
- We conduct batched data training, which makes it infeasible to estimate each instance's contribution individually, as doing so would incur excessive computation.
- Our model is constantly updated, so even the same sample's contribution to the model varies across training stages.
- Given the near-infinite data pool, the entire training process closely resembles a streaming process (Saran et al., 2023): after each data entry arrives, we must decide whether to include it in the present update.
In response to the third point, and building on Definition 4.4, we propose an algorithm called Batched Streaming Generative Active Learning (BSGAL), shown in Algorithm 2.
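Algorithm 2 itself is not reproduced in this excerpt; as a rough, hedged sketch of the streaming decision it formalizes, the snippet below processes generated batches one at a time and folds a batch into the current update only when its estimated contribution to the present model is positive. The helpers `estimate_contribution` and `train_step`, and the zero acceptance threshold, are illustrative assumptions, not the paper's exact procedure.

```python
# Minimal sketch of a batched, streaming accept/reject loop (not the
# paper's Algorithm 2). `estimate_contribution` and `train_step` are
# hypothetical callables; the sketch only illustrates the control flow
# implied by the text: decide, per generated batch and against the
# *current* model state, whether to use it for the present update.

def streaming_accept_reject(model, real_loader, gen_stream, optimizer,
                            estimate_contribution, train_step,
                            threshold=0.0):
    for real_batch, gen_batch in zip(real_loader, gen_stream):
        # Contribution is estimated at batch level (first reason) and
        # against the current, constantly updated model (second reason).
        c = estimate_contribution(model, real_batch, gen_batch)

        # Streaming decision (third reason): accept or discard the
        # generated batch for this update only.
        if c > threshold:
            train_step(model, optimizer, real_batch, gen_batch)
        else:
            train_step(model, optimizer, real_batch, None)
```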
Algorithm 2 can therefore be further simplified by applying Equation (7) at Line 10.
The modified contribution estimation algorithm for the final BSGAL is shown in Algorithm 3.
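Algorithm 3 and Equation (7) are not reproduced in this excerpt. Purely as an illustration of what a batch-level contribution estimate in the spirit of Definition 4.4 could look like, the sketch below compares the loss on a real reference batch before and after a tentative update on the generated batch; the naive deep-copy update and the helper names are assumptions for exposition, not the paper's modified estimator.

```python
import copy
import torch

def naive_batch_contribution(model, loss_fn, optimizer_cls,
                             real_batch, gen_batch, lr=1e-3):
    """Illustrative (not the paper's) batch-level contribution estimate:
    loss on the real batch before vs. after a tentative update on the
    generated batch. Positive values mean the generated batch helped."""
    images, targets = real_batch

    # Loss of the current model on the real reference batch.
    with torch.no_grad():
        loss_before = loss_fn(model(images), targets).item()

    # Tentative update on a copy so the actual model stays untouched.
    trial = copy.deepcopy(model)
    opt = optimizer_cls(trial.parameters(), lr=lr)
    gen_images, gen_targets = gen_batch
    opt.zero_grad()
    loss_fn(trial(gen_images), gen_targets).backward()
    opt.step()

    # Loss on the same reference batch after the tentative update.
    with torch.no_grad():
        loss_after = loss_fn(trial(images), targets).item()

    return loss_before - loss_after
```

A positive return value would indicate that updating on the generated batch reduced the loss on real data, matching the intuition behind the streaming accept/reject decision sketched above.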
Authors:
(1) Muzhi Zhu, Zhejiang University, China (equal contribution);
(2) Chengxiang Fan, Zhejiang University, China (equal contribution);
(3) Hao Chen, Zhejiang University, China ([email protected]);
(4) Yang Liu, Zhejiang University, China;
(5) Weian Mao, Zhejiang University, China and The University of Adelaide, Australia;
(6) Xiaogang Xu, Zhejiang University, China;
(7) Chunhua Shen, Zhejiang University, China ([email protected]).
