
Self-supervised Contrastive Concept Learning


Authors:

(1) Sanchit Sinha, University of Virginia ([email protected]);

(2) Guangzhi Xiong, University of Virginia ([email protected]);

(3) Aidong Zhang, University of Virginia ([email protected]).

Table of Links

Abstract and 1 Introduction

2 Related Work

3 Methodology and 3.1 Representative Concept Extraction

3.2 Self-supervised Contrastive Concept Learning

3.3 Prototype-based Concept Grounding

3.4 End-to-end Composite Training

4 Experiments and 4.1 Datasets and Networks

4.2 Hyperparameter Settings

4.3 Evaluation Metrics and 4.4 Generalization Results

4.5 Concept Fidelity and 4.6 Qualitative Visualization

5 Conclusion and References

Appendix

3.2 Self-supervised Contrastive Concept Learning

Even though the RCE framework generates representative concepts, the extracted concepts are contaminated by domain-specific noise, which limits their generalization. Moreover, with limited training data, the concept extraction process is not robust. Contrastive self-supervised training objectives are among the most widely used paradigms for learning robust visual features from images [Thota and Leontidis, 2021]. We therefore incorporate self-supervised contrastive learning to learn domain-invariant concepts, a procedure we term Contrastive Concept Learning (CCL).
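To make the idea concrete, the sketch below shows a generic SimCLR-style NT-Xent contrastive objective applied to concept representations of two augmented views of the same image. This is an illustrative assumption of how a contrastive concept objective could be wired up, not the paper's exact CCL loss; the names `concept_encoder`, `augment`, and `temperature` are hypothetical.

```python
# Minimal sketch (assumed, not the paper's exact formulation): an NT-Xent
# contrastive loss that pulls together the concept scores of two augmented
# views of the same image and pushes apart scores of different images.
import torch
import torch.nn.functional as F


def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor,
                 temperature: float = 0.5) -> torch.Tensor:
    """z1, z2: (B, d) concept-score vectors for two views of the same batch."""
    batch_size = z1.shape[0]
    z = torch.cat([z1, z2], dim=0)          # (2B, d) both views stacked
    z = F.normalize(z, dim=1)               # compare in cosine-similarity space
    sim = z @ z.T / temperature             # (2B, 2B) pairwise similarities
    # Exclude self-similarity so it is never treated as a positive or negative.
    mask = torch.eye(2 * batch_size, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))
    # The positive for sample i is the other augmented view of the same image.
    targets = torch.cat([torch.arange(batch_size, 2 * batch_size),
                         torch.arange(0, batch_size)]).to(z.device)
    return F.cross_entropy(sim, targets)


# Usage sketch, assuming `concept_encoder` maps images to concept scores
# and `augment` produces a random view of each image:
# view1, view2 = augment(images), augment(images)
# loss = nt_xent_loss(concept_encoder(view1), concept_encoder(view2))
```

The intuition this sketch captures is that concepts which survive augmentation (crops, color jitter, etc.) are less likely to encode domain-specific noise, which is the motivation the paragraph above gives for CCL.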



This paper is available on arxiv under CC BY 4.0 DEED license.

