
A Data-centric Approach to Class-specific Bias in Image Data Augmentation: Appendices A-L


Too Long; Didn't Read

Data augmentation enhances model generalization in computer vision but may introduce biases, impacting class accuracy unevenly.

Authors:

(1) Athanasios Angelakis, Amsterdam University Medical Center, University of Amsterdam - Data Science Center, Amsterdam Public Health Research Institute, Amsterdam, Netherlands;

(2) Andrey Rass, Den Haag, Netherlands.

Appendices

Appendix A: Image dimensions (in pixels) of training images after being randomly cropped and before being resized

[32x32, 31x31, 30x30, 29x29, 28x28, 27x27, 26x26, 25x25, 24x24,
22x22, 21x21, 20x20, 19x19, 18x18, 17x17, 16x16, 15x15, 14x14,
13x13, 12x12, 11x11, 10x10, 9x9, 8x8, 6x6, 5x5, 4x4, 3x3]
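The appendix lists only the crop sizes; the cropping-and-resizing step itself is not reproduced in this excerpt. Below is a minimal sketch of how a random-crop-then-resize augmentation could be applied to 32x32 inputs in TensorFlow. The helper name `random_crop_then_resize`, the resize back to the original resolution, and the CIFAR-10 usage example are illustrative assumptions, not details quoted from the paper.

```python
import tensorflow as tf

# Illustrative sketch (not the authors' pipeline): crop a single HxWxC image
# to crop_size x crop_size at a random location, then resize it back to the
# network's input resolution before training.
def random_crop_then_resize(image, crop_size, target_size=32):
    cropped = tf.image.random_crop(image, size=(crop_size, crop_size, 3))
    return tf.image.resize(cropped, (target_size, target_size))

# Hypothetical usage on CIFAR-10-style 32x32 RGB images.
(x_train, y_train), _ = tf.keras.datasets.cifar10.load_data()
train_ds = (
    tf.data.Dataset.from_tensor_slices((x_train.astype("float32"), y_train))
    .map(lambda img, lbl: (random_crop_then_resize(img, crop_size=24), lbl))
    .batch(128)
)
```

Sweeping `crop_size` over the values listed above would correspond to the range of crop severities varied across the experiments.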

Appendix B: Dataset samples corresponding to the Fashion-MNIST segment used in training

Appendix C: Dataset samples corresponding to the CIFAR-10 segment used in training

Appendix D: Dataset samples corresponding to the CIFAR-100 segment used in training

Appendix E: Full collection of class accuracy plots for CIFAR-100


(a) The results in all figures employ official ResNet50 models from TensorFlow trained from scratch on the CIFAR-100 dataset with random crop data augmentation applied. All results in this figure are averaged over 4 runs. During training, the proportion of the original image obscured by the augmentation varies from 100% to 10%. The vertical dotted lines denote the best test accuracy for every class.



(a) The results in all figures employ official ResNet50 models from TensorFlow trained from scratch on the CIFAR-100 dataset with random crop and random horizontal flip data augmentations applied. All results in this figure are averaged over 4 runs. During training, the proportion of the original image obscured by the augmentation varies from 100% to 10%. The vertical dotted lines denote the best test accuracy for every class.
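The captions above refer to per-class test accuracy plots; the evaluation code itself is not part of this excerpt. The following is a minimal sketch of how per-class and overall test accuracy could be computed from a trained Keras-style classifier. The function name and its arguments are assumptions made for illustration, not the authors' evaluation code.

```python
import numpy as np

# Illustrative sketch: compute overall and per-class test accuracy from a
# trained softmax classifier, i.e. the quantities summarized per class in the
# plots described above. `model` is assumed to expose a Keras-style predict().
def per_class_accuracy(model, x_test, y_test, num_classes):
    preds = np.argmax(model.predict(x_test, verbose=0), axis=1)
    y_true = np.asarray(y_test).reshape(-1)
    overall = float(np.mean(preds == y_true))
    per_class = {
        c: float(np.mean(preds[y_true == c] == c)) for c in range(num_classes)
    }
    return overall, per_class

# Hypothetical usage for CIFAR-100:
# overall, per_class = per_class_accuracy(model, x_test, y_test, num_classes=100)
```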

Appendix F: Full collection of best test performances for CIFAR-100

Without Random Horizontal Flip:



With Random Horizontal Flip:


Appendix G: Per-class and overall test set performance samples for the Fashion-MNIST + ResNet50 + Random Cropping + Random Horizontal Flip experiment

Appendix H: Per-class and overall test set performance samples for the CIFAR-10 + ResNet50 + Random Cropping + Random Horizontal Flip experiment

Appendix I: Per-class and overall test set performance samples for the Fashion-MNIST + EfficientNetV2S + Random Cropping + Random Horizontal Flip experiment

Appendix J: Per-class and overall test set performance samples for the Fashion-MNIST + ResNet50 + Random Cropping experiment

Appendix K: Per-class and overall test set performance samples for the CIFAR-10 + ResNet50 + Random Cropping experiment

Appendix L: Per-class and overall test set performance samples for the Fashion-MNIST + SWIN Transformer + Random Cropping + Random Horizontal Flip experiment


This paper is available on arXiv under the CC BY 4.0 DEED license.