
The Effect Of Data Augmentation-Induced Class-Specific Bias Is Influenced By Data, Regularization

by Computational Technology for All (@computational)

August 31st, 2024
Too Long; Didn't Read

Data augmentation enhances model generalization in computer vision but may introduce biases, impacting class accuracy unevenly.
STORY’S CREDIBILITY

Academic Research Paper


Part of HackerNoon's growing list of open-source research papers, promoting free access to academic material.

Authors:

(1) Athanasios Angelakis, Amsterdam University Medical Center, University of Amsterdam - Data Science Center, Amsterdam Public Health Research Institute, Amsterdam, Netherlands;

(2) Andrey Rass, Den Haag, Netherlands.

2 The Effect Of Data Augmentation-Induced Class-Specific Bias Is Influenced By Data, Regularization and Architecture

This section details our study’s data-centric and model-centric analysis of the phenomena originally observed in (Balestriero, Bottou, and LeCun 2022). First, Section 2.1 establishes a practical framework for replicating such experiments. Section 2.2 then provides a data-centric analysis of data augmentation (DA)-induced class-specific bias on three datasets (Fashion-MNIST, CIFAR-10 and CIFAR-100), using a ResNet50 model trained from scratch with the Random Cropping and Random Horizontal Flip augmentations. In Section 2.3 we step back to evaluate the potential side effects of including the Random Horizontal Flip augmentation, as done in the original study. Finally, we demonstrate how alternative computer vision architectures interact with the phenomenon illustrated in the preceding sections. These findings deepen our understanding of the potential pitfalls of introducing DA to computer vision tasks in order to improve overall model performance, while showing how class-specific bias can be alleviated or forestalled.
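The two augmentations named above can be sketched in plain NumPy. This is a minimal illustration, not the paper's implementation: the 4-pixel zero padding and 0.5 flip probability are assumptions borrowed from the common CIFAR training recipe, and the function names are ours.

```python
import numpy as np


def random_crop(img, size, padding=4, rng=None):
    """Zero-pad the image, then crop a random size x size window.

    Sketches the 'Random Cropping' augmentation; the 4-pixel padding
    is an assumed default (typical for 32x32 CIFAR images).
    """
    rng = rng if rng is not None else np.random.default_rng()
    # Pad only the spatial axes; leave any trailing channel axis alone.
    pad_spec = ((padding, padding), (padding, padding)) + ((0, 0),) * (img.ndim - 2)
    padded = np.pad(img, pad_spec)
    top = rng.integers(0, padded.shape[0] - size + 1)
    left = rng.integers(0, padded.shape[1] - size + 1)
    return padded[top:top + size, left:left + size]


def random_horizontal_flip(img, p=0.5, rng=None):
    """Flip the image left-right with probability p (assumed 0.5)."""
    rng = rng if rng is not None else np.random.default_rng()
    return img[:, ::-1] if rng.random() < p else img


def augment(img, rng=None):
    """Apply the crop-then-flip pipeline, preserving the input size."""
    rng = rng if rng is not None else np.random.default_rng()
    return random_horizontal_flip(random_crop(img, img.shape[0], rng=rng), rng=rng)
```

Because the crop window is sampled from the padded image, the output keeps the original spatial size while shifting the content by up to `padding` pixels in each direction.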


This paper is available on arxiv under CC BY 4.0 DEED license.

