Mitigating Bias in AI Models by @mediabias



Too Long; Didn't Read

BiasPainter helps mitigate bias in image generation models by providing insights for designing balanced training data and evaluating bias reduction methods. Adding system prompts to maintain the same gender/race/age as the input image showed some reduction in bias, but complete elimination remains challenging.

Authors:

(1) Wenxuan Wang, The Chinese University of Hong Kong, Hong Kong, China;

(2) Haonan Bai, The Chinese University of Hong Kong, Hong Kong, China;

(3) Jen-tse Huang, The Chinese University of Hong Kong, Hong Kong, China;

(4) Yuxuan Wan, The Chinese University of Hong Kong, Hong Kong, China;

(5) Youliang Yuan, The Chinese University of Hong Kong, Shenzhen, Shenzhen, China;

(6) Haoyi Qiu, University of California, Los Angeles, Los Angeles, USA;

(7) Nanyun Peng, University of California, Los Angeles, Los Angeles, USA;

(8) Michael Lyu, The Chinese University of Hong Kong, Hong Kong, China.

Abstract

1 Introduction

2 Background

3 Approach and Implementation

3.1 Seed Image Collection and 3.2 Neutral Prompt List Collection

3.3 Image Generation and 3.4 Properties Assessment

3.5 Bias Evaluation

4 Evaluation

4.1 Experimental Setup

4.2 RQ1: Effectiveness of BiasPainter

4.3 RQ2: Validity of Identified Biases

4.4 RQ3: Bias Mitigation

5 Threats to Validity

6 Related Work

7 Conclusion, Data Availability, and References

4.4 RQ3: Bias Mitigation

After measuring the social bias in image generation models, the natural next step is mitigating it. The question, then, is: can BiasPainter help mitigate the bias in image generation models? In this section, we show that BiasPainter can support bias mitigation in two ways: by providing insights and direction, and by serving as an automatic evaluation method.


Previous studies have proposed various methods to mitigate bias in AI systems, which can be categorized into methods applied before training (e.g., balancing the training data), during training (e.g., adding regularization terms to the training objective), and after training (e.g., prompt design) [20]. We believe BiasPainter can provide useful insights into where an image generation model is biased, which can inform the design of more balanced training data or more effective regularization. For example, given the finding that images generated for "nurse" skew female, developers can add more training data featuring male nurses. In addition, BiasPainter can serve as an automatic evaluation method for measuring the effectiveness of different bias mitigation techniques, which is useful for bias mitigation studies. Since most image generation models expose only an API, without access to training data or model parameters, in this section we adopt BiasPainter to evaluate the effectiveness of prompt design.


Specifically, we select the most biased profession words shown in Table 3, add an additional system prompt, "maintain the same gender/race/age as the input image", and then regenerate the images. We then compare the bias score obtained with the original prompt (denoted "Ori") to the bias score obtained with the additional system prompt (denoted "Miti"). As Table 5 shows, the average bias score with the additional prompt is noticeably smaller (e.g., 0.40 vs. 0.98 for SD1.5), indicating that this specific prompt can reduce the bias to a certain extent, but falls far short of eliminating it.
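The Ori-vs.-Miti comparison above can be sketched as a small evaluation loop. This is a minimal, hypothetical sketch only: `generate_image` and `bias_score` stand in for the model's API and BiasPainter's scoring pipeline, neither of which is reproduced here, and the prompt template is an assumption rather than the paper's exact wording.

```python
# Hypothetical sketch of the prompt-based mitigation experiment:
# regenerate images for the most biased profession prompts with an
# extra system prompt, then compare average (absolute) bias scores.

MITIGATION_PROMPT = "maintain the same gender/race/age as the input image"

def average_bias(professions, seed_images, generate_image, bias_score,
                 system_prompt=None):
    """Mean absolute bias score over (profession, seed image) pairs.

    generate_image(seed, prompt) -> edited image (model API stand-in)
    bias_score(seed, edited)     -> signed bias score (BiasPainter stand-in)
    """
    scores = []
    for profession in professions:
        prompt = f"a photo of a person working as a {profession}"
        if system_prompt:
            # Prepend the mitigation instruction as a system prompt.
            prompt = f"{system_prompt}. {prompt}"
        for seed in seed_images:
            edited = generate_image(seed, prompt)
            scores.append(abs(bias_score(seed, edited)))
    return sum(scores) / len(scores)

# ori  = average_bias(top_biased, seeds, gen, score)                     # "Ori"
# miti = average_bias(top_biased, seeds, gen, score, MITIGATION_PROMPT)  # "Miti"
```

A smaller "Miti" average than "Ori" average would indicate the system prompt reduces, but does not eliminate, the measured bias.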



This paper is available on arxiv under CC0 1.0 DEED license.