Exploring Bias and Fairness in AI: The Need for Comprehensive Testing Frameworks

by @mediabias

Too Long; Didn't Read

AI software testing has evolved across various domains, with methods for adversarial examples, reliability, and bias evaluation. While existing frameworks have explored biases in NLP and other AI areas, comprehensive testing for image generation models remains underdeveloped. BiasPainter is introduced as the first automatic and comprehensive framework for accurately revealing social bias in image generation models.

Authors:

(1) Wenxuan Wang, The Chinese University of Hong Kong, Hong Kong, China;

(2) Haonan Bai, The Chinese University of Hong Kong, Hong Kong, China;

(3) Jen-tse Huang, The Chinese University of Hong Kong, Hong Kong, China;

(4) Yuxuan Wan, The Chinese University of Hong Kong, Hong Kong, China;

(5) Youliang Yuan, The Chinese University of Hong Kong, Shenzhen, Shenzhen, China;

(6) Haoyi Qiu, University of California, Los Angeles, Los Angeles, USA;

(7) Nanyun Peng, University of California, Los Angeles, Los Angeles, USA;

(8) Michael Lyu, The Chinese University of Hong Kong, Hong Kong, China.

Abstract

1 Introduction

2 Background

3 Approach and Implementation

3.1 Seed Image Collection and 3.2 Neutral Prompt List Collection

3.3 Image Generation and 3.4 Properties Assessment

3.5 Bias Evaluation

4 Evaluation

4.1 Experimental Setup

4.2 RQ1: Effectiveness of BiasPainter

4.3 RQ2: Validity of Identified Biases

4.4 RQ3: Bias Mitigation

5 Threats to Validity

6 Related Work

7 Conclusion, Data Availability, and References

6.1 Testing of AI Software

AI software has been adopted in various domains, such as autonomous driving and face recognition. However, AI software is not sufficiently robust and can produce erroneous outputs that lead to fatal accidents [24, 63]. To address this, researchers have proposed a variety of methods that generate adversarial examples or test cases to measure the reliability of AI software [7, 22, 27, 28, 31, 32, 36, 43, 45, 48, 50, 51, 58, 59, 62].
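
The sketch below illustrates one well-known technique from this line of work: a single-step gradient-sign perturbation (FGSM) that turns a correctly classified input into a test case likely to expose erroneous outputs. It is a minimal PyTorch sketch assuming an image classifier with pixel values in [0, 1]; it is not the specific method of any paper cited above.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, label, epsilon=0.03):
    """One-step adversarial perturbation of a batch of inputs x."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()  # keep pixels in the valid range
```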


NLP software, such as machine translation software and chatbots, is also widely used in daily life. As with other AI software, researchers have proposed various methods to validate the reliability of NLP software in terms of correctness [18, 19, 40, 44], toxicity [53, 54], and fairness [47, 52, 57].
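
Much of this correctness-testing work relies on metamorphic relations: a small, meaning-preserving change to the input should not radically change the output. Below is a hedged sketch of that idea for machine translation; `translate` is a hypothetical stand-in for the system under test, and the similarity threshold is an illustrative choice.

```python
import difflib

def translate(sentence: str) -> str:
    """Hypothetical stand-in for the machine translation system under test."""
    raise NotImplementedError

def consistent(original: str, perturbed: str, threshold: float = 0.8) -> bool:
    """A small source perturbation should not radically change the translation."""
    t1 = translate(original)
    t2 = translate(perturbed)
    return difflib.SequenceMatcher(None, t1, t2).ratio() >= threshold

# Example metamorphic pair: a one-word synonym substitution.
# consistent("He bought a big house.", "He bought a large house.")
```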


In contrast to the aforementioned AI software, image generation models are a new class of multimodal AI software that has emerged only recently, making it challenging for researchers to evaluate and test them precisely and comprehensively.

6.2 Bias and Fairness in AI Software

Bias and fairness have gained significant attention in the SE community from various perspectives, such as bias testing [29] and bias mitigation [20], and for various kinds of software, such as natural language processing models [42], recommendation systems [33], chatbots [47], and autonomous driving systems [25].


As one of the most popular kinds of AI software, image generation models are widely used and have a large base of active users. However, to the best of our knowledge, comprehensive testing for bias and fairness in image generation models remains underdeveloped. In this paper, we focus on image generation models and propose the first automatic and comprehensive testing framework.

6.3 Bias in Image Generation Models

We systematically reviewed papers on testing for bias in image generation models across related research areas, including software engineering, computer vision, natural language processing, and security.


An early work [3] conducts an empirical study showing the stereotypes learned by text-to-image models. The authors design different prompts as input and use human annotators to find biased images, without proposing an automatic framework that can trigger social bias. Inspired by this, [12] proposes an automatic framework to evaluate bias in image generation models. However, according to their own human evaluation, the automatic method fails to detect bias accurately. In addition, the generated images are heavily skewed toward white people, so the framework cannot analyze bias in other groups. More recently, [49] studies gender stereotypes in occupations, but its scope is limited.
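
As a concrete illustration of how such an occupation-stereotype study can be operationalized, the sketch below measures how far the perceived-gender distribution of generated images deviates from parity. `generate` and `classify_gender` are hypothetical stand-ins for a text-to-image model and a face classifier; the prompt template and sample size are illustrative assumptions, not taken from the cited work.

```python
def gender_skew(generate, classify_gender, occupation, n=100):
    """Deviation from gender parity among images generated for one occupation.

    Returns a value in [0, 0.5]: 0 means parity, 0.5 means fully one-sided.
    """
    prompt = f"a photo of a {occupation}"
    labels = [classify_gender(generate(prompt)) for _ in range(n)]
    p_female = labels.count("female") / n
    return abs(p_female - 0.5)
```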


Different from the aforementioned works, BiasPainter is the first framework that can automatically, comprehensively, and accurately reveal social bias in image generation models.
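
For intuition, the following is a hypothetical sketch of such a pipeline, following only the stage names in the outline above (seed image collection, neutral prompt list collection, image generation, properties assessment, and bias evaluation); the paper body describes the actual implementation. `assess_properties` and `edit_image` are placeholder stand-ins.

```python
from dataclasses import dataclass

@dataclass
class Properties:
    gender: str
    race: str
    age_group: str

def assess_properties(image) -> Properties:
    """Hypothetical stand-in, e.g., an off-the-shelf face analysis model."""
    raise NotImplementedError

def edit_image(model, seed_image, prompt):
    """Hypothetical stand-in: ask the model under test to edit the seed image."""
    raise NotImplementedError

def bias_rate(model, seed_images, neutral_prompts):
    """Fraction of edits under neutral prompts that alter a protected property."""
    changed, total = 0, 0
    for seed in seed_images:
        before = assess_properties(seed)
        for prompt in neutral_prompts:  # e.g., professions or activities
            after = assess_properties(edit_image(model, seed, prompt))
            changed += int(before != after)  # any gender/race/age change
            total += 1
    return changed / total
```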


This paper is available on arXiv under the CC0 1.0 DEED license.