

On the Inductive Biases of Demographic Parity-based Fair Learning Algorithms: Related Works


Authors:

(1) Haoyu LEI, Department of Computer Science and Engineering, The Chinese University of Hong Kong (hylei22@cse.cuhk.edu.hk);

(2) Amin Gohari, Department of Information Engineering, The Chinese University of Hong Kong (agohari@ie.cuhk.edu.hk);

(3) Farzan Farnia, Department of Computer Science and Engineering, The Chinese University of Hong Kong (farnia@cse.cuhk.edu.hk).

Table of Links

Abstract and 1 Introduction

2 Related Works

3 Preliminaries

3.1 Fair Supervised Learning and 3.2 Fairness Criteria

3.3 Dependence Measures for Fair Supervised Learning

4 Inductive Biases of DP-based Fair Supervised Learning

4.1 Extending the Theoretical Results to Randomized Prediction Rule

5 A Distributionally Robust Optimization Approach to DP-based Fair Learning

6 Numerical Results

6.1 Experimental Setup

6.2 Inductive Biases of Models Trained in DP-based Fair Learning

6.3 DP-based Fair Classification in Heterogeneous Federated Learning

7 Conclusion and References

Appendix A Proofs

Appendix B Additional Results for Image Dataset

2 Related Works

Fairness Violation Metrics. In this work, we focus on learning frameworks aiming toward demographic parity (DP). Since enforcing DP to hold strictly can be costly and damaging to the learner's performance, the machine learning literature has proposed several metrics for assessing the dependence between random variables, including mutual information [3–7], Pearson correlation [8, 9], kernel-based maximum mean discrepancy [10], kernel density estimation of the difference of demographic parity (DDP) measure [11], maximal correlation [12–15], and exponential Rényi mutual information [16]. In our analysis, we mostly focus on a DDP-based fair regularization scheme, while we show that only weaker versions of the inductive biases hold for mutual information and maximal correlation-based fair learning algorithms.
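For intuition, one common way to quantify the DP violation of a binary classifier across sensitive groups is the following DDP-style gap; this is a standard formulation stated here as a reference point, and the kernel-based estimator of [11] may differ in its exact normalization:

```latex
\mathrm{DDP}(\hat{Y}) \;=\; \sum_{s \in \mathcal{S}}
  \Bigl| \Pr\bigl(\hat{Y} = 1 \mid S = s\bigr) \;-\; \Pr\bigl(\hat{Y} = 1\bigr) \Bigr|
```

This quantity vanishes exactly when the prediction \hat{Y} is statistically independent of the sensitive attribute S, i.e., when DP holds.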



Fair Classification Algorithms. Fair machine learning algorithms can be classified into three main categories: pre-processing, post-processing, and in-processing. Pre-processing algorithms [17–19] transform biased data features into a new space where labels and sensitive attributes are statistically independent. Post-processing methods such as [2, 20] aim to alleviate the discriminatory impact of a classifier by modifying its final decision. Our work focuses only on in-processing approaches that regularize the training process toward DP-based fair models. Also, [21–23] propose distributionally robust optimization (DRO) for fair classification; however, unlike our method, these works do not apply DRO to the sensitive attribute distribution to reduce the biases.
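To make the in-processing recipe concrete, below is a minimal PyTorch-style sketch of a DDP-regularized training objective. The names `ddp_penalty`, `fair_loss`, and the weight `lam` are illustrative assumptions, not the paper's exact algorithm or any specific cited implementation:

```python
import torch

def ddp_penalty(probs, group):
    """Differentiable DP-gap surrogate: for each sensitive group, compare
    its mean positive-prediction rate to the overall rate.
    `probs` holds predicted P(Y_hat = 1); `group` holds integer group ids."""
    overall = probs.mean()
    gaps = [
        (probs[group == g].mean() - overall).abs()
        for g in torch.unique(group)
    ]
    return torch.stack(gaps).sum()

def fair_loss(logits, labels, group, lam=1.0):
    """In-processing objective: task loss plus a weighted DP regularizer."""
    task = torch.nn.functional.binary_cross_entropy_with_logits(
        logits, labels.float()
    )
    probs = torch.sigmoid(logits)
    return task + lam * ddp_penalty(probs, group)
```

In such a scheme, a larger `lam` typically trades classification accuracy for a smaller demographic parity gap.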



This paper is available on arxiv under CC BY-NC-SA 4.0 DEED license.

