3 Preliminaries
3.1 Fair Supervised Learning
3.2 Fairness Criteria
3.3 Dependence Measures for Fair Supervised Learning
4 Inductive Biases of DP-based Fair Supervised Learning
4.1 Extending the Theoretical Results to Randomized Prediction Rule
5 A Distributionally Robust Optimization Approach to DP-based Fair Learning
6 Numerical Results
6.2 Inductive Biases of Models Trained in DP-based Fair Learning
6.3 DP-based Fair Classification in Heterogeneous Federated Learning
Appendix B Additional Results for Image Dataset
In a fair supervised learning algorithm, the learned prediction rule is expected to meet a fairness criterion. Here, we review two standard fairness criteria in the literature:
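As a hedged sketch (not a verbatim reproduction of the paper's definitions), two criteria commonly reviewed in this setting are demographic parity and equalized odds, stated here for a prediction Ŷ, label Y, and sensitive attribute S; the choice of symbols is ours, not necessarily the paper's.

```latex
% Sketch of the two standard criteria, using assumed notation:
% \hat{Y} = model prediction, Y = true label, S = sensitive attribute.
\begin{align*}
  \text{Demographic Parity (DP):}\quad
    & \Pr\bigl(\hat{Y}=\hat{y}\mid S=s\bigr)
      \;=\; \Pr\bigl(\hat{Y}=\hat{y}\bigr)
      && \forall\, \hat{y},\, s, \\
  \text{Equalized Odds (EO):}\quad
    & \Pr\bigl(\hat{Y}=\hat{y}\mid Y=y,\, S=s\bigr)
      \;=\; \Pr\bigl(\hat{Y}=\hat{y}\mid Y=y\bigr)
      && \forall\, \hat{y},\, y,\, s.
\end{align*}
```

Informally, DP asks the prediction to be statistically independent of the sensitive attribute, while EO asks for that independence only conditionally on the true label.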
To measure DP-based fairness violations, the machine learning literature has proposed several dependence measures, which we analyze in this paper. In the following, we review some of these dependence metrics:
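For illustration, the sketch below computes two simple dependence measures of DP violation for a binary classifier: the demographic-parity gap and the empirical mutual information between predictions and the sensitive attribute. The NumPy implementation, the function names `dp_violation` and `mutual_information`, and the toy arrays are our own illustrative assumptions; the paper may analyze different or more general measures (e.g., ones defined for continuous prediction scores).

```python
import numpy as np

def dp_violation(y_pred: np.ndarray, s: np.ndarray) -> float:
    """Demographic-parity gap for 0/1 predictions y_pred and group labels s.

    Returns the largest absolute gap between any group's positive-prediction
    rate and the overall positive-prediction rate (0 means perfect DP).
    """
    overall_rate = y_pred.mean()
    group_rates = [y_pred[s == g].mean() for g in np.unique(s)]
    return max(abs(r - overall_rate) for r in group_rates)

def mutual_information(y_pred: np.ndarray, s: np.ndarray) -> float:
    """Empirical mutual information I(Y_hat; S) in nats.

    Another common dependence measure of DP violation: it is zero if and only
    if predictions and the sensitive attribute are empirically independent.
    """
    joint, _, _ = np.histogram2d(
        y_pred, s, bins=[np.unique(y_pred).size, np.unique(s).size]
    )
    joint /= joint.sum()
    p_yhat = joint.sum(axis=1, keepdims=True)   # marginal of predictions
    p_s = joint.sum(axis=0, keepdims=True)      # marginal of the attribute
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (p_yhat @ p_s)[nz])).sum())

# Toy example: predictions that favor group s=1 show a nonzero DP gap
# and positive mutual information.
y_pred = np.array([1, 1, 1, 0, 1, 0, 0, 0, 1, 0])
s = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])
print(dp_violation(y_pred, s), mutual_information(y_pred, s))
```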
This paper is available on arXiv under a CC BY-NC-SA 4.0 DEED license.
Authors:
(1) Haoyu Lei, Department of Computer Science and Engineering, The Chinese University of Hong Kong (hylei22@cse.cuhk.edu.hk);
(2) Amin Gohari, Department of Information Engineering, The Chinese University of Hong Kong (agohari@ie.cuhk.edu.hk);
(3) Farzan Farnia, Department of Computer Science and Engineering, The Chinese University of Hong Kong (farnia@cse.cuhk.edu.hk).