
Privacy-preserving Computation of Fairness for ML Systems: Acknowledgement & References

Too Long; Didn't Read

Fairness as a Service (FaaS) enables privacy-preserving audits of algorithmic fairness without access to the original dataset or the model's internals. The paper presents FaaS as a trustworthy framework built on encrypted cryptograms and zero-knowledge proofs, and supports it with security guarantees, a proof-of-concept implementation, and performance experiments, positioning FaaS as a promising approach for computing and verifying the fairness of AI algorithms while addressing challenges of privacy, trust, and performance.
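The group-fairness measures that an audit service like FaaS ultimately reports are standard quantities from the fairness literature, such as demographic parity and equal opportunity [15]. As a minimal illustration (not the paper's implementation), the Python sketch below shows the final arithmetic an auditor would perform once per-group tallies are available; in FaaS those tallies arrive as encrypted cryptograms accompanied by zero-knowledge proofs rather than as the plaintext counts assumed here, and all names in the sketch are hypothetical.

```python
# Illustrative sketch only: group-fairness arithmetic over aggregate counts.
# In FaaS the tallies are exchanged as encrypted cryptograms with
# zero-knowledge proofs of well-formedness; plaintext counts are assumed
# here purely to show the metrics being verified.
from dataclasses import dataclass


@dataclass
class GroupTally:
    """Aggregate counts for one protected group (hypothetical schema)."""
    total: int          # individuals in the group
    predicted_pos: int  # model predicted the positive outcome
    actual_pos: int     # ground-truth positive labels
    true_pos: int       # predicted positive AND actually positive


def demographic_parity_gap(a: GroupTally, b: GroupTally) -> float:
    """|P(Yhat=1 | group a) - P(Yhat=1 | group b)|; closer to 0 is fairer."""
    return abs(a.predicted_pos / a.total - b.predicted_pos / b.total)


def equal_opportunity_gap(a: GroupTally, b: GroupTally) -> float:
    """Difference in true-positive rates between groups (cf. [15])."""
    return abs(a.true_pos / a.actual_pos - b.true_pos / b.actual_pos)


if __name__ == "__main__":
    group_a = GroupTally(total=1000, predicted_pos=400, actual_pos=500, true_pos=350)
    group_b = GroupTally(total=800, predicted_pos=240, actual_pos=400, true_pos=220)
    print(f"Demographic parity gap: {demographic_parity_gap(group_a, group_b):.3f}")
    print(f"Equal opportunity gap:  {equal_opportunity_gap(group_a, group_b):.3f}")
```

Because only aggregate counts enter these formulas, the auditor never needs row-level data, which is consistent with FaaS's claim of auditing fairness without access to the original dataset.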


This paper is available on arXiv under the CC BY 4.0 DEED license.

Authors:

(1) Ehsan Toreini, University of Surrey, UK;

(2) Maryam Mehrnezhad, Royal Holloway, University of London, UK;

(3) Aad van Moorsel, University of Birmingham, UK.


Abstract & Introduction

Background and Related Work

FaaS Architecture

Implementation and Performance Analysis

Conclusion

Acknowledgment and References


Acknowledgment


The authors of this project were funded by the UK EPSRC grant “FinTrust: Trust Engineering for the Financial Industry” (EP/R033595/1), the UK EPSRC grant “AGENCY: Assuring Citizen Agency in a World with Complex Online Harms” (EP/W032481/1), and the PETRAS National Centre of Excellence for IoT Systems Cybersecurity, itself funded by the UK EPSRC under grant EP/S035362/1.


References


[1] Philip Adler, Casey Falk, Sorelle A Friedler, Tionney Nix, Gabriel Rybeck, Carlos Scheidegger, Brandon Smith, and Suresh Venkatasubramanian. Auditing black-box models for indirect influence. Knowledge and Information Systems, 54(1):95–122, 2018.


[2] Elaine Angelino, Nicholas Larus-Stone, Daniel Alabi, Margo Seltzer, and Cynthia Rudin. Learning certifiably optimal rule lists for categorical data. arXiv preprint arXiv:1704.01701, 2017.


[3] Muhammad Ajmal Azad, Samiran Bag, Simon Parkinson, and Feng Hao. TrustVote: Privacy-preserving node ranking in vehicular networks. IEEE Internet of Things Journal, 6(4):5878–5891, 2018.


[4] Olivier Baudron, Pierre-Alain Fouque, David Pointcheval, Jacques Stern, and Guillaume Poupard. Practical multi-candidate election system. In Proceedings of the Twentieth Annual ACM Symposium on Principles of Distributed Computing, pages 274–283, 2001.


[5] Alexandra Chouldechova. Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big Data, 5(2):153–163, 2017.


[6] Sam Corbett-Davies, Emma Pierson, Avi Feller, Sharad Goel, and Aziz Huq. Algorithmic decision making and the cost of fairness. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 797–806. ACM, 2017.


[7] Ronald Cramer, Ivan Damgård, and Berry Schoenmakers. Proofs of partial knowledge and simplified design of witness hiding protocols. In Annual International Cryptology Conference, pages 174–187. Springer, 1994.


[8] Craig E. Carroll and Rowena Olegario. Pathways to corporate accountability: Corporate reputation and its alternatives. Journal of Business Ethics, 163(2):173–181, 2020.


[9] Michael Feldman, Sorelle A Friedler, John Moeller, Carlos Scheidegger, and Suresh Venkatasubramanian. Certifying and removing disparate impact. In Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 259–268. ACM, 2015.


[10] Amos Fiat and Adi Shamir. How to prove yourself: Practical solutions to identification and signature problems. In Conference on the Theory and Application of Cryptographic Techniques, pages 186–194. Springer, 1986.


[11] Sorelle A Friedler, Carlos Scheidegger, and Suresh Venkatasubramanian. On the (im)possibility of fairness. arXiv preprint arXiv:1609.07236, 2016.


[12] Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Franco Turini, Fosca Giannotti, and Dino Pedreschi. A survey of methods for explaining black box models. ACM Computing Surveys (CSUR), 51(5):1–42, 2018.


[13] Feng Hao, Matthew N Kreeger, Brian Randell, Dylan Clarke, Siamak F Shahandashti, and Peter Hyun-Jeen Lee. Every vote counts: Ensuring integrity in large-scale electronic voting. In 2014 Electronic Voting Technology Workshop/Workshop on Trustworthy Elections (EVT/WOTE 14), 2014.


[14] Feng Hao, Peter YA Ryan, and Piotr Zieliński. Anonymous voting by two-round public discussion. IET Information Security, 4(2):62–67, 2010.


[15] Moritz Hardt, Eric Price, and Nati Srebro. Equality of opportunity in supervised learning. In Advances in Neural Information Processing Systems, pages 3315–3323, 2016.


[16] Hui Hu, Yijun Liu, Zhen Wang, and Chao Lan. A distributed fair machine learning framework with private demographic data protection. In 2019 IEEE International Conference on Data Mining (ICDM), pages 1102–1107. IEEE, 2019.


[17] Matthew Jagielski, Michael Kearns, Jieming Mao, Alina Oprea, Aaron Roth, Saeed Sharifi-Malvajerdi, and Jonathan Ullman. Differentially private fair learning. In International Conference on Machine Learning, pages 3000–3008. PMLR, 2019.


[18] Jonathan Katz and Yehuda Lindell. Introduction to Modern Cryptography. CRC Press, 2014.


[19] Niki Kilbertus, Adria Gascon, Matt Kusner, Michael Veale, Krishna P Gummadi, and Adrian Weller. Blind justice: Fairness with encrypted sensitive attributes. In 35th International Conference on Machine Learning, pages 2630–2639. PMLR, 2018.


[20] Jeff Larson, Surya Mattu, Lauren Kirchner, and Julia Angwin. How we analyzed the COMPAS recidivism algorithm. ProPublica, 9(1), May 2016.


[21] Jiacheng Liu, Fei Yu, and Lixin Song. A systematic investigation on the research publications that have used the medical expenditure panel survey (MEPS) data through a bibliometrics approach. Library Hi Tech, 2020.


[22] Scott M Lundberg and Su-In Lee. A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems, 30, 2017.


[23] Arwa Mahdawi. It’s not just A-levels – algorithms have a nightmarish new power over our lives. The Guardian, 2020.


[24] Arvind Narayanan. Translation tutorial: 21 fairness definitions and their politics. In Proc. Conf. Fairness Accountability Transp., New York, USA, 2018.


[25] Cecilia Panigutti, Alan Perotti, André Panisson, Paolo Bajardi, and Dino Pedreschi. FairLens: Auditing black-box clinical decision support systems. Information Processing & Management, 58(5):102657, 2021.


[26] Cecilia Panigutti, Alan Perotti, and Dino Pedreschi. Doctor XAI: An ontology-based approach to black-box sequential data classification explanations. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pages 629–639, 2020.


[27] Saerom Park, Seongmin Kim, and Yeon-sup Lim. Fairness audit of machine learning models with confidential computing. In Proceedings of the ACM Web Conference 2022, pages 3488–3499, 2022.


[28] Reuters. Amazon ditched AI recruiting tool that favored men for technical jobs. The Guardian, 2018.


[29] Shahar Segal, Yossi Adi, Benny Pinkas, Carsten Baum, Chaya Ganesh, and Joseph Keshet. Fairness in the eyes of the data: Certifying machine-learning models. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, pages 926–935, 2021.


[30] Keng Siau and Weiyu Wang. Building trust in artificial intelligence, machine learning, and robotics. Cutter Business Technology Journal, 31(2):47–53, 2018.


[31] Douglas Robert Stinson and Maura Paterson. Cryptography: Theory and Practice. CRC Press, 2018.


[32] Ehsan Toreini, Mhairi Aitken, Kovila Coopamootoo, Karen Elliott, Carlos Gonzalez Zelaya, and Aad van Moorsel. The relationship between trust in AI and trustworthy machine learning technologies. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pages 272–283, 2020.


[33] Michael Veale and Reuben Binns. Fairer machine learning in the real world: Mitigating discrimination without collecting sensitive data. Big Data & Society, 4(2):2053951717743530, 2017.


[34] Tong Wang, Cynthia Rudin, Finale Doshi-Velez, Yimin Liu, Erica Klampfl, and Perry MacNeille. A Bayesian framework for learning rule sets for interpretable classification. The Journal of Machine Learning Research, 18(1):2357–2393, 2017.


[35] Qiang Yang, Yang Liu, Tianjian Chen, and Yongxin Tong. Federated machine learning: Concept and applications. ACM Transactions on Intelligent Systems and Technology (TIST), 10(2):1–19, 2019.