
Verifiable Fairness: Privacy-preserving Computation of Fairness for ML Systems: Conclusion

Too Long; Didn't Read

Fairness as a Service (FaaS) enables algorithmic fairness audits while preserving privacy, without access to the original dataset or model internals. This paper presents FaaS as a trustworthy framework employing encrypted cryptograms and Zero Knowledge Proofs. Security guarantees, a proof-of-concept implementation, and performance experiments showcase FaaS as a promising approach for calculating and verifying fairness in AI algorithms, addressing challenges in privacy, trust, and performance.


This paper is available on arXiv under a CC BY 4.0 DEED license.

Authors:

(1) Ehsan Toreini, University of Surrey, UK;

(2) Maryam Mehrnezhad, Royal Holloway University of London;

(3) Aad Van Moorsel, Birmingham University.

Abstract & Introduction

Background and Related Work

FaaS Architecture

Implementation and Performance Analysis

Conclusion

Acknowledgment and References

5 Conclusion

This paper proposes Fairness as a Service (FaaS), a trustworthy service architecture and secure protocol for the calculation of algorithmic fairness. FaaS is designed as a service that calculates fairness without requiring the ML system to share the original dataset or model information. Instead, it requires an encrypted representation of the values of the data features, delivered by the ML system in the form of cryptograms. We use non-interactive Zero Knowledge Proofs within the cryptograms to ensure that the protocol is executed correctly. These cryptograms are posted on a public fairness board so that everyone can inspect the correctness of the fairness computations for the ML system. This is a new approach to privacy-preserving computation of fairness: unlike similar proposals that rely on a federated learning approach, the FaaS architecture does not depend on a specific machine learning model or fairness metric definition for its operation. Instead, one has the freedom to deploy the model and fairness metric of choice.
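To make this flow concrete, the sketch below illustrates the general idea with a toy additively homomorphic scheme (exponential ElGamal over a deliberately tiny group). The group parameters, the single decryption key, and the (group, prediction) combination slots are illustrative assumptions rather than the paper's actual construction, and the non-interactive Zero Knowledge Proofs of well-formedness that accompany each FaaS cryptogram are omitted here.

```python
# Illustrative sketch only, not the FaaS implementation: per-data-point
# cryptograms posted to a board and tallied homomorphically.
import secrets

# Toy group: p = 2q + 1 with p, q prime (far too small for real security).
P, Q, G = 1019, 509, 4


def keygen():
    x = secrets.randbelow(Q - 1) + 1            # secret key in [1, Q-1]
    return x, pow(G, x, P)                      # (sk, pk)


def encrypt(pk, m):
    """Exponential ElGamal: Enc(m) = (g^r, pk^r * g^m)."""
    r = secrets.randbelow(Q - 1) + 1
    return pow(G, r, P), (pow(pk, r, P) * pow(G, m, P)) % P


def add(c1, c2):
    """Add plaintexts homomorphically by multiplying ciphertext components."""
    return (c1[0] * c2[0]) % P, (c1[1] * c2[1]) % P


def decrypt_count(sk, c, max_count):
    """Recover a small tally by brute-forcing the discrete log of g^m."""
    a, b = c
    gm = (b * pow(a, Q - sk, P)) % P            # b / a^sk, using a^Q = 1
    for m in range(max_count + 1):
        if pow(G, m, P) == gm:
            return m
    raise ValueError("tally out of range")


# Hypothetical (protected group, predicted label) combinations and data points.
COMBOS = [("groupA", 1), ("groupA", 0), ("groupB", 1), ("groupB", 0)]
data_points = [("groupA", 1), ("groupB", 1), ("groupA", 0), ("groupA", 1)]

sk, pk = keygen()

# The ML system posts, per data point, one cryptogram per combination slot
# (an encryption of 1 for the applicable combination, 0 otherwise).
board = [[encrypt(pk, int(dp == combo)) for combo in COMBOS] for dp in data_points]

# The fairness service aggregates each slot homomorphically and decrypts only
# the final tallies, never an individual record.
tallies = []
for slot, combo in enumerate(COMBOS):
    agg = board[0][slot]
    for row in board[1:]:
        agg = add(agg, row[slot])
    tallies.append((combo, decrypt_count(sk, agg, len(data_points))))
print(tallies)
```

Posting per-slot encryptions in this style is what makes a public audit possible: anyone can redo the aggregation over the board's cryptograms, while only aggregate counts are ever decrypted.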


In this paper we proved that the security protocol preserves the privacy of the data and does not leak any model information. Compared to earlier designs, trust in our design is placed in the correct construction of the cryptograms by the ML system. Arguably, this is a more realistic solution than granting a trusted third party full access to the data, given the many legal, business, and ethical requirements that ML systems face. At the same time, it introduces a new challenge: establishing trust in the ML system itself. Increasing trust in the construction of the cryptograms remains an interesting research challenge following from the presented protocol.
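As one illustration of what such trust can rest on, the sketch below gives a minimal Fiat-Shamir (Schnorr) proof of knowledge of the encryption randomness, reusing the toy group from the previous sketch. A full well-formedness proof for a FaaS cryptogram would be richer (for example, a disjunctive proof that the encrypted value is 0 or 1), so this is only a building block, not the paper's protocol.

```python
# Minimal non-interactive proof that the poster knows r behind a = g^r.
import hashlib
import secrets

P, Q, G = 1019, 509, 4   # same toy group as in the previous sketch


def prove_knowledge(r):
    """Prove knowledge of r for a = g^r without revealing r (Fiat-Shamir)."""
    a = pow(G, r, P)
    w = secrets.randbelow(Q - 1) + 1
    t = pow(G, w, P)                                       # commitment
    c = int(hashlib.sha256(f"{a}:{t}".encode()).hexdigest(), 16) % Q
    s = (w + c * r) % Q                                    # response
    return a, (t, s)


def verify_knowledge(a, proof):
    """Anyone reading the fairness board can check the proof."""
    t, s = proof
    c = int(hashlib.sha256(f"{a}:{t}".encode()).hexdigest(), 16) % Q
    return pow(G, s, P) == (t * pow(a, c, P)) % P          # g^s == t * a^c


r = secrets.randbelow(Q - 1) + 1
a, proof = prove_knowledge(r)
print(verify_knowledge(a, proof))   # True
```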


We implemented a proof-of-concept of FaaS and conducted performance experiments on commodity hardware. The protocol takes seconds per data point to complete, which poses performance challenges when the number of data points is large (tens of thousands). To mitigate this, the security protocol is staged so that the construction of the cryptograms can be done offline. The performance of the calculation of fairness from the cryptograms remains a challenge to address in future work. Altogether, we believe FaaS and the presented underlying security protocol provide a new and promising approach to calculating and verifying the fairness of AI algorithms.
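For reference, once only aggregate tallies have been recovered from the cryptograms, evaluating the chosen fairness metric is plain arithmetic. The sketch below uses hypothetical counts and demographic parity difference as an arbitrarily chosen metric, since FaaS itself is metric-agnostic.

```python
# Final step only: compute a group fairness metric from decrypted tallies.
# The counts here are made up for illustration.
def demographic_parity_difference(tallies):
    """tallies maps (group, predicted_label) -> count of data points."""
    groups = {g for g, _ in tallies}
    positive_rate = {
        g: tallies.get((g, 1), 0) / (tallies.get((g, 0), 0) + tallies.get((g, 1), 0))
        for g in groups
    }
    return max(positive_rate.values()) - min(positive_rate.values()), positive_rate


example_tallies = {("groupA", 1): 40, ("groupA", 0): 60,
                   ("groupB", 1): 25, ("groupB", 0): 75}
gap, rates = demographic_parity_difference(example_tallies)
print(rates)   # positive prediction rate per group: 0.40 vs 0.25
print(gap)     # ~0.15
```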