
An Introduction to Verifiable Fairness: Privacy-preserving Computation of Fairness for ML Systems


Too Long; Didn't Read

This paper introduces Fairness as a Service (FaaS), a pioneering and model-agnostic protocol designed for privacy-preserving fairness evaluations in machine learning. FaaS leverages cryptograms and zero-knowledge proofs to ensure data privacy and verifiability. Unlike traditional approaches, FaaS does not require access to sensitive data or model information, making fairness calculations universally verifiable and transparent. The proof-of-concept implementation showcases the practical feasibility of FaaS using standard hardware, software, and datasets.

This paper is available on arXiv under the CC BY 4.0 DEED license.

Authors:

(1) Ehsan Toreini, University of Surrey, UK;

(2) Maryam Mehrnezhad, Royal Holloway University of London, UK;

(3) Aad van Moorsel, University of Birmingham, UK.

Abstract & Introduction

Background and Related Work

FaaS Architecture

Implementation and Performance Analysis

Conclusion

Acknowledgement and References

Abstract

Fair machine learning is a thriving and vibrant research topic. In this paper, we propose Fairness as a Service (FaaS), a secure, verifiable and privacy-preserving protocol to compute and verify the fairness of any machine learning (ML) model. In the design of FaaS, the data and outcomes are represented through cryptograms to ensure privacy, and zero-knowledge proofs guarantee the well-formedness of the cryptograms and the underlying data. FaaS is model-agnostic and supports various fairness metrics; hence, it can be used as a service to audit the fairness of any ML model. Our solution requires no trusted third party or private channels for the computation of the fairness metric. The security guarantees and commitments are implemented such that every step of the process, from start to end, is transparent and verifiable. The cryptograms of all input data are publicly available for everyone, e.g., auditors, social activists and experts, to verify the correctness of the process. We implemented FaaS to investigate its performance and demonstrate its successful use on a publicly available dataset with thousands of entries.


Introduction

Demonstrating the fairness of algorithms is critical to the continued proliferation and acceptance of algorithmic decision-making in general, and AI-based systems in particular. There is no shortage of examples that have diminished trust in algorithms because of unfair discrimination against groups within our population. These include news stories about the human resource decision-making tools used by large companies, which turned out to discriminate against women [28]. There are also well-understood seminal examples studied widely within the academic community, such as unfair recidivism decisions across ethnicities [20]. Most recently in the UK, the algorithm used to determine substitute A-level scores under COVID-19 was widely found to be unfair across demographics [23].


There has been a surge of research that aims to establish metrics quantifying the fairness of an algorithm. This is an important area of research, and tens of different metrics have been proposed, from individual fairness to group fairness. It has been shown that various expressions of fairness cannot be satisfied or optimised simultaneously, establishing impossibility results [11]. Moreover, even if one agrees on a metric, the metric on its own does not establish trust: it matters not only what the metric expresses, but also who computes it and whether one can verify that computation and possibly appeal against it. At the same time, in situations in which verification by stakeholders is possible, the owner of the data wants to be assured that none of the original, typically sensitive and personal, data is leaked. The system that runs the algorithms (referred to below as the machine learning system, or ML system) may also have a valid interest in maintaining the secrecy of the model. In other words, establishing verifiable fairness requires tackling a number of security, privacy and trust concerns.
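To make the notion of a group-fairness metric concrete, the following sketch computes two widely used metrics, demographic parity difference and equal opportunity difference, directly from model predictions and a binary protected attribute. The function names and toy data are ours for illustration only; FaaS is designed to compute such metrics without ever accessing this raw data in the clear.

```python
# A minimal sketch of two standard group-fairness metrics (demographic parity
# difference and equal opportunity difference). The function names and the toy
# data are illustrative only and are not taken from the FaaS protocol.
import numpy as np

def demographic_parity_difference(y_pred, group):
    """|P(y_hat = 1 | group = 0) - P(y_hat = 1 | group = 1)| for a binary protected attribute."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_difference(y_true, y_pred, group):
    """|TPR_0 - TPR_1|: difference in true-positive rates across the two groups."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return abs(tpr(0) - tpr(1))

# Toy example: predictions for two demographic groups (0 and 1).
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 1, 0, 0, 1, 1, 1]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(y_pred, group))          # 0.25
print(equal_opportunity_difference(y_true, y_pred, group))   # ~0.33
```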


In FaaS, we take a fundamentally different design approach. We leak no data or model information, yet FaaS is still able to compute a variety of fairness metrics, independent of the ML model. Thus, replacing the model in the ML system does not affect the functionality of the FaaS protocol. Moreover, any other party can verify this calculation, since all the necessary encrypted information is posted publicly on a 'fairness board'.
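As a rough illustration of this design idea, and emphatically not the actual FaaS construction, the sketch below publishes only hash commitments and aggregate tallies on a public board; an auditor can then recompute a group-fairness metric from the board alone and later check any opened commitment against it. The records, names and commitment scheme here are hypothetical stand-ins.

```python
# A deliberately simplified sketch of the "fairness board" idea: the ML system
# publishes only commitments to each record's (group, predicted label, true
# label) triple plus aggregate tallies, and an auditor recomputes a fairness
# metric from the public board alone. Plain hash commitments and plaintext
# tallies stand in for the cryptograms and zero-knowledge proofs of FaaS,
# which are NOT implemented here; all records below are hypothetical.
import collections
import hashlib
import os

def commit(triple, nonce):
    """Hash commitment to a (group, y_pred, y_true) triple."""
    return hashlib.sha256(repr(triple).encode() + nonce).hexdigest()

# --- ML system side: commit to every record, publish commitments and tallies.
records = [(0, 1, 1), (0, 0, 0), (0, 1, 1), (1, 1, 0), (1, 0, 1), (1, 1, 1)]
nonces  = [os.urandom(16) for _ in records]
board   = [commit(r, n) for r, n in zip(records, nonces)]           # public
tallies = collections.Counter((g, yp) for g, yp, _ in records)      # public aggregates

# --- Auditor side: compute demographic parity from the public tallies alone,
# without ever seeing the raw records or the model.
def positive_rate(g):
    total = sum(c for (grp, _), c in tallies.items() if grp == g)
    return tallies[(g, 1)] / total

print("demographic parity difference:", abs(positive_rate(0) - positive_rate(1)))

# --- Verification (simplified): if the ML system opens a commitment later,
# anyone can check the opening against the public board.
assert commit(records[0], nonces[0]) == board[0]
```

In the paper's construction, the role played here by hash commitments and plaintext tallies is instead filled by cryptograms whose well-formedness is guaranteed by zero-knowledge proofs, so that privacy and verifiability hold throughout the process.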


Summarising, our contributions are:


• We propose FaaS, a model-agnostic protocol to compute different fairness metrics without accessing sensitive information about the model or the dataset.


• FaaS is universally verifiable, so anyone can verify the well-formedness of the cryptograms and the steps of the protocol.


• We implement a proof-of-concept of the FaaS architecture and protocol using off-the-shelf hardware, software, and datasets, and run experiments to demonstrate the practical feasibility of FaaS.