An Introduction to Verifiable Fairness: Privacy-preserving Computation of Fairness for ML Systems — by @escholar


Too Long; Didn't Read

This paper introduces Fairness as a Service (FaaS), a pioneering and model-agnostic protocol designed for privacy-preserving fairness evaluations in machine learning. FaaS leverages cryptograms and zero-knowledge proofs to ensure data privacy and verifiability. Unlike traditional approaches, FaaS does not require access to sensitive data or model information, making fairness calculations universally verifiable and transparent. The proof-of-concept implementation showcases the practical feasibility of FaaS using standard hardware, software, and datasets.
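FaaS itself evaluates fairness over cryptograms so that no party sees raw sensitive data. As a plain-text illustration of the kind of group fairness metric such a service computes, the hypothetical sketch below calculates the demographic parity gap (the difference in positive-decision rates between two protected groups) on unencrypted toy data; the function name and data are assumptions for illustration only, not the paper's protocol.

```python
from collections import Counter

def demographic_parity_gap(groups, predictions):
    """Absolute difference in positive-prediction rates between two groups.

    `groups` holds a binary protected attribute (0/1) and `predictions`
    the model's binary decisions. FaaS computes metrics like this over
    cryptograms; this sketch uses plaintext only to show the arithmetic.
    """
    totals = Counter(groups)
    positives = Counter(g for g, p in zip(groups, predictions) if p == 1)
    rate = lambda g: positives[g] / totals[g]
    return abs(rate(0) - rate(1))

# Group 0 receives 1 of 2 positive decisions, group 1 receives 2 of 2.
gap = demographic_parity_gap([0, 0, 1, 1], [1, 0, 1, 1])
print(gap)  # 0.5
```

A gap of 0 would indicate both groups receive positive decisions at the same rate; FaaS's contribution is producing such a result verifiably, without disclosing the underlying attributes or model outputs.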

EScholar: Electronic Academic Papers for Scholars

We publish the best academic work (that's too often lost to peer reviews & the TA's desk) to the global tech community