Verifiable Fairness: Privacy-preserving Computation of Fairness for ML Systems: Conclusion


Too Long; Didn't Read

Fairness as a Service (FaaS) enables privacy-preserving algorithmic fairness audits: an external auditor can compute and verify a model's fairness without access to the original dataset or the model's internals. This paper presents FaaS as a trustworthy framework built on encrypted cryptograms and Zero-Knowledge Proofs. Security guarantees, a proof-of-concept implementation, and performance experiments show FaaS to be a promising approach for calculating and verifying fairness in AI algorithms while addressing challenges of privacy, trust, and performance.
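To make the idea concrete, here is a minimal sketch of the kind of fairness metric such an audit ultimately verifies (demographic parity difference), computed from per-group aggregate counts rather than raw records. This is illustrative only: it omits the paper's cryptograms and Zero-Knowledge Proof machinery entirely, and all names are hypothetical, not taken from the FaaS protocol.

```python
# Illustrative sketch, NOT the FaaS protocol: in a FaaS-style audit the
# auditor never sees raw data; conceptually, it only needs verified
# per-group aggregates to evaluate a group-fairness metric.

def demographic_parity_difference(counts):
    """counts: {group: (positive_predictions, total)} per protected group.

    Returns the gap between the highest and lowest positive-prediction
    rates across groups (0.0 means perfect demographic parity).
    """
    rates = {g: pos / total for g, (pos, total) in counts.items()}
    return max(rates.values()) - min(rates.values())

# Example: the auditor receives only these aggregates (in FaaS, encoded
# as verifiable cryptograms), never the underlying individual records.
aggregates = {"group_a": (40, 100), "group_b": (25, 100)}
print(demographic_parity_difference(aggregates))  # ≈ 0.15
```

In the actual framework, the integrity of such aggregates would be guaranteed cryptographically, so the auditor can trust the computed metric without ever inspecting the data or the model.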

Featured image: ML System, via HackerNoon AI Image Generator

by EScholar: Electronic Academic Papers for Scholars (@escholar)

We publish the best academic work (that's too often lost to peer reviews & the TA's desk) to the global tech community.

