There has been an exponential rise in applications of AI, Data Science, and Machine Learning spanning a wide variety of industries. Scientists, researchers, and data scientists have become increasingly aware of AI Ethics, in response to the range of individual and societal harms and negative consequences that AI systems may cause. AI ethics is a set of values, principles, and techniques that employ widely accepted standards of right and wrong to guide moral conduct in the development and use of AI technologies.

In machine learning and AI, any given algorithm is said to be fair, or to have fairness, if its results are independent of given variables, especially those considered sensitive: the traits of individuals (e.g. gender, ethnicity, sexual orientation, disability) should not correlate with the outcome. An unfair ML algorithm suffers from disparate treatment if its decisions are (partly) based on the subject's sensitive attribute, and from disparate impact if its outcomes disproportionately hurt (or benefit) people with certain sensitive attribute values (e.g., females, blacks).

Fairness is based on the following notions:

- Unawareness
- Demographic Parity
- Equalized Odds
- Predictive Rate Parity
- Individual Fairness
- Counterfactual Fairness

Research on fair ML algorithms and toolkits shows that they can detect and mitigate bias and yield fair outcomes for both text and visual data. The algorithms that have been developed fall into three phases of the ML lifecycle: pre-processing, in-algorithm optimization at training time, and post-processing. In this blog, we primarily discuss the open-source toolkits and algorithms that can help us design fair AI solutions.

ToolKits

What-If-Tool — By Google

The What-If-Tool launched by Google is a new feature of the open-source TensorBoard web application. This tool enables:

- Users to analyze an ML model without writing code.
- Given pointers to a TensorFlow model and a dataset, the What-If Tool offers an interactive visual interface for exploring model results.
- Manual editing of examples from your dataset, to see the effect of those changes.
- Automatic generation of partial dependence plots, which show how the model's predictions change as any single feature is changed.

AI Fairness 360 — By IBM

The AI Fairness 360 toolkit is an extensible open-source library containing techniques developed by the IBM research community to help detect and mitigate bias in machine learning models throughout the AI application lifecycle. The AI Fairness 360 package is available in both Python and R.

The AI Fairness 360 package includes:

- A comprehensive set of metrics for datasets and models to test for biases,
- Explanations for these metrics, and
- Algorithms to mitigate bias in datasets and models.

It is designed to translate algorithmic research from the lab into the actual practice of domains as wide-ranging as finance, human capital management, healthcare, and education.

[Figure: different fairness metrics on the protected attributes sex and age of the German Credit Dataset. Source]

This Github link describes in more detail the bias mitigation algorithms (optimized pre-processing, disparate impact remover) and the supported fairness metrics.
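To give a feel for the workflow, here is a minimal sketch using AIF360's documented API: load the German Credit dataset, compute group fairness metrics on the protected attribute sex, and apply the Reweighing pre-processing algorithm. It assumes the package is installed and the German Credit data files have been downloaded as the library's documentation describes; the choice of groups and metrics is purely illustrative.

```python
# Minimal AIF360 sketch (illustrative): bias metrics on the German Credit
# dataset, followed by the Reweighing pre-processing mitigation on 'sex'.
from aif360.datasets import GermanDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

dataset = GermanDataset()          # default protected attributes: 'sex' and 'age'
privileged = [{'sex': 1}]          # male is encoded as 1 by default
unprivileged = [{'sex': 0}]

metric = BinaryLabelDatasetMetric(dataset,
                                  unprivileged_groups=unprivileged,
                                  privileged_groups=privileged)
print("Statistical parity difference:", metric.mean_difference())
print("Disparate impact:", metric.disparate_impact())

# Pre-processing mitigation: reweigh examples to remove group/label imbalance
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
dataset_transf = rw.fit_transform(dataset)

metric_transf = BinaryLabelDatasetMetric(dataset_transf,
                                         unprivileged_groups=unprivileged,
                                         privileged_groups=privileged)
print("After reweighing:", metric_transf.mean_difference())
```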
Fair-Learn — By Microsoft

This tool, primarily developed by Microsoft, focuses on how an AI system can behave unfairly in terms of its impact on people, i.e. in terms of harms. It includes:

- Allocation harms – These harms can occur when AI systems extend or withhold opportunities, resources, or information. Some of the key applications are in hiring, school admissions, and lending.
- Quality-of-service harms – Quality of service refers to whether a system works as well for one person as it does for another, even if no opportunities, resources, or information are extended or withheld. (Source)

This blog describes how the tool incorporates different types of fairness algorithms (reduction variants, post-processing algorithms) that can be applied to classification and regression problems. To integrate the fairness tool, refer to https://github.com/fairlearn/fairlearn
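As a quick illustration of the Fairlearn workflow, here is a minimal sketch on synthetic data using its documented MetricFrame metric API and the ExponentiatedGradient reduction with a demographic-parity constraint; the dataset and base model are illustrative choices, not part of the library.

```python
# Minimal Fairlearn sketch: measure per-group selection rates, then mitigate
# the gap with the ExponentiatedGradient reduction under DemographicParity.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, selection_rate
from fairlearn.reductions import ExponentiatedGradient, DemographicParity

rng = np.random.default_rng(0)
n = 5000
sex = rng.integers(0, 2, n)                                   # toy sensitive feature
X = np.column_stack([rng.normal(size=n), sex + rng.normal(scale=0.5, size=n)])
y = (X[:, 0] + 0.8 * sex + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

# Unmitigated model: inspect accuracy and selection rate per group
clf = LogisticRegression().fit(X, y)
mf = MetricFrame(metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
                 y_true=y, y_pred=clf.predict(X), sensitive_features=sex)
print(mf.by_group)

# Reductions-based mitigation with a demographic-parity constraint
mitigator = ExponentiatedGradient(LogisticRegression(), constraints=DemographicParity())
mitigator.fit(X, y, sensitive_features=sex)
mf_fair = MetricFrame(metrics=selection_rate, y_true=y,
                      y_pred=mitigator.predict(X), sensitive_features=sex)
print(mf_fair.by_group)
```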
Themis-ml

Themis-ml is a Python utility built on top of pandas and sklearn that implements fairness-aware machine learning algorithms by measuring and mitigating discrimination. Here, discrimination is treated as a preference (bias) for or against a set of social groups that results in the unfair treatment of its members with respect to some outcome, and the tool applies discrimination-aware techniques to measure and mitigate it. You can try out its pre-processing, model estimation, and post-processing techniques with standard ML algorithms on the different datasets available from Github.

Aequitas

The Bias and Fairness Audit Toolkit, Aequitas, is an open-source bias audit toolkit for data scientists, machine learning researchers, and policymakers to audit machine learning models for discrimination and bias, and to make informed and equitable decisions around developing and deploying predictive tools.

Responsibly

Responsibly, a toolkit for auditing and mitigating bias and fairness of machine learning systems, is developed for practitioners and researchers, primarily for auditing the bias and fairness of machine learning systems, in addition to mitigating bias and adjusting fairness through algorithmic interventions, with special emphasis on NLP models.

Derivable Conditional Fairness Regularizer

DCFR is an adversarial-learning method to deal with fairness issues in supervised machine learning tasks. Traditional fairness notions, such as demographic parity and equalized odds, are demonstrated to be special cases of conditional fairness. The main objective of this library is to define a Derivable Conditional Fairness Regularizer (DCFR), which can be integrated into any decision-making model, to track the trade-off between precision and fairness of algorithmic decision making, and to measure the degree of unfairness in the adversarial representation.

Fairness Comparison

This repository is meant to facilitate the benchmarking of fairness-aware machine learning algorithms by accounting for the differences between fairness techniques. The benchmark compares a number of different algorithms under a variety of fairness measures and a large number of existing datasets. It also accounts for the fact that fairness-preserving algorithms tend to be sensitive to fluctuations in dataset composition.

Counterfactual Local Explanations via Regression (CLEAR)

Though not a fairness tool, CLEAR is a model explainability tool that explains single predictions of machine learning classifiers, based on the view that a satisfactory explanation of a single prediction needs to both explain the value of that prediction and answer 'what-if-things-had-been-different' questions. It answers such what-if questions by considering the relative importance of the input features and showing how they interact.

Differential Fairness

This library leverages the connections between differential privacy and legal notions of fairness (such as the 80% rule), and measures the fairness cost of a mechanism M(x) with a parameter ε. Borrowing from differential privacy, it limits the difference in outcomes across intersecting protected groups (combinations of gender, race, and nationality), marginalizing over the remaining attributes of the dataset x.

Flexible-Fairness-Constraints

This library introduces an adversarial framework to enforce fairness constraints on graph embeddings. It uses a composition technique that can flexibly accommodate different combinations of fairness constraints during inference. In the context of social recommendations, this framework allows one user to request that their recommendations be invariant to both their age and gender, while another user requests invariance to just their age; this is demonstrated on standard knowledge graph and recommender system benchmarks.

FairRegression

This library optimizes the accuracy of the estimation subject to a user-defined level of fairness (where the subject is described by multiple sensitive attributes, e.g., race, gender, age). The fairness constraint induces non-convexity of the feasible region, which prevents the use of an off-the-shelf convex optimizer.

Fair Classification

This library contains Python implementations of the logistic-regression-based fair classification mechanisms introduced in the AISTATS'17, WWW'17, and NIPS'17 papers.

Fair Clustering

Below are some of the techniques involved in designing fair clustering algorithms:

Scalable Fair Clustering

This library implements a fair k-median clustering algorithm. It computes a fairlet decomposition of the dataset, followed by a non-fair k-median algorithm on the fairlet centers. The resulting clustering is then extended to the whole dataset by assigning each data point to the cluster that contains its fairlet center, yielding a final fair clustering.

Fair Algorithms for Clustering

Variational Fair Clustering

This library implements a clustering method that finds clusters with specified proportions of different demographic groups pertaining to a sensitive attribute of the dataset (e.g. race, gender, etc.). It can be used with any well-known clustering method, such as K-means, K-median, or Spectral clustering (Normalized cut), in a flexible and scalable way.

Proportionally Fair Clustering

This library implements proportional centroid clustering in a metric context, clustering n points with k centers. Fairness is defined as proportionality, meaning that any n/k points are entitled to form their own cluster if there is another center that is closer in distance for all of those n/k points.

Fair Clustering Through Fairlets

This library introduces the concept of fairlets: minimal sets that satisfy fair representation while approximately preserving the clustering objective. Its implementation shows that any fair clustering problem can be decomposed into first finding good fairlets and then using existing machinery for traditional clustering algorithms. Though finding good fairlets can be NP-hard, the library obtains efficient approximation algorithms based on minimum cost flow. A simplified sketch of this two-step recipe follows below.
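To make the fairlet idea concrete, here is a hypothetical toy sketch of the recipe for two demographic groups with a 1:1 balance requirement. It is not the libraries' code: it greedily pairs points from the two groups into fairlets, runs an ordinary (non-fair) k-means on the fairlet centers for brevity (the papers use k-median/k-center and solve the decomposition with approximation algorithms), and then propagates each fairlet's cluster label back to its members.

```python
# Toy (1,1)-fairlet decomposition for a binary sensitive attribute, followed by
# non-fair clustering on fairlet centers; every cluster ends up perfectly balanced.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                        # toy points
g = rng.permutation(np.repeat([0, 1], 100))          # sensitive attribute, 100 per group

group_a = list(np.where(g == 0)[0])
group_b = list(np.where(g == 1)[0])

# 1) Fairlet decomposition: pair each group-0 point with its nearest unused group-1 point
fairlets = []
for i in group_a:
    j = min(group_b, key=lambda k: np.linalg.norm(X[i] - X[k]))
    group_b.remove(j)
    fairlets.append((i, j))

# 2) Non-fair clustering on the fairlet centers
centers = np.array([(X[i] + X[j]) / 2 for i, j in fairlets])
fairlet_labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(centers)

# 3) Extend to the whole dataset: every point inherits its fairlet's cluster
labels = np.empty(len(X), dtype=int)
for (i, j), c in zip(fairlets, fairlet_labels):
    labels[i] = labels[j] = c

for c in range(4):
    print("cluster", c, "group counts:", np.bincount(g[labels == c], minlength=2))
```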
Fair Recommendation Systems

Below are some of the techniques involved in designing fair recommendation systems:

Two-Sided Fairness for Personalized Recommendations in Two-Sided Platforms

The objective of this fair recommendation library is fairness in the context of two-sided online platforms, comprising customers on one side and producers on the other. Recommendation services on these platforms have been built primarily to maximize customer satisfaction through the personalized preferences of individual customers. This library casts the problem as a fair allocation of indivisible goods, guaranteeing at least a Maximin Share (MMS) of exposure for most of the producers and Envy-Free up to One item (EF1) fairness for every customer.

FLAG: Frequency Linked Attribute for Evaluating Consumer-side Fairness

The goal of this library is to demonstrate an application of assigning synthetic demographic attributes to different recommendation data sets. Such attributes are personally sensitive and are excluded from publicly-available data sets.

Fairness-aware variational autoencoder for collaborative filtering

Variational auto-encoders (VAEs) are a framework for making recommendations. The objective of this library is to incorporate randomness into the regular operation of VAEs in order to increase fairness (mitigate position bias) across multiple rounds of recommendation.

Fairness-Aware_Tensor-Based_Recommendation

The objective of this library is to enhance recommendation fairness while preserving recommendation quality. It achieves this goal by introducing (i) a new sensitive latent factor matrix for isolating sensitive features, (ii) a sensitive information regularizer that extracts sensitive information which can taint other latent factors, (iii) an effective algorithm to solve the proposed optimization model, and (iv) an extension to multi-feature and multi-category cases.

SIREN

Siren is a Python interface built to offer diversity in recommendations, based on the MyMediaLite toolbox, with visualizations for two diversity metrics (long-tail and unexpectedness). SIREN can be used by content providers (news outlets) to investigate which recommendation strategy fits better according to their diverse needs, in addition to analyzing recommendation effects in a different news environment.

Awesome AI Guidelines

There has been a large amount of content published which attempts to address these issues through "Principles", "Ethics Frameworks", "Standards & Regulations", "Checklists" and beyond, which are captured in this repository. In addition, there are papers, documents, and resources available at this Github link for introducing fairness in Computer Vision.

Fairness in Machine Learning

This library contains a Keras & TensorFlow implementation of "Towards fairness in ML with adversarial networks" and a PyTorch implementation of "Fairness in Machine Learning with PyTorch". The principle behind this library is to introduce a training procedure based on adversarial networks for enforcing the pivotal property (or, equivalently, fairness with respect to continuous attributes) on a predictive model. Further, it also includes a hyperparameter to trade off between accuracy and robustness. A toy sketch of this adversarial setup is shown below.
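Below is a hypothetical toy sketch of that adversarial idea in PyTorch (not the library's own code): a predictor is trained on the main task while an adversary tries to recover the sensitive attribute from the predictor's output, and the predictor is additionally rewarded for fooling the adversary, with lam acting as the accuracy/fairness trade-off hyperparameter. The data here is synthetic.

```python
# Toy adversarial debiasing loop (illustrative): the adversary predicts the
# sensitive attribute s from the predictor's logit; the predictor fits y while
# being penalized for whatever the adversary can recover.
import torch
import torch.nn as nn

torch.manual_seed(0)
n = 2000
s = torch.randint(0, 2, (n, 1)).float()          # synthetic sensitive attribute
x = torch.randn(n, 2) + s                        # features correlated with s
y = (x[:, :1] + 0.5 * torch.randn(n, 1) > 0.5).float()

predictor = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))
adversary = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
opt_p = torch.optim.Adam(predictor.parameters(), lr=1e-2)
opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-2)
bce = nn.BCEWithLogitsLoss()
lam = 1.0                                        # accuracy/fairness trade-off

for step in range(200):
    # 1) Train the adversary on the (detached) predictor output
    opt_a.zero_grad()
    adv_loss = bce(adversary(predictor(x).detach()), s)
    adv_loss.backward()
    opt_a.step()

    # 2) Train the predictor to fit y while fooling the adversary
    opt_p.zero_grad()
    logits = predictor(x)
    loss = bce(logits, y) - lam * bce(adversary(logits), s)
    loss.backward()
    opt_p.step()
```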
FairPut — Fair Machine Learning Framework

FairPut is a light, open framework that describes a preferred process at the end of the machine learning pipeline to enhance model fairness. It simultaneously enhances model interpretability and robustness while maintaining a reasonable level of accuracy.

Other Libraries

EthicML – EthicML is a researcher's toolkit for performing and assessing algorithmic fairness. It has support for multiple sensitive attributes and vision datasets, a codebase typed with mypy, tested code, and reproducible results.

Collaborative Fairness in Federated Learning – This library introduces collaborative fairness in federated learning, where otherwise all individual participants receive the same or similar models regardless of their contribution. Fairness is achieved by utilizing reputation to enforce participants to converge to different models, without sacrificing predictive performance.

Rich Subgroup Fairness – Classification constraints on small collections of pre-defined groups may appear fair on each individual group, yet badly violate the fairness constraint on one or more structured subgroups defined over the protected attributes (from Kearns et al., https://arxiv.org/abs/1711.05144). This fairness library implements statistical notions of fairness across exponentially (or infinitely) many subgroups, defined by a structured class of functions over the protected attributes. It works primarily with sklearn binary classifiers (LinearRegression) and group classes given by linear threshold functions. The library provides:

- Learning fair classifiers subject to subgroup fairness constraints (https://arxiv.org/abs/1711.05144)
- Auditing classifier predictions for fairness violations
- Visualizing trade-offs between error and fairness metrics
- Fairness-sensitive datasets for experiments (https://arxiv.org/abs/1808.08166)

It also supports the following fairness metrics for learning and auditing:

- False Positive Rate Equality
- False Negative Rate Equality

Fairness in Sentiment Prediction – This demonstrates how to use Sensitive Subspace Robustness (SenSR) to eliminate gender and racial biases in the sentiment prediction task described. The fairness distance is computed on the basis of Truncated SVD, and it can be tried out from this link on Github.

UnIntended Bias Analysis – Contains notebooks to train Conversational AI deep learning models as part of the Conversational AI project.

References

https://people.engr.tamu.edu/caverlee/pubs/zhu2018fairness.pdf
https://dl.acm.org/doi/10.1145/3297662.3365798
https://arxiv.org/abs/1802.04422
https://arxiv.org/abs/2006.10483
http://jfoulds.informationsystems.umbc.edu/papers/2019/Foulds%20(2019)%20-%20DifferentialFairness_NeurIPS_MLWG.pdf
https://arxiv.org/abs/1902.03519
https://paperswithcode.com/paper/fair-clustering-through-fairlets
Fair K-Means using Matlab: https://github.com/fairkmeans/Fair-K-Means-Clustering
https://papers.nips.cc/paper/2017/file/978fce5bcc4eccc88ad48ce3914124a2-Paper.pdf
https://arxiv.org/abs/1905.10674
https://towardsdatascience.com/a-tutorial-on-fairness-in-machine-learning-3ff8ba1040cb
https://github.com/topics/fairness-awareness-model
https://arxiv.org/abs/1711.05144
https://ai.googleblog.com/2018/09/the-what-if-tool-code-free-probing-of.html
https://github.com/sharmi1206/fairness-tensorflow-toxicity-classification

Previously published at https://techairesearch.com/most-essential-python-fairness-libraries-every-data-scientist-should-know/