Fairness

When trying to operationalize fairness, it is important to realize that fairness in machine learning is a complex socio-technical issue. At minimum, this means that fairness tools should never be seen as plug-and-play solutions. This is already evident from the fact that, as most of the listed tools will emphasize, choices have to be made about which type of fairness to strive for. One general distinction, for example, is between group fairness and individual fairness. Within group fairness, different parity metrics correspond to different “worldviews” on equality.
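To make this concrete, below is a minimal sketch (NumPy only, with hypothetical toy data) of two common group fairness metrics. Demographic parity compares selection rates across groups, while equal opportunity compares true-positive rates; the two can easily disagree, which is one way those different “worldviews” show up in practice.

```python
# Minimal sketch with hypothetical toy data: two group fairness metrics
# computed from scratch with NumPy.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])                 # observed outcomes
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])                 # model predictions
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])  # protected attribute

mask_a, mask_b = group == "a", group == "b"

# Demographic (statistical) parity: compare selection rates across groups.
demographic_parity_diff = y_pred[mask_a].mean() - y_pred[mask_b].mean()

# Equal opportunity: compare true-positive rates across groups.
tpr_a = y_pred[mask_a & (y_true == 1)].mean()
tpr_b = y_pred[mask_b & (y_true == 1)].mean()
equal_opportunity_diff = tpr_a - tpr_b

print(f"demographic parity difference: {demographic_parity_diff:.2f}")
print(f"equal opportunity difference:  {equal_opportunity_diff:.2f}")
```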

Additionally, there are different stages in the machine learning pipeline where you can intervene. The choice of a particular algorithm will partly be guided by the level of access you have to the different stages of the machine learning pipeline. Generally speaking, it is optimal to intervene as early as possible. In particular, bias mitigation in the preprocessing stage can address both group and individual fairness.

With all these factors combined, choosing an appropriate bias mitigation strategy is a complex task that requires both expertise in data science and sensitivity to context and values.

Aequitas: Bias and Fairness Audit Toolkit

The Aequitas toolkit can be used on the command line, programmatically via its Python API, or via a web interface. The web interface offers a four-step programme to audit a dataset for bias. The four steps are: (1) upload (tabular) data; (2) determine the protected groups and a reference group; (3) select fairness metrics and disparity intolerance; (4) inspect the resulting bias report (an example audit report is available). This toolkit is useful for auditing bias and fairness according to a limited set of common fairness metrics, but it does not offer algorithms for mitigating bias. Read more...
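For the programmatic route, the sketch below shows roughly how an audit might look via the Python API, assuming a pandas DataFrame with the column names Aequitas expects ('score', 'label_value') plus one column per protected attribute; exact function signatures and output columns may differ between versions, so treat this as an outline rather than a recipe.

```python
# Rough outline of a programmatic Aequitas audit (API details may vary by version).
import pandas as pd
from aequitas.group import Group
from aequitas.bias import Bias
from aequitas.fairness import Fairness

# Aequitas expects binary 'score' and 'label_value' columns plus one column
# per protected attribute (here only 'race', with made-up values).
df = pd.DataFrame({
    "score":       [1, 0, 1, 1, 0, 1],
    "label_value": [1, 0, 0, 1, 0, 1],
    "race":        ["white", "white", "black", "black", "black", "white"],
})

# Steps 2-3: crosstabs per group and disparities w.r.t. a chosen reference group.
xtab, _ = Group().get_crosstabs(df)
bdf = Bias().get_disparity_predefined_groups(
    xtab, original_df=df, ref_groups_dict={"race": "white"}, alpha=0.05
)

# Step 4: apply fairness thresholds to the disparities and inspect the report.
fdf = Fairness().get_group_value_fairness(bdf)
print(fdf.head())
```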

Agile Ethics for AI

Butnaru and others associated with the HAI center at Stanford set up an Agile Ethics workflow in the form of a Trello board. From left to right, the workflow walks you through relevant ethical considerations at the various steps of a machine learning pipeline. The phases are: Scope (consider the ethical implications of the project; consider skill mapping, i.e. what is the impact of AI on jobs; facilitate up-skilling or a change of strategy in the use of human talent), Data audit (led by the Chief Data Officer; the “meet and plan” stage in Agile; the Data Ethics Canvas is helpful here), Train (the build stage in Agile; consider (tools for) transparency and fairness), and Analyse (benchmarks, including benchmarks related to e. Read more...

AI Fairness 360

The IBM AI Fairness 360 Toolkit contains several bias mitigation algorithms that are applicable to various stages of the machine learning pipeline. The toolkit implements different notions of fairness, both on the individual and the group level, and several fairness metrics for both classes of fairness. The toolkit provides additional guidance on choosing metrics and mitigation algorithms given a particular goal and application. The following should be noted when using the fairness toolkit (and other similar toolkits, for that matter): Read more...
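As a hedged illustration of how measurement and mitigation fit together, the sketch below computes a group fairness metric on a small, made-up dataset and then applies Reweighing, one of the toolkit's preprocessing-stage algorithms; the toy data and the choice of 'sex' as protected attribute are assumptions for illustration only.

```python
# Hedged sketch: measure a group fairness metric with AIF360 and mitigate in
# the preprocessing stage with Reweighing. Toy data for illustration only.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

df = pd.DataFrame({
    "sex":   [0, 0, 1, 1, 1, 0, 1, 0],   # protected attribute (0 = unprivileged)
    "feat":  [3.1, 2.2, 5.0, 4.7, 1.9, 3.3, 4.1, 2.8],
    "label": [0, 0, 1, 1, 0, 1, 1, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["label"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)

privileged = [{"sex": 1}]
unprivileged = [{"sex": 0}]

# Group fairness metric on the raw data: difference in base rates.
metric = BinaryLabelDatasetMetric(
    dataset, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("statistical parity difference:", metric.statistical_parity_difference())

# Preprocessing-stage mitigation: reweigh instances so that the weighted
# dataset no longer shows the disparity.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
dataset_transf = rw.fit_transform(dataset)
```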

Alibi Detect

Alibi Detect is an open source Python library (sister library to Alibi) focused on detecting outliers, adversarial examples, and concept drift. Finding adversarial examples is relevant for assessing the security of machine learning models. Machine learning models learn complex statistical patterns in datasets. If these statistical patterns “drift” (in unforeseen ways) after a model is deployed, model performance will degrade over time. In systems where model predictions have an impact on people, this may be a threat to the fairness of the predictions. Read more...
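A minimal sketch of drift detection with Alibi Detect is shown below, using the Kolmogorov-Smirnov detector on synthetic reference and production batches; the shift in the production data is fabricated for illustration.

```python
# Minimal drift detection sketch with Alibi Detect (synthetic data).
import numpy as np
from alibi_detect.cd import KSDrift

rng = np.random.default_rng(0)
x_ref = rng.normal(0.0, 1.0, size=(500, 5))    # data the model was trained on
x_prod = rng.normal(0.5, 1.0, size=(500, 5))   # incoming data whose mean has shifted

detector = KSDrift(x_ref, p_val=0.05)          # feature-wise Kolmogorov-Smirnov test
preds = detector.predict(x_prod)
print("drift detected:", bool(preds["data"]["is_drift"]))
```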

Data Ethics Canvas

The Data Ethics Canvas is a tool developed by the Open Data Institute for providing ethical guidance to organizations doing any type of project involving data. That includes data collection, data sharing, and data use, for example in machine learning applications. The tool is accompanied by a white paper and a brief practical guide for its usage. Page 3 of the practical guide lists some recommendations that are also relevant when you do not use this tool. Read more...

Data Nutrition Label

In analogy with nutrition labels on food products, the authors of this paper propose a way to create a Data Nutrition Label. The goal of this method is to assess data quality and mitigate potential problems early on, before building models on the data. According to the authors, their approach is different from the datasheet in that the “proposed datasheet [i.e. by Gebru et al.] includes dataset provenance, key characteristics, relevant regulations and test results, but also significant yet more subjective information such as potential bias, strengths and weaknesses of the dataset, API, or model, and suggested uses. Read more...

Data Statements for NLP

A data statement, according to the authors, is … a characterization of a dataset that provides context to allow developers and users to better understand how experimental results might generalize, how software might be appropriately deployed, and what biases might be reflected in systems built on the software. (587) This paper specifically focuses on ethically responsive NLP technology. The authors argue that a data statement should be an integral part of work and writing on NLP. Read more...

Debiaswe: try to make word embeddings less sexist

Word embeddings are a widely used representation for text data. A well-known example in natural language processing (NLP) is Word2vec, which uses a neural network to learn latent vector representations of words. It turns out that relations in this latent vector space capture semantic relations quite well. For example, by finding similar vectors you typically end up with highly related or synonymous words. Another typical example is that when you take the vector for “king”, subtract the vector for “man”, and add the vector for “woman”, you end up close to the vector for “queen”, so even a form of conceptual arithmetic is possible. Read more...
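The sketch below reproduces this kind of analogy arithmetic with gensim; the embedding file path is a placeholder for whatever pretrained word2vec-format embeddings you have locally, and the occupation example only illustrates how the same arithmetic can surface gender bias.

```python
# Analogy arithmetic with gensim; 'embeddings.bin' is a placeholder path for
# pretrained word2vec-format embeddings.
from gensim.models import KeyedVectors

wv = KeyedVectors.load_word2vec_format("embeddings.bin", binary=True)

# king - man + woman ~= queen
print(wv.most_similar(positive=["king", "woman"], negative=["man"], topn=3))

# The same arithmetic can surface gender stereotypes, e.g. for occupations.
print(wv.most_similar(positive=["doctor", "woman"], negative=["man"], topn=3))
```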

Equity Evaluation Corpus (EEC)

This handcrafted dataset can be used to evaluate bias in AI systems that perform NLP tasks on text data. Dataset description: Automatic machine learning systems can inadvertently accentuate and perpetuate inappropriate human biases. Past work on examining inappropriate biases has largely focused on just individual systems and resources. Further, there is a lack of benchmark datasets for examining inappropriate biases in system predictions. Here, we present the Equity Evaluation Corpus (EEC), which consists of 8,640 English sentences carefully chosen to tease out biases towards certain races and genders. Read more...
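A hedged sketch of a typical EEC workflow is given below: score each sentence with the system under audit and compare scores across demographic groups. The CSV path and the column names ('Sentence', 'Gender', 'Race') are assumptions based on the released corpus, so verify them against the actual file.

```python
# Hedged sketch of auditing a sentiment system with the EEC; the CSV path and
# column names ('Sentence', 'Gender', 'Race') are assumptions to verify.
import pandas as pd

def sentiment_score(text: str) -> float:
    # Placeholder: replace with the sentiment/emotion system under audit.
    return 0.0

eec = pd.read_csv("Equity-Evaluation-Corpus.csv")
eec["score"] = eec["Sentence"].apply(sentiment_score)

# Systematic score differences between otherwise identical template sentences
# point to gender or race bias in the audited system.
print(eec.groupby("Gender")["score"].mean())
print(eec.groupby("Race")["score"].mean())
```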

Fairlearn

The documentation of fairlearn is excellent and provides a good introduction to the topic of fairness in AI. It emphasizes that fairness algorithms are not plug-and-play technical solutions, but require serious thought about the context of the data and the problem at hand. Fairness is a fundamentally sociotechnical challenge and cannot be solved with technical tools alone. Such tools may nevertheless be helpful for certain tasks, such as assessing unfairness through various metrics or mitigating observed unfairness when training a model. Read more...
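As a small illustration of the assessment side, the sketch below disaggregates a performance metric and the selection rate over a sensitive feature using MetricFrame; the data is synthetic and only meant to show the shape of the API.

```python
# Small sketch of fairlearn's assessment API with synthetic data.
import numpy as np
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, selection_rate

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
sex = np.array(["f", "f", "f", "f", "m", "m", "m", "m"])  # sensitive feature

mf = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=sex,
)
print(mf.by_group)       # each metric, disaggregated per group
print(mf.difference())   # largest between-group gap per metric
```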

Fairness in Classification

The not-so-originally named “fairness in classification” repository provides a Python implementation of three fairness constraints for logistic regression: disparate impact (similar acceptance rates for different demographic groups; see Zafar et al., 2017a), disparate mistreatment (similar misclassification rates for different demographic groups; see Zafar et al., 2017b), and preference-based fairness (as opposed to parity-based fairness; a more game-theoretic approach where decision boundaries are chosen such that it can be shown that each group, if rational, prefers its own decision boundary). Read more...

Model cards for Model Reporting

Model cards are an extension of the datasheet to machine learning models. Model cards are short documents accompanying trained machine learning models that provide benchmarked evaluation in a variety of conditions, such as across different cultural, demographic, or phenotypic groups (e.g., race, geographic location, sex, Fitzpatrick skin type [15]) and intersectional groups (e.g., age and race, or sex and Fitzpatrick skin type) that are relevant to the intended application domains. Read more...

What-If Tool

The What-If Tool (WIT) takes a pretrained model and lets you visualize the effect of changing, e.g., classification thresholds or the data points themselves on performance, explainability, and fairness metrics. It provides many convenient functions for gaining insight into the dataset, such as binning on particular features, attribution values, or inference scores, computing partial dependence plots, and inspecting typical performance indicators such as a confusion matrix or ROC curve. Read more...
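A rough sketch of launching WIT in a notebook with a custom prediction function is given below; the dummy examples, feature names, and predict function are placeholders, and the exact setup (tf.Example protos vs. plain feature lists, Jupyter vs. Colab) varies by WIT version, so treat this as an outline rather than a recipe.

```python
# Rough outline of launching the What-If Tool in a notebook with a custom
# prediction function; dummy data and model, details vary by WIT version.
from witwidget.notebook.visualization import WitConfigBuilder, WitWidget

# Assumed setup: plain feature lists plus feature names (tf.Example protos
# are the more common input format).
examples = [[0.2, 1.0, 35.0], [0.8, 0.0, 42.0]]
feature_names = ["score", "sex", "age"]

def predict_fn(examples_to_infer):
    # Placeholder for the pretrained model: [P(class 0), P(class 1)] per example.
    return [[0.3, 0.7] for _ in examples_to_infer]

config_builder = (
    WitConfigBuilder(examples, feature_names=feature_names)
    .set_custom_predict_fn(predict_fn)
)
WitWidget(config_builder, height=720)
```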

XAI Toolbox

This library is a small toolbox that offers some convenience functions for quickly visualizing imbalances in the dataset, computing (permutation) feature importances, and producing evaluation plots such as the ROC curve. A function to balance the data through basic up- or downsampling is offered, but beyond this no fairness criteria are defined. Compared to other libraries the XAI Toolbox is very basic, and its roadmap (which has not been updated since 2019) does not include any major improvements. Read more...