AI Ethics Guidelines Global Inventory
- Values: { accountability }
- Categories: { model-agnostic }
- Stage: { design-phase }
- References:
AlgorithmWatch maintains a searchable inventory of published frameworks that set out ethical AI values. Entries can be filtered by sector/actor, type, region, and location; a minimal sketch of that kind of faceted filtering is shown below.
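As a rough illustration of the faceted search the inventory offers, the sketch below filters a small list of framework records by the same facets (sector/actor, type, region, binding status). The `Framework` class, the field names, and the sample entries are placeholders invented for this example, not data or an API from the actual inventory.

```python
from dataclasses import dataclass

@dataclass
class Framework:
    """Hypothetical record mirroring the inventory's search facets."""
    name: str
    sector: str      # e.g. "academia", "private", "public"
    type: str        # e.g. "voluntary commitment", "binding agreement"
    region: str
    binding: bool

# Placeholder entries for illustration only -- not taken from the inventory.
frameworks = [
    Framework("Example University Principles", "academia", "voluntary commitment", "Europe", False),
    Framework("Example Industry Charter", "private", "voluntary commitment", "North America", False),
    Framework("Example Public-Sector Directive", "public", "binding agreement", "Europe", True),
]

def search(entries, **facets):
    """Return entries whose attributes match every requested facet value."""
    return [e for e in entries
            if all(getattr(e, key) == value for key, value in facets.items())]

# e.g. binding agreements from public-sector actors in Europe
print(search(frameworks, sector="public", region="Europe", binding=True))
```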
After publishing the first version of the index, AlgorithmWatch noted several common patterns:
- “All include similar principles on transparency, equality/non-discrimination, accountability and safety. Some add additional principles, such as the demand for AI to be socially beneficial and to protect human rights.”
- “Most frameworks are developed by coalitions, or institutions such as universities that then invite companies and individuals to sign up to these.”
- “Only a few companies have developed their own frameworks.”
- “Almost all examples are voluntary commitments. There are only three or four examples that indicate an oversight or enforcement mechanism.”
- “Apart from around 10 documents, all were published in 2018 or 2019 (some did not have a date).”
- “Overwhelmingly, the declarations take the form of statements, e.g. ‘we will ensure our data sets are not biased’. Few include recommendations or examples of how to operationalise the principles.”
At the time of writing, a total of 167 frameworks are indexed, of which only 8 are binding agreements. This shows that the main challenges for AI ethics principles are 1) to operationalise them and 2) to make them sufficiently binding to avoid allegations of ethics “whitewashing”.