Explainability
Explainability is instrumental for upholding other values, such as fairness, and for building trust in AI systems. Yet there is little consensus on what “explainability” precisely means. The related concepts of “transparency” and “interpretability” are sometimes used as synonyms and sometimes distinguished. For example, the explainability of machine learning models can be seen as one aspect of the broader need for transparency in the use of AI (making transparency the superconcept). But “transparency” may also denote “white box” models that are interpretable in themselves. Since the predictions of opaque black box models can nevertheless be explained after the fact, in this sense “transparency” is a subconcept of “explainability”.
Technical tools presuppose and implement different types of explanation. It is important to be aware of the limitations of the particular interpretation of “explainability” that a given tool embodies. Generally speaking, there is a considerable gap between technical implementations of explainability and what humans generally consider to be a good explanation.
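To make the notion of a “technical implementation of explainability” concrete, the following sketch illustrates one common post-hoc technique, permutation feature importance, applied to a black box model. Everything here (the model, the data, the function names) is an illustrative assumption, not a reference to any particular tool; the point is that the “explanation” produced is merely a ranking of features by how much shuffling them hurts accuracy, which is a much narrower notion than an everyday human explanation.

```python
import random

# Hypothetical "black-box" model: we may only call predict(), not inspect it.
# (Here it secretly depends only on feature 0, so we can check the result.)
def predict(x):
    return 1 if x[0] > 0.5 else 0

# Tiny synthetic dataset: rows of [feature_0, feature_1] with labels.
random.seed(0)
X = [[random.random(), random.random()] for _ in range(200)]
y = [1 if row[0] > 0.5 else 0 for row in X]

def accuracy(X, y):
    return sum(predict(x) == t for x, t in zip(X, y)) / len(y)

def permutation_importance(X, y, feature):
    """Drop in accuracy when one feature's values are shuffled.

    A large drop suggests the model relies on that feature; zero drop
    suggests the feature is ignored.
    """
    baseline = accuracy(X, y)
    col = [row[feature] for row in X]
    random.shuffle(col)
    X_perm = [row[:feature] + [v] + row[feature + 1:]
              for row, v in zip(X, col)]
    return baseline - accuracy(X_perm, y)

for f in range(2):
    print(f"feature {f}: importance = {permutation_importance(X, y, f):.3f}")
```

Note what this “explanation” does and does not deliver: it correctly flags that the model relies on feature 0 and ignores feature 1, but it says nothing about why the model maps inputs to outputs as it does, which is often what a human asking for an explanation actually wants.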