AI Explainability

Understanding how AI systems arrive at decisions

Transparency · Legal · Technical · Governance
Updated 18 October 2025

Definition

AI explainability refers to the ability to understand, interpret, and communicate how an AI system arrives at its outputs or decisions. It is a subset of the broader concept of transparency, often focusing on model behavior, decision logic, and input-output relationships.

Legal Context / Relevance

Explainability is a core principle in many AI governance frameworks. It supports accountability, non-discrimination, and due process. Two primary regulatory references are:

  • EU AI Act – Article 13: Requires high-risk AI systems to be designed and developed so that their operation is sufficiently transparent for deployers to interpret the system's output and use it appropriately, with an "appropriate type and degree of transparency", especially where systems affect safety or fundamental rights. Instructions for use must also describe the system's capabilities and limitations.
  • NIST AI Risk Management Framework: Lists "explainable and interpretable" among the key characteristics of trustworthy AI systems, treating these properties as essential for managing risk and supporting governance objectives.

Examples in Practice

  • A bank using an AI system to assess loan applications must provide a human-readable explanation when a loan is denied, typically by identifying the factors that most influenced the decision (see the sketch after this list).
  • In healthcare, an AI diagnostic tool should be able to explain which features in the input data led to a particular diagnosis.
  • In law enforcement, a predictive policing system may be challenged if its risk assessments cannot be adequately explained to the affected parties.
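
The loan-denial example above illustrates the most common pattern: attributing a decision to the input features that most influenced it. Below is a minimal sketch in Python, assuming a hypothetical logistic-regression scoring model trained on synthetic data; the feature names and applicant values are illustrative, not drawn from any real lender. For a linear model the attribution is exact: each feature's contribution to the approval log-odds is its coefficient times its standardized value, so the strongest negative contributors can be reported as human-readable reason codes.

# Reason-code style explanation for a denied loan application.
# Hypothetical model and synthetic data; illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
feature_names = ["income", "debt_to_income", "credit_history_years", "recent_defaults"]

# Synthetic records standing in for historical loan outcomes.
X = rng.normal(size=(500, 4))
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] - X[:, 3] > 0).astype(int)

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

def explain_denial(applicant, top_k=2):
    # For a linear model the attribution is exact:
    # contribution to the approval log-odds = coefficient * standardized feature value.
    z = scaler.transform(applicant.reshape(1, -1))[0]
    contributions = model.coef_[0] * z
    order = np.argsort(contributions)  # most denial-driving (most negative) first
    return [(feature_names[i], float(contributions[i])) for i in order[:top_k]]

# Hypothetical applicant, expressed in the same synthetic feature space.
applicant = np.array([-1.2, 1.8, -0.5, 2.0])
for name, contribution in explain_denial(applicant):
    print(f"{name}: contribution to approval log-odds = {contribution:+.2f}")

For non-linear models the same pattern is approximated rather than computed exactly, using post-hoc attribution methods such as SHAP or LIME.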

Applications & Use Cases

  • Loan decision justification
  • Clinical AI diagnostics
  • Model audit and certification
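
For the model-audit use case above, one widely used technique is permutation importance: shuffle one input feature at a time on held-out data and measure how much the model's accuracy degrades, revealing which inputs the model actually relies on. The sketch below uses a hypothetical random-forest classifier on synthetic data; the feature count and model choice are illustrative assumptions.

# Auditing which inputs a model actually relies on, via permutation importance.
# Hypothetical model and synthetic data; illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))
# Only the first three features drive the synthetic outcome; the rest are noise.
y = (X[:, 0] + 2 * X[:, 1] - X[:, 2] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
model = RandomForestClassifier(random_state=1).fit(X_train, y_train)

# Shuffle each feature in turn and record the drop in held-out accuracy:
# large drops indicate features the model genuinely depends on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=1)
for i, drop in enumerate(result.importances_mean):
    print(f"feature_{i}: mean accuracy drop = {drop:.3f}")

An audit report would typically pair such figures with the behaviour documented for the system, flagging any feature whose measured influence contradicts the stated design (for example, an apparent proxy for a protected attribute).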

Risks & Considerations

  • Opacity in high-risk systems, where unexplained outputs undermine oversight and redress
  • Discriminatory outcomes that cannot be detected or contested without explanations
  • Regulatory non-compliance with transparency obligations such as those in the EU AI Act