ISO/IEC NP TS 6254 Information technology -- Artificial intelligence -- Objectives and methods for explainability of ML models and AI systems

Scope

This document describes approaches and methods that can be used to achieve the explainability objectives of different stakeholders with regard to ML models and AI systems’ behaviours, outputs, and results. Stakeholders include, but are not limited to, academia, industry, policy makers, and end users. It provides recommendations concerning the applicability of the described approaches and methods to the identified objectives throughout the AI system’s life cycle, as defined in ISO/IEC 22989.

Purpose

When AI is used to help make decisions that impact people’s lives, it is important that people understand how those decisions are made. Achieving useful explanations of the behaviour of AI systems and their components is a complex task. Industry and academia are actively exploring emerging methods for enabling explainability, as well as scenarios and reasons why explainability might be required.

While the overarching goal of explainability is to improve the trustworthiness of AI systems, different stakeholders will have more specific objectives in support of that goal at different stages of the AI life cycle. To illustrate this point, several examples are provided. For developers, explainability improves the safety, reliability, and robustness of an AI system by making it easier to identify and fix bugs. For users, explainability helps in deciding how much to trust an AI system by uncovering potential sources of bias or unfairness. For service providers, explainability is essential for demonstrating compliance with laws and regulations. For policy makers, understanding the capabilities and limitations of different explainability methods helps in developing effective policy frameworks that best address societal needs while promoting innovation.

The proposed work item will describe the applicability and the properties of existing approaches and methods for improving the explainability of ML models and AI systems. As more methods for enabling human understanding of AI systems are developed and refined, the proposed work item will help guide stakeholders through the important considerations involved in selecting and applying such methods. While methods for the explainability of ML models play a central role in achieving the explainability of AI systems, other methods (such as data analytics tools and fairness frameworks) can also contribute to the understanding of AI systems’ behaviour and outputs. The description and classification of such complementary methods are out of scope for the proposed work item. Where necessary, the proposed work item will refer to other publications (potentially including those by ISO/IEC) on the topic.
