
IEC 65/1117/NP N 500 Information technology — Artificial intelligence — Guidance and requirements for uncertainty quantification in AI systems

Scope

This document specifies general and technical guidance and requirements for the development and use of methods for the quantification of uncertainties in AI systems.

This document defines fundamental terminology for uncertainty quantification in AI systems, along with the characteristics of selected approaches to uncertainty quantification. These characteristics and approaches are then illustrated through selected applications.

This document describes aspects of quantification of uncertainties for all stages of the AI system life cycle.

Purpose

Rapid advancements in artificial intelligence (AI) and machine learning (ML) have led to the adoption of AI and ML algorithms in almost every industrial sector and scientific field. ML methods are also becoming increasingly important in the development of real-world and often safety-critical AI systems. Robust and safe operation of ML elements and AI systems is essential and can be supported by uncertainty quantification. Given the current and envisaged importance of the concepts developed in this document, and the need for objectively verifiable criteria, this document contains requirements.

Uncertainty quantification is a broad and extensively researched discipline, often linked with fields such as numerical modeling and simulation. Uncertainty quantification makes it possible to estimate the uncertainty associated with models, algorithms, and predicted results [1]. The estimation of uncertainty is also of major interest and importance in AI systems.
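For illustration only, the following is a minimal Python sketch of one widely used uncertainty quantification approach, a bootstrap ensemble, in which the spread of predictions across ensemble members serves as the uncertainty estimate for each input. The toy data, polynomial model, and ensemble size are assumptions chosen for brevity, not methods prescribed by this document.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy 1-D regression data (illustrative assumption, not from the proposal).
    X = rng.uniform(-3.0, 3.0, size=(200, 1))
    y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=200)

    def fit_poly(X, y, degree=3):
        """Least-squares polynomial fit; returns the coefficient vector."""
        A = np.vander(X[:, 0], degree + 1)
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        return coef

    def predict_poly(coef, X):
        A = np.vander(X[:, 0], len(coef))
        return A @ coef

    # Train an ensemble, each member on a bootstrap resample of the data.
    members = []
    for _ in range(20):
        idx = rng.integers(0, len(X), size=len(X))
        members.append(fit_poly(X[idx], y[idx]))

    # Ensemble mean is the prediction; ensemble spread (std) is the
    # per-input uncertainty estimate.
    X_test = np.array([[0.0], [5.0]])  # 5.0 lies outside the training range
    preds = np.stack([predict_poly(c, X_test) for c in members])
    mean, std = preds.mean(axis=0), preds.std(axis=0)
    print(mean, std)  # std is typically larger for the out-of-range input

The larger spread on the out-of-range input illustrates why such estimates are informative when a model is queried outside the domain covered by its training data.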

Due to the complexity of many AI and ML applications, training data often cannot represent the entire intended domain of use, and scenarios will occur that were insufficiently covered during the development of the AI system. High-quality uncertainty estimates can support better runtime monitoring. Uncertainty quantification is, therefore, one of the building blocks of the general safety of AI. Beyond that, uncertainty quantification is also highly relevant to other aspects of AI and ML such as transfer learning, active learning, reinforcement learning, and data fusion [2].
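As a minimal sketch of uncertainty-based runtime monitoring in Python, assuming a deployed classifier that exposes per-class probabilities: the model output is accepted only when its predictive entropy is low enough. The threshold value and fallback behaviour are illustrative assumptions; in practice both are tuned per application.

    import numpy as np

    def predictive_entropy(probs: np.ndarray) -> float:
        """Shannon entropy of a predictive distribution (higher = less certain)."""
        p = np.clip(probs, 1e-12, 1.0)
        return float(-(p * np.log(p)).sum())

    ENTROPY_THRESHOLD = 0.5  # illustrative assumption, tuned per application

    def monitored_decision(probs: np.ndarray):
        """Accept the model output only when its uncertainty is low enough."""
        if predictive_entropy(probs) > ENTROPY_THRESHOLD:
            return None  # defer to a fallback, e.g. a human operator
        return int(np.argmax(probs))

    print(monitored_decision(np.array([0.97, 0.02, 0.01])))  # confident -> 0
    print(monitored_decision(np.array([0.40, 0.35, 0.25])))  # uncertain -> None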

Uncertainty quantification in AI systems (and especially in ML and deep learning) is an established but still fast-growing field of research. While scientific advancements have created a solid base for the broader adoption of uncertainty quantification in AI systems, there is a lack of standardization work that reflects the state of the art and helps stakeholders apply uncertainty quantification in practice. To address this gap, this document defines essential terminology (Clause 3), introduces important aspects of uncertainty quantification (Clause 4), describes selected applications of uncertainty quantification (Clause 5), provides an overview of uncertainty quantification approaches and their general characteristics (Clause 6), and formulates guidance and requirements for uncertainty quantification in AI systems (Clause 7). Lastly, the document is supplemented with informative annexes that provide additional explanations (Annex A and Annex B) and use case examples (Annex C).

The following standards (published or under development) relate to this proposed NP:

• ISO/IEC TR 5469:2024 - Artificial intelligence — Functional safety and AI systems

• ISO/IEC AWI TS 22440 - Artificial intelligence — Functional safety and AI systems — Requirements

• ISO/IEC TR 24029-1:2021 - Artificial intelligence (AI) — Assessment of the robustness of neural networks — Part 1: Overview

• ISO/IEC 24029-2:2023 - Artificial intelligence (AI) — Assessment of the robustness of neural networks — Part 2: Methodology for the use of formal methods

• ISO/IEC AWI 24029-3 - Artificial intelligence (AI) — Assessment of the robustness of neural networks — Part 3: Methodology for the use of statistical methods

• ISO/IEC CD 12792 - Information technology — Artificial intelligence — Transparency taxonomy of AI systems

• ISO/IEC TS 8200 - Information technology — Artificial intelligence — Controllability of automated artificial intelligence systems

• ISO/IEC 22989:2022 - Information technology — Artificial intelligence — Artificial intelligence concepts and terminology

• ISO/IEC 23894:2023 - Information technology — Artificial intelligence — Guidance on risk management

However, these documents do not consider uncertainty quantification in AI systems in detail, which motivated this NP dedicated to the topic.
[1] Smith, R.C. 2013. Uncertainty Quantification: Theory, Implementation, and Applications. SIAM, Philadelphia, Pennsylvania.
[2] Gawlikowski, J., Tassi, C. R. N., Ali, M., Lee, J., Humt, M. et al. 2023. A Survey of Uncertainty in Deep Neural Networks. Artificial Intelligence Review, pp. 1–77. https://doi.org/10.1007/s10462-023-10562-9
