This document provides examples of application of the terminology, properties, risk factors, processes, methods, techniques and architectures relating to:
- Use of AI technology within a safety-related function;
- Use of safety-related functions based on conventional technology to ensure the safety of a system using AI technology;
- Use of AI technology to design, develop and verify safety-related functions.
This document includes general considerations on how security threats can affect the safety of an AI system. Unless otherwise specified, this document is applicable to all types of AI technologies. It includes specific details on machine learning (ML). There is no scope change from the original deliverable.
The ISO/IEC TS 22440 series is crafted to serve as a comprehensive guide for professionals involved in the development of safety-related systems that integrate artificial intelligence (AI) into their safety functions. By providing detailed insights and structured guidance, this series empowers developers and engineers to make informed decisions during the design and implementation phases, ensuring that AI components are deployed safely and effectively.
At its core, the series aims to enhance understanding of AI technologies by highlighting both their innovative capabilities and the unique safety challenges they present. It lays out the fundamental characteristics of AI—including its vast potential for adaptive learning and autonomous operation—while also drawing attention to the specific risks inherent to these systems. Moreover, it outlines a range of safety methods and strategies available to mitigate these risks, addressing possible constraints that might arise during integration. This dual focus on enabling benefits and managing hazards is key to fostering a responsible and secure adoption of AI in safety-critical environments.
Beyond mere risk assessment, the ISO/IEC TS 22440 series delves into the broader challenges associated with AI safety. It critically examines issues such as the unpredictability of AI behavior in complex scenarios, the difficulties in validating and verifying AI decision-making processes, and the potential for systemic failures if not managed properly. Importantly, the series doesn’t stop at problem identification—it also presents potential solutions and best practices that can guide the development of robust and resilient safety functions. This structured approach helps organizations navigate the delicate balance between leveraging cutting-edge AI technology and ensuring uncompromised safety.
A particularly noteworthy aspect of this initiative is its collaborative development process. The standard is being formulated by a joint working group that brings together experts from both ISO/IEC JTC 1/SC 42, which represents the AI community, and IEC TC 65/SC 65A, which specializes in functional safety. This alliance embodies a fusion of traditional functional safety methodologies with the novel challenges and opportunities presented by AI. In doing so, it not only reinforces the credibility and applicability of the guidelines but also ensures that the resulting standards are versatile enough to address both conventional safety concerns and the nuanced demands of modern AI-driven systems.