ISO/IEC NP TS 25568 Information technology — Artificial Intelligence — Guidance on addressing risks in generative AI systems

Scope

This document provides guidance on addressing risks in generative artificial intelligence (AI) systems.
It includes:
- The objectives of generative AI systems to consider when identifying risks.
- The risk sources in generative AI systems, and the stakeholders facing those risks throughout the system life cycle.
- Guidance on risk analysis, risk treatment and controls for addressing risks in generative AI systems.

 

Purpose

Generative Artificial Intelligence (AI) is a type of AI based on techniques and generative models that
aim to generate new content (defined in ISO/IEC 22989:2022/AWI Amd 1). Its performance in
knowledge learning, inductive summarization, content creation, perception and cognition is distinctly
different from that of previous AI technologies. It has greater generalization capability and interactivity,
and is therefore extensively integrated into a wide range of scenarios.
There are several new features of generative AI, including but not limited to the following.
• New content is generated by modelling the patterns in vast quantities of training data, rather than
by recognizing or classifying existing content.
• Long context windows and the self-attention mechanism enable different attention weights to be
given to the relationships between various parts of the user input, so that users get a better
interactive experience.
• Content is easily generated via natural language conversation.
• The foundation model can be fine-tuned at low cost and then applied widely and at large scale.
• Generated content is highly convincing and more aligned with human habits, as generative AI
generalizes more broadly.
• Greater randomness is introduced into generated content, because generative AI is based on
next-token prediction.
• The automatic generation of content (text, pictures, sounds) strongly affects the organization of
work in many professions where such content has so far been created by human beings.
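The randomness noted above comes from sampling over a predicted next-token distribution rather than always choosing the single most likely token. As a minimal illustration (the four-token vocabulary, logit values and temperatures below are hypothetical and not tied to any particular model):

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=None):
    """Sample a next-token index from raw scores (logits).

    Higher temperature flattens the distribution, increasing randomness;
    temperature near zero approaches greedy (deterministic) decoding.
    """
    rng = rng or random.Random()
    scaled = [score / temperature for score in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]  # numerically stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one token index according to the resulting probabilities.
    return rng.choices(range(len(probs)), weights=probs, k=1)[0]

# Hypothetical scores for a 4-token vocabulary.
logits = [2.0, 1.0, 0.5, 0.1]
low_t = sample_next_token(logits, temperature=0.1, rng=random.Random(0))
high_t = sample_next_token(logits, temperature=2.0, rng=random.Random(0))
```

Low temperatures concentrate probability mass on the highest-scoring token; higher temperatures flatten the distribution, which is one source of the variability in generated content.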
Therefore, industry has expressed concerns about the potential risks of generative AI. Generative
AI brings new risks and exacerbates existing AI risks, which include, but are not limited to, the following.
• Easy access to knowledge might make it easier for malicious users without specialized training to
cause harm to society (e.g., CBRN knowledge, malware); at the same time, it positively increases the
productivity of research and innovation.
• Generative AI's strong generalization ability might result in hallucination; at the same time, this
ability allows diverse user input to be processed.
• Generated content that contains faults or misaligns with regulations and ethics might mislead
downstream applications into making incorrect decisions or even taking harmful actions; at the same
time, generated content can empower various applications, such as AI agents.
• Generative AI might generate unethical content that harms individuals and society.
• Over-reliance on generative AI might cause humans to be manipulated, especially when they have
no detailed knowledge of how generative AI works.
• Generated content might cause sensitive information leakage; at the same time, users can benefit
from customized personal assistants by feeding personal information to generative AI.
• Generated content might cause copyright infringement.
• Continuous learning based on user feedback can be leveraged to mislead generative AI behaviour; meanwhile, it can enable better alignment with human preferences.
• Prompt-based attacks expand the attack surface.
Generative AI systems can involve multiple stakeholders, and managing generative AI risks relies on the participation of these various stakeholders. However, no standard defines the stakeholders' responsibilities. Moreover, making stakeholders responsible for addressing risks at AI system
life cycle stages where they have no risk control capability can be highly inefficient and
resource-intensive.
While several existing ISO standards are also dealing with AI risks, there are still some gaps.
• ISO/IEC 23894:2023 Information technology — Artificial intelligence — Guidance on risk
management covers the risk management process, AI-related objectives and AI risk sources, but it
does not provide objectives or risk sources related to generative AI systems, and it lacks granularity
in risk analysis against the objectives of generative AI systems, including the specific risk
consequences and the specific factors to consider in likelihood assessment.
• ISO/IEC 42001:2023 Information technology — Artificial intelligence — Management system
covers controls for addressing risks related to the design and operation of AI systems, but it does not
provide controls related to the risks of generative AI systems.
• Also, the existing ISO standards fail to identify the stakeholders responsible for addressing risks
related to generative AI systems.
Therefore, this standard aims to achieve the following objectives.
• Develop new and refined objectives to manage risks of generative AI systems.
• Identify the risk sources related to generative AI systems.
• Identify the stakeholders responsible for addressing risks in generative AI systems throughout their
life cycles.
• Conduct a fine-granular risk analysis against the objectives of generative AI systems, including
specific consequences and specific factors to consider in likelihood assessment where applicable.
• Specify risk treatment and controls for generative AI systems.
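A fine-granular analysis of this kind can be sketched as a simple risk register that scores each objective/risk-source pair by likelihood and consequence. The objective names, risk sources and 1–5 scales below are illustrative assumptions, not terms defined by this document:

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    objective: str      # e.g. "factual accuracy" (illustrative, not normative)
    risk_source: str    # e.g. "hallucination", "prompt-based attack"
    likelihood: int     # assumed scale: 1 (rare) .. 5 (almost certain)
    consequence: int    # assumed scale: 1 (negligible) .. 5 (severe)

    @property
    def level(self) -> int:
        # Simple likelihood x consequence risk level.
        return self.likelihood * self.consequence

def prioritize(register):
    """Order entries so the highest risk levels are treated first."""
    return sorted(register, key=lambda e: e.level, reverse=True)

register = [
    RiskEntry("factual accuracy", "hallucination", likelihood=4, consequence=3),
    RiskEntry("privacy", "sensitive information leakage", likelihood=2, consequence=5),
    RiskEntry("security", "prompt-based attack", likelihood=3, consequence=4),
]
ranked = prioritize(register)
```

Treating entries in descending order of risk level is one common convention; a real treatment plan would also map each entry to the stakeholder with risk control capability at the relevant life cycle stage, as the purpose statement above argues.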

 
