Scope
This document addresses organizational and technical solutions aimed at ensuring the cybersecurity of high-risk AI systems over their lifecycle, appropriate to the relevant circumstances and the risks. The technical solutions to address AI-specific vulnerabilities include, where appropriate, measures to prevent, detect, respond to, resolve and control attacks that attempt to manipulate the training dataset (‘data poisoning’) or pre-trained components used in training (‘model poisoning’), inputs designed to cause the model to make a mistake (‘adversarial examples’ or ‘model evasion’), confidentiality attacks, or model flaws. This document provides objective criteria to enable decisions on whether a given technical or organizational solution adequately achieves a given vulnerability-related goal.
Purpose
This new work item (NWI) will address organizational and technical solutions aimed at ensuring the cybersecurity of high-risk AI systems over their lifecycle, which will be appropriate to the relevant circumstances and the risks. The technical solutions to address AI-specific vulnerabilities will include, where appropriate, measures to prevent, detect, respond to, resolve and control attacks that attempt to manipulate the training dataset (‘data poisoning’) or pre-trained components used in training (‘model poisoning’), inputs designed to cause the model to make a mistake (‘adversarial examples’ or ‘model evasion’), confidentiality attacks, or model flaws.
The NWIP is based on
- the draft “Standardisation request to the European Standardisation Organisations in support of safe and trustworthy artificial intelligence” of May 2022, including Annex 1 with No. 8 Cybersecurity, and
- the AI Act in its latest version from March 2024 and its requirements related to cybersecurity.
Standardization Request #8:
Cybersecurity specifications for AI systems:
This (these) European standard(s) or European standardisation deliverable(s) shall provide suitable organisational and technical solutions, to ensure that AI systems are resilient against attempts to alter their use, behaviour, or performance or to compromise their security properties by malicious third parties exploiting the AI systems’ vulnerabilities. Organisational and technical solutions shall therefore include, where appropriate, measures to prevent and control cyberattacks trying to manipulate AI specific assets, such as training data sets (e.g. data poisoning) or trained models (e.g. adversarial examples), or trying to exploit vulnerabilities in an AI system’s digital assets or the underlying ICT infrastructure. These technical solutions shall be appropriate to the relevant circumstances and risks.
This (these) European standard(s) or European standardisation deliverable(s) shall take due account of the essential requirements for products with digital elements as listed in Sections 1 and 2 of Annex I to the proposal for a Regulation of the European Parliament and the Council on horizontal cybersecurity requirements for products with digital elements (COM(2022) 454 final of 15 September 2022).