This document describes a generic framework for describing and characterizing an AI system as an object of conformity assessment. It specifies requirements and provides guidance for the consistent application of this framework in support of all types of conformity assessment activities. The framework provides structured elements for describing relevant characteristics of an AI system, including system boundaries, intended purpose, operational context, and other attributes necessary to enable transparent evaluation against the requirements for which conformity is to be achieved. This document does not define conformity assessment procedures. It does not replace or modify existing conformity assessment or sector-specific standards.
The document is applicable to AI systems as defined in ISO/IEC 22989. The extent of the application of the document depends on the specific requirements and context of the conformity assessment.
NOTE: It can be used within first-party, second-party, and third-party conformity assessment systems.
Justification
Conformity assessment of AI systems requires a clear, structured, and comprehensive description of the object of conformity assessment (OCA). According to the ISO/IEC 17000 series, the object of conformity assessment is the entity to which specified requirements apply. In the context of artificial intelligence, this entity exhibits characteristics that differ substantially from those of traditional IT systems. AI systems may involve adaptive behavior, data-dependent performance, probabilistic outputs, or complex interactions between software, hardware, models, data, and operational environments. These characteristics must therefore be captured in an AI-specific description of the OCA.
AI systems occur in a wide variety of forms, ranging from standalone applications based on a single machine-learning model to embedded components within larger ICT pipelines, as well as complex, multi-component systems employing hybrid AI techniques or human-AI interaction. Because of this diversity, establishing a common framework for describing AI systems as objects of conformity assessment is essential. Such a framework supports a shared understanding of which elements constitute the OCA, how the boundaries of an AI system can be defined, and how to represent relevant operational and lifecycle aspects. Consistent descriptions across systems and organizations form an essential foundation for ensuring comparability, reproducibility, and transparency in AI-related conformity assessment activities.
PURPOSE
While ISO/IEC 42007 provides overarching guidance for developing conformity assessment schemes for AI systems, it does not define how to describe the object of conformity assessment itself. This document therefore complements ISO/IEC 42007 by establishing a framework for describing AI systems as objects of conformity assessment, specifying requirements for the minimum elements that such a description shall contain, and providing guidance on how the framework may be applied across different types of AI systems and conformity assessment activities. This document does not claim to provide guidance on collecting all the information necessary to specify an OCA; it provides guidance only for information relating to the AI-specific part of the OCA.
The framework defined in this document enables the structured representation of an AI system’s scope, boundaries, intended function, and operational environment. It also allows the inclusion of optional descriptive elements when needed for specific assessment contexts. This format supports the transparent articulation and communication of trustworthiness aspects and facilitates the consistent linking of AI systems to applicable requirements and evaluation activities.
NEED FOR THE STANDARD
For conformity assessment bodies
Conformity assessment bodies require stable, repeatable, and harmonized methods for describing the objects they assess. In the AI domain, the absence of a common framework has resulted in inconsistent descriptions that vary in structure, terminology, and level of detail. This inconsistency complicates the planning and execution of assessments, the determination of applicable requirements, and the comparison of assessment outcomes across conformity assessment bodies, schemes, and jurisdictions.
This document provides conformity assessment bodies with:
• a consistent and internationally harmonized baseline for structuring AI-specific OCA descriptions;
• clarity in defining assessment boundaries;
• improved reproducibility and predictability of conformity assessment activities;
• enhanced comparability and interoperability between assessment results produced by different conformity assessment bodies.
By supporting the generation of consistent OCA descriptions, the framework strengthens confidence in conformity assessment outcomes and contributes to the integrity of AI-related assessment schemes.
For industry
Organizations that develop, integrate, deploy, or procure AI systems benefit from clear expectations regarding the documentation required for conformity assessment. Currently, industry faces uncertainty about which system characteristics must be described, and how to prepare documentation that satisfies diverse assessment schemes and regulatory contexts. This lack of clarity can create unnecessary burden, inefficiencies, and inconsistent documentation practices.
This document supports industry by providing:
• a clear, structured, and internationally accepted framework for describing AI systems for conformity assessment purposes;
• predictable documentation requirements that reduce uncertainty and facilitate efficient preparation for assessments;
• improved comparability across markets, aiding organizations operating in multiple jurisdictions.
By offering a consistent and neutral framework, this document reduces compliance-related friction and promotes efficient market access for AI-enabled products and services.
For policy makers
Policy makers require harmonized, technology-agnostic tools that support effective oversight, risk-based governance, and cross-border regulatory coherence in the AI domain. Without a standardized framework for describing AI systems as objects of conformity assessment, policy implementation risks becoming fragmented, costly, and difficult to align across sectors, jurisdictions, or regulatory regimes.
This document provides policy makers with:
• Improved transparency and auditability, enabling oversight authorities to review system descriptions more efficiently and consistently.
• Support for risk-based regulatory implementation, including more effective classification of system risks and targeted application of regulatory obligations.
• Cross-border regulatory interoperability, reducing divergence in documentation expectations and enabling cooperation across national or regional oversight systems.
• Long-term policy stability, as the framework is designed to accommodate diverse and evolving AI system architectures, lifecycle models, and deployment contexts.