NWIP: Taxonomy of AI system methods and capabilities

Scope

This document provides guidance on the classification of AI systems by describing a taxonomy of methods and capabilities. The taxonomy enables AI stakeholders to describe AI systems and to establish a common understanding of them. This document applies to all types of organizations involved in any of the lifecycle stages of AI systems, as well as to all AI stakeholder roles.

Purpose

Within the Vienna Agreement, the international project ISO/IEC 42102 "Taxonomy of AI system methods and capabilities" is hereby proposed for adoption at the European level and for the development of a European standard. The working draft of ISO/IEC 42102 is an editorially and technically complete document whose content is currently being negotiated from various perspectives at the international level.

Using the taxonomy outlined in ISO/IEC 42102 offers a clear basis for understanding the technical aspects of AI systems. This clarity makes it easier to assess AI methods (approaches and algorithms) as well as the capabilities (abilities and functionalities) they realise. With this improved understanding, one can better plan and execute actions that meet both legislative and ethical requirements. In this context, the taxonomy serves as a reliable foundation for describing AI systems, helping to categorise their methods and capabilities through a scientific lens.

The document can be used to fulfil a portion of the AI Act-related expectations in the standardisation request on the subject of "transparency", e.g. with regard to

a) the operation of AI systems;

b) instructions for use;

c) instructions on the AI system’s capabilities and limitations;

i) identify and appropriately distinguish information that is relevant and comprehensible for different professional user profiles and lay users.

Below are some quotes from the EU AI Act which underpin the value of ISO/IEC 42102 for describing the capabilities of AI systems. Apart from ISO/IEC 42102, there is currently no standard for specifying the capabilities of AI systems, which means that ISO/IEC 42102 closes a very wide gap in the comprehensibility of AI-specific functionalities:

- Transparency means that AI systems are developed and used in a way that allows appropriate traceability and explainability, while making humans aware that they communicate or interact with an AI system, as well as duly informing deployers of the capabilities and limitations of that AI system and affected persons about their rights;

- Such information should include the general characteristics, capabilities and limitations of the system, algorithms, data, training, testing and validation processes used as well as documentation on the relevant risk-management system and drawn up in a clear and comprehensive form;

- ..... often provided by downstream providers that necessitate a good understanding of the models and their capabilities, both to enable the integration of such models into their products, and to fulfil their obligations under this or other regulations;

- The full range of capabilities in a model could be better understood after its placing on the market or when deployers interact with the model.

In general, the taxonomy serves as a basis for

- transparency with regard to the implemented algorithms and thereby realisable capabilities;

- documenting AI systems within product accompanying documents;

- structuring traces of algorithm- and functionality-related actions as well as quality characteristics during logging procedures;

- the development of AI system databases;

- clarity about an AI system’s technical composition while applying further AI-related standardisation as well as legislative documents;

- outlining the interaction between pairs of algorithms and functionalities in complex AI systems;

- uniform labelling of AI systems;

- depicting the technical composition of AI systems while outlining the context-dependent criticality levels;

- a detailed description of AI systems before their introduction and launch;

- accelerating the development of AI pipelines;

- clarification of an AI system's technical basis in requirements catalogues, making the client's expectations towards contractors explicit;

- specification of AI systems' quality criteria depending on their methods and capabilities;

- mapping the functioning of algorithms in AI systems in individual steps;

- associating specific test/certification procedures with pairs of methods and capabilities;

- associating skills with pairs of methods and capabilities that experts must have for the assessment of laboratories and certification bodies;

- a structured overview of AI systems recalled from the European single market;

- communicating AI system traits among stakeholders;

- checking/tracking the fulfilment of specific requirements related to methods and capabilities (e.g. via checklists and models in the Asset Administration Shell), as illustrated in the sketch after this list.
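
To illustrate how a taxonomy-based description might be used in practice, for example to populate an AI system database or to drive a machine-readable checklist, the following minimal Python sketch models an AI system as a set of pairs of methods and capabilities and checks the entries against a small controlled vocabulary. The vocabulary terms, class names and the validate() helper are illustrative assumptions made for this sketch; they are not the terminology or structure defined in ISO/IEC 42102.

```python
from dataclasses import dataclass, field

# Illustrative vocabulary only; the actual taxonomy entries are defined
# in ISO/IEC 42102 and may differ.
METHODS = {"supervised_learning", "reinforcement_learning", "knowledge_representation"}
CAPABILITIES = {"image_classification", "natural_language_generation", "planning"}

@dataclass
class MethodCapabilityPair:
    """One implemented method and the capability it realises."""
    method: str
    capability: str

@dataclass
class AISystemDescription:
    """Taxonomy-based description of an AI system, e.g. for documentation or databases."""
    name: str
    pairs: list[MethodCapabilityPair] = field(default_factory=list)

    def validate(self) -> list[str]:
        """Return any entries that are not part of the controlled vocabulary."""
        issues = []
        for pair in self.pairs:
            if pair.method not in METHODS:
                issues.append(f"unknown method: {pair.method}")
            if pair.capability not in CAPABILITIES:
                issues.append(f"unknown capability: {pair.capability}")
        return issues

# Usage: describe a system and check it against the (illustrative) vocabulary.
system = AISystemDescription(
    name="document triage assistant",
    pairs=[MethodCapabilityPair("supervised_learning", "image_classification")],
)
print(system.validate())  # [] means all entries are in the vocabulary
```

Keeping the description as structured data in this way is what lets the use cases above, such as AI system databases, uniform labelling and checklist-based tracking, build on one shared vocabulary.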

Comment on proposal

Please email further comments to: debbie.stead@bsigroup.com
