CEN/CLC/JTC 21 N 148 AI-enhanced Nudging

Scope

This document provides definitions, concepts, and guidelines specifically addressing AI-enhanced nudging mechanisms used by organisations.

It is intended as a standard that supports existing legislation and allows industry to handle AI-enhanced nudging mechanisms in accordance with applicable standards, guidelines and processes.

It is applicable to “AI-enhanced nudging mechanisms”, a subcategory of digital nudges empowered and enhanced by AI systems. AI-enhanced nudging mechanisms can occur at a very fine level of granularity and are difficult to regulate through hard law or hard ethics. Case studies have shown that although regulations exist at EU level (e.g. the GDPR for personal data or the UCPD for unfair commercial practices), the subtlety and spread of nudging mechanisms make the law difficult to enforce.

It also provides use cases illustrating this subcategory of digital nudges enhanced by AI systems, as well as requirements for designing responsible AI-enhanced nudging mechanisms. Processes and key indicators will accompany the requirements, both horizontally (by industry and sector) and vertically (by application and technology), to develop guidance, self-assessment methodologies and methodologies for third-party audits.

It is not applicable to nudging mechanisms designed by the architects of the decision-making process and embedded in the interfaces of deterministic systems, where the allocation of moral responsibility is direct (i.e. digital nudging mechanisms not enhanced by AI systems).

Purpose

This proposal provides definitions, concepts, and guidelines to address AI-enhanced nudging mechanisms used by organisations. It will support not only organisations that develop or use AI-enhanced nudges, but also consumers, workers and NGOs, which will be able to rely on it to protect the free will of individuals. The objectives are to find the right balance between global and European standardisation, generic and sector-specific standards, life cycle management (audit, feedback loop, revision…), standard criteria, and the acceptable residual risk for releasing an AI-enhanced nudging system.

To establish the conformity of products or services, several watchdog criteria have to be defined:

● ex-ante risk self-assessment and design: ensuring that companies comply with these standard criteria and do not act illegally, and that systems implement warning mechanisms for users,

● and ex-post enforcement for AI systems with specific transparency obligations and for high-risk AI systems: ensuring post-market monitoring and building a stakeholder community to assess the system’s trustworthiness, etc. (a minimal illustrative sketch follows this list).
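To make the two watchdog stages concrete, the following is a minimal sketch of how a conformity record might be modelled in Python: one record for the ex-ante self-assessment gating release, and one for ex-post post-market monitoring. All class names, fields and thresholds here are illustrative assumptions, not part of the proposal or of any standard.

```python
# Illustrative sketch only: hypothetical records for the two watchdog stages
# described above. Names, fields and thresholds are assumptions for clarity.
from dataclasses import dataclass, field


@dataclass
class ExAnteAssessment:
    """Ex-ante risk self-assessment recorded before a nudging feature ships."""
    system_id: str
    risks_identified: list[str] = field(default_factory=list)
    user_warnings_implemented: bool = False

    def release_allowed(self) -> bool:
        # Release only if risks were actually reviewed and user-facing
        # warnings are in place, per the ex-ante criterion above.
        return bool(self.risks_identified) and self.user_warnings_implemented


@dataclass
class ExPostReport:
    """Ex-post, post-market monitoring report for a released system."""
    system_id: str
    complaints_received: int = 0
    transparency_obligations_met: bool = True

    def needs_stakeholder_review(self, complaint_threshold: int = 10) -> bool:
        # Flag the system for stakeholder review when complaints accumulate
        # or the specific transparency obligations lapse.
        return (self.complaints_received >= complaint_threshold
                or not self.transparency_obligations_met)


if __name__ == "__main__":
    ex_ante = ExAnteAssessment("recommender-v2",
                               risks_identified=["dark-pattern risk"],
                               user_warnings_implemented=True)
    ex_post = ExPostReport("recommender-v2", complaints_received=3)
    print("release allowed:", ex_ante.release_allowed())          # True
    print("review needed:", ex_post.needs_stakeholder_review())   # False
```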

Providing a robust monitoring and evaluation mechanism is crucial to ensure the effective deployment of ethical AI systems.
