
ISO/IEC NP 26655 Information technology — Biometric deepfake attack detection — Testing and reporting

Source:
ISO/IEC
Committee:
IST/44 - Biometrics
Categories:
Information management | Standardization. General rules

Scope

This document establishes a framework for testing and reporting on deepfake attack detection mechanisms in biometric systems. It focuses on ensuring the comparability, repeatability, and transparency of biometric deepfake attack detection evaluations.

This document specifies:

— Principles and methods for the performance assessment of biometric deepfake attack detection mechanisms;

— Reporting of testing results from evaluations of biometric deepfake attack detection mechanisms.

Outside the scope of this document are: 

— Standardization of specific biometric deepfake attack detection mechanisms;

— Detailed information about models (e.g., neural-network based detectors), algorithms, or sensors;

— Overall system-level security or vulnerability assessment beyond the evaluation of biometric deepfake attack detection components.
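The performance assessment named in the first bullet above typically reduces to a pair of error rates. The following is an illustrative sketch only: the function names, scores, and threshold are assumptions, and the two metrics mirror the APCER/BPCER convention established for presentation attack detection in ISO/IEC 30107-3 rather than anything defined by this proposal.

```python
# Illustrative error-rate computation for a deepfake attack detection module.
# Convention assumed here: a higher detector score means "more likely bona fide".

def attack_classification_error_rate(attack_scores, threshold):
    """Proportion of deepfake attack samples wrongly accepted as bona fide."""
    missed = sum(1 for s in attack_scores if s >= threshold)
    return missed / len(attack_scores)

def bona_fide_classification_error_rate(bona_fide_scores, threshold):
    """Proportion of bona fide samples wrongly rejected as attacks."""
    rejected = sum(1 for s in bona_fide_scores if s < threshold)
    return rejected / len(bona_fide_scores)

# Hypothetical detector scores for a small test corpus.
attacks = [0.1, 0.4, 0.7, 0.2]      # scores for deepfake samples
bona_fide = [0.9, 0.8, 0.3, 0.95]   # scores for genuine samples
threshold = 0.5

print(attack_classification_error_rate(attacks, threshold))      # 0.25
print(bona_fide_classification_error_rate(bona_fide, threshold)) # 0.25
```

Reporting both rates at a stated threshold, rather than a single accuracy figure, is what makes results from different vendors comparable.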

Purpose

This proposal aims to address the urgent market demand for standardized testing and reporting in the field of biometric deepfake attack detection. With the rapid development of artificial intelligence technologies, biometric recognition has been widely deployed in high-risk domains such as financial services, public security, and digital identity verification. However, the malicious use of deepfake generation techniques has severely undermined the credibility of these systems. The misuse of deepfake technology enables attackers to deceive existing biometric systems by forging biometric data (such as facial or vocal information), posing serious threats to personal privacy and public security.

To counter such threats, both providers and users of biometric technologies have integrated deepfake attack detection as a critical module in existing biometric systems. Detection technologies for deepfake attacks are still at a developmental stage, however, with significant variation in the security, usability, and reliability of products from different vendors, and corresponding technical and evaluation standards are notably lacking. While presentation attack detection and its related standards are relatively well established, biometric deepfake attacks, as an emerging and more sophisticated threat, have become a vulnerable point in biometric systems.

Therefore, it is particularly urgent to establish unified testing specifications for biometric deepfake attack detection. Such standards would evaluate and measure the functionality and performance of deepfake detection modules in biometric systems, guiding technology providers and users to iteratively enhance deepfake detection capabilities to meet growing security demands.

This standard seeks to resolve these issues by establishing a unified testing and reporting framework. It will introduce consistent datasets, evaluation metrics, and testing protocols to shift the industry's focus from performance-centric benchmarks to reliability, robustness, and real-world applicability. By doing so, it will enhance the credibility and effectiveness of deepfake detection technologies, support compliance with international regulations, particularly for high-risk AI systems, and facilitate cross-market adoption.
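A unified reporting framework of the kind described above implies a machine-readable report that ties results to a specific dataset, protocol, and metric set. The sketch below is purely hypothetical: every field name and value is an assumption used for illustration, not a format defined by this proposal.

```python
# Hypothetical machine-readable test report; field names and values are
# illustrative assumptions, not part of the ISO/IEC NP 26655 proposal.
import json

report = {
    "proposal": "ISO/IEC NP 26655",
    "detector_under_test": "example-vendor-detector-v1",        # assumed name
    "dataset": {"name": "example-deepfake-corpus", "version": "1.0"},
    "protocol": {"threshold": 0.5, "modalities": ["face", "voice"]},
    "metrics": {
        "attack_classification_error_rate": 0.25,
        "bona_fide_classification_error_rate": 0.25,
    },
}

print(json.dumps(report, indent=2))
```

Recording the dataset version and threshold alongside the error rates is what makes a reported result repeatable by a second laboratory.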

For end-users, the standard will deliver three key benefits:

— Enhanced trust and security, by ensuring that biometric systems can reliably detect sophisticated deepfake attacks and by restoring user confidence in digital identity solutions;

— Informed decision-making, through clear testing and reporting guidelines that enable organizations to compare and select detection technologies on transparent, reproducible criteria;

— Global alignment, as governments strengthen AI governance, with this standard serving as a technical benchmark for compliance, promoting international cooperation and a cohesive approach to deepfake risk management.

In summary, this proposal fills a critical gap in the global standards landscape, supports the development of secure and resilient biometric systems, and contributes to building a safer, more trustworthy digital ecosystem.
