This standard defines an attack potential table for deep learning-based image recognition that can be used for various areas such as face recognition, autonomous driving and video surveillance.
Although primarily intended for such applications, this standard may be used in other fields where desired.
We are seeing increasing application of deep learning techniques to a wide range of IT products or TOEs. One prominent example is image recognition, a technology that enables TOEs to interpret and categorize visual data, much as humans understand images, using deep learning models to identify objects, patterns, and features within digital images or videos. Such TOEs can be used for diverse purposes, including face recognition for user authentication, object detection for autonomous driving, and weapon detection for video surveillance.
We are also seeing evolving attack techniques against deep learning models. A well-known example is the adversarial attack, a deception technique that can fool a deep learning model using a maliciously crafted input. For example, an object detector for autonomous driving can mistake a stop sign for a speed limit sign when an adversarial attack attaches a small perturbation (i.e., an adversarial patch) to the sign.

Under ISO/IEC 15408 evaluation, the third-party evaluation laboratory shall perform a search of public domain sources to identify potential vulnerabilities in the TOE. Evaluation laboratories usually search the internet for relevant information, such as research papers describing successful attack methods. However, it may be difficult for the evaluation laboratory to follow this requirement when evaluating a deep learning-based TOE. The laboratory can use Google Scholar to identify relevant research papers; however, about 2,750 papers published from 2022 to 2024 include “adversarial attacks” in the title (only 57 papers in the same period include “buffer overflow” in the title). The laboratory may not need to examine all of them, because some papers only target a specific modality, such as voice recognition, that may be out of scope of the TOE; still, 2,750 is a huge number to handle. The laboratory shall also search for and identify other potential vulnerabilities such as poisoning (613 papers in the same period include “poisoning attacks” in the title) or privacy attacks (568 papers in the same period include “deep learning privacy” in the title), but it is not desirable to spend too much time on this task alone, especially for low assurance evaluations that should be completed in a few months.
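To illustrate the kind of attack these papers typically describe, below is a minimal sketch that generates a digital adversarial example with the fast gradient sign method (FGSM), one of the earliest and most cited techniques. The torchvision classifier, the input shape and the epsilon value are illustrative assumptions, not taken from any specific TOE.

```python
# Minimal FGSM sketch (PyTorch); the classifier and epsilon are illustrative.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def fgsm(image: torch.Tensor, label: torch.Tensor, eps: float = 0.03) -> torch.Tensor:
    """Return x + eps * sign(grad_x loss): a one-step adversarial example."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Perturb each pixel in the direction that increases the loss.
    adv = image + eps * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()  # keep pixels in the valid range

# Usage (assumed): x is a (1, 3, 224, 224) image batch, y its true label.
# x_adv = fgsm(x, y)
```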
The evaluation laboratory can filter out papers based on the attack potential defined in ISO/IEC 18045. Attack potential is a numerical expression of the attacker capability required to execute an attack scenario that exploits a vulnerability. It is expressed as the sum of the numerical values assigned to each of five factors (Elapsed Time, Expertise, Knowledge of TOE, Window of Opportunity, and Equipment). The evaluation laboratory can filter out those papers that require a higher attack potential than the targeted one.
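As a worked illustration of this calculation, the sketch below sums five hypothetical factor values and maps the total to a level. The factor values and level thresholds are examples in the style of ISO/IEC 18045; the normative table in that standard should be consulted for the actual figures.

```python
# Illustrative attack potential calculation in the style of ISO/IEC 18045;
# the factor values and thresholds below are assumptions for illustration.
FACTORS = {
    "elapsed_time": 1,           # e.g. up to one week
    "expertise": 3,              # e.g. proficient attacker
    "knowledge_of_toe": 0,       # e.g. public information only
    "window_of_opportunity": 1,  # e.g. easy access to the TOE
    "equipment": 4,              # e.g. specialised equipment
}

def level(total: int) -> str:
    """Map a summed score to one of the four attack potential levels."""
    if total <= 9:
        return "Basic"
    if total <= 13:
        return "Enhanced-Basic"
    if total <= 19:
        return "Moderate"
    return "High"

total = sum(FACTORS.values())
print(total, level(total))  # 9 Basic
```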
However, we see two issues with this table in ISO/IEC 18045.
a) The table is generic so that it can cover all types of IT products, and it contains no specific information about attacks on deep learning. It is not possible to apply this table to a deep learning-based TOE without modification or additional guidance.
b) The table defines four levels of attack potential (Basic, Enhanced-Basic, Moderate and High) and does not match existing regulations such as the EU Cybersecurity Act (CSA), which defines three levels of assurance (Basic, Substantial or High).
The smartcard technical community (JIL) developed attack potential tables adjusted to a specific technology, smart cards, and a similarly adjusted table for deep learning should also be developed. This means the five factors (Elapsed Time, Expertise, Knowledge of TOE, Window of Opportunity and Equipment) in the table should be modified or refined using well-known deep learning terms that often appear in deep learning research papers (e.g., black-box or white-box attacks, necessary computing resources (GPU time) or the minimum number of queries to the model for creating adversarial examples, and the imperceptibility of adversarial attacks), as sketched below. Guidance on how to adjust the four levels of attack potential to existing regulations should also be provided.
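As one possible direction, the sketch below shows how three of the five factors could be rephrased in deep learning terms. The category names and point values are hypothetical assumptions for illustration, not taken from any published table.

```python
# Hypothetical refinement of three attack potential factors in deep
# learning terms; all categories and point values are invented examples.
def knowledge_of_toe(access: str) -> int:
    # Black-box vs. white-box access maps naturally onto "Knowledge of TOE":
    # attacks needing more internal knowledge score a higher attack potential.
    return {"black-box": 0,          # no internal knowledge of the model
            "grey-box": 4,           # e.g. architecture known, weights unknown
            "white-box": 8}[access]  # weights, training data, defences known

def equipment(gpu_hours: float) -> int:
    # Computing resources needed to craft the adversarial examples.
    return 0 if gpu_hours < 1 else 2 if gpu_hours < 100 else 5

def window_of_opportunity(queries: int) -> int:
    # Minimum number of queries to the model needed by the attack.
    return 0 if queries <= 100 else 3 if queries <= 100_000 else 6
```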
For example, if the risk to users of the image recognition TOE is low (e.g., image recognition for non-security-critical personal use), the evaluation laboratory should focus on attacks requiring low attack potential, such as black-box attacks assuming no or minimal knowledge about the TOE. The attacker is assumed to have little knowledge of deep learning, so attacks that train a surrogate model to generate adversarial examples or patches, a popular attack method in many research papers, should also be filtered out.
If the risk to users of the image recognition TOE is significant, such that protection measures against known attack scenarios are needed (e.g., a deep learning-based automated cashier that automatically recognizes items such as drinks to speed up checkout), the attacker may attach an adversarial patch to an item to avoid recognition or to make the TOE mis-recognize expensive items as cheaper ones. The laboratory should focus on physical attacks that have been successfully demonstrated in the real world using tangible adversarial examples, and should not spend much time on digital adversarial attacks, which occur only in the digital space through the addition of subtle perturbations to a digital image. The laboratory can save a lot of time identifying potential vulnerabilities by filtering out research papers that are out of scope of the target assurance level.
If the risk to users of the image recognition TOE is high, with scenarios where the impact of an attack could be severe (e.g., deep learning-based object detection and recognition for autonomous driving), the laboratory should focus on white-box attacks, assuming that the attacker operates with full knowledge of the TOE, including the training data, model architecture, model hyper-parameters and details of the defence measures against adversarial attacks. The evaluation laboratory should focus on attack methods that can break the same or similar defence measures that the TOE implements. The laboratory should also identify promising digital adversarial attacks and make them robust enough for the real physical environment, considering camera views, the location of the perturbation, lighting conditions, etc.
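A common way to achieve such physical robustness is Expectation over Transformation (EOT), which optimizes a patch while randomly varying viewpoint, placement and lighting. The sketch below is a minimal illustration under stated assumptions: `model`, `scene_images`, `target_label`, the paste position and the transformation set are all hypothetical, and transforming the whole composite image is a deliberate simplification of full EOT.

```python
# Minimal EOT sketch (PyTorch); model, scene_images, target_label and the
# transformation set are assumptions, not part of the proposed standard.
import torch
import torch.nn.functional as F
import torchvision.transforms as T

patch = torch.rand(3, 50, 50, requires_grad=True)  # trainable adversarial patch
optimizer = torch.optim.Adam([patch], lr=0.01)
transform = T.Compose([
    T.RandomRotation(20),                         # varying camera viewpoint
    T.RandomAffine(0, translate=(0.1, 0.1)),      # varying patch location
    T.ColorJitter(brightness=0.4, contrast=0.4),  # varying lighting
])

def paste_patch(image: torch.Tensor) -> torch.Tensor:
    out = image.clone()
    out[:, :, :50, :50] = patch  # illustrative fixed paste position
    return out

for _ in range(200):              # number of optimization steps (assumed)
    loss = torch.zeros(())
    for img, _ in scene_images:   # photos of the physical scene (assumed)
        adv = transform(paste_patch(img))
        # Targeted loss: push the prediction towards the attacker's class.
        loss = loss + F.cross_entropy(model(adv), target_label)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    patch.data.clamp_(0, 1)       # keep the patch printable/valid
```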
An adjusted attack potential table can enable the evaluation laboratory to efficiently select relevant research papers, based on the target assurance level, from a huge number of publications. The same table can also be used by regulatory authorities to form and share a clear notion of what “Basic”, “Substantial” or “High” assurance means for the security of deep learning. Researchers can describe the value of each factor of the attack potential table in their research papers to clearly show the criticality of their attacks. Developers or security testers can also use the table to identify and test attacks commensurate with the required level of assurance.
Typical well-known attack methods against deep learning will also be introduced to explain how the table can be used to estimate the attack potential.