Artificial Intelligence (AI) — Assessment of the robustness of neural networks — Part 1: Overview

This document provides background about existing methods to assess the robustness of neural networks.


General Information

Status: Published
Publication Date: 09-Mar-2021
Current Stage: 6060 - International Standard published
Start Date: 10-Mar-2021
Completion Date: 10-Mar-2021
Ref Project: ISO/IEC TR 24029-1:2021 - Artificial Intelligence (AI) — Assessment of the robustness of neural networks
Technical report, English language, 31 pages

Standards Content (Sample)


TECHNICAL REPORT ISO/IEC TR 24029-1
First edition
2021-03

Artificial Intelligence (AI) — Assessment of the robustness of neural networks —
Part 1:
Overview

Reference number: ISO/IEC TR 24029-1:2021(E)
© ISO/IEC 2021
© ISO/IEC 2021
All rights reserved. Unless otherwise specified, or required in the context of its implementation, no part of this publication may
be reproduced or utilized otherwise in any form or by any means, electronic or mechanical, including photocopying, or posting
on the internet or an intranet, without prior written permission. Permission can be requested from either ISO at the address
below or ISO’s member body in the country of the requester.
ISO copyright office
CP 401 • Ch. de Blandonnet 8
CH-1214 Vernier, Geneva
Phone: +41 22 749 01 11
Email: copyright@iso.org
Website: www.iso.org
Published in Switzerland

Contents

Foreword
Introduction
1 Scope
2 Normative references
3 Terms and definitions
4 Overview of the existing methods to assess the robustness of neural networks
4.1 General
4.1.1 Robustness concept
4.1.2 Typical workflow to assess robustness
4.2 Classification of methods
5 Statistical methods
5.1 General
5.2 Robustness metrics available using statistical methods
5.2.1 General
5.2.2 Examples of performance measures for interpolation
5.2.3 Examples of performance measures for classification
5.2.4 Other measures
5.3 Statistical methods to measure robustness of a neural network
5.3.1 General
5.3.2 Contrastive measures
6 Formal methods
6.1 General
6.2 Robustness goal achievable using formal methods
6.2.1 General
6.2.2 Interpolation stability
6.2.3 Maximum stable space for perturbation resistance
6.3 Conduct the testing using formal methods
6.3.1 Using uncertainty analysis to prove interpolation stability
6.3.2 Using solver to prove a maximum stable space property
6.3.3 Using optimization techniques to prove a maximum stable space property
6.3.4 Using abstract interpretation to prove a maximum stable space property
7 Empirical methods
7.1 General
7.2 Field trials
7.3 A posteriori testing
7.4 Benchmarking of neural networks
Annex A (informative) Data perturbation
Annex B (informative) Principle of abstract interpretation
Bibliography

Foreword
ISO (the International Organization for Standardization) and IEC (the International Electrotechnical
Commission) form the specialized system for worldwide standardization. National bodies that
are members of ISO or IEC participate in the development of International Standards through
technical committees established by the respective organization to deal with particular fields of
technical activity. ISO and IEC technical committees collaborate in fields of mutual interest. Other
international organizations, governmental and non-governmental, in liaison with ISO and IEC, also
take part in the work.
The procedures used to develop this document and those intended for its further maintenance are
described in the ISO/IEC Directives, Part 1. In particular, the different approval criteria needed for
the different types of document should be noted. This document was drafted in accordance with the
editorial rules of the ISO/IEC Directives, Part 2 (see www.iso.org/directives or www.iec.ch/members_experts/refdocs).
Attention is drawn to the possibility that some of the elements of this document may be the subject
of patent rights. ISO and IEC shall not be held responsible for identifying any or all such patent
rights. Details of any patent rights identified during the development of the document will be in the
Introduction and/or on the ISO list of patent declarations received (see www.iso.org/patents) or the IEC
list of patent declarations received (see patents.iec.ch).
Any trade name used in this document is information given for the convenience of users and does not
constitute an endorsement.
For an explanation of the voluntary nature of standards, the meaning of ISO specific terms and
expressions related to conformity assessment, as well as information about ISO's adherence to the
World Trade Organization (WTO) principles in the Technical Barriers to Trade (TBT), see www.iso.org/iso/foreword.html. In the IEC, see www.iec.ch/understanding-standards.
This document was prepared by Joint Technical Committee ISO/IEC JTC 1, Information technology,
Subcommittee SC 42, Artificial intelligence.
A list of all parts in the ISO/IEC 24029 series can be found on the ISO and IEC websites.
Any feedback or questions on this document should be directed to the user’s national standards body. A
complete listing of these bodies can be found at www.iso.org/members.html and www.iec.ch/national-committees.

Introduction
When designing an AI system, several properties are often considered desirable, such as robustness,
resiliency, reliability, accuracy, safety, security and privacy. A definition of robustness is provided in 3.6.
Robustness is a crucial property that poses new challenges in the context of AI systems, because some
risks are specifically tied to it. Understanding these risks is essential for the adoption of AI in many
contexts. This document aims at providing an overview of the approaches available to assess these risks,
with a particular focus on neural networks, which are heavily used in industry, government and academia.
In many organizations, software validation is an essential part of putting software into production.
The objective is to ensure various properties including safety and performance of the software used
in all parts of the system. In some domains, the software validation and verification process is also
an important part of system certification. For example, in the automotive or aeronautic fields, existing
standards, such as ISO 26262 or Reference [2], require some specific actions to justify the design, the
implementation and the testing of any piece of embedded software.
The techniques used in AI systems are also subject to validation. However, common techniques used in
AI systems pose new challenges that require specific approaches in order to ensure adequate testing
and validation.
AI technologies are designed to fulfil various tasks, including interpolation/regression and
classification, among others.
While many methods exist for validating non-AI systems, they are not always directly applicable to
AI systems, and to neural networks in particular. Neural networks represent a specific challenge
as they are hard to explain and can exhibit unexpected behaviour due to their non-linear nature.
As a result, alternative approaches are needed.
Methods are categorized into three groups: statistical methods, formal methods and empirical methods.
This document provides background on these methods to assess the robustness of neural networks.
It is noted that characterizing the robustness of neural networks is an open area of research, and there
are limitations to both testing and validation approaches.
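As a purely illustrative sketch of the statistical category mentioned above (formal and empirical methods are covered in Clauses 6 and 7), the Python fragment below compares a classifier's accuracy on clean test inputs with its accuracy on randomly perturbed copies of those inputs. The predict interface, the Gaussian perturbation and the accuracy measure are assumptions made for this sketch; they are not procedures defined in this document.

```python
# Illustrative sketch only: a simple statistical robustness check that compares
# a classifier's accuracy on clean inputs with its accuracy on perturbed inputs.
# The predict() interface and the Gaussian perturbation are assumptions for
# illustration; they are not prescribed by ISO/IEC TR 24029-1.
import random

def accuracy(predict, inputs, labels):
    # Fraction of samples for which the predicted class matches the label.
    correct = sum(1 for x, y in zip(inputs, labels) if predict(x) == y)
    return correct / len(labels)

def perturb(x, noise_level=0.05):
    # Add small Gaussian noise to every feature of one input sample.
    return [v + random.gauss(0.0, noise_level) for v in x]

def robustness_gap(predict, inputs, labels, noise_level=0.05):
    clean = accuracy(predict, inputs, labels)
    noisy = accuracy(predict, [perturb(x, noise_level) for x in inputs], labels)
    # A small gap suggests the model's decisions are stable under perturbation.
    return clean - noisy
```

In this sketch, the gap between clean and perturbed accuracy serves as one possible perturbation-based performance measure; other measures and perturbation schemes could equally be used.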

TECHNICAL REPORT ISO/IEC TR 24029-1:2021(E)
Artificial Intelligence (AI) — Assessment of the robustness
of neural networks —
Part 1:
Overview
1 Scope
This document provides background about existing methods to assess the robustness of neural
networks.
2 Normative references
There are no normative references in this document.
3 Terms and definitions
For the purposes of this document, the following terms and definitions apply.
ISO and IEC maintain terminological databases for use in standardization at the following addresses:
— ISO Online browsing platform: available at https://www.iso.org/obp
— IEC Electropedia: available at http://www.electropedia.org/
3.1
artificial intelligence
AI
capability of an engineered system to acquire, process and apply knowledge and skills
3.2
field trial
trial of a new system in actual situations for which it is intended (potentially with a restricted user group)
Note 1 to entry: Situation encompasses environment and process of usage.
3.3
input data
data for which a deployed machine learning model calculates a predicted output or inference
Note 1 to entry: Input data is also referred to by machine learning practitioners as out-of-sample data, new data
and production data.

3.4
neural network
neural net
NN
artificial neural network
ANN
network of primitive processing elements connected by weighted links with adjustable weights, in
which each element produces a value by applying a non-linear function to its input values, and transmits
it to other elements or presents it as an output value
Note 1 to entry: Whereas some neural networks are intended to simulate t
...
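As an illustration of the definition in 3.4, the following is a minimal sketch, not part of the technical report, of a single processing element: it combines its input values through weighted links with adjustable weights and applies a non-linear function to produce an output value. The function names and the choice of tanh as the non-linearity are assumptions made for illustration.

```python
# Illustrative sketch of a single neural-network processing element (3.4):
# a weighted sum over incoming links followed by a non-linear function.
# Names and the choice of tanh are assumptions for illustration only.
import math

def processing_element(inputs, weights, bias=0.0):
    # Combine input values through the adjustable weighted links.
    weighted_sum = sum(w * x for w, x in zip(weights, inputs)) + bias
    # Apply a non-linear function to produce the element's output value.
    return math.tanh(weighted_sum)

# Example: an element with two inputs and fixed illustrative weights.
output_value = processing_element([0.5, -1.0], [0.8, 0.3])
```

A network in the sense of 3.4 is formed by connecting many such elements, with each element transmitting its output value to other elements or presenting it as a network output.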
