TY - CONF
T1 - Verification of Deep Convolutional Neural Networks Using ImageStars
AU - Tran, Hoang Dung
AU - Bak, Stanley
AU - Xiang, Weiming
AU - Johnson, Taylor T.
N1 - Publisher Copyright:
© 2020, The Author(s).
PY - 2020
Y1 - 2020
N2 - Convolutional Neural Networks (CNN) have redefined state-of-the-art in many real-world applications, such as facial recognition, image classification, human pose estimation, and semantic segmentation. Despite their success, CNNs are vulnerable to adversarial attacks, where slight changes to their inputs may lead to sharp changes in their output in even well-trained networks. Set-based analysis methods can detect or prove the absence of bounded adversarial attacks, which can then be used to evaluate the effectiveness of neural network training methodology. Unfortunately, existing verification approaches have limited scalability in terms of the size of networks that can be analyzed. In this paper, we describe a set-based framework that successfully deals with real-world CNNs, such as VGG16 and VGG19, that have high accuracy on ImageNet. Our approach is based on a new set representation called the ImageStar, which enables efficient exact and over-approximative analysis of CNNs. ImageStars perform efficient set-based analysis by combining operations on concrete images with linear programming (LP). Our approach is implemented in a tool called NNV, and can verify the robustness of VGG networks with respect to a small set of input states, derived from adversarial attacks, such as the DeepFool attack. The experimental results show that our approach is less conservative and faster than existing zonotope and polytope methods.
KW - Computer vision
KW - Machine learning
KW - Neural networks
KW - Reachability analysis
UR - http://www.scopus.com/inward/record.url?scp=85089240965&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85089240965&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-53288-8_2
DO - 10.1007/978-3-030-53288-8_2
M3 - Conference contribution
AN - SCOPUS:85089240965
SN - 9783030532871
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 18
EP - 42
BT - Computer Aided Verification - 32nd International Conference, CAV 2020, Proceedings
A2 - Lahiri, Shuvendu K.
A2 - Wang, Chao
PB - Springer
T2 - 32nd International Conference on Computer Aided Verification, CAV 2020
Y2 - 21 July 2020 through 24 July 2020
ER -