TY - GEN
T1 - Seeing is not believing: Camouflage attacks on image scaling algorithms
T2 - 28th USENIX Security Symposium, USENIX Security 2019
AU - Xiao, Qixue
AU - Chen, Yufei
AU - Shen, Chao
AU - Chen, Yu
AU - Li, Kang
N1 - Publisher Copyright:
© 2019 by The USENIX Association. All rights reserved.
PY - 2019
Y1 - 2019
N2 - Image scaling algorithms are intended to preserve visual features before and after scaling, and are commonly used in numerous vision and image processing applications. In this paper, we demonstrate an automated attack against common scaling algorithms, i.e., automatically generating camouflage images whose visual semantics change dramatically after scaling. To illustrate the threats from such camouflage attacks, we choose several computer vision applications as targeted victims, including multiple image classification applications based on popular deep learning frameworks, as well as mainstream web browsers. Our experimental results show that such attacks can cause different visual results after scaling and thus create evasion or data-poisoning effects on these victim applications. We also present an algorithm that can successfully enable attacks against famous cloud-based image services (such as those from Microsoft Azure, Aliyun, Baidu, and Tencent) and cause obvious misclassification effects, even when the details of image processing (such as the exact scaling algorithm and scale dimension parameters) are hidden in the cloud. To defend against such attacks, this paper suggests a few potential countermeasures, ranging from attack prevention to detection.
AB - Image scaling algorithms are intended to preserve visual features before and after scaling, and are commonly used in numerous vision and image processing applications. In this paper, we demonstrate an automated attack against common scaling algorithms, i.e., automatically generating camouflage images whose visual semantics change dramatically after scaling. To illustrate the threats from such camouflage attacks, we choose several computer vision applications as targeted victims, including multiple image classification applications based on popular deep learning frameworks, as well as mainstream web browsers. Our experimental results show that such attacks can cause different visual results after scaling and thus create evasion or data-poisoning effects on these victim applications. We also present an algorithm that can successfully enable attacks against famous cloud-based image services (such as those from Microsoft Azure, Aliyun, Baidu, and Tencent) and cause obvious misclassification effects, even when the details of image processing (such as the exact scaling algorithm and scale dimension parameters) are hidden in the cloud. To defend against such attacks, this paper suggests a few potential countermeasures, ranging from attack prevention to detection.
UR - https://www.scopus.com/pages/publications/85076378546
M3 - Conference contribution
AN - SCOPUS:85076378546
T3 - Proceedings of the 28th USENIX Security Symposium
SP - 443
EP - 460
BT - Proceedings of the 28th USENIX Security Symposium
PB - USENIX Association
Y2 - 14 August 2019 through 16 August 2019
ER -