TY - JOUR
T1 - Scaling Camouflage
T2 - Content Disguising Attack Against Computer Vision Applications
AU - Chen, Yufei
AU - Shen, Chao
AU - Wang, Cong
AU - Xiao, Qixue
AU - Li, Kang
AU - Chen, Yu
N1 - Publisher Copyright:
© 2020 IEEE.
PY - 2021/9
Y1 - 2021/9
N2 - Recently, deep neural networks have achieved state-of-the-art performance in multiple computer vision tasks and have become core parts of computer vision applications. Most of their implementations embed a standard input preprocessing component called image scaling, which resizes the original data to match the input size of pre-trained neural networks. This article demonstrates content disguising attacks that exploit the image scaling procedure, causing the machine-extracted content to be dramatically dissimilar to the content before scaling. Unlike previous adversarial attacks, our attacks occur in the data preprocessing stage and are therefore not tied to specific machine learning models. To achieve a better deceiving and disguising effect, we propose and implement three feasible attack approaches with L0-, L2-, and L1-norm distance metrics. We have conducted a comprehensive evaluation on various image classification applications, including three local demos and two remote proprietary services. We also investigate the attack effects on a YOLO-v3 object detection demo. Our experimental results demonstrate successful content disguising against all of them, validating that our approaches are practical.
AB - Recently, deep neural networks have achieved state-of-the-art performance in multiple computer vision tasks and have become core parts of computer vision applications. Most of their implementations embed a standard input preprocessing component called image scaling, which resizes the original data to match the input size of pre-trained neural networks. This article demonstrates content disguising attacks that exploit the image scaling procedure, causing the machine-extracted content to be dramatically dissimilar to the content before scaling. Unlike previous adversarial attacks, our attacks occur in the data preprocessing stage and are therefore not tied to specific machine learning models. To achieve a better deceiving and disguising effect, we propose and implement three feasible attack approaches with L0-, L2-, and L1-norm distance metrics. We have conducted a comprehensive evaluation on various image classification applications, including three local demos and two remote proprietary services. We also investigate the attack effects on a YOLO-v3 object detection demo. Our experimental results demonstrate successful content disguising against all of them, validating that our approaches are practical.
KW - adversarial examples
KW - computer vision
KW - Content disguising
KW - deep learning
KW - image scaling
UR - https://www.scopus.com/pages/publications/85079468593
U2 - 10.1109/TDSC.2020.2971601
DO - 10.1109/TDSC.2020.2971601
M3 - Article
AN - SCOPUS:85079468593
SN - 1545-5971
VL - 18
SP - 2017
EP - 2028
JO - IEEE Transactions on Dependable and Secure Computing
JF - IEEE Transactions on Dependable and Secure Computing
IS - 5
ER -