Scaling Camouflage: Content Disguising Attack Against Computer Vision Applications

  • Yufei Chen
  • Chao Shen
  • Cong Wang
  • Qixue Xiao
  • Kang Li
  • Yu Chen
Research output: Contribution to journal › Article › peer-review

10 Scopus citations

Abstract

Recently, deep neural networks have achieved state-of-the-art performance in multiple computer vision tasks and have become core components of computer vision applications. Most of their implementations embed a standard input preprocessing step, image scaling, which resizes the original data to match the input size of the pre-trained neural network. This article demonstrates content disguising attacks that exploit the image scaling procedure, causing the content the machine extracts to be dramatically dissimilar to the content before scaling. Unlike previous adversarial attacks, our attacks take place in the data preprocessing stage and are therefore not tied to any specific machine learning model. To achieve a better deceiving and disguising effect, we propose and implement three feasible attack approaches with L0-, L2- and L1-norm distance metrics. We have conducted a comprehensive evaluation on various image classification applications, including three local demos and two remote proprietary services. We also investigate the attack's effects on a YOLO-v3 object detection demo. Our experimental results demonstrate successful content disguising against all of them, which validates that our approaches are practical.
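The core idea of the abstract can be illustrated with a minimal sketch. Assuming the preprocessing pipeline uses nearest-neighbor downscaling (one of the simplest cases; real pipelines may use bilinear or bicubic interpolation, which the paper's optimization-based approaches handle), only the pixels at the sampled grid positions survive scaling. Overwriting just those sparse pixels with a target image yields an attack image that looks like the source at full resolution but becomes the target after scaling. The function names below are hypothetical, not from the paper:

```python
import numpy as np

def nearest_downscale(img, out_h, out_w):
    """Nearest-neighbor downscaling: keeps one sampled pixel per block."""
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows][:, cols]

def disguise(source, target):
    """Craft an attack image that resembles `source` at full size but
    becomes `target` after nearest-neighbor downscaling to target's size.
    Only the sparse sampled pixels are modified (an L0-style change)."""
    attack = source.copy()
    h, w = source.shape[:2]
    out_h, out_w = target.shape[:2]
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    attack[np.ix_(rows, cols)] = target  # overwrite only sampled positions
    return attack

# Toy demo: a 64x64 gray "source" and an 8x8 gradient "target".
source = np.full((64, 64), 128, dtype=np.uint8)
target = np.arange(64, dtype=np.uint8).reshape(8, 8)
attack = disguise(source, target)

changed = np.mean(attack != source)          # tiny fraction of pixels touched
scaled = nearest_downscale(attack, 8, 8)     # what the model actually sees
```

Here only 64 of 4096 pixels (about 1.6%) differ from the source, yet the downscaled result equals the target exactly, which is why such attacks survive preprocessing regardless of the downstream model.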

Original language: English
Pages (from-to): 2017-2028
Number of pages: 12
Journal: IEEE Transactions on Dependable and Secure Computing
Volume: 18
Issue number: 5
DOIs
State: Published - Sep 2021

Keywords

  • adversarial examples
  • computer vision
  • content disguising
  • deep learning
  • image scaling

