Black-box Adversarial Attacks on Commercial Speech Platforms with Minimal Information

Baolin Zheng, Peipei Jiang, Qian Wang, Qi Li, Chao Shen, Cong Wang, Yunjie Ge, Qingyang Teng, Shenyi Zhang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

87 Scopus citations

Abstract

Adversarial attacks against commercial black-box speech platforms, including cloud speech APIs and voice control devices, have received little attention until recently. Constructing such attacks is difficult mainly due to the unique characteristics of time-domain speech signals and the much more complex architecture of acoustic systems. The current "black-box" attacks all heavily rely on the knowledge of prediction/confidence scores or other probability information to craft effective adversarial examples (AEs), which can be intuitively defended by service providers without returning these messages. In this paper, we take one more step forward and propose two novel adversarial attacks in more practical and rigorous scenarios. For commercial cloud speech APIs, we propose Occam, a decision-only black-box adversarial attack, where only final decisions are available to the adversary. In Occam, we formulate the decision-only AE generation as a discontinuous large-scale global optimization problem, and solve it by adaptively decomposing this complicated problem into a set of sub-problems and cooperatively optimizing each one. Our Occam is a one-size-fits-all approach, which achieves 100% success rates of attacks (SRoA) with an average SNR of 14.23 dB, on a wide range of popular speech and speaker recognition APIs, including Google, Alibaba, Microsoft, Tencent, iFlytek, and Jingdong, outperforming the state-of-the-art black-box attacks. For commercial voice control devices, we propose NI-Occam, the first non-interactive physical adversarial attack, where the adversary does not need to query the oracle and has no access to its internal information and training data. We, for the first time, combine adversarial attacks with model inversion attacks, and thus generate physically-effective audio AEs with high transferability without any interaction with target devices. Our experimental results show that NI-Occam can successfully fool Apple Siri, Microsoft Cortana, Google Assistant, iFlytek and Amazon Echo with an average SRoA of 52% and SNR of 9.65 dB, shedding light on non-interactive physical attacks against voice control devices.
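The decision-only optimization described in the abstract — decomposing a large perturbation search into sub-problems that are refined cooperatively under a yes/no oracle — can be illustrated with a toy sketch. This is not the paper's actual Occam implementation: the linear-threshold oracle, group count, and shrink/jitter schedule below are illustrative assumptions standing in for a commercial API and the paper's evolutionary optimizer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy decision-only oracle: returns True iff a (hypothetical) target model
# emits the adversary's desired label. A real attack would query a
# commercial speech API here and observe only the final decision.
w = rng.normal(size=64)
def decision(x):
    return bool(w @ x > 1.0)

def decision_only_attack(x0, n_groups=8, iters=200, sigma=0.5):
    """Sketch of decision-only AE search via cooperative decomposition:
    split the perturbation into coordinate groups and refine each group
    with simple random search, accepting a candidate only if the oracle's
    decision stays adversarial and the perturbation norm shrinks."""
    dim = x0.size
    # Start from a large perturbation that already flips the decision.
    delta = rng.normal(size=dim) * 2.0
    while not decision(x0 + delta):
        delta = rng.normal(size=dim) * 2.0
    groups = np.array_split(np.arange(dim), n_groups)
    for _ in range(iters):
        for g in groups:  # cooperatively optimize each sub-problem
            cand = delta.copy()
            # Shrink and jitter only this group's coordinates.
            cand[g] = cand[g] * 0.9 + rng.normal(size=g.size) * sigma
            if decision(x0 + cand) and np.linalg.norm(cand) < np.linalg.norm(delta):
                delta = cand
    return delta

x0 = np.zeros(64)
delta = decision_only_attack(x0)
print(decision(x0 + delta), round(float(np.linalg.norm(delta)), 2))
```

The key property this sketch shares with the decision-only setting is that the search never sees scores or gradients — only the accept/reject decision — and each group update preserves adversarialness while monotonically reducing perturbation size.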

Original language: English
Title of host publication: CCS 2021 - Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security
Publisher: Association for Computing Machinery
Pages: 86-107
Number of pages: 22
ISBN (Electronic): 9781450384544
State: Published - 13 Nov 2021
Event: 27th ACM Annual Conference on Computer and Communication Security, CCS 2021 - Virtual, Online, Korea, Republic of
Duration: 15 Nov 2021 → 19 Nov 2021

Publication series

Name: Proceedings of the ACM Conference on Computer and Communications Security
ISSN (Print): 1543-7221

Conference

Conference: 27th ACM Annual Conference on Computer and Communication Security, CCS 2021
Country/Territory: Korea, Republic of
City: Virtual, Online
Period: 15/11/21 → 19/11/21

Keywords

  • adversarial attacks
  • black-box attacks
  • speaker recognition
  • speech recognition
