Electronic data

  • Defending Adversarial Attacks on Cloud-aided Automatic Speech Recognition Systems

    Rights statement: © ACM, 2019. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in SCC '19 Proceedings of the Seventh International Workshop on Security in Cloud Computing http://doi.acm.org/10.1145/3327962.3331456

    Accepted author manuscript, 2.02 MB, PDF document

    Available under license: CC BY-NC: Creative Commons Attribution-NonCommercial 4.0 International License

Links

Text available via DOI: http://doi.acm.org/10.1145/3327962.3331456

Defending Adversarial Attacks on Cloud-aided Automatic Speech Recognition Systems

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published
Publication date: 2/07/2019
Host publication: SCC '19 Proceedings of the Seventh International Workshop on Security in Cloud Computing
Place of Publication: New York
Publisher: ACM
Pages: 23-31
Number of pages: 9
ISBN (print): 9781450367882
Original language: English

Abstract

With the advancement of deep-learning-based speech recognition technology, an increasing number of cloud-aided automatic voice assistant applications, such as Google Home and Amazon Echo, and cloud AI services, such as IBM Watson, are emerging in our daily life. In a typical usage scenario, after keyword activation, the user's voice is recorded and submitted to the cloud for automatic speech recognition (ASR), and further action(s) may then be triggered depending on the user's command(s). However, recent research shows that deep-learning-based systems can be easily attacked by adversarial examples, and ASR systems have subsequently been found vulnerable to audio adversarial examples. Unfortunately, very few works on defending against audio adversarial attacks are known in the literature, and constructing a generic and robust defense mechanism remains an open problem. In this work, we propose several proactive defense mechanisms against targeted audio adversarial examples in ASR systems via code modulation and audio compression. We then show the effectiveness of the proposed strategies through extensive evaluation on a natural dataset.
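
The abstract names audio compression as one of the proposed defenses. The following Python sketch illustrates the general idea only, not the authors' implementation: it round-trips a recording through lossy MP3 compression before the audio would be submitted to a cloud ASR service, on the premise that lossy encoding discards the small, carefully crafted perturbations that targeted audio adversarial examples rely on. The function name, file names, and bitrate are hypothetical, and the sketch assumes pydub (with ffmpeg) is available.

    # Minimal sketch of an audio-compression preprocessing defense.
    # Illustration only; the paper's code-modulation/compression
    # pipeline may differ. Assumes pydub + ffmpeg are installed.
    import io

    from pydub import AudioSegment

    def compress_defense(wav_path: str, bitrate: str = "64k") -> AudioSegment:
        """Round-trip a waveform through lossy MP3 compression.

        Lossy compression removes perceptually insignificant signal
        components, which tends to disrupt the fine-grained adversarial
        perturbations embedded in targeted audio adversarial examples.
        """
        audio = AudioSegment.from_wav(wav_path)
        buf = io.BytesIO()
        audio.export(buf, format="mp3", bitrate=bitrate)  # lossy encode
        buf.seek(0)
        return AudioSegment.from_mp3(buf)                 # decode back to PCM

    # Usage (hypothetical file names): the sanitized audio, rather than
    # the raw recording, would then be sent to the cloud ASR service.
    # cleaned = compress_defense("user_command.wav")
    # cleaned.export("sanitized.wav", format="wav")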
