
Electronic data

  • Defending Adversarial Attacks on Cloud-aided Automatic Speech Recognition Systems

    Rights statement: © ACM, 2019. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in SCC '19 Proceedings of the Seventh International Workshop on Security in Cloud Computing http://doi.acm.org/10.1145/3327962.3331456

    Accepted author manuscript, 2.02 MB, PDF document

    Available under license: CC BY-NC: Creative Commons Attribution-NonCommercial 4.0 International License

Links

Text available via DOI: https://doi.org/10.1145/3327962.3331456

Defending Adversarial Attacks on Cloud-aided Automatic Speech Recognition Systems

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published

Standard

Defending Adversarial Attacks on Cloud-aided Automatic Speech Recognition Systems. / Zhang, Jiajie; Zhang, Bingsheng; Zhang, Bincheng.
SCC '19 Proceedings of the Seventh International Workshop on Security in Cloud Computing. New York: ACM, 2019. p. 23-31.


Harvard

Zhang, J, Zhang, B & Zhang, B 2019, Defending Adversarial Attacks on Cloud-aided Automatic Speech Recognition Systems. in SCC '19 Proceedings of the Seventh International Workshop on Security in Cloud Computing. ACM, New York, pp. 23-31. https://doi.org/10.1145/3327962.3331456

APA

Zhang, J., Zhang, B., & Zhang, B. (2019). Defending Adversarial Attacks on Cloud-aided Automatic Speech Recognition Systems. In SCC '19 Proceedings of the Seventh International Workshop on Security in Cloud Computing (pp. 23-31). ACM. https://doi.org/10.1145/3327962.3331456

Vancouver

Zhang J, Zhang B, Zhang B. Defending Adversarial Attacks on Cloud-aided Automatic Speech Recognition Systems. In SCC '19 Proceedings of the Seventh International Workshop on Security in Cloud Computing. New York: ACM. 2019. p. 23-31 doi: 10.1145/3327962.3331456

Author

Zhang, Jiajie ; Zhang, Bingsheng ; Zhang, Bincheng. / Defending Adversarial Attacks on Cloud-aided Automatic Speech Recognition Systems. SCC '19 Proceedings of the Seventh International Workshop on Security in Cloud Computing. New York : ACM, 2019. pp. 23-31

Bibtex

@inproceedings{9bb303ed5a5d421c9df612440be3bdd9,
title = "Defending Adversarial Attacks on Cloud-aided Automatic Speech Recognition Systems",
abstract = "With the advancement of deep learning-based speech recognition technology, an increasing number of cloud-aided automatic voice assistant applications, such as Google Home and Amazon Echo, and cloud AI services, such as IBM Watson, are emerging in our daily lives. In a typical usage scenario, after keyword activation, the user's voice is recorded and submitted to the cloud for automatic speech recognition (ASR), and further action(s) may then be triggered depending on the user's command(s). However, recent research shows that deep learning-based systems can be easily attacked by adversarial examples. Subsequently, ASR systems have been found to be vulnerable to audio adversarial examples. Unfortunately, very few works on defending against audio adversarial attacks are known in the literature. Constructing a generic and robust defense mechanism to resolve this issue remains an open problem. In this work, we propose several proactive defense mechanisms against targeted audio adversarial examples in ASR systems via code modulation and audio compression. We then show the effectiveness of the proposed strategies through extensive evaluation on a natural dataset.",
author = "Jiajie Zhang and Bingsheng Zhang and Bincheng Zhang",
note = "{\textcopyright} ACM, 2019. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in SCC '19 Proceedings of the Seventh International Workshop on Security in Cloud Computing http://doi.acm.org/10.1145/3327962.3331456",
year = "2019",
month = jul,
day = "2",
doi = "10.1145/3327962.3331456",
language = "English",
isbn = "9781450367882",
pages = "23--31",
booktitle = "SCC '19 Proceedings of the Seventh International Workshop on Security in Cloud Computing",
publisher = "ACM",

}

RIS

TY - GEN

T1 - Defending Adversarial Attacks on Cloud-aided Automatic Speech Recognition Systems

AU - Zhang, Jiajie

AU - Zhang, Bingsheng

AU - Zhang, Bincheng

N1 - © ACM, 2019. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in SCC '19 Proceedings of the Seventh International Workshop on Security in Cloud Computing http://doi.acm.org/10.1145/3327962.3331456

PY - 2019/7/2

Y1 - 2019/7/2

N2 - With the advancement of deep learning-based speech recognition technology, an increasing number of cloud-aided automatic voice assistant applications, such as Google Home and Amazon Echo, and cloud AI services, such as IBM Watson, are emerging in our daily lives. In a typical usage scenario, after keyword activation, the user's voice is recorded and submitted to the cloud for automatic speech recognition (ASR), and further action(s) may then be triggered depending on the user's command(s). However, recent research shows that deep learning-based systems can be easily attacked by adversarial examples. Subsequently, ASR systems have been found to be vulnerable to audio adversarial examples. Unfortunately, very few works on defending against audio adversarial attacks are known in the literature. Constructing a generic and robust defense mechanism to resolve this issue remains an open problem. In this work, we propose several proactive defense mechanisms against targeted audio adversarial examples in ASR systems via code modulation and audio compression. We then show the effectiveness of the proposed strategies through extensive evaluation on a natural dataset.

AB - With the advancement of deep learning-based speech recognition technology, an increasing number of cloud-aided automatic voice assistant applications, such as Google Home and Amazon Echo, and cloud AI services, such as IBM Watson, are emerging in our daily lives. In a typical usage scenario, after keyword activation, the user's voice is recorded and submitted to the cloud for automatic speech recognition (ASR), and further action(s) may then be triggered depending on the user's command(s). However, recent research shows that deep learning-based systems can be easily attacked by adversarial examples. Subsequently, ASR systems have been found to be vulnerable to audio adversarial examples. Unfortunately, very few works on defending against audio adversarial attacks are known in the literature. Constructing a generic and robust defense mechanism to resolve this issue remains an open problem. In this work, we propose several proactive defense mechanisms against targeted audio adversarial examples in ASR systems via code modulation and audio compression. We then show the effectiveness of the proposed strategies through extensive evaluation on a natural dataset.

U2 - 10.1145/3327962.3331456

DO - 10.1145/3327962.3331456

M3 - Conference contribution/Paper

SN - 9781450367882

SP - 23

EP - 31

BT - SCC '19 Proceedings of the Seventh International Workshop on Security in Cloud Computing

PB - ACM

CY - New York

ER -
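
Illustrative sketch

The abstract mentions audio compression as one of the proposed proactive defenses. As a rough, hypothetical illustration only, not the scheme described in the paper, the Python sketch below shows how a lossy-compression round trip could serve as a consistency check in front of a cloud ASR service. It assumes the pydub library (with ffmpeg available); the transcribe callback and the similarity threshold are placeholders standing in for whatever cloud ASR API and tuning a real deployment would use.

# Illustrative sketch only: a compression round-trip consistency check
# placed in front of a cloud ASR call. The pydub/ffmpeg dependency, the
# `transcribe` callback, and the threshold are assumptions, not artifacts
# of the paper.
import difflib
import io
from typing import Callable

from pydub import AudioSegment


def mp3_roundtrip(wav_path: str, bitrate: str = "64k") -> bytes:
    # Encode the WAV through lossy MP3 and decode it back to WAV bytes.
    audio = AudioSegment.from_wav(wav_path)
    mp3_buf = io.BytesIO()
    audio.export(mp3_buf, format="mp3", bitrate=bitrate)
    mp3_buf.seek(0)
    decoded = AudioSegment.from_file(mp3_buf, format="mp3")
    wav_buf = io.BytesIO()
    decoded.export(wav_buf, format="wav")
    return wav_buf.getvalue()


def looks_adversarial(wav_path: str,
                      transcribe: Callable[[bytes], str],
                      threshold: float = 0.6) -> bool:
    # Compare transcripts of the original and compressed audio; a large
    # divergence is treated as a sign of adversarial tampering.
    with open(wav_path, "rb") as f:
        original_text = transcribe(f.read())
    compressed_text = transcribe(mp3_roundtrip(wav_path))
    similarity = difflib.SequenceMatcher(
        None, original_text.lower(), compressed_text.lower()).ratio()
    return similarity < threshold

The intuition behind this sketch is that benign speech typically transcribes similarly before and after lossy re-encoding, while carefully optimized adversarial perturbations are often disrupted by it.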