

Subverting Network Intrusion Detection: Crafting Adversarial Examples Accounting for Domain-Specific Constraints.

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published

Standard

Subverting Network Intrusion Detection: Crafting Adversarial Examples Accounting for Domain-Specific Constraints. / Teuffenbach, Martin; Piatkowska, Ewa; Smith, Paul.
Machine Learning and Knowledge Extraction - 4th IFIP TC 5, TC 12, WG 8.4, WG 8.9, WG 12.9 International Cross-Domain Conference, CD-MAKE 2020, Proceedings. ed. / Andreas Holzinger; Peter Kieseberg; A Min Tjoa; Edgar Weippl. Springer, 2020. p. 301-320 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 12279 LNCS).


Harvard

Teuffenbach, M, Piatkowska, E & Smith, P 2020, Subverting Network Intrusion Detection: Crafting Adversarial Examples Accounting for Domain-Specific Constraints. in A Holzinger, P Kieseberg, AM Tjoa & E Weippl (eds), Machine Learning and Knowledge Extraction - 4th IFIP TC 5, TC 12, WG 8.4, WG 8.9, WG 12.9 International Cross-Domain Conference, CD-MAKE 2020, Proceedings. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 12279 LNCS, Springer, pp. 301-320. https://doi.org/10.1007/978-3-030-57321-8_17, https://doi.org/10.1007/978-3-030-57321-8

APA

Teuffenbach, M., Piatkowska, E., & Smith, P. (2020). Subverting Network Intrusion Detection: Crafting Adversarial Examples Accounting for Domain-Specific Constraints. In A. Holzinger, P. Kieseberg, A. M. Tjoa, & E. Weippl (Eds.), Machine Learning and Knowledge Extraction - 4th IFIP TC 5, TC 12, WG 8.4, WG 8.9, WG 12.9 International Cross-Domain Conference, CD-MAKE 2020, Proceedings (pp. 301-320). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 12279 LNCS). Springer. https://doi.org/10.1007/978-3-030-57321-8_17, https://doi.org/10.1007/978-3-030-57321-8

Vancouver

Teuffenbach M, Piatkowska E, Smith P. Subverting Network Intrusion Detection: Crafting Adversarial Examples Accounting for Domain-Specific Constraints. In Holzinger A, Kieseberg P, Tjoa AM, Weippl E, editors, Machine Learning and Knowledge Extraction - 4th IFIP TC 5, TC 12, WG 8.4, WG 8.9, WG 12.9 International Cross-Domain Conference, CD-MAKE 2020, Proceedings. Springer. 2020. p. 301-320. (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)). doi: 10.1007/978-3-030-57321-8_17, https://doi.org/10.1007/978-3-030-57321-8

Author

Teuffenbach, Martin ; Piatkowska, Ewa ; Smith, Paul. / Subverting Network Intrusion Detection: Crafting Adversarial Examples Accounting for Domain-Specific Constraints. Machine Learning and Knowledge Extraction - 4th IFIP TC 5, TC 12, WG 8.4, WG 8.9, WG 12.9 International Cross-Domain Conference, CD-MAKE 2020, Proceedings. editor / Andreas Holzinger ; Peter Kieseberg ; A Min Tjoa ; Edgar Weippl. Springer, 2020. pp. 301-320 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)).

Bibtex

@inproceedings{eba6cc97544e44548c1fed4fc52e36cc,
title = "Subverting Network Intrusion Detection: Crafting Adversarial Examples Accounting for Domain-Specific Constraints.",
abstract = "Deep Learning (DL) algorithms are being applied to network intrusion detection, as they can outperform other methods in terms of computational efficiency and accuracy. However, these algorithms have recently been found to be vulnerable to adversarial examples – inputs that are crafted with the intent of causing a Deep Neural Network (DNN) to misclassify with high confidence. Although a significant amount of work has been done to find robust defence techniques against adversarial examples, they still pose a potential risk. The majority of the proposed attack and defence strategies are tailored to the computer vision domain, in which adversarial examples were first found. In this paper, we consider this issue in the Network Intrusion Detection System (NIDS) domain and extend existing adversarial example crafting algorithms to account for the domain-specific constraints in the feature space. We propose to incorporate information about the difficulty of feature manipulation directly in the optimization function. Additionally, we define a novel measure for attack cost and include it in the assessment of the robustness of DL algorithms. We validate our approach on two benchmark datasets and demonstrate successful attacks against state-of-the-art DL network intrusion detection algorithms.",
author = "Martin Teuffenbach and Ewa Piatkowska and Paul Smith",
year = "2020",
doi = "10.1007/978-3-030-57321-8_17",
language = "English",
isbn = "9783030573201",
series = "Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)",
publisher = "Springer",
pages = "301--320",
editor = "Andreas Holzinger and Peter Kieseberg and Tjoa, {A Min} and Edgar Weippl",
booktitle = "Machine Learning and Knowledge Extraction - 4th IFIP TC 5, TC 12, WG 8.4, WG 8.9, WG 12.9 International Cross-Domain Conference, CD-MAKE 2020, Proceedings",
}

RIS

TY - GEN

T1 - Subverting Network Intrusion Detection: Crafting Adversarial Examples Accounting for Domain-Specific Constraints.

AU - Teuffenbach, Martin

AU - Piatkowska, Ewa

AU - Smith, Paul

PY - 2020

Y1 - 2020

N2 - Deep Learning (DL) algorithms are being applied to network intrusion detection, as they can outperform other methods in terms of computational efficiency and accuracy. However, these algorithms have recently been found to be vulnerable to adversarial examples – inputs that are crafted with the intent of causing a Deep Neural Network (DNN) to misclassify with high confidence. Although a significant amount of work has been done to find robust defence techniques against adversarial examples, they still pose a potential risk. The majority of the proposed attack and defence strategies are tailored to the computer vision domain, in which adversarial examples were first found. In this paper, we consider this issue in the Network Intrusion Detection System (NIDS) domain and extend existing adversarial example crafting algorithms to account for the domain-specific constraints in the feature space. We propose to incorporate information about the difficulty of feature manipulation directly in the optimization function. Additionally, we define a novel measure for attack cost and include it in the assessment of the robustness of DL algorithms. We validate our approach on two benchmark datasets and demonstrate successful attacks against state-of-the-art DL network intrusion detection algorithms.

AB - Deep Learning (DL) algorithms are being applied to network intrusion detection, as they can outperform other methods in terms of computational efficiency and accuracy. However, these algorithms have recently been found to be vulnerable to adversarial examples – inputs that are crafted with the intent of causing a Deep Neural Network (DNN) to misclassify with high confidence. Although a significant amount of work has been done to find robust defence techniques against adversarial examples, they still pose a potential risk. The majority of the proposed attack and defence strategies are tailored to the computer vision domain, in which adversarial examples were first found. In this paper, we consider this issue in the Network Intrusion Detection System (NIDS) domain and extend existing adversarial example crafting algorithms to account for the domain-specific constraints in the feature space. We propose to incorporate information about the difficulty of feature manipulation directly in the optimization function. Additionally, we define a novel measure for attack cost and include it in the assessment of the robustness of DL algorithms. We validate our approach on two benchmark datasets and demonstrate successful attacks against state-of-the-art DL network intrusion detection algorithms.

UR - http://www.scopus.com/inward/record.url?scp=85090176272&partnerID=8YFLogxK

U2 - 10.1007/978-3-030-57321-8_17

DO - 10.1007/978-3-030-57321-8_17

M3 - Conference contribution/Paper

SN - 9783030573201

T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

SP - 301

EP - 320

BT - Machine Learning and Knowledge Extraction - 4th IFIP TC 5, TC 12, WG 8.4, WG 8.9, WG 12.9 International Cross-Domain Conference, CD-MAKE 2020, Proceedings

A2 - Holzinger, Andreas

A2 - Kieseberg, Peter

A2 - Tjoa, A Min

A2 - Weippl, Edgar

PB - Springer

ER -
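The abstract describes incorporating the difficulty of manipulating each NIDS feature directly into the adversarial optimisation, together with a measure of attack cost. The sketch below is an illustrative approximation of that general idea only, not the paper's algorithm: it applies a constraint-weighted variant of the well-known fast gradient sign method, where a per-feature "modifiability" weight scales each perturbation. The model gradient, weights, feature names, and the toy cost function are all hypothetical.

```python
import numpy as np

def weighted_fgsm(x, grad, epsilon, modifiability):
    """Perturb x along the loss-gradient sign, scaled per feature.

    modifiability[i] in [0, 1]: 0 means the feature cannot be changed
    (e.g. a protocol flag), 1 means it is freely adjustable
    (e.g. inter-arrival timing). This encodes domain constraints
    directly in the perturbation step.
    """
    return x + epsilon * modifiability * np.sign(grad)

def attack_cost(x, x_adv, modifiability):
    """Toy cost measure: perturbation magnitude inflated for
    hard-to-modify features (cost grows as modifiability shrinks;
    immutable features contribute zero only if left untouched)."""
    delta = np.abs(x_adv - x)
    safe = np.where(modifiability > 0, modifiability, np.inf)
    return float(np.sum(delta / safe))

# Toy flow-feature vector: [duration, bytes, protocol_flag]
x = np.array([1.0, 2.0, 1.0])
grad = np.array([0.5, -0.3, 0.8])   # gradient of the loss w.r.t. x
m = np.array([1.0, 0.5, 0.0])       # protocol flag is immutable

x_adv = weighted_fgsm(x, grad, epsilon=0.1, modifiability=m)
# The protocol flag stays fixed because its modifiability weight is 0,
# so the crafted example respects that domain-specific constraint.
```

In a real attack the gradient would come from the target DNN, and the modifiability weights would be derived from the feature semantics of the flow representation; here both are stand-ins to show how constraint weights enter the update and the cost assessment.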