

The Impact of Hard and Easy Negative Training Data on Vulnerability Prediction Performance

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Publication status: E-pub ahead of print
Article number: 112003
Journal publication date: 31/05/2024
Journal: Journal of Systems and Software
Volume: 211
Early online date: 21/02/2024
Original language: English

Abstract

Vulnerability prediction models have been shown to perform poorly in the real world. We examine how the composition of negative training data influences vulnerability prediction model performance. Inspired by other disciplines (e.g. image processing), we focus on whether distinguishing between ‘easy’ negative training data (very different from, and thus easily distinguished from, positive data) and ‘hard’ negative training data (very similar to positive data) impacts vulnerability prediction performance. We use a range of popular machine learning algorithms, including deep learning, to build models based on vulnerability patch data curated by Reis and Abreu, as well as the MSR dataset. Our results suggest that models trained on higher ratios of easy negatives perform better, with performance plateauing at 15 easy negatives per positive instance. We also find that different machine learning algorithms perform best depending on the negative samples used. Overall, the negative sampling approach significantly impacts model performance and can lead to overly optimistic results. The ratio of ‘easy’ to ‘hard’ negative training data should therefore be considered explicitly when building vulnerability prediction models for the real world.
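The easy/hard distinction described above can be sketched in code. The following is a minimal illustration, not the paper's actual method: it assumes samples are already represented as feature vectors, labels a negative ‘hard’ when its cosine similarity to some positive exceeds a threshold (the threshold of 0.5 and the helper names are illustrative assumptions), and then builds a training set with a chosen ratio of easy negatives per positive.

```python
import numpy as np

def split_negatives(pos, neg, threshold=0.5):
    """Partition negatives into 'easy' and 'hard' sets.

    A negative is 'hard' if its maximum cosine similarity to any
    positive is at least `threshold`, else 'easy'. The threshold
    and this similarity criterion are illustrative assumptions.
    """
    # Normalise rows to unit length so dot products give cosine similarity.
    p = pos / np.linalg.norm(pos, axis=1, keepdims=True)
    n = neg / np.linalg.norm(neg, axis=1, keepdims=True)
    sim = n @ p.T                 # shape: (num_negatives, num_positives)
    max_sim = sim.max(axis=1)     # closest positive for each negative
    easy = neg[max_sim < threshold]
    hard = neg[max_sim >= threshold]
    return easy, hard

def sample_training_set(pos, easy, ratio, rng):
    """Build a training set with `ratio` easy negatives per positive."""
    k = min(len(easy), ratio * len(pos))
    idx = rng.choice(len(easy), size=k, replace=False)
    X = np.vstack([pos, easy[idx]])
    y = np.concatenate([np.ones(len(pos)), np.zeros(k)])
    return X, y
```

With a split like this, one can vary `ratio` (e.g. from 1 up past 15 easy negatives per positive) and compare model performance across the resulting training sets, which mirrors the kind of experiment the abstract describes.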