
Metric-Oriented Pretraining of Neural Source Code Summarisation Transformers to Enable more Secure Software Development

Research output: Contribution to conference - Conference paper (peer-reviewed), without ISBN/ISSN

Published
Publication date: 30/07/2024
Number of pages: 15
Pages: 17-31
Original language: English
Event: The First International Conference on Natural Language Processing and Artificial Intelligence for Cyber Security - Lancaster University, Lancaster, United Kingdom
Duration: 29/07/2024 - 30/07/2024
https://nlpaics.com/

Conference

Conference: The First International Conference on Natural Language Processing and Artificial Intelligence for Cyber Security
Abbreviated title: NLPAICS
Country/Territory: United Kingdom
City: Lancaster
Period: 29/07/24 - 30/07/24
Internet address: https://nlpaics.com/

Abstract

Source code summaries give developers and maintainers vital information about source code methods. These summaries support the security of software systems: by improving developer and maintainer understanding of code, they help reduce the number of bugs and vulnerabilities. However, writing these summaries takes up developers' time, and in practice summaries are often missing, incomplete, or outdated. Neural source code summarisation addresses these issues by generating summaries of source code automatically; current solutions use Transformer neural networks to achieve this. We present CodeSumBART - a BART-base model for neural source code summarisation, pretrained on a dataset of Java source code methods and English method summaries. We present a new approach to training Transformers for neural source code summarisation. We found that, in our approach, using higher-order n-gram precision BLEU metrics such as BLEU-4 for epoch validation produces better-performing models than other common NLG metrics.
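The validation strategy the abstract describes, scoring each epoch's generated summaries with a higher-order BLEU metric and keeping the best-scoring checkpoint, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the BLEU computation here is a simple smoothed sentence-level variant, and `select_best_epoch` is a hypothetical helper standing in for the paper's checkpoint-selection step.

```python
import math
from collections import Counter


def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]


def bleu(candidate, reference, max_n=4):
    """Smoothed sentence-level BLEU with uniform weights over 1..max_n.

    max_n=4 gives BLEU-4 (the metric the abstract favours for epoch
    validation); max_n=1 would give BLEU-1.
    """
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(candidate, n))
        ref_counts = Counter(ngrams(reference, n))
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        # Add-one smoothing so one empty n-gram order does not zero the score.
        precisions.append((overlap + 1) / (total + 1))
    # Brevity penalty discourages overly short candidate summaries.
    if len(candidate) > len(reference):
        bp = 1.0
    else:
        bp = math.exp(1 - len(reference) / max(len(candidate), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)


def select_best_epoch(epoch_outputs, references, max_n=4):
    """Pick the epoch whose generated summaries score highest on mean BLEU-max_n.

    epoch_outputs maps epoch number -> list of tokenised generated summaries,
    aligned with the tokenised reference summaries.
    """
    scores = {
        epoch: sum(bleu(c, r, max_n) for c, r in zip(outputs, references))
        / len(references)
        for epoch, outputs in epoch_outputs.items()
    }
    return max(scores, key=scores.get)
```

In a real training loop the per-epoch outputs would come from decoding the validation split with the current model weights; the checkpoint from the winning epoch is the one kept.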