

Improved Evaluation of Automatic Source Code Summarisation

Research output: Contribution in Book/Report/Proceedings (with ISBN/ISSN) › Conference contribution/paper › peer-review

Published
Publication date: 7/12/2022
Host publication: 2nd Workshop on Natural Language Generation, Evaluation and Metrics: Proceedings of the Workshop
Place of publication: Stroudsburg, PA
Publisher: Association for Computational Linguistics (ACL Anthology)
Pages: 326-335
Number of pages: 10
ISBN (print): 9781959429128
Original language: English
Event: 2nd Workshop on Natural Language Generation, Evaluation, and Metrics - Abu Dhabi, United Arab Emirates (hybrid)
Duration: 7/12/2022 - 9/12/2022

Workshop

Workshop: 2nd Workshop on Natural Language Generation, Evaluation, and Metrics
Abbreviated title: GEM
Country/Territory: United Arab Emirates
City: Abu Dhabi
Period: 7/12/22 - 9/12/22

Abstract

Source code summaries are a vital tool for the understanding and maintenance of source code, as they can be used to explain code in simple terms. However, source code with missing, incorrect, or outdated summaries is a common occurrence in production code. Automatic source code summarisation seeks to solve these issues by generating up-to-date summaries of source code methods. Recent work in automatically generating source code summaries uses neural networks, commonly Sequence-to-Sequence or Transformer models, pretrained on method-summary pairs. The most common method of evaluating the quality of these summaries is comparing the machine-generated summaries against human-written summaries. Summaries can be evaluated using n-gram-based translation metrics such as BLEU, METEOR, or ROUGE-L. However, these metrics alone can be unreliable, and newer Natural Language Generation metrics based on large pretrained language models provide an alternative. In this paper, we propose a method of improving the evaluation of a model by improving the preprocessing of the data used to train it, and we propose evaluating the model with a metric based on a language model pretrained on natural-language (English) text, alongside traditional metrics. Our evaluation suggests that our model is improved by cleaning and preprocessing its training data. The addition of a pretrained language model metric alongside traditional metrics shows that both produce results that can be used to evaluate neural source code summarisation.
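
To make the metric comparison described in the abstract concrete, the sketch below scores one hypothetical machine-generated summary against a human-written reference with n-gram metrics (BLEU and ROUGE-L) and with a pretrained-language-model metric. This is an illustration only, not the paper's evaluation pipeline: BERTScore stands in for a generic pretrained-language-model metric, and the example sentences and the nltk, rouge-score, and bert-score packages are assumptions of this sketch.

# Minimal illustrative sketch, not the paper's pipeline.
# Assumes the nltk, rouge-score, and bert-score packages are installed.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from rouge_score import rouge_scorer
from bert_score import score as bert_score

# Hypothetical method-summary pair: a human-written reference summary and a
# machine-generated candidate summary.
reference = "returns the maximum value stored in the binary search tree"
candidate = "return the largest value in the tree"

# n-gram overlap metrics: BLEU (smoothed, since summaries are short) and ROUGE-L.
bleu = sentence_bleu(
    [reference.split()],
    candidate.split(),
    smoothing_function=SmoothingFunction().method1,
)
rouge_l = rouge_scorer.RougeScorer(["rougeL"]).score(reference, candidate)["rougeL"].fmeasure

# Pretrained-language-model metric: BERTScore compares contextual embeddings
# rather than surface n-grams, so paraphrases are penalised less heavily.
_, _, f1 = bert_score([candidate], [reference], lang="en")

print(f"BLEU:      {bleu:.3f}")
print(f"ROUGE-L:   {rouge_l:.3f}")
print(f"BERTScore: {f1.item():.3f}")

On a correct paraphrase like this one, word-overlap metrics typically score low while the embedding-based metric credits the shared meaning; this kind of disagreement is what motivates reporting a pretrained-language-model metric alongside the traditional metrics.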