Electronic data

  • 1801.04861 — Accepted author manuscript, 1.52 MB, PDF document
    Available under license: CC BY 4.0 (Creative Commons Attribution 4.0 International)


Text available via DOI:


Radio Galaxy Zoo: compact and extended radio source classification with deep learning

Research output: Contribution to journal › Journal article › peer-review

  • V Lukic
  • M Brüggen
  • J K Banfield
  • O I Wong
  • L Rudnick
  • R P Norris
  • B Simmons
Journal publication date: 1/05/2018
Journal: Monthly Notices of the Royal Astronomical Society
Issue number: 1
Number of pages: 15
Pages (from-to): 246-260
Publication status: Published
Original language: English


Machine learning techniques have become increasingly useful in astronomical applications over the last few years, for example in the morphological classification of galaxies. Convolutional neural networks have proven highly effective at classifying objects in image data. In the context of radio-interferometric imaging in astronomy, we look for ways to identify multiple components of individual sources. To this end, we design a convolutional neural network to differentiate between morphology classes using sources from the Radio Galaxy Zoo (RGZ) citizen science project. In this first step, we focus on the factors that affect the performance of such networks: the amount of training data, the number and nature of the layers, and the hyperparameters. We begin with a simple experiment that differentiates between two extreme morphologies, compact and multiple-component extended sources. We find that a three-convolutional-layer architecture yields very good results, achieving a classification accuracy of 97.4 per cent on a test data set. The same architecture is then tested on a four-class problem in which the network classifies sources into compact sources and three classes of extended sources, achieving a test accuracy of 93.5 per cent. The best-performing convolutional neural network set-up is verified against RGZ Data Release 1, where a final test accuracy of 94.8 per cent is obtained using both original and augmented images. Sigma clipping offers no significant benefit overall, except in cases with a small number of training images.
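The abstract mentions two preprocessing steps, sigma clipping and image augmentation, without detail. As a rough illustration of what such steps typically look like (a minimal NumPy sketch; the thresholding form, the `n_sigma` value, and the choice of eight square-symmetry orientations are assumptions, not the paper's exact pipeline):

```python
import numpy as np

def sigma_clip(image, n_sigma=3.0):
    # Estimate the noise level as the standard deviation of all pixel
    # values, then zero out pixels fainter than n_sigma times that
    # level -- one simple way to suppress background noise in a radio
    # image while keeping bright source components.
    noise = np.std(image)
    return np.where(image > n_sigma * noise, image, 0.0)

def augment(image):
    # Generate the eight orientations of a square image: four
    # 90-degree rotations, each with and without a mirror flip.
    # Radio morphology labels are orientation-independent, so each
    # variant is a valid extra training example.
    variants = []
    for k in range(4):
        rotated = np.rot90(image, k)
        variants.append(rotated)
        variants.append(np.fliplr(rotated))
    return variants
```

For example, a single bright pixel on a faint background survives `sigma_clip` while the background is zeroed, and `augment` turns one labelled image into eight.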