
Electronic data

  • main

    Rights statement: © ACM, 2022. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in ACM Transactions on Evolutionary Learning and Optimization, {VOL#, ISS#, (DATE)} http://doi.acm.org/10.1145/nnnnnn.nnnnnn

    Accepted author manuscript, 714 KB, PDF document

    Available under license: CC BY-NC: Creative Commons Attribution-NonCommercial 4.0 International License


Multi-Donor Neural Transfer Learning for Genetic Programming

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Forthcoming
Journal publication date: 23/08/2022
Journal: ACM Transactions on Evolutionary Learning and Optimization
Publication status: Accepted/In press
Original language: English

Abstract

Genetic programming (GP), for the synthesis of brand new programs, continues to demonstrate increasingly capable results on increasingly complex problems. A key challenge in GP is how to learn from the past, so that the successful synthesis of simple programs can feed into more challenging, unsolved problems. Transfer learning approaches in the literature have yet to demonstrate an automated mechanism for identifying existing donor programs with high-utility genetic material for new problems, relying instead on human guidance. In this paper we present a transfer learning mechanism for GP which fills this gap: we use a Turing-complete language for synthesis, and demonstrate how a neural network (NN) can be used to guide automated code fragment extraction from previously solved problems for injection into future problems. Using a framework which synthesises code from just 10 input-output examples, we first study the ability of a NN to recognise the presence of code fragments in a larger program, then present an end-to-end system which takes only input-output examples, generates code fragments as it solves easier problems, and deploys selected high-utility fragments to solve harder ones. NN-guided genetic material selection shows significant performance increases, on average doubling the percentage of programs that can be successfully synthesised when tested on two separate problem corpora, in comparison with a non-transfer-learning GP baseline.
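
For readers wanting a concrete picture of the end-to-end flow the abstract describes, the following is a minimal Python sketch, not the authors' code: FragmentScorer, encode, select_donor_fragments and seed_population are all hypothetical names chosen for illustration. It shows how a small neural network could rank previously extracted code fragments by predicted utility for a new problem, given only its input-output examples, and how the top-ranked fragments could then seed a GP initial population.

    # Illustrative sketch of NN-guided donor fragment selection for GP seeding.
    # All names and shapes are assumptions, not the published system.
    import random
    import torch
    import torch.nn as nn

    class FragmentScorer(nn.Module):
        """Toy scorer mapping a fixed-size encoding of (fragment, target I/O
        examples) to a utility estimate in [0, 1]."""
        def __init__(self, encoding_dim: int = 64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(encoding_dim, 32),
                nn.ReLU(),
                nn.Linear(32, 1),
                nn.Sigmoid(),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.net(x)

    def encode(fragment: str, examples, dim: int = 64) -> torch.Tensor:
        """Placeholder encoding: hashed bag of fragment tokens plus hashed
        input-output pairs (a stand-in for a learned encoding)."""
        vec = torch.zeros(dim)
        for tok in fragment.split():
            vec[hash(tok) % dim] += 1.0
        for inp, out in examples:
            vec[hash((inp, out)) % dim] += 1.0
        return vec

    def select_donor_fragments(fragments, examples, scorer, k=5):
        """Rank fragments harvested from previously solved problems by
        predicted utility for the new problem and keep the top k."""
        scored = [(scorer(encode(f, examples)).item(), f) for f in fragments]
        scored.sort(reverse=True)
        return [f for _, f in scored[:k]]

    def seed_population(pop_size, fragments, random_program):
        """Build an initial GP population mixing random programs with
        programs built around injected donor fragments."""
        population = []
        for i in range(pop_size):
            if fragments and i % 2 == 0:
                population.append(random_program(seed_fragment=random.choice(fragments)))
            else:
                population.append(random_program())
        return population

In the actual system described above, the scorer would be trained on fragment-presence labels (per the first study mentioned in the abstract); here the weights are untrained and the code exists only to show the selection-then-seeding flow.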

Bibliographic note

© ACM, 2022. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in ACM Transactions on Evolutionary Learning and Optimization, {VOL#, ISS#, (DATE)} http://doi.acm.org/10.1145/nnnnnn.nnnnnn