
Detecting Post Editing of Multimedia Images using Transfer Learning and Fine Tuning

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Article number: 154
Journal publication date: 30/06/2024
Journal: ACM Transactions on Multimedia Computing, Communications, and Applications
Issue number: 6
Volume: 20
Number of pages: 22
Publication status: Published
Early online date: 8/03/24
Original language: English

Abstract

In the domain of general image forgery detection, a myriad of classification solutions have been developed to distinguish a "tampered" image from a "pristine" one. In this work, we develop a new method for binary image forgery detection. Our approach builds on the extensive training that state-of-the-art image classification models have undergone on regular images from the ImageNet dataset, and transfers that knowledge to the image forgery detection domain. By leveraging transfer learning and fine-tuning, we adapt state-of-the-art image classification models to the forgery detection task, training them on a diverse and evenly distributed image forgery dataset. With five models—EfficientNetB0, VGG16, Xception, ResNet50V2, and NASNet-Large—we transferred and adapted pre-trained ImageNet knowledge to forgery detection; each model was fitted, fine-tuned, and evaluated against a set of performance metrics. Our evaluation demonstrates the efficacy of large-scale image classification models, paired with transfer learning and fine-tuning, at detecting image forgeries. On a previously unseen dataset, the best-performing model, EfficientNetB0, achieved an accuracy of nearly 89.7%.