
Interpretable-Through-Prototypes Deepfake Detection for Diffusion Models

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published
Publication date: 2/10/2023
Host publication: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2023
Publisher: Computer Vision Foundation
Pages: 467-474
Number of pages: 8
Original language: English
Event: Workshop and Challenge on DeepFake Analysis and Detection, ICCV 2023 - Paris Convention Centre, Paris, France
Duration: 2/10/2023 - 2/10/2023
https://ailb-web.ing.unimore.it/dfad2023/

Workshop

Workshop: Workshop and Challenge on DeepFake Analysis and Detection, ICCV 2023
Abbreviated title: DFAD
Country/Territory: France
City: Paris
Period: 2/10/23 - 2/10/23
Internet address: https://ailb-web.ing.unimore.it/dfad2023/

Abstract

Deepfake detection is the task of recognizing and distinguishing real content from content generated by deep learning algorithms, commonly referred to as deepfakes. To counter the rising threat of deepfakes and maintain the integrity of digital media, research is under way to develop more reliable and precise detection techniques. In recent years, deep learning models such as Stable Diffusion have become able to generate more detailed and less blurry images. In this paper, we develop a deepfake detection technique to distinguish original images from fake images generated by various Diffusion Models. The proposed methodology exploits features from fine-tuned Vision Transformers (ViTs), combined with existing classifiers such as Support Vector Machines (SVMs). We demonstrate the methodology's interpretability through prototypes by analysing the support vectors of the SVMs. Additionally, due to the novelty of the topic, there is a lack of open datasets for deepfake detection. To evaluate the methodology, we have therefore also created custom datasets by applying various generative techniques of Diffusion Models to open datasets (ImageNet, FFHQ, Oxford-IIIT Pet). The code is available at https://github.com/lira-centre/
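
The sketch below illustrates the general ViT-features-plus-SVM pipeline described in the abstract. It is not the authors' released code: the backbone (a torchvision ViT-B/16 with its classification head removed), the placeholder feature arrays, and the SVM hyper-parameters are illustrative assumptions; in the paper the ViT is fine-tuned and the features come from the custom diffusion-generated datasets.

# Minimal sketch (not the authors' released code): ViT features + SVM,
# with the SVM's support vectors inspected as prototypes.
import numpy as np
import torch
from torchvision.models import vit_b_16, ViT_B_16_Weights
from sklearn.svm import SVC

device = "cuda" if torch.cuda.is_available() else "cpu"

# Pretrained ViT-B/16 backbone; in the paper the ViT is fine-tuned first.
weights = ViT_B_16_Weights.IMAGENET1K_V1
vit = vit_b_16(weights=weights).to(device).eval()
vit.heads = torch.nn.Identity()          # keep the 768-d [CLS] embedding
preprocess = weights.transforms()

@torch.no_grad()
def embed(pil_images):
    """Map a list of PIL images to ViT feature vectors of shape (N, 768)."""
    batch = torch.stack([preprocess(img) for img in pil_images]).to(device)
    return vit(batch).cpu().numpy()

# Placeholder features and labels (1 = diffusion-generated, 0 = real);
# in practice X_train comes from embed() over the real/fake training images.
X_train = np.random.randn(200, 768).astype(np.float32)
y_train = np.random.randint(0, 2, size=200)

svm = SVC(kernel="rbf", C=1.0)
svm.fit(X_train, y_train)

# Interpretability through prototypes: support vectors are the training
# samples that define the decision boundary, so their indices point back
# to concrete real/fake images that can be displayed as prototypes.
print("Prototype (support-vector) training indices:", svm.support_[:10])

Inspecting which training images end up as support vectors, and which support vectors lie closest to a given test image in feature space, is what provides the interpretability-through-prototypes behaviour described in the abstract.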