
Interpretable-Through-Prototypes Deepfake Detection for Diffusion Models

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published

Standard

Interpretable-Through-Prototypes Deepfake Detection for Diffusion Models. / Aghasanli, Agil; Kangin, Dmitry; Angelov, Plamen.
Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2023. Computer Vision Foundation, 2023. p. 467-474.


Harvard

Aghasanli, A, Kangin, D & Angelov, P 2023, Interpretable-Through-Prototypes Deepfake Detection for Diffusion Models. in Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2023. Computer Vision Foundation, pp. 467-474, Workshop and Challenge on DeepFake Analysis and Detection, ICCV 2023, Paris, France, 2/10/23. <https://openaccess.thecvf.com/content/ICCV2023W/DFAD/html/Aghasanli_Interpretable-Through-Prototypes_Deepfake_Detection_for_Diffusion_Models_ICCVW_2023_paper.html>

APA

Aghasanli, A., Kangin, D., & Angelov, P. (2023). Interpretable-Through-Prototypes Deepfake Detection for Diffusion Models. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2023 (pp. 467-474). Computer Vision Foundation. https://openaccess.thecvf.com/content/ICCV2023W/DFAD/html/Aghasanli_Interpretable-Through-Prototypes_Deepfake_Detection_for_Diffusion_Models_ICCVW_2023_paper.html

Vancouver

Aghasanli A, Kangin D, Angelov P. Interpretable-Through-Prototypes Deepfake Detection for Diffusion Models. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2023. Computer Vision Foundation. 2023. p. 467-474.

Author

Aghasanli, Agil ; Kangin, Dmitry ; Angelov, Plamen. / Interpretable-Through-Prototypes Deepfake Detection for Diffusion Models. Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2023. Computer Vision Foundation, 2023. pp. 467-474

Bibtex

@inproceedings{91b83893cd2e4c7bb05447e4eda95d74,
title = "Interpretable-Through-Prototypes Deepfake Detection for Diffusion Models",
abstract = "The process of recognizing and distinguishing between real content and content generated by deep learning algorithms, often referred to as deepfakes, is known as deepfake detection. In order to counter the rising threat of deepfakes and maintain the integrity of digital media, research is now being done to create more reliable and precise detection techniques. Deep learning models, such as Stable Diffusion, have been able to generate more detailed and less blurry images in recent years. In this paper, we develop a deepfake detection technique to distinguish original and fake images generated by various Diffusion Models. The developed methodology for deepfake detection takes advantage of features from fine-tuned Vision Transformers (ViTs), combined with existing classifiers such as Support Vector Machines (SVM). We demonstrate the proposed methodology's ability of interpretability-through-prototypes by analysing support vectors of the SVMs. Additionally, due to the novelty of the topic, there is a lack of open datasets for deepfake detection. Therefore, to evaluate the methodology, we have also created custom datasets based on various generative techniques of Diffusion Models on open datasets (ImageNet, FFHQ, Oxford-IIIT Pet). The code is available at https://github.com/lira-centre/",
author = "Agil Aghasanli and Dmitry Kangin and Plamen Angelov",
year = "2023",
month = oct,
day = "2",
language = "English",
pages = "467--474",
booktitle = "Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2023",
publisher = "Computer Vision Foundation",
note = "Workshop and Challenge on DeepFake Analysis and Detection, ICCV 2023, DFAD ; Conference date: 02-10-2023 Through 02-10-2023",
url = "https://ailb-web.ing.unimore.it/dfad2023/",
}
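The abstract above describes the paper's pipeline at a high level: extract features from a fine-tuned Vision Transformer, train an SVM on them, and read off the SVM's support vectors as interpretable prototypes. A minimal sketch of that idea using scikit-learn is below; the random feature vectors stand in for ViT embeddings (the 768-dimensional size and the class separation are assumptions for illustration, not the paper's data or exact method).

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic stand-ins for ViT feature vectors: in the paper these would come
# from a fine-tuned Vision Transformer applied to real and diffusion-generated
# images. Here we draw 768-d vectors with a small class offset.
rng = np.random.default_rng(0)
real_feats = rng.normal(loc=0.0, scale=1.0, size=(100, 768))
fake_feats = rng.normal(loc=0.8, scale=1.0, size=(100, 768))
X = np.vstack([real_feats, fake_feats])
y = np.array([0] * 100 + [1] * 100)  # 0 = real, 1 = fake

# Linear SVM on the extracted features.
clf = SVC(kernel="linear").fit(X, y)

# Interpretability-through-prototypes: the support vectors are the training
# samples the decision boundary actually rests on, so they can be traced back
# to concrete images and inspected as prototypes of each class.
prototype_indices = clf.support_   # indices of prototype samples in X
prototypes = clf.support_vectors_  # their feature vectors
```

In this reading, explaining a prediction amounts to pointing at the nearest support-vector images rather than at opaque network weights, which is the "interpretable-through-prototypes" property the title refers to.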

RIS

TY - GEN

T1 - Interpretable-Through-Prototypes Deepfake Detection for Diffusion Models

AU - Aghasanli, Agil

AU - Kangin, Dmitry

AU - Angelov, Plamen

PY - 2023/10/2

Y1 - 2023/10/2

N2 - The process of recognizing and distinguishing between real content and content generated by deep learning algorithms, often referred to as deepfakes, is known as deepfake detection. In order to counter the rising threat of deepfakes and maintain the integrity of digital media, research is now being done to create more reliable and precise detection techniques. Deep learning models, such as Stable Diffusion, have been able to generate more detailed and less blurry images in recent years. In this paper, we develop a deepfake detection technique to distinguish original and fake images generated by various Diffusion Models. The developed methodology for deepfake detection takes advantage of features from fine-tuned Vision Transformers (ViTs), combined with existing classifiers such as Support Vector Machines (SVM). We demonstrate the proposed methodology's ability of interpretability-through-prototypes by analysing support vectors of the SVMs. Additionally, due to the novelty of the topic, there is a lack of open datasets for deepfake detection. Therefore, to evaluate the methodology, we have also created custom datasets based on various generative techniques of Diffusion Models on open datasets (ImageNet, FFHQ, Oxford-IIIT Pet). The code is available at https://github.com/lira-centre/

AB - The process of recognizing and distinguishing between real content and content generated by deep learning algorithms, often referred to as deepfakes, is known as deepfake detection. In order to counter the rising threat of deepfakes and maintain the integrity of digital media, research is now being done to create more reliable and precise detection techniques. Deep learning models, such as Stable Diffusion, have been able to generate more detailed and less blurry images in recent years. In this paper, we develop a deepfake detection technique to distinguish original and fake images generated by various Diffusion Models. The developed methodology for deepfake detection takes advantage of features from fine-tuned Vision Transformers (ViTs), combined with existing classifiers such as Support Vector Machines (SVM). We demonstrate the proposed methodology's ability of interpretability-through-prototypes by analysing support vectors of the SVMs. Additionally, due to the novelty of the topic, there is a lack of open datasets for deepfake detection. Therefore, to evaluate the methodology, we have also created custom datasets based on various generative techniques of Diffusion Models on open datasets (ImageNet, FFHQ, Oxford-IIIT Pet). The code is available at https://github.com/lira-centre/

M3 - Conference contribution/Paper

SP - 467

EP - 474

BT - Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2023

PB - Computer Vision Foundation

T2 - Workshop and Challenge on DeepFake Analysis and Detection, ICCV 2023

Y2 - 2 October 2023 through 2 October 2023

ER -