

Identifying and minimising the impact of fake visual media: Current and future directions

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

Identifying and minimising the impact of fake visual media: Current and future directions. / Nightingale, Sophie; Wade, Kimberley.
In: Memory, Mind & Media, Vol. 1, e15, 20.10.2022, p. 1-13.


Vancouver

Nightingale S, Wade K. Identifying and minimising the impact of fake visual media: Current and future directions. Memory, Mind & Media. 2022 Oct 20;1:1-13. e15. doi: 10.1017/mem.2022.8

Author

Nightingale, Sophie; Wade, Kimberley. / Identifying and minimising the impact of fake visual media: Current and future directions. In: Memory, Mind & Media. 2022; Vol. 1. pp. 1-13.

Bibtex

@article{d1534f54262c48a487205bff01f483a3,
title = "Identifying and minimising the impact of fake visual media: Current and future directions",
abstract = "Over the past two decades, society has seen incredible advances in digital technology, resulting in the wide availability of cheap and easy-to-use software for creating highly sophisticated fake visual content. This democratisation of creating such content, paired with the ease of sharing it via social media, means that ill-intended fake images and videos pose a significant threat to society. To minimise this threat, it is necessary to be able to distinguish between real and fake content; to date, however, human perceptual research indicates that people have an extremely limited ability to do so. Generally, computational techniques fare better in these tasks, yet remain imperfect. What's more, this challenge is best considered as an arms race – as scientists improve detection techniques, fraudsters find novel ways to deceive. We believe that it is crucial to continue to raise awareness of the visual forgeries afforded by new technology and to examine both human and computational ability to sort the real from the fake. In this article, we outline three considerations for how society deals with future technological developments that aim to help secure the benefits of that technology while minimising its possible threats. We hope these considerations will encourage interdisciplinary discussion and collaboration that ultimately goes some way to limit the proliferation of harmful content and help to restore trust online.",
author = "Sophie Nightingale and Kimberley Wade",
year = "2022",
month = oct,
day = "20",
doi = "10.1017/mem.2022.8",
language = "English",
volume = "1",
pages = "1--13",
eid = "e15",
journal = "Memory, Mind \& Media",
}

RIS

TY  - JOUR
T1  - Identifying and minimising the impact of fake visual media
T2  - Current and future directions
AU  - Nightingale, Sophie
AU  - Wade, Kimberley
PY  - 2022/10/20
Y1  - 2022/10/20
N2  - Over the past two decades, society has seen incredible advances in digital technology, resulting in the wide availability of cheap and easy-to-use software for creating highly sophisticated fake visual content. This democratisation of creating such content, paired with the ease of sharing it via social media, means that ill-intended fake images and videos pose a significant threat to society. To minimise this threat, it is necessary to be able to distinguish between real and fake content; to date, however, human perceptual research indicates that people have an extremely limited ability to do so. Generally, computational techniques fare better in these tasks, yet remain imperfect. What's more, this challenge is best considered as an arms race – as scientists improve detection techniques, fraudsters find novel ways to deceive. We believe that it is crucial to continue to raise awareness of the visual forgeries afforded by new technology and to examine both human and computational ability to sort the real from the fake. In this article, we outline three considerations for how society deals with future technological developments that aim to help secure the benefits of that technology while minimising its possible threats. We hope these considerations will encourage interdisciplinary discussion and collaboration that ultimately goes some way to limit the proliferation of harmful content and help to restore trust online.
AB  - Over the past two decades, society has seen incredible advances in digital technology, resulting in the wide availability of cheap and easy-to-use software for creating highly sophisticated fake visual content. This democratisation of creating such content, paired with the ease of sharing it via social media, means that ill-intended fake images and videos pose a significant threat to society. To minimise this threat, it is necessary to be able to distinguish between real and fake content; to date, however, human perceptual research indicates that people have an extremely limited ability to do so. Generally, computational techniques fare better in these tasks, yet remain imperfect. What's more, this challenge is best considered as an arms race – as scientists improve detection techniques, fraudsters find novel ways to deceive. We believe that it is crucial to continue to raise awareness of the visual forgeries afforded by new technology and to examine both human and computational ability to sort the real from the fake. In this article, we outline three considerations for how society deals with future technological developments that aim to help secure the benefits of that technology while minimising its possible threats. We hope these considerations will encourage interdisciplinary discussion and collaboration that ultimately goes some way to limit the proliferation of harmful content and help to restore trust online.
U2  - 10.1017/mem.2022.8
DO  - 10.1017/mem.2022.8
M3  - Journal article
VL  - 1
SP  - 1
EP  - 13
JO  - Memory, Mind & Media
JF  - Memory, Mind & Media
M1  - e15
ER  -