Synthetic faces: how perceptually convincing are they?

Research output: Contribution to Journal/Magazine › Meeting abstract

Published

Standard

Synthetic faces: how perceptually convincing are they? / Nightingale, Sophie; Agarwal, Shruti; Härkönen, Erik et al.
In: Journal of Vision, Vol. 21, No. 9, 2015, 27.09.2021.

Research output: Contribution to Journal/Magazine › Meeting abstract

Harvard

Nightingale, S, Agarwal, S, Härkönen, E, Lehtinen, J & Farid, H 2021, 'Synthetic faces: how perceptually convincing are they?', Journal of Vision, vol. 21, no. 9, 2015. https://doi.org/10.1167/jov.21.9.2015

APA

Nightingale, S., Agarwal, S., Härkönen, E., Lehtinen, J., & Farid, H. (2021). Synthetic faces: how perceptually convincing are they? Journal of Vision, 21(9), Article 2015. https://doi.org/10.1167/jov.21.9.2015

Vancouver

Nightingale S, Agarwal S, Härkönen E, Lehtinen J, Farid H. Synthetic faces: how perceptually convincing are they? Journal of Vision. 2021 Sept 27;21(9):2015. doi: 10.1167/jov.21.9.2015

Author

Nightingale, Sophie; Agarwal, Shruti; Härkönen, Erik et al. / Synthetic faces: how perceptually convincing are they? In: Journal of Vision. 2021; Vol. 21, No. 9.

Bibtex

@article{0754371219a64ae5b3c7f1132ed5f234,
title = "Synthetic faces: how perceptually convincing are they?",
abstract = "Recent advances in machine learning, specifically generative adversarial networks (GANs), have made it possible to synthesize highly photo-realistic faces. Such synthetic faces have been used in the creation of fraudulent social media accounts, including the creation of a fictional candidate for U.S. Congress. It has been shown that deep neural networks can be trained to discriminate between real and synthesized faces; it remains unknown, however, if humans can. We examined people{\textquoteright}s ability to discriminate between synthetic and real faces. We selected 400 faces synthesized using the state of the art StyleGAN2, further ensuring diversity across gender, age, and race. A convolutional neural network descriptor was used to extract a low-dimensional, perceptually meaningful, representation of each face. For each of the 400 synthesized faces, this representation was used to find the most similar real faces in the Flickr-Faces-HQ (FFHQ) dataset. From these, we manually selected a matching face that did not contain additional discriminative cues (e.g., complex background, other people in the scene). Participants (N=315) were recruited from Mechanical Turk and given a brief tutorial consisting of examples of synthesized and real faces. Each participant then saw 128 trials, each consisting of a single face, either synthesized or real, and had unlimited time to classify the face accordingly. Although unknown to the participant, half of the faces were real and half were synthesized. Across the 128 trials, faces were equally balanced in terms of gender and race. Average performance was close to chance with no response bias (d-prime = -0.09; beta = 0.99). These results suggest that StyleGAN2 can successfully synthesize faces that are realistic enough to fool naive observers. We are examining whether a more detailed training session, raising participants{\textquoteright} awareness of some common synthesis artifacts, will improve their ability to detect synthetic faces.",
author = "Sophie Nightingale and Shruti Agarwal and Erik H{\"a}rk{\"o}nen and Jaakko Lehtinen and Hany Farid",
year = "2021",
month = sep,
day = "27",
doi = "10.1167/jov.21.9.2015",
language = "English",
volume = "21",
journal = "Journal of Vision",
issn = "1534-7362",
publisher = "Association for Research in Vision and Ophthalmology Inc.",
number = "9",
note = "Vision Sciences Society Annual Meeting 2021 ; Conference date: 21-05-2021 Through 26-05-2021",

}

RIS

TY - JOUR

T1 - Synthetic faces: how perceptually convincing are they?

T2 - Vision Sciences Society Annual Meeting 2021

AU - Nightingale, Sophie

AU - Agarwal, Shruti

AU - Härkönen, Erik

AU - Lehtinen, Jaakko

AU - Farid, Hany

PY - 2021/9/27

Y1 - 2021/9/27

N2 - Recent advances in machine learning, specifically generative adversarial networks (GANs), have made it possible to synthesize highly photo-realistic faces. Such synthetic faces have been used in the creation of fraudulent social media accounts, including a fictional candidate for U.S. Congress. It has been shown that deep neural networks can be trained to discriminate between real and synthesized faces; it remains unknown, however, whether humans can. We examined people’s ability to discriminate between synthetic and real faces. We selected 400 faces synthesized using the state-of-the-art StyleGAN2, ensuring diversity across gender, age, and race. A convolutional neural network descriptor was used to extract a low-dimensional, perceptually meaningful representation of each face. For each of the 400 synthesized faces, this representation was used to find the most similar real faces in the Flickr-Faces-HQ (FFHQ) dataset. From these, we manually selected a matching face that did not contain additional discriminative cues (e.g., complex background, other people in the scene). Participants (N=315) were recruited from Mechanical Turk and given a brief tutorial consisting of examples of synthesized and real faces. Each participant then saw 128 trials, each consisting of a single face, either synthesized or real, and had unlimited time to classify the face accordingly. Unknown to the participants, half of the faces were real and half were synthesized. Across the 128 trials, faces were equally balanced in terms of gender and race. Average performance was close to chance with no response bias (d-prime = -0.09; beta = 0.99). These results suggest that StyleGAN2 can successfully synthesize faces that are realistic enough to fool naive observers. We are examining whether a more detailed training session, raising participants’ awareness of some common synthesis artifacts, will improve their ability to detect synthetic faces.

AB - Recent advances in machine learning, specifically generative adversarial networks (GANs), have made it possible to synthesize highly photo-realistic faces. Such synthetic faces have been used in the creation of fraudulent social media accounts, including a fictional candidate for U.S. Congress. It has been shown that deep neural networks can be trained to discriminate between real and synthesized faces; it remains unknown, however, whether humans can. We examined people’s ability to discriminate between synthetic and real faces. We selected 400 faces synthesized using the state-of-the-art StyleGAN2, ensuring diversity across gender, age, and race. A convolutional neural network descriptor was used to extract a low-dimensional, perceptually meaningful representation of each face. For each of the 400 synthesized faces, this representation was used to find the most similar real faces in the Flickr-Faces-HQ (FFHQ) dataset. From these, we manually selected a matching face that did not contain additional discriminative cues (e.g., complex background, other people in the scene). Participants (N=315) were recruited from Mechanical Turk and given a brief tutorial consisting of examples of synthesized and real faces. Each participant then saw 128 trials, each consisting of a single face, either synthesized or real, and had unlimited time to classify the face accordingly. Unknown to the participants, half of the faces were real and half were synthesized. Across the 128 trials, faces were equally balanced in terms of gender and race. Average performance was close to chance with no response bias (d-prime = -0.09; beta = 0.99). These results suggest that StyleGAN2 can successfully synthesize faces that are realistic enough to fool naive observers. We are examining whether a more detailed training session, raising participants’ awareness of some common synthesis artifacts, will improve their ability to detect synthetic faces.

U2 - 10.1167/jov.21.9.2015

DO - 10.1167/jov.21.9.2015

M3 - Meeting abstract

VL - 21

JO - Journal of Vision

JF - Journal of Vision

SN - 1534-7362

IS - 9

M1 - 2015

Y2 - 21 May 2021 through 26 May 2021

ER -
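
The abstract summarises participant performance with two signal-detection statistics: d-prime (sensitivity) and beta (response bias). As a minimal illustration only, and assuming the standard equal-variance signal detection model rather than the authors' own analysis code, the sketch below shows how both quantities follow from a hit rate and a false-alarm rate; the example rates are hypothetical and chosen merely to reproduce the near-chance, no-bias pattern reported above (d-prime = -0.09, beta = 0.99).

# Minimal sketch (not the authors' code): d-prime and beta under the
# standard equal-variance signal detection model, treating a "synthetic"
# response to a synthetic face as a hit and a "synthetic" response to a
# real face as a false alarm.
from scipy.stats import norm

def sdt_measures(hit_rate, fa_rate):
    """Return (d_prime, beta) from hit and false-alarm rates."""
    z_hit = norm.ppf(hit_rate)  # inverse-normal transform of hit rate
    z_fa = norm.ppf(fa_rate)    # inverse-normal transform of false-alarm rate
    d_prime = z_hit - z_fa                    # sensitivity
    beta = norm.pdf(z_hit) / norm.pdf(z_fa)   # likelihood-ratio response bias
    return d_prime, beta

# Hypothetical rates: near-chance accuracy with no bias gives d' near 0
# and beta near 1, matching the pattern reported in the abstract.
print(sdt_measures(0.48, 0.50))  # approximately (-0.05, 1.0)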