

Creating and analysing a multimodal corpus of news texts with Google Cloud Vision's automatic image tagger

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Article number: 100043
Journal publication date: 30/04/2023
Journal: Applied Corpus Linguistics
Issue number: 1
Volume: 3
Number of pages: 10
Publication status: Published
Early online date: 2/02/23
Original language: English

Abstract

This study describes the creation and analysis of a small multimodal corpus of British news articles about obesity, in which tags were assigned to the images in the articles using the automatic tagger Google Cloud Vision. To illustrate the potential for analysing image tags, the corpus analysis tool WordSmith was used to identify differences between newspapers in the ways that obesity was framed. Three forms of analysis were carried out: the first simply compared keywords across the newspapers, the second examined key visual tags and their collocates associated with each newspaper, and the third combined the analysis of words and image tags. The three analyses produced complementary findings, indicating the value of using Google Cloud Vision to create and analyse multimodal corpora. The paper ends by reflecting on the method undertaken and considering how additional research could improve our understanding of image tagging.
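
The tagging step summarised above relies on Google Cloud Vision's label detection. The article itself does not reproduce code, but a minimal sketch of how image tags of this kind can be requested with the official Python client library (google-cloud-vision) is given below; the file name and the tab-separated output are illustrative assumptions rather than the authors' actual pipeline.

    from google.cloud import vision

    def tag_image(path):
        """Request label annotations (image tags) for one news image.
        Assumes Google Cloud credentials are configured in the environment."""
        client = vision.ImageAnnotatorClient()
        with open(path, "rb") as f:
            image = vision.Image(content=f.read())
        response = client.label_detection(image=image)
        # Each annotation carries a tag ('description') and a confidence score.
        return [(label.description, label.score)
                for label in response.label_annotations]

    if __name__ == "__main__":
        # Hypothetical example file; in a study like this one, tags would be
        # collected for every image in the corpus and stored alongside the text
        # so that tools such as WordSmith can treat them as searchable tokens.
        for tag, score in tag_image("article_image.jpg"):
            print(f"{tag}\t{score:.2f}")
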