
VoiceYourView: mapping public confidence in policing

Research output: Contribution to conference - Without ISBN/ISSN - Conference paper, peer-reviewed

Published
Publication date: 2010
Original language: English
Event: Horizon Digital Futures 2010, United Kingdom
Duration: 11/10/2010 - 12/10/2010

Conference

Conference: Horizon Digital Futures 2010
Country/Territory: United Kingdom
Period: 11/10/10 - 12/10/10

Abstract

Voice Your View (vYv) is an RCUK Digital Economy-funded project that investigates how digital technologies can be applied in public spaces to make them safer. vYv is developing a range of mobile and static devices that allow the public to voice their opinions and concerns about their environment as and when thoughts occur to them. As a simple example, a jogger might feel unsafe in his/her local park due to a lack of lighting. Rather than go home and write a letter to the local council, a strategy which would likely end with the observation being forgotten, the jogger can simply document his/her concern using a vYv mobile phone application. Crucially, vYv is developing and applying advanced natural language processing algorithms to filter, structure and make sense of the potentially vast number of textual comments received through the system. An early trial of vYv in Lancaster had 600 users [1]; results on automatically analyzing the textual comments from these users showed that vYv identified an acceptable theme in a comment 78% of the time [2].
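The abstract does not detail the analysis engine itself, but as a purely illustrative sketch of what theme tagging over free-text comments involves, a toy keyword-matching approach might look like the following. The theme lexicon and function names here are invented for illustration and are not the vYv natural language processing engine.

```python
# Hypothetical illustration only: a toy theme tagger over free-text comments.
# The theme lexicon below is invented and is NOT the vYv analysis engine.

import re

# Invented example lexicon mapping themes to indicative keywords.
THEME_KEYWORDS = {
    "lighting": {"lighting", "light", "dark", "unlit"},
    "antisocial behaviour": {"gang", "drunk", "noise", "vandalism"},
    "cleanliness": {"litter", "rubbish", "graffiti", "dirty"},
}

def tag_themes(comment: str) -> set[str]:
    """Return the set of themes whose keywords appear in the comment."""
    tokens = set(re.findall(r"[a-z]+", comment.lower()))
    return {theme for theme, words in THEME_KEYWORDS.items() if tokens & words}

print(tag_themes("The park feels unsafe at night because the lighting is broken."))
# -> {'lighting'}
```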
This paper applies the vYv text analysis engine to qualitative questionnaire responses from a Derry District Policing Partnership (DDPP) survey of public confidence and satisfaction with policing, carried out in Derry in 2009. The aim of the study was to investigate the feasibility of creating public perception maps automatically: given a large number of textual commentaries (such as those produced by vYv) expressing people's perceptions of their environment, how feasible is it to automatically extract the meaning of the comments and plot them on maps? These so-called public perception maps will capture the 'buzz' of public feeling at a given snapshot in time, and can potentially be updated in real time as new comments arrive.
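As a rough illustration of the perception-map idea, assuming each comment arrives with a location and an already-extracted theme, one could aggregate and plot the comments as sketched below. The coordinates, themes and counts are invented for illustration and do not come from the DDPP survey or the vYv trial.

```python
# Hypothetical sketch of a "public perception map": count themed, geotagged
# comments per location and plot the counts as scaled markers.
# All data below is invented; this is not the project's implementation.

from collections import Counter
import matplotlib.pyplot as plt

# Invented example records: (longitude, latitude, theme)
comments = [
    (-7.321, 54.995, "lighting"),
    (-7.321, 54.995, "lighting"),
    (-7.310, 54.990, "antisocial behaviour"),
]

counts = Counter(comments)

fig, ax = plt.subplots()
for (lon, lat, theme), n in counts.items():
    ax.scatter(lon, lat, s=80 * n, alpha=0.5)   # marker size reflects comment volume
    ax.annotate(f"{theme} ({n})", (lon, lat))
ax.set_xlabel("longitude")
ax.set_ylabel("latitude")
ax.set_title("Illustrative perception map (invented data)")
plt.show()
```

In a real deployment the markers would be drawn over a base map and refreshed as new comments arrive, which is the real-time updating the abstract envisages.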