Final published version
Licence: CC BY-NC-ND: Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License
Research output: Contribution to Journal/Magazine › Journal article › peer-review
Journal publication date | 23/08/2024
---|---
Journal | Learning, Media and Technology
Number of pages | 15
Publication Status | E-pub ahead of print
Early online date | 23/08/24
Original language | English
The accessibility of academic literature has improved considerably because of the internet, with a range of platforms providing access online. It is now common for academic literature databases to use ranking algorithms to sort search results by ‘relevance’. However, it is often unclear how relevance is defined, and the definition varies across platforms. This lack of transparency can introduce bias and affect the rigour of literature reviews. Despite this lack of clarity about the technical features of the algorithms, online academic literature databases are used extensively. This raises a critical question: how do those using these platforms perceive ranking to function, and how do they adapt their information-seeking behaviour? In this paper we present findings from a mixed-methods study, involving an online survey and in-depth interviews with academics, to understand their beliefs and assumptions about relevance ranking algorithms and their implications for academic practice.
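To illustrate the point that ‘relevance’ has no single definition, the toy sketch below ranks the same three hypothetical documents under two common scoring schemes, TF-IDF weighting and raw term overlap, and gets two different orderings. This is an illustrative assumption only: the corpus, query, and both scoring functions are invented for demonstration and do not represent the algorithm of any actual literature database.

```python
import math

# Hypothetical mini-corpus and query, invented for illustration only.
docs = {
    "A": "search search search engines",
    "B": "academic literature search",
    "C": "ranking transparency",
}
query = ["academic", "search"]

def tf(term, text):
    # Term frequency: share of the document's words that match the term.
    words = text.split()
    return words.count(term) / len(words)

def idf(term):
    # Inverse document frequency: rarer terms weigh more.
    n = sum(1 for d in docs.values() if term in d.split())
    return math.log(len(docs) / n) if n else 0.0

def score_tfidf(text):
    # Relevance definition 1: sum of TF-IDF weights for the query terms.
    return sum(tf(t, text) * idf(t) for t in query)

def score_overlap(text):
    # Relevance definition 2: raw count of query-term occurrences.
    return sum(text.split().count(t) for t in query)

rank_tfidf = sorted(docs, key=lambda d: score_tfidf(docs[d]), reverse=True)
rank_overlap = sorted(docs, key=lambda d: score_overlap(docs[d]), reverse=True)
print(rank_tfidf)    # → ['B', 'A', 'C']
print(rank_overlap)  # → ['A', 'B', 'C']
```

Under TF-IDF the rare term "academic" dominates, so document B ranks first; under raw overlap the repeated "search" in document A wins. A user who assumes one definition while the platform applies the other will misread what the result ordering signifies, which is exactly the opacity the study examines.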