
Electronic data

  • Trust me, I am an Intelligent and Autonomous System

    Rights statement: This is the author's accepted manuscript; the published version of the chapter may differ.

    Accepted author manuscript, 315 KB, PDF document

    Embargo ends: 1/01/50


Trust me, I am an Intelligent and Autonomous System: Trustworthy AI in Africa as Distributed Concern

Research output: Contribution in Book/Report/Proceedings with ISBN/ISSN › Chapter (peer-reviewed)

Publication date: 23/04/2024
Host publication: Trustworthy AI: African Perspectives
Editors: D. Eke, K. Wakunuma, S. Akintoye, G. Ogoh
Place of publication: Cham
Publisher: Palgrave Macmillan
Number of pages: 21
Original language: English


Over the last decade, we have witnessed the re-convergence of Human-Computer Interaction (HCI) with emerging spaces such as artificial intelligence (AI), big data, and edge computing. Specific to the agentistic turn in HCI, researchers and practitioners have grappled with the central issues around AI as a research program or a methodological instrument, from cognitive science's emphasis on technical and computational cognitive systems to philosophy and ethics' focus on agency, perception, interpretation, action, meaning, and understanding. Even with the global proliferation of AI discourses, researchers have recognized how the discourse of AI from Africa is undermined. Consequently, researchers interested in HCI and AI in Africa have identified a growing need to explore the potentials and challenges associated with the design and adoption of AI-mediated technologies in critical sectors of the economy as a matter of socio-technical interest or concern. In this chapter, we consider how the normative framings of AI in Africa (as ethical, responsible, and trustworthy) can be better understood when their subject matters are conceived as a Latourian "distributed concern". Building on Bruno Latour's analytical reframing of "matters of fact" as "matters of concern", we argue that operationalizing trustworthy AI as a distributed concern (ethical, socio-cultural, geo-political, economic, pedagogical, technical, and so on) entails a continual process of reconciling value(s). To highlight the scalable dimension of trustworthiness in AI research and design, we engage in sustained discursive argumentation, showing how a procedural analysis of trust as a spectrum might explicate the modalities that sustain the normalization of trustworthy AI as ethical, lawful, or robust.