A Study on Human Rights Impact with the Advancement of Artificial Intelligence

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

A Study on Human Rights Impact with the Advancement of Artificial Intelligence. / Chan, H.W.H.; Lo, N.P.K.
In: Journal of Posthumanism, Vol. 5, No. 2, 05.04.2025, p. 1114-1153.

Vancouver

Chan HWH, Lo NPK. A Study on Human Rights Impact with the Advancement of Artificial Intelligence. Journal of Posthumanism. 2025 Apr 5;5(2):1114-1153. doi: 10.63332/joph.v5i2.490

Author

Chan, H.W.H.; Lo, N.P.K. / A Study on Human Rights Impact with the Advancement of Artificial Intelligence. In: Journal of Posthumanism. 2025; Vol. 5, No. 2. pp. 1114-1153.

Bibtex

@article{668606fca16a462591a0515a35c3536c,
title = "A Study on Human Rights Impact with the Advancement of Artificial Intelligence",
abstract = "The widespread use of AI-powered surveillance technologies by government agencies and commercial enterprises poses a significant and unprecedented threat to the fundamental human right to privacy. This article examines the use of advanced AI systems, such as facial recognition, predictive policing algorithms, AI-powered drones, and smart sensors, to facilitate pervasive surveillance. These technologies enable the covert collection, integration, and analysis of revealing personal information, rendering traditional notions of privacy obsolete. This study reviews technical documentation, legislative frameworks, business practices, and social implications across various countries, illustrating the widespread implementation of AI surveillance akin to a digital Panopticon. The findings highlight critical deficiencies in current legal protections and ethical principles, particularly concerning consent, human rights, and democratic values. The lack of transparency, fairness, and accountability in AI systems often marginalises vulnerable populations and establishes privatised systems of social control. Furthermore, the paper demonstrates how the normalisation of continuous monitoring has begun to erode societal norms, cultural perspectives, and fundamental human behaviours regarding privacy. Without intervention, such technologies risk creating a dystopian future where individuality, freedom of choice, and opposition are illusions under oppressive AI surveillance. In response, this research advocates for corrective frameworks that prioritise human rights, including privacy-by-design, algorithmic transparency, and human oversight. By fostering collaboration among policymakers, technology developers, and civil society, the article provides practical recommendations to ensure AI developments align with the protection of human dignity, democratic liberties, and ethical principles foundational to civilised societies.",
author = "H.W.H. Chan and N.P.K. Lo",
year = "2025",
month = apr,
day = "5",
doi = "10.63332/joph.v5i2.490",
language = "English",
volume = "5",
pages = "1114--1153",
journal = "Journal of Posthumanism",
number = "2",

}
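
For reference, a minimal sketch of how the BibTeX record above could be cited from a LaTeX document. The file name references.bib and the plain bibliography style are illustrative assumptions, not part of this record; the citation key is the one generated in the entry above.

\documentclass{article}
\begin{document}
% Cite the record by the key from the BibTeX entry above
AI-driven surveillance and the right to privacy \cite{668606fca16a462591a0515a35c3536c}.
\bibliographystyle{plain}
\bibliography{references} % assumes the entry above is saved as references.bib
\end{document}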

RIS

TY - JOUR
T1 - A Study on Human Rights Impact with the Advancement of Artificial Intelligence
AU - Chan, H.W.H.
AU - Lo, N.P.K.
PY - 2025/4/5
Y1 - 2025/4/5
N2 - The widespread use of AI-powered surveillance technologies by government agencies and commercial enterprises poses a significant and unprecedented threat to the fundamental human right to privacy. This article examines the use of advanced AI systems, such as facial recognition, predictive policing algorithms, AI-powered drones, and smart sensors, to facilitate pervasive surveillance. These technologies enable the covert collection, integration, and analysis of revealing personal information, rendering traditional notions of privacy obsolete. This study reviews technical documentation, legislative frameworks, business practices, and social implications across various countries, illustrating the widespread implementation of AI surveillance akin to a digital Panopticon. The findings highlight critical deficiencies in current legal protections and ethical principles, particularly concerning consent, human rights, and democratic values. The lack of transparency, fairness, and accountability in AI systems often marginalises vulnerable populations and establishes privatised systems of social control. Furthermore, the paper demonstrates how the normalisation of continuous monitoring has begun to erode societal norms, cultural perspectives, and fundamental human behaviours regarding privacy. Without intervention, such technologies risk creating a dystopian future where individuality, freedom of choice, and opposition are illusions under oppressive AI surveillance. In response, this research advocates for corrective frameworks that prioritise human rights, including privacy-by-design, algorithmic transparency, and human oversight. By fostering collaboration among policymakers, technology developers, and civil society, the article provides practical recommendations to ensure AI developments align with the protection of human dignity, democratic liberties, and ethical principles foundational to civilised societies.
AB - The widespread use of AI-powered surveillance technologies by government agencies and commercial enterprises poses a significant and unprecedented threat to the fundamental human right to privacy. This article examines the use of advanced AI systems, such as facial recognition, predictive policing algorithms, AI-powered drones, and smart sensors, to facilitate pervasive surveillance. These technologies enable the covert collection, integration, and analysis of revealing personal information, rendering traditional notions of privacy obsolete. This study reviews technical documentation, legislative frameworks, business practices, and social implications across various countries, illustrating the widespread implementation of AI surveillance akin to a digital Panopticon. The findings highlight critical deficiencies in current legal protections and ethical principles, particularly concerning consent, human rights, and democratic values. The lack of transparency, fairness, and accountability in AI systems often marginalises vulnerable populations and establishes privatised systems of social control. Furthermore, the paper demonstrates how the normalisation of continuous monitoring has begun to erode societal norms, cultural perspectives, and fundamental human behaviours regarding privacy. Without intervention, such technologies risk creating a dystopian future where individuality, freedom of choice, and opposition are illusions under oppressive AI surveillance. In response, this research advocates for corrective frameworks that prioritise human rights, including privacy-by-design, algorithmic transparency, and human oversight. By fostering collaboration among policymakers, technology developers, and civil society, the article provides practical recommendations to ensure AI developments align with the protection of human dignity, democratic liberties, and ethical principles foundational to civilised societies.
U2 - 10.63332/joph.v5i2.490
DO - 10.63332/joph.v5i2.490
M3 - Journal article
VL - 5
SP - 1114
EP - 1153
JO - Journal of Posthumanism
JF - Journal of Posthumanism
IS - 2
ER -