

Assessing risk of bias in toxicological studies in the era of artificial intelligence

Research output: Contribution to Journal/Magazine › Review article › peer-review

Journal publication date: 4/07/2025
Journal: Archives of Toxicology
Publication status: E-pub ahead of print
Early online date: 4/07/2025
Original language: English

Abstract

Risk of bias is a critical factor influencing the reliability and validity of toxicological studies, impacting evidence synthesis and decision-making in regulatory and public health contexts. Traditional approaches to assessing risk of bias are often subjective and time-consuming. Recent advances in artificial intelligence (AI) offer promising solutions for automating and enhancing bias detection and evaluation. This article reviews key types of bias (selection, performance, detection, attrition, and reporting biases) in in vivo, in vitro, and in silico studies, and discusses specialized tools designed to address them, including the SYRCLE and OHAT frameworks. Integrating AI-based tools into risk of bias assessments can significantly improve the efficiency, consistency, and accuracy of evaluations. However, AI models are themselves susceptible to algorithmic and data biases, necessitating robust validation and transparency in their development. The article highlights the need for standardized, AI-enabled risk of bias assessment methodologies, training, and policy implementation to mitigate biases in AI-driven analyses. Strategies for leveraging AI to screen studies, detect anomalies, and support systematic reviews are explored. By adopting these advanced methodologies, toxicologists and regulators can enhance the quality and reliability of toxicological evidence, promoting evidence-based practice and more informed decision-making. The way forward includes fostering interdisciplinary collaboration, developing bias-resilient AI models, and creating a research culture that actively addresses bias through transparent and rigorous practices.
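
To make the screening idea concrete, here is a minimal, hypothetical Python sketch (not a tool described in the article) of AI-assisted study screening: it checks whether a methods passage reports information relevant to three SYRCLE-style bias domains and routes anything ambiguous to a human reviewer. The cue phrases, function names, and example text are illustrative assumptions; production systems would use trained language models rather than keyword rules.

```python
# Hypothetical sketch: keyword-based screening for three SYRCLE-style
# bias domains in an in vivo study's methods text. Illustrative only;
# real AI pipelines would use trained language models, not regex cues.
import re

# Assumed cue phrases per domain (not taken from the article).
DOMAIN_CUES = {
    "selection bias (randomization)": [
        r"\brandomi[sz]ed\b", r"\brandom(?:ly)? allocat(?:ed|ion)\b",
    ],
    "detection bias (blinded outcome assessment)": [
        r"\bblind(?:ed|ing)\b", r"\bmasked\b",
    ],
    "attrition bias (incomplete outcome data)": [
        r"\bdrop-?outs?\b", r"\bexcluded\b", r"\battrition\b",
    ],
}

def screen_methods_text(text: str) -> dict:
    """Return a provisional judgement per bias domain for one study."""
    findings = {}
    for domain, patterns in DOMAIN_CUES.items():
        hit = any(re.search(p, text, flags=re.IGNORECASE) for p in patterns)
        # A missing cue is routed to a human rather than judged "high risk":
        # under-reporting is rated "unclear" in SYRCLE/OHAT terms.
        findings[domain] = "reported" if hit else "unclear - flag for human review"
    return findings

if __name__ == "__main__":
    methods = (
        "Rats were randomized to dose groups; three animals were "
        "excluded from analysis due to intercurrent illness."
    )
    for domain, verdict in screen_methods_text(methods).items():
        print(f"{domain}: {verdict}")
```

One design choice worth noting: the absence of a cue phrase is flagged as "unclear" for human review rather than scored as high risk, mirroring how the SYRCLE and OHAT frameworks treat under-reported items and keeping the automated step in a supporting, not deciding, role.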