
Enhancing human-centered dynamic scene understanding via multiple LLMs collaborated reasoning

Research output: Contribution to Journal/Magazine › Journal article › peer-review

E-pub ahead of print

Standard

Enhancing human-centered dynamic scene understanding via multiple LLMs collaborated reasoning. / Zhang, Hang; Zhang, Wenxiao; Qu, Haoxuan et al.
In: Visual Intelligence, Vol. 3, No. 1, 3, 31.12.2025.

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Harvard

Zhang, H, Zhang, W, Qu, H & Liu, J 2025, 'Enhancing human-centered dynamic scene understanding via multiple LLMs collaborated reasoning', Visual Intelligence, vol. 3, no. 1, 3. https://doi.org/10.1007/s44267-025-00074-1

APA

Zhang, H., Zhang, W., Qu, H., & Liu, J. (2025). Enhancing human-centered dynamic scene understanding via multiple LLMs collaborated reasoning. Visual Intelligence, 3(1), Article 3. https://doi.org/10.1007/s44267-025-00074-1

Vancouver

Zhang H, Zhang W, Qu H, Liu J. Enhancing human-centered dynamic scene understanding via multiple LLMs collaborated reasoning. Visual Intelligence. 2025 Dec 31;3(1):3. Epub 2025 Mar 17. doi: 10.1007/s44267-025-00074-1

Author

Zhang, Hang ; Zhang, Wenxiao ; Qu, Haoxuan et al. / Enhancing human-centered dynamic scene understanding via multiple LLMs collaborated reasoning. In: Visual Intelligence. 2025 ; Vol. 3, No. 1.

BibTeX

@article{fc4c6940b3134c3e95add0a796746c28,
title = "Enhancing human-centered dynamic scene understanding via multiple LLMs collaborated reasoning",
abstract = "Human-centered dynamic scene understanding plays a pivotal role in enhancing the capability of robotic and autonomous systems, where video-based human-object interaction (V-HOI) detection is a crucial task in semantic scene understanding, which aims to comprehensively understand HOI relationships within a video to benefit the behavioral decisions of mobile robots and autonomous driving systems. Although previous V-HOI detection models have made significant advances in accurate detection on specific datasets, they still lack the general reasoning ability of humans to effectively induce HOI relationships. In this study, we propose V-HOI multi-LLMs collaborated reasoning (V-HOI MLCR), a novel framework consisting of a series of plug-and-play modules that could facilitate the performance of current V-HOI detection models by leveraging the strong reasoning ability of different off-the-shelf pre-trained large language models (LLMs). We design a two-stage collaboration system of different LLMs for the V-HOI task. Specifically, in the first stage, we design a cross-agents reasoning scheme to leverage the LLM to perform reasoning from different aspects. In the second stage, we perform multi-LLMs debate to get the final reasoning answer based on the different knowledge in different LLMs. Additionally, we develop an auxiliary training strategy using CLIP, a large vision-language model to enhance the base V-HOI models{\textquoteright} discriminative ability to better cooperate with LLMs. We validate the superiority of our design by demonstrating its effectiveness in improving the predictive accuracy of the base V-HOI model through reasoning from multiple perspectives.",
keywords = "Scene understanding, Knowledge-based reasoning, Large language models",
author = "Hang Zhang and Wenxiao Zhang and Haoxuan Qu and Jun Liu",
year = "2025",
month = mar,
day = "17",
doi = "10.1007/s44267-025-00074-1",
language = "English",
volume = "3",
journal = "Visual Intelligence",
issn = "2097-3330",
publisher = "Springer Nature Singapore",
number = "1",

}

RIS

TY - JOUR

T1 - Enhancing human-centered dynamic scene understanding via multiple LLMs collaborated reasoning

AU - Zhang, Hang

AU - Zhang, Wenxiao

AU - Qu, Haoxuan

AU - Liu, Jun

PY - 2025/3/17

Y1 - 2025/3/17

N2 - Human-centered dynamic scene understanding plays a pivotal role in enhancing the capability of robotic and autonomous systems, where video-based human-object interaction (V-HOI) detection is a crucial task in semantic scene understanding, which aims to comprehensively understand HOI relationships within a video to benefit the behavioral decisions of mobile robots and autonomous driving systems. Although previous V-HOI detection models have made significant advances in accurate detection on specific datasets, they still lack the general reasoning ability of humans to effectively induce HOI relationships. In this study, we propose V-HOI multi-LLMs collaborated reasoning (V-HOI MLCR), a novel framework consisting of a series of plug-and-play modules that could facilitate the performance of current V-HOI detection models by leveraging the strong reasoning ability of different off-the-shelf pre-trained large language models (LLMs). We design a two-stage collaboration system of different LLMs for the V-HOI task. Specifically, in the first stage, we design a cross-agents reasoning scheme to leverage the LLM to perform reasoning from different aspects. In the second stage, we perform multi-LLMs debate to get the final reasoning answer based on the different knowledge in different LLMs. Additionally, we develop an auxiliary training strategy using CLIP, a large vision-language model to enhance the base V-HOI models’ discriminative ability to better cooperate with LLMs. We validate the superiority of our design by demonstrating its effectiveness in improving the predictive accuracy of the base V-HOI model through reasoning from multiple perspectives.

AB - Human-centered dynamic scene understanding plays a pivotal role in enhancing the capability of robotic and autonomous systems, where video-based human-object interaction (V-HOI) detection is a crucial task in semantic scene understanding, which aims to comprehensively understand HOI relationships within a video to benefit the behavioral decisions of mobile robots and autonomous driving systems. Although previous V-HOI detection models have made significant advances in accurate detection on specific datasets, they still lack the general reasoning ability of humans to effectively induce HOI relationships. In this study, we propose V-HOI multi-LLMs collaborated reasoning (V-HOI MLCR), a novel framework consisting of a series of plug-and-play modules that could facilitate the performance of current V-HOI detection models by leveraging the strong reasoning ability of different off-the-shelf pre-trained large language models (LLMs). We design a two-stage collaboration system of different LLMs for the V-HOI task. Specifically, in the first stage, we design a cross-agents reasoning scheme to leverage the LLM to perform reasoning from different aspects. In the second stage, we perform multi-LLMs debate to get the final reasoning answer based on the different knowledge in different LLMs. Additionally, we develop an auxiliary training strategy using CLIP, a large vision-language model to enhance the base V-HOI models’ discriminative ability to better cooperate with LLMs. We validate the superiority of our design by demonstrating its effectiveness in improving the predictive accuracy of the base V-HOI model through reasoning from multiple perspectives.

KW - Scene understanding

KW - Knowledge-based reasoning

KW - Large language models

U2 - 10.1007/s44267-025-00074-1

DO - 10.1007/s44267-025-00074-1

M3 - Journal article

VL - 3

JO - Visual Intelligence

JF - Visual Intelligence

SN - 2097-3330

IS - 1

M1 - 3

ER -
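
Note: the abstract above describes a two-stage collaboration scheme: in the first stage, several LLM agents reason over the base detector's candidate HOI triplets from different aspects, and in the second stage the LLMs debate to settle the final labels. The Python sketch below is purely illustrative of that flow under assumptions made here; the agent interface, prompts, aspect names, and aggregation rule are not taken from the paper or from this record.

# Minimal sketch of the two-stage multi-LLM collaboration described in the
# abstract (V-HOI MLCR). Everything below is illustrative: the LLM interface,
# prompts, aspect names, and aggregation rule are assumptions, not the authors' code.
from dataclasses import dataclass
from typing import Callable, Dict, List

# An LLM "agent" is modelled as a plain text-in / text-out callable.
LLM = Callable[[str], str]

@dataclass
class HOITriplet:
    human_id: int
    object_label: str
    interaction: str   # e.g. "holding", "riding"
    confidence: float  # score from the base V-HOI detector

def stage1_cross_agent_reasoning(triplets: List[HOITriplet],
                                 agents: Dict[str, LLM]) -> Dict[str, str]:
    """Stage 1: each agent critiques the candidate detections from one aspect
    (e.g. spatial plausibility, temporal consistency, common sense)."""
    description = "; ".join(
        f"human{t.human_id} {t.interaction} {t.object_label} ({t.confidence:.2f})"
        for t in triplets
    )
    return {
        aspect: llm(
            f"Candidate human-object interactions: {description}. "
            f"From the viewpoint of {aspect}, which interactions are implausible and why?"
        )
        for aspect, llm in agents.items()
    }

def stage2_multi_llm_debate(critiques: Dict[str, str],
                            debaters: List[LLM],
                            rounds: int = 2) -> str:
    """Stage 2: several LLMs debate over the stage-1 critiques and return a
    consolidated answer on the final HOI labels."""
    transcript = "\n".join(f"[{aspect}] {text}" for aspect, text in critiques.items())
    for _ in range(rounds):
        replies = [llm(f"Debate so far:\n{transcript}\nGive your corrected HOI list.")
                   for llm in debaters]
        transcript += "\n" + "\n".join(replies)
    # One simple aggregation choice: ask the first debater to summarise the
    # agreed answer. The paper's actual rule may differ.
    return debaters[0](f"Summarise the final agreed HOI labels:\n{transcript}")

Any off-the-shelf text-in/text-out LLM clients can be plugged in as the agents and debaters. The concrete prompts, debate protocol, and the CLIP-based auxiliary training strategy mentioned in the abstract are specified in the paper itself and are not reproduced here.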