Final published version
Licence: CC BY (Creative Commons Attribution 4.0 International License)
Research output: Contribution to Journal/Magazine › Journal article › peer-review
TY - JOUR
T1 - Enhancing human-centered dynamic scene understanding via multiple LLMs collaborated reasoning
AU - Zhang, Hang
AU - Zhang, Wenxiao
AU - Qu, Haoxuan
AU - Liu, Jun
PY - 2025/3/17
Y1 - 2025/3/17
N2 - Human-centered dynamic scene understanding plays a pivotal role in enhancing the capability of robotic and autonomous systems. Within it, video-based human-object interaction (V-HOI) detection is a crucial semantic scene understanding task that aims to comprehensively understand HOI relationships within a video, so as to benefit the behavioral decisions of mobile robots and autonomous driving systems. Although previous V-HOI detection models have made significant advances in accurate detection on specific datasets, they still lack the general reasoning ability of humans to effectively induce HOI relationships. In this study, we propose V-HOI multi-LLMs collaborated reasoning (V-HOI MLCR), a novel framework consisting of a series of plug-and-play modules that improve the performance of current V-HOI detection models by leveraging the strong reasoning ability of different off-the-shelf pre-trained large language models (LLMs). We design a two-stage collaboration system of different LLMs for the V-HOI task. In the first stage, a cross-agent reasoning scheme leverages an LLM to perform reasoning from different aspects. In the second stage, a multi-LLM debate produces the final reasoning answer based on the complementary knowledge of the different LLMs. Additionally, we develop an auxiliary training strategy using CLIP, a large vision-language model, to enhance the base V-HOI models’ discriminative ability so that they cooperate better with the LLMs. We validate the superiority of our design by demonstrating its effectiveness in improving the predictive accuracy of the base V-HOI model through reasoning from multiple perspectives.
AB - Human-centered dynamic scene understanding plays a pivotal role in enhancing the capability of robotic and autonomous systems. Within it, video-based human-object interaction (V-HOI) detection is a crucial semantic scene understanding task that aims to comprehensively understand HOI relationships within a video, so as to benefit the behavioral decisions of mobile robots and autonomous driving systems. Although previous V-HOI detection models have made significant advances in accurate detection on specific datasets, they still lack the general reasoning ability of humans to effectively induce HOI relationships. In this study, we propose V-HOI multi-LLMs collaborated reasoning (V-HOI MLCR), a novel framework consisting of a series of plug-and-play modules that improve the performance of current V-HOI detection models by leveraging the strong reasoning ability of different off-the-shelf pre-trained large language models (LLMs). We design a two-stage collaboration system of different LLMs for the V-HOI task. In the first stage, a cross-agent reasoning scheme leverages an LLM to perform reasoning from different aspects. In the second stage, a multi-LLM debate produces the final reasoning answer based on the complementary knowledge of the different LLMs. Additionally, we develop an auxiliary training strategy using CLIP, a large vision-language model, to enhance the base V-HOI models’ discriminative ability so that they cooperate better with the LLMs. We validate the superiority of our design by demonstrating its effectiveness in improving the predictive accuracy of the base V-HOI model through reasoning from multiple perspectives.
KW - Scene understanding
KW - Knowledge-based reasoning
KW - Large language models
U2 - 10.1007/s44267-025-00074-1
DO - 10.1007/s44267-025-00074-1
M3 - Journal article
VL - 3
JO - Visual Intelligence
JF - Visual Intelligence
SN - 2097-3330
IS - 1
M1 - 3
ER -