Electronic data

  • 1-s2.0-S0031320325006521-main

    Accepted author manuscript, 2.09 MB, PDF document

    Available under license: CC BY: Creative Commons Attribution 4.0 International License

Links

Text available via DOI: 10.1016/j.patcog.2025.111992

SceneLLM: Implicit language reasoning in LLM for dynamic scene graph generation

Research output: Contribution to Journal/Magazine › Journal article › peer-review

E-pub ahead of print

Standard

SceneLLM: Implicit language reasoning in LLM for dynamic scene graph generation. / Zhang, Hang; Li, Zhuoling; Liu, Jun.
In: Pattern Recognition, Vol. 170, 111992, 28.02.2026.

Vancouver

Zhang H, Li Z, Liu J. SceneLLM: Implicit language reasoning in LLM for dynamic scene graph generation. Pattern Recognition. 2026 Feb 28;170:111992. Epub 2025 Jun 26. doi: 10.1016/j.patcog.2025.111992

Bibtex

@article{025ff783b1bb44d191173133188cb979,
  title = "SceneLLM: Implicit language reasoning in LLM for dynamic scene graph generation",
  abstract = "Dynamic scenes contain intricate spatio-temporal information, crucial for mobile robots, UAVs, and autonomous driving systems to make informed decisions. Parsing these scenes into semantic triplets Subject-Predicate-Object for accurate Scene Graph Generation (SGG) is highly challenging due to the fluctuating spatio-temporal complexity. Inspired by the reasoning capabilities of Large Language Models (LLMs), we propose SceneLLM, a novel framework that leverages LLMs as powerful scene analyzers for dynamic SGG. Our framework introduces a Video-to-Language (V2L) mapping module that transforms video frames into linguistic signals (scene tokens), making the input more comprehensible for LLMs. To better encode spatial information, we devise a Spatial Information Aggregation (SIA) scheme, inspired by the structure of Chinese characters, which encodes spatial data into tokens. Using Optimal Transport (OT), we generate an implicit language signal from the frame-level token sequence that captures the video's spatio-temporal information. To further improve the LLM's ability to process this implicit linguistic input, we apply Low-Rank Adaptation (LoRA) to fine-tune the model. Finally, we use a transformer-based SGG predictor to decode the LLM's reasoning and predict semantic triplets. Our method achieves state-of-the-art results on the Action Genome (AG) benchmark, and extensive experiments show the effectiveness of SceneLLM in understanding and generating accurate dynamic scene graphs.",
  author = "Hang Zhang and Zhuoling Li and Jun Liu",
  year = "2025",
  month = jun,
  day = "26",
  doi = "10.1016/j.patcog.2025.111992",
  language = "English",
  volume = "170",
  pages = "111992",
  journal = "Pattern Recognition",
  issn = "0031-3203",
  publisher = "Elsevier Ltd",
}
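
The abstract above notes that the LLM is fine-tuned with Low-Rank Adaptation (LoRA). For readers unfamiliar with the technique, the following is a minimal PyTorch sketch of a LoRA-adapted linear layer. It illustrates the general method only; the rank, scaling, and layer sizes are illustrative assumptions, not the paper's implementation.

# Minimal LoRA sketch (illustrative only: the rank r, scaling alpha,
# and layer sizes are assumptions, not values from the paper).
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_features, out_features, r=8, alpha=16):
        super().__init__()
        # Frozen pretrained projection (stands in for an LLM weight matrix).
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)
        self.base.bias.requires_grad_(False)
        # Trainable low-rank factors: A maps down to rank r, B maps back up.
        # B starts at zero, so the adapted layer initially equals the base layer.
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))
        self.scaling = alpha / r

    def forward(self, x):
        # y = W x + (alpha / r) * B A x; only A and B receive gradients.
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

layer = LoRALinear(512, 512)
out = layer(torch.randn(4, 512))
print(out.shape)  # torch.Size([4, 512])

Only the low-rank factors are updated during fine-tuning, which is what makes adapting a large pretrained LLM to a new kind of input signal (here, implicit scene tokens) tractable.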

RIS

TY  - JOUR
T1  - SceneLLM
T2  - Implicit language reasoning in LLM for dynamic scene graph generation
AU  - Zhang, Hang
AU  - Li, Zhuoling
AU  - Liu, Jun
PY  - 2025/6/26
Y1  - 2025/6/26
N2  - Dynamic scenes contain intricate spatio-temporal information, crucial for mobile robots, UAVs, and autonomous driving systems to make informed decisions. Parsing these scenes into semantic triplets Subject-Predicate-Object for accurate Scene Graph Generation (SGG) is highly challenging due to the fluctuating spatio-temporal complexity. Inspired by the reasoning capabilities of Large Language Models (LLMs), we propose SceneLLM, a novel framework that leverages LLMs as powerful scene analyzers for dynamic SGG. Our framework introduces a Video-to-Language (V2L) mapping module that transforms video frames into linguistic signals (scene tokens), making the input more comprehensible for LLMs. To better encode spatial information, we devise a Spatial Information Aggregation (SIA) scheme, inspired by the structure of Chinese characters, which encodes spatial data into tokens. Using Optimal Transport (OT), we generate an implicit language signal from the frame-level token sequence that captures the video's spatio-temporal information. To further improve the LLM's ability to process this implicit linguistic input, we apply Low-Rank Adaptation (LoRA) to fine-tune the model. Finally, we use a transformer-based SGG predictor to decode the LLM's reasoning and predict semantic triplets. Our method achieves state-of-the-art results on the Action Genome (AG) benchmark, and extensive experiments show the effectiveness of SceneLLM in understanding and generating accurate dynamic scene graphs.
AB  - Dynamic scenes contain intricate spatio-temporal information, crucial for mobile robots, UAVs, and autonomous driving systems to make informed decisions. Parsing these scenes into semantic triplets Subject-Predicate-Object for accurate Scene Graph Generation (SGG) is highly challenging due to the fluctuating spatio-temporal complexity. Inspired by the reasoning capabilities of Large Language Models (LLMs), we propose SceneLLM, a novel framework that leverages LLMs as powerful scene analyzers for dynamic SGG. Our framework introduces a Video-to-Language (V2L) mapping module that transforms video frames into linguistic signals (scene tokens), making the input more comprehensible for LLMs. To better encode spatial information, we devise a Spatial Information Aggregation (SIA) scheme, inspired by the structure of Chinese characters, which encodes spatial data into tokens. Using Optimal Transport (OT), we generate an implicit language signal from the frame-level token sequence that captures the video's spatio-temporal information. To further improve the LLM's ability to process this implicit linguistic input, we apply Low-Rank Adaptation (LoRA) to fine-tune the model. Finally, we use a transformer-based SGG predictor to decode the LLM's reasoning and predict semantic triplets. Our method achieves state-of-the-art results on the Action Genome (AG) benchmark, and extensive experiments show the effectiveness of SceneLLM in understanding and generating accurate dynamic scene graphs.
U2  - 10.1016/j.patcog.2025.111992
DO  - 10.1016/j.patcog.2025.111992
M3  - Journal article
VL  - 170
JO  - Pattern Recognition
JF  - Pattern Recognition
SN  - 0031-3203
M1  - 111992
ER  -
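
The abstract also mentions using Optimal Transport (OT) to distill the frame-level token sequence into an implicit language signal. The paper's exact formulation is not given on this page, so the following is only a generic entropy-regularized OT (Sinkhorn) sketch in PyTorch; the cosine cost, uniform marginals, and regularization strength are all illustrative assumptions.

# Generic Sinkhorn iteration for entropy-regularized optimal transport
# (illustrative; the cost, marginals, and eps are assumptions, not the
# paper's formulation).
import torch
import torch.nn.functional as F

def sinkhorn(a, b, C, eps=0.1, n_iters=200):
    # Transport plan P coupling histograms a (n,) and b (m,) under cost C (n, m).
    K = torch.exp(-C / eps)                # Gibbs kernel
    u = torch.ones_like(a)
    v = torch.ones_like(b)
    for _ in range(n_iters):
        u = a / (K @ v)                    # row scaling
        v = b / (K.T @ u)                  # column scaling
    return u[:, None] * K * v[None, :]     # P = diag(u) K diag(v)

# Toy use: couple n frame-level tokens with m slots of a shorter sequence.
n, m, d = 12, 4, 32
frame_tokens = torch.randn(n, d)
slot_tokens = torch.randn(m, d)
# Cosine-distance cost between token embeddings (an assumption).
C = 1 - F.cosine_similarity(frame_tokens[:, None, :], slot_tokens[None, :, :], dim=-1)
a = torch.full((n,), 1.0 / n)              # uniform mass over frames
b = torch.full((m,), 1.0 / m)              # uniform mass over slots
P = sinkhorn(a, b, C)
aggregated = (P.T @ frame_tokens) * m      # barycentric average per slot
print(round(P.sum().item(), 4), aggregated.shape)  # 1.0 torch.Size([4, 32])

The appeal of an OT-style coupling for this kind of aggregation is that every frame token contributes mass to the shorter sequence, so temporal information is pooled rather than discarded.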