
SemTrack: Large-Scale Dataset for Semantic Tracking in the Wild

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published

Standard

SemTrack: Large-Scale Dataset for Semantic Tracking in the Wild. / Wang, Pengfei; Hui, Xiaofei; Wu, Jing et al.
Computer Vision -- ECCV 2024: 18th European Conference, Milan, Italy, September 29–October 4, 2024, Proceedings, Part XXIV. ed. / Aleš Leonardis; Elisa Ricci; Stefan Roth; Olga Russakovsky; Torsten Sattler; Gül Varol. Cham: Springer, 2024. p. 486-504 (Lecture Notes in Computer Science; Vol. 15082).

Harvard

Wang, P, Hui, X, Wu, J, Yang, Z, Ong, KE, Zhao, X, Lu, B, Huang, D, Ling, E, Chen, W, Ma, KT, Hur, M & Liu, J 2024, SemTrack: Large-Scale Dataset for Semantic Tracking in the Wild. in A Leonardis, E Ricci, S Roth, O Russakovsky, T Sattler & G Varol (eds), Computer Vision -- ECCV 2024: 18th European Conference, Milan, Italy, September 29–October 4, 2024, Proceedings, Part XXIV. Lecture Notes in Computer Science, vol. 15082, Springer, Cham, pp. 486-504. https://doi.org/10.1007/978-3-031-72691-0_27

APA

Wang, P., Hui, X., Wu, J., Yang, Z., Ong, K. E., Zhao, X., Lu, B., Huang, D., Ling, E., Chen, W., Ma, K. T., Hur, M., & Liu, J. (2024). SemTrack: Large-Scale Dataset for Semantic Tracking in the Wild. In A. Leonardis, E. Ricci, S. Roth, O. Russakovsky, T. Sattler, & G. Varol (Eds.), Computer Vision -- ECCV 2024: 18th European Conference, Milan, Italy, September 29–October 4, 2024, Proceedings, Part XXIV (pp. 486-504). (Lecture Notes in Computer Science; Vol. 15082). Springer. https://doi.org/10.1007/978-3-031-72691-0_27

Vancouver

Wang P, Hui X, Wu J, Yang Z, Ong KE, Zhao X et al. SemTrack: Large-Scale Dataset for Semantic Tracking in the Wild. In Leonardis A, Ricci E, Roth S, Russakovsky O, Sattler T, Varol G, editors, Computer Vision -- ECCV 2024: 18th European Conference, Milan, Italy, September 29–October 4, 2024, Proceedings, Part XXIV. Cham: Springer. 2024. p. 486-504. (Lecture Notes in Computer Science). Epub 2024 Nov 3. doi: 10.1007/978-3-031-72691-0_27

Author

Wang, Pengfei ; Hui, Xiaofei ; Wu, Jing et al. / SemTrack : Large-Scale Dataset for Semantic Tracking in the Wild. Computer Vision -- ECCV 2024: 18th European Conference, Milan, Italy, September 29–October 4, 2024, Proceedings, Part XXIV. editor / Aleš Leonardis ; Elisa Ricci ; Stefan Roth ; Olga Russakovsky ; Torsten Sattler ; Gül Varol. Cham : Springer, 2024. pp. 486-504 (Lecture Notes in Computer Science).

Bibtex

@inproceedings{9bbdb7cb4fea409599e36b9f37a86fd4,
title = "SemTrack: Large-Scale Dataset for Semantic Tracking in the Wild",
abstract = "Knowing merely where the target is located is not sufficient for many real-life scenarios. In contrast, capturing rich details about the tracked target via its semantic trajectory, i.e. who/what this target is interacting with and when, where, and how they are interacting over time, is especially crucial and beneficial for various applications (e.g., customer analytics, public safety). We term such tracking as Semantic Tracking and define it as tracking the target based on the user{\textquoteright}s input and then, most importantly, capturing the semantic trajectory of this target. Acquiring such information can have significant impacts on sales, public safety, etc. However, currently, there is no dataset for such comprehensive tracking of the target. To address this gap, we create SemTrack, a large and comprehensive dataset containing annotations of the target{\textquoteright}s semantic trajectory. The dataset contains 6.7 million frames from 6961 videos, covering a wide range of 52 different interaction classes with 115 different object classes spanning 10 different supercategories in 12 types of different scenes, including both indoor and outdoor environments. We also propose SemTracker, a simple and effective method, and incorporate a meta-learning approach to better handle the challenges of this task. Our dataset and code can be found at https://sutdcv.github.io/SemTrack.",
author = "Pengfei Wang and Xiaofei Hui and Jing Wu and Zile Yang and Ong, {Kian Eng} and Xinge Zhao and Beijia Lu and Dezhao Huang and Evan Ling and Weiling Chen and Ma, {Keng Teck} and Minhoe Hur and Jun Liu",
year = "2024",
month = dec,
day = "7",
doi = "10.1007/978-3-031-72691-0_27",
language = "English",
isbn = "9783031726910",
series = "Lecture Notes in Computer Science",
publisher = "Springer",
pages = "486--504",
editor = "Leonardis, {Ale{\v s} } and Elissa Ricci and Stefan Roth and Olga Russakovsky and Torsten Sattler and G{\"u}l Varol",
booktitle = "Computer Vision -- ECCV 2024",

}

RIS

TY - GEN

T1 - SemTrack

T2 - Large-Scale Dataset for Semantic Tracking in the Wild

AU - Wang, Pengfei

AU - Hui, Xiaofei

AU - Wu, Jing

AU - Yang, Zile

AU - Ong, Kian Eng

AU - Zhao, Xinge

AU - Lu, Beijia

AU - Huang, Dezhao

AU - Ling, Evan

AU - Chen, Weiling

AU - Ma, Keng Teck

AU - Hur, Minhoe

AU - Liu, Jun

PY - 2024/12/7

Y1 - 2024/12/7

N2 - Knowing merely where the target is located is not sufficient for many real-life scenarios. In contrast, capturing rich details about the tracked target via its semantic trajectory, i.e. who/what this target is interacting with and when, where, and how they are interacting over time, is especially crucial and beneficial for various applications (e.g., customer analytics, public safety). We term such tracking as Semantic Tracking and define it as tracking the target based on the user’s input and then, most importantly, capturing the semantic trajectory of this target. Acquiring such information can have significant impacts on sales, public safety, etc. However, currently, there is no dataset for such comprehensive tracking of the target. To address this gap, we create SemTrack, a large and comprehensive dataset containing annotations of the target’s semantic trajectory. The dataset contains 6.7 million frames from 6961 videos, covering a wide range of 52 different interaction classes with 115 different object classes spanning 10 different supercategories in 12 types of different scenes, including both indoor and outdoor environments. We also propose SemTracker, a simple and effective method, and incorporate a meta-learning approach to better handle the challenges of this task. Our dataset and code can be found at https://sutdcv.github.io/SemTrack.

AB - Knowing merely where the target is located is not sufficient for many real-life scenarios. In contrast, capturing rich details about the tracked target via its semantic trajectory, i.e. who/what this target is interacting with and when, where, and how they are interacting over time, is especially crucial and beneficial for various applications (e.g., customer analytics, public safety). We term such tracking as Semantic Tracking and define it as tracking the target based on the user’s input and then, most importantly, capturing the semantic trajectory of this target. Acquiring such information can have significant impacts on sales, public safety, etc. However, currently, there is no dataset for such comprehensive tracking of the target. To address this gap, we create SemTrack, a large and comprehensive dataset containing annotations of the target’s semantic trajectory. The dataset contains 6.7 million frames from 6961 videos, covering a wide range of 52 different interaction classes with 115 different object classes spanning 10 different supercategories in 12 types of different scenes, including both indoor and outdoor environments. We also propose SemTracker, a simple and effective method, and incorporate a meta-learning approach to better handle the challenges of this task. Our dataset and code can be found at https://sutdcv.github.io/SemTrack.

U2 - 10.1007/978-3-031-72691-0_27

DO - 10.1007/978-3-031-72691-0_27

M3 - Conference contribution/Paper

SN - 9783031726910

T3 - Lecture Notes in Computer Science

SP - 486

EP - 504

BT - Computer Vision -- ECCV 2024

A2 - Leonardis, Aleš

A2 - Ricci, Elisa

A2 - Roth, Stefan

A2 - Russakovsky, Olga

A2 - Sattler, Torsten

A2 - Varol, Gül

PB - Springer

CY - Cham

ER -