
Text available via DOI: https://doi.org/10.1109/CVPR46437.2021.00279

Interventional Video Grounding with Dual Contrastive Learning

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published

Standard

Interventional Video Grounding with Dual Contrastive Learning. / Nan, Guoshun; Qiao, Rui; Xiao, Yao et al.
Proceedings - 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2021. IEEE Computer Society Press, 2021. p. 2764-2774 (Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition).


Harvard

Nan, G, Qiao, R, Xiao, Y, Liu, J, Leng, S, Zhang, H & Lu, W 2021, Interventional Video Grounding with Dual Contrastive Learning. in Proceedings - 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2021. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, IEEE Computer Society Press, pp. 2764-2774, 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2021, Virtual, Online, United States, 19/06/21. https://doi.org/10.1109/CVPR46437.2021.00279

APA

Nan, G., Qiao, R., Xiao, Y., Liu, J., Leng, S., Zhang, H., & Lu, W. (2021). Interventional Video Grounding with Dual Contrastive Learning. In Proceedings - 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2021 (pp. 2764-2774). (Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition). IEEE Computer Society Press. https://doi.org/10.1109/CVPR46437.2021.00279

Vancouver

Nan G, Qiao R, Xiao Y, Liu J, Leng S, Zhang H et al. Interventional Video Grounding with Dual Contrastive Learning. In Proceedings - 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2021. IEEE Computer Society Press. 2021. p. 2764-2774. (Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition). Epub 2021 Jun 21. doi: 10.1109/CVPR46437.2021.00279

Author

Nan, Guoshun ; Qiao, Rui ; Xiao, Yao et al. / Interventional Video Grounding with Dual Contrastive Learning. Proceedings - 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2021. IEEE Computer Society Press, 2021. pp. 2764-2774 (Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition).

BibTeX

@inproceedings{575a67a253b84d328d49580d22b04475,
title = "Interventional Video Grounding with Dual Contrastive Learning",
abstract = "Video grounding aims to localize a moment from an untrimmed video for a given textual query. Existing approaches focus more on the alignment of visual and language stimuli with various likelihood-based matching or regression strategies, i.e., P(Y|X). Consequently, these models may suffer from spurious correlations between the language and video features due to the selection bias of the dataset. 1) To uncover the causality behind the model and data, we first propose a novel paradigm from the perspective of causal inference, i.e., interventional video grounding (IVG), which leverages backdoor adjustment to deconfound the selection bias based on a structured causal model (SCM) and the do-calculus P(Y|do(X)). We then present a simple yet effective method to approximate the unobserved confounder, as it cannot be directly sampled from the dataset. 2) Meanwhile, we introduce a dual contrastive learning approach (DCL) to better align the text and video by maximizing the mutual information (MI) between the query and video clips, and the MI between the start/end frames of a target moment and the other frames within a video, to learn more informative visual representations. Experiments on three standard benchmarks show the effectiveness of our approaches.",
author = "Guoshun Nan and Rui Qiao and Yao Xiao and Jun Liu and Sicong Leng and Hao Zhang and Wei Lu",
note = "Publisher Copyright: {\textcopyright} 2021 IEEE; 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2021 ; Conference date: 19-06-2021 Through 25-06-2021",
year = "2021",
month = nov,
day = "2",
doi = "10.1109/CVPR46437.2021.00279",
language = "English",
isbn = "9781665445108",
series = "Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition",
publisher = "IEEE Computer Society Press",
pages = "2764--2774",
booktitle = "Proceedings - 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2021",

}
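
For context on the causal framing in the abstract: the backdoor adjustment it refers to is the standard do-calculus identity below, written as a minimal LaTeX sketch. Here Z stands for the confounder that the paper approximates; this is textbook material, not the paper's specific estimator.

\[
P(Y \mid \mathrm{do}(X)) \;=\; \sum_{z} P(Y \mid X, Z = z)\, P(Z = z)
\]

Contrast this with the confounded likelihood \( P(Y \mid X) = \sum_{z} P(Y \mid X, Z = z)\, P(Z = z \mid X) \): because the confounder's distribution there depends on X, a model trained on P(Y|X) can absorb exactly the spurious query-video correlations the abstract describes.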

RIS

TY - GEN

T1 - Interventional Video Grounding with Dual Contrastive Learning

AU - Nan, Guoshun

AU - Qiao, Rui

AU - Xiao, Yao

AU - Liu, Jun

AU - Leng, Sicong

AU - Zhang, Hao

AU - Lu, Wei

N1 - Publisher Copyright: © 2021 IEEE

PY - 2021/11/2

Y1 - 2021/11/2

N2 - Video grounding aims to localize a moment from an untrimmed video for a given textual query. Existing approaches focus more on the alignment of visual and language stimuli with various likelihood-based matching or regression strategies, i.e., P(Y|X). Consequently, these models may suffer from spurious correlations between the language and video features due to the selection bias of the dataset. 1) To uncover the causality behind the model and data, we first propose a novel paradigm from the perspective of causal inference, i.e., interventional video grounding (IVG), which leverages backdoor adjustment to deconfound the selection bias based on a structured causal model (SCM) and the do-calculus P(Y|do(X)). We then present a simple yet effective method to approximate the unobserved confounder, as it cannot be directly sampled from the dataset. 2) Meanwhile, we introduce a dual contrastive learning approach (DCL) to better align the text and video by maximizing the mutual information (MI) between the query and video clips, and the MI between the start/end frames of a target moment and the other frames within a video, to learn more informative visual representations. Experiments on three standard benchmarks show the effectiveness of our approaches.

AB - Video grounding aims to localize a moment from an untrimmed video for a given textual query. Existing approaches focus more on the alignment of visual and language stimuli with various likelihood-based matching or regression strategies, i.e., P(Y|X). Consequently, these models may suffer from spurious correlations between the language and video features due to the selection bias of the dataset. 1) To uncover the causality behind the model and data, we first propose a novel paradigm from the perspective of causal inference, i.e., interventional video grounding (IVG), which leverages backdoor adjustment to deconfound the selection bias based on a structured causal model (SCM) and the do-calculus P(Y|do(X)). We then present a simple yet effective method to approximate the unobserved confounder, as it cannot be directly sampled from the dataset. 2) Meanwhile, we introduce a dual contrastive learning approach (DCL) to better align the text and video by maximizing the mutual information (MI) between the query and video clips, and the MI between the start/end frames of a target moment and the other frames within a video, to learn more informative visual representations. Experiments on three standard benchmarks show the effectiveness of our approaches.

U2 - 10.1109/CVPR46437.2021.00279

DO - 10.1109/CVPR46437.2021.00279

M3 - Conference contribution/Paper

AN - SCOPUS:85108272164

SN - 9781665445108

T3 - Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition

SP - 2764

EP - 2774

BT - Proceedings - 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2021

PB - IEEE Computer Society Press

T2 - 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2021

Y2 - 19 June 2021 through 25 June 2021

ER -
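
The dual contrastive learning objective in the abstract maximizes mutual information, which in practice is done with lower bounds of the InfoNCE family. The sketch below is a generic InfoNCE-style loss for query-clip alignment under assumed conventions: the function name info_nce_loss, the temperature value, and the tensor shapes are illustrative, and this is not the paper's exact formulation of its two contrastive objectives.

import torch
import torch.nn.functional as F

def info_nce_loss(query: torch.Tensor, clips: torch.Tensor,
                  pos_index: int, temperature: float = 0.1) -> torch.Tensor:
    # InfoNCE: a lower bound on MI(query; clip). The clip at pos_index
    # is the positive; every other clip acts as a negative.
    q = F.normalize(query, dim=-1)    # (d,)
    c = F.normalize(clips, dim=-1)    # (n, d)
    logits = (c @ q) / temperature    # (n,) scaled cosine similarities
    target = torch.tensor([pos_index])
    return F.cross_entropy(logits.unsqueeze(0), target)

# Illustrative usage: align one query embedding with its matching clip
# among 100 candidates (random stand-ins for learned embeddings).
query = torch.randn(256, requires_grad=True)
clips = torch.randn(100, 256, requires_grad=True)
loss = info_nce_loss(query, clips, pos_index=42)
loss.backward()

The abstract's second objective, MI between a moment's start/end frames and the other frames of the same video, can be instantiated with the same loss shape by treating a boundary-frame embedding as the query and the remaining frame embeddings as the candidates.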