
Electronic data

  • 2020_cvpr_ramos

    Rights statement: ©2020 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.

    Accepted author manuscript, 9.91 MB, PDF document

    Available under license: CC BY-NC: Creative Commons Attribution-NonCommercial 4.0 International License

Links

Text available via DOI: https://doi.org/10.1109/CVPR42600.2020.01094


Straight to the Point: Fast-forwarding Videos via Reinforcement Learning Using Textual Data

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published

Standard

Straight to the Point: Fast-forwarding Videos via Reinforcement Learning Using Textual Data. / De Souza Ramos, Washington; Silva, Michel M.; Araujo, Edson R. et al.
Proceedings of the 2020 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2020. IEEE, 2020. p. 10928-10937.

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Harvard

De Souza Ramos, W, Silva, MM, Araujo, ER, Soriano Marcolino, L & Nascimento, ER 2020, Straight to the Point: Fast-forwarding Videos via Reinforcement Learning Using Textual Data. in Proceedings of the 2020 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2020. IEEE, pp. 10928-10937. https://doi.org/10.1109/CVPR42600.2020.01094

APA

De Souza Ramos, W., Silva, M. M., Araujo, E. R., Soriano Marcolino, L., & Nascimento, E. R. (2020). Straight to the Point: Fast-forwarding Videos via Reinforcement Learning Using Textual Data. In Proceedings of the 2020 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2020 (pp. 10928-10937). IEEE. https://doi.org/10.1109/CVPR42600.2020.01094

Vancouver

De Souza Ramos W, Silva MM, Araujo ER, Soriano Marcolino L, Nascimento ER. Straight to the Point: Fast-forwarding Videos via Reinforcement Learning Using Textual Data. In Proceedings of the 2020 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2020. IEEE. 2020. p. 10928-10937. doi: 10.1109/CVPR42600.2020.01094

Author

De Souza Ramos, Washington ; Silva, Michel M. ; Araujo, Edson R. et al. / Straight to the Point: Fast-forwarding Videos via Reinforcement Learning Using Textual Data. Proceedings of the 2020 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2020. IEEE, 2020. pp. 10928-10937

Bibtex

@inproceedings{8f40d54897a64e8aa2444af79f13014d,
title = "Straight to the Point: Fast-forwarding Videos via Reinforcement Learning Using Textual Data",
abstract = "The rapid increase in the amount of published visual data and the limited time of users bring the demand for processing untrimmed videos to produce shorter versions that convey the same information. Despite the remarkable progress that has been made by summarization methods, most of them can only select a few frames or skims, which creates visual gaps and breaks the video context. In this paper, we present a novel methodology based on a reinforcement learning formulation to accelerate instructional videos. Our approach can adaptively select frames that are not relevant to convey the information without creating gaps in the final video. Our agent is textually and visually oriented to select which frames to remove to shrink the input video. Additionally, we propose a novel network, called Visually-guided Document Attention Network (VDAN), able to generate a highly discriminative embedding space to represent both textual and visual data. Our experiments show that our method achieves the best performance in terms of F1 Score and coverage at the video segment level.",
author = "{De Souza Ramos}, Washington and Silva, {Michel M.} and Araujo, {Edson R.} and {Soriano Marcolino}, Leandro and Nascimento, {Erickson R.}",
note = "{\textcopyright}2020 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE. ",
year = "2020",
month = aug,
day = "5",
doi = "10.1109/CVPR42600.2020.01094",
language = "English",
pages = "10928--10937",
booktitle = "Proceedings of the 2020 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2020",
publisher = "IEEE",

}
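
The abstract describes an agent, oriented by both textual and visual signals, that decides frame by frame which parts of the video to drop so the accelerated output stays gap-free. The sketch below is an illustration only, not the authors' implementation: it shows the general shape of such a keep/drop policy trained with REINFORCE, where the embedding dimensions, the relevance reward, and the length penalty are all assumptions made for the example.

import torch
import torch.nn as nn

class SkipPolicy(nn.Module):
    """Tiny keep/drop policy over concatenated (visual, textual) embeddings."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * dim, 64), nn.ReLU(),
            nn.Linear(64, 2),          # logits for {drop, keep}
        )

    def forward(self, frame_emb, text_emb):
        return self.net(torch.cat([frame_emb, text_emb], dim=-1))

dim, n_frames = 128, 200
policy = SkipPolicy(dim)
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
frames = torch.randn(n_frames, dim)    # stand-in per-frame visual embeddings
text = torch.randn(dim)                # stand-in document (textual) embedding

for episode in range(50):
    logits = policy(frames, text.expand_as(frames))
    dist = torch.distributions.Categorical(logits=logits)
    actions = dist.sample()            # per frame: 1 = keep, 0 = drop
    kept = actions.bool()
    sims = torch.cosine_similarity(frames, text.expand_as(frames), dim=-1)
    # Assumed reward: mean relevance of the kept frames, minus a penalty on
    # the fraction kept so the agent actually shortens the video.
    relevance = sims[kept].mean() if kept.any() else torch.tensor(-1.0)
    reward = (relevance - 0.5 * kept.float().mean()).detach()
    loss = -dist.log_prob(actions).sum() * reward   # REINFORCE update
    opt.zero_grad(); loss.backward(); opt.step()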

RIS

TY - GEN

T1 - Straight to the Point

T2 - Fast-forwarding Videos via Reinforcement Learning Using Textual Data

AU - De Souza Ramos, Washington

AU - Silva, Michel M.

AU - Araujo, Edson R.

AU - Soriano Marcolino, Leandro

AU - Nascimento, Erickson R.

N1 - ©2020 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.

PY - 2020/8/5

Y1 - 2020/8/5

N2 - The rapid increase in the amount of published visual data and the limited time of users bring the demand for processing untrimmed videos to produce shorter versions that convey the same information. Despite the remarkable progress that has been made by summarization methods, most of them can only select a few frames or skims, which creates visual gaps and breaks the video context. In this paper, we present a novel methodology based on a reinforcement learning formulation to accelerate instructional videos. Our approach can adaptively select frames that are not relevant to convey the information without creating gaps in the final video. Our agent is textually and visually oriented to select which frames to remove to shrink the input video. Additionally, we propose a novel network, called Visually-guided Document Attention Network (VDAN), able to generate a highly discriminative embedding space to represent both textual and visual data. Our experiments show that our method achieves the best performance in terms of F1 Score and coverage at the video segment level.

AB - The rapid increase in the amount of published visual data and the limited time of users bring the demand for processing untrimmed videos to produce shorter versions that convey the same information. Despite the remarkable progress that has been made by summarization methods, most of them can only select a few frames or skims, which creates visual gaps and breaks the video context. In this paper, we present a novel methodology based on a reinforcement learning formulation to accelerate instructional videos. Our approach can adaptively select frames that are not relevant to convey the information without creating gaps in the final video. Our agent is textually and visually oriented to select which frames to remove to shrink the input video. Additionally, we propose a novel network, called Visually-guided Document Attention Network (VDAN), able to generate a highly discriminative embedding space to represent both textual and visual data. Our experiments show that our method achieves the best performance in terms of F1 Score and coverage at the video segment level.

U2 - 10.1109/CVPR42600.2020.01094

DO - 10.1109/CVPR42600.2020.01094

M3 - Conference contribution/Paper

SP - 10928

EP - 10937

BT - Proceedings of the 2020 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2020

PB - IEEE

ER -
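
The abstract also introduces VDAN, a Visually-guided Document Attention Network that maps textual and visual data into one highly discriminative embedding space. The snippet below is a minimal joint-embedding sketch in that spirit only: the actual VDAN applies visually-guided attention over words and sentences, whereas here two plain linear branches and a contrastive loss stand in, and every dimension and loss choice is an assumption for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class JointEmbedding(nn.Module):
    """Two linear branches projecting image and text features into one space."""
    def __init__(self, img_dim=2048, txt_dim=300, emb_dim=128):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, emb_dim)   # e.g. CNN global features
        self.txt_proj = nn.Linear(txt_dim, emb_dim)   # e.g. pooled word vectors

    def forward(self, img_feat, txt_feat):
        # L2-normalise so cosine similarity reduces to a dot product.
        return (F.normalize(self.img_proj(img_feat), dim=-1),
                F.normalize(self.txt_proj(txt_feat), dim=-1))

model = JointEmbedding()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
img = torch.randn(32, 2048)   # stand-in batch of frame features
txt = torch.randn(32, 300)    # stand-in batch of matching document features

for step in range(100):
    v, t = model(img, txt)
    # Contrastive objective: each frame should score highest against its own
    # document within the batch, pulling matched pairs together in the space.
    logits = v @ t.T / 0.07
    labels = torch.arange(v.size(0))
    loss = F.cross_entropy(logits, labels)
    opt.zero_grad(); loss.backward(); opt.step()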