
Electronic data

  • TextDrivenVideoAcceleration_TPAMI2022

    Rights statement: ©2022 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.

    Accepted author manuscript, 13 MB, PDF document

    Available under license: CC BY-NC: Creative Commons Attribution-NonCommercial 4.0 International License


Text-driven video acceleration: A weakly-supervised reinforcement learning method

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published
  • W.L.D.S. Ramos
  • M.M.D. Silva
  • E. Araujo
  • V. Moura
  • K.C. Martins de Oliveira
  • L. Soriano Marcolino
  • E. Nascimento
Journal publication date: 28/02/2023
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Issue number: 2
Volume: 45
Number of pages: 13
Pages (from-to): 2492-2504
Publication status: Published
Early online date: 7/03/22
Original language: English

Abstract

The growth of videos in our digital age and the users' limited time raise the demand for processing untrimmed videos to produce shorter versions conveying the same information. Despite the remarkable progress that summarization methods have made, most of them can only select a few frames or skims, creating visual gaps and breaking the video context. This paper presents a novel weakly-supervised methodology based on a reinforcement learning formulation to accelerate instructional videos using text. A novel joint reward function guides our agent to select which frames to remove and reduce the input video to a target length without creating gaps in the final video. We also propose the Extended Visually-guided Document Attention Network (VDAN+), which can generate a highly discriminative embedding space to represent both textual and visual data. Our experiments show that our method achieves the best performance in Precision, Recall, and F1 Score against the baselines while effectively controlling the video's output length.
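To illustrate the kind of formulation the abstract describes, the sketch below shows a hypothetical joint reward that trades off text-frame alignment against a target-length constraint, together with a toy greedy keep/drop loop. All names, the exact reward terms, and the embedding dimension are assumptions made for illustration; this is not the reward function or agent from the paper, which trains a reinforcement learning agent on VDAN+ embeddings with its own reward design.

```python
# Illustrative sketch only: a hypothetical joint reward combining
# (a) alignment between a candidate frame and a text embedding in a
#     shared space (e.g., embeddings such as those VDAN+ would produce), and
# (b) a penalty for drifting away from the target output length.
# The function name, terms, and weights are assumptions, not the paper's method.
import numpy as np

def joint_reward(frame_emb, text_emb, kept_so_far, step,
                 target_ratio, lambda_len=1.0):
    """Reward for keeping the current frame.

    frame_emb, text_emb : 1-D embeddings in a shared text-visual space.
    kept_so_far         : number of frames already kept.
    step                : index of the current frame (0-based).
    target_ratio        : desired output length / input length.
    lambda_len          : weight of the length-control term (assumed).
    """
    # Semantic term: cosine similarity between frame and text embeddings.
    sim = float(frame_emb @ text_emb /
                (np.linalg.norm(frame_emb) * np.linalg.norm(text_emb) + 1e-8))
    # Length term: penalize deviation of the running keep ratio from the target.
    current_ratio = (kept_so_far + 1) / (step + 1)
    length_penalty = lambda_len * abs(current_ratio - target_ratio)
    return sim - length_penalty

# Toy usage: a greedy agent that keeps a frame whenever the reward is positive.
rng = np.random.default_rng(0)
text_emb = rng.normal(size=128)          # stand-in for an encoded instruction text
kept = 0
for t in range(300):                     # 300 input frames
    frame_emb = rng.normal(size=128)     # stand-in for a frame embedding
    if joint_reward(frame_emb, text_emb, kept, t, target_ratio=0.25) > 0:
        kept += 1
print(f"kept {kept} of 300 frames")
```

In the paper this decision process is learned rather than greedy: the agent is trained so that frame removal both preserves text-relevant content and hits the requested output length, which the single scalar reward above only caricatures.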
