
Text-driven Video Acceleration

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published

Standard

Text-driven Video Acceleration. / De Souza Ramos, Washington; Soriano Marcolino, Leandro; Nascimento, Erickson R.
37th Conference on Graphics, Patterns and Images (SIBGRAPI): Workshop of Theses and Dissertations (WTD). 2024. p. 35-41.


Harvard

De Souza Ramos, W, Soriano Marcolino, L & Nascimento, ER 2024, Text-driven Video Acceleration. in 37th Conference on Graphics, Patterns and Images (SIBGRAPI): Workshop of Theses and Dissertations (WTD). pp. 35-41, 2024 37th SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI), Manaus, Brazil, 30/09/24. https://doi.org/10.5753/sibgrapi.est.2024.31642

APA

De Souza Ramos, W., Soriano Marcolino, L., & Nascimento, E. R. (2024). Text-driven Video Acceleration. In 37th Conference on Graphics, Patterns and Images (SIBGRAPI): Workshop of Theses and Dissertations (WTD) (pp. 35-41) https://doi.org/10.5753/sibgrapi.est.2024.31642

Vancouver

De Souza Ramos W, Soriano Marcolino L, Nascimento ER. Text-driven Video Acceleration. In 37th Conference on Graphics, Patterns and Images (SIBGRAPI): Workshop of Theses and Dissertations (WTD). 2024. p. 35-41 doi: 10.5753/sibgrapi.est.2024.31642

Author

De Souza Ramos, Washington ; Soriano Marcolino, Leandro ; Nascimento, Erickson R. / Text-driven Video Acceleration. 37th Conference on Graphics, Patterns and Images (SIBGRAPI): Workshop of Theses and Dissertations (WTD). 2024. pp. 35-41

Bibtex

@inproceedings{23f92ed899e34924ab24d71b4e47aaa5,
title = "Text-driven Video Acceleration",
abstract = "From the dawn of the digital revolution until today, data has grown exponentially, especially in images and videos. Smartphones and wearable devices with high storage and long battery life contribute to continuous recording and massive uploads to social media. This rapid increase in visual data, combined with users' limited time, demands methods to produce shorter videos that convey the same information. Semantic Fast-Forwarding reduces viewing time by adaptively accelerating videos and slowing down for relevant segments. However, current methods require predefined visual concepts or user supervision, which is costly and time-consuming. This work explores using textual data to create text-driven fast-forwarding methods that generate semantically meaningful videos without explicit user input. Our proposed approaches outperform baselines, achieving F1 Score improvements of up to 12.8 percentage points over the best competitors. Comprehensive user and ablation studies, along with quantitative and qualitative evaluations, confirm their superiority. Visual results are available at https://youtu.be/cOYqumJQOY and https://youtu.be/u6ODTv7-9C4.",
author = "{De Souza Ramos}, Washington and {Soriano Marcolino}, Leandro and Nascimento, {Erickson R.}",
year = "2024",
month = sep,
day = "30",
doi = "10.5753/sibgrapi.est.2024.31642",
language = "English",
pages = "35--41",
booktitle = "37th Conference on Graphics, Patterns and Images (SIBGRAPI)",
note = "2024 37th SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI) ; Conference date: 30-09-2024 Through 03-10-2024",
url = "https://sibgrapi.sbc.org.br/2024/",
}
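The abstract describes semantic fast-forwarding as adaptively accelerating a video while slowing down for relevant segments. As a rough illustration only, the idea can be sketched as frame sampling driven by per-frame relevance scores; the function name, scoring scheme, and skip mapping below are hypothetical and are not the paper's actual method.

```python
def fast_forward(num_frames, relevance, min_skip=1, max_skip=8):
    """Return indices of frames to keep.

    Frames in high-relevance regions are sampled densely (slow playback);
    low-relevance regions are skipped aggressively (fast playback).
    relevance: per-frame scores in [0, 1], one per frame.
    """
    kept = []
    i = 0
    while i < num_frames:
        kept.append(i)
        # Map relevance 1.0 -> min_skip (slow), 0.0 -> max_skip (fast).
        skip = round(max_skip - (max_skip - min_skip) * relevance[i])
        i += max(skip, 1)
    return kept

# Example: a 20-frame clip whose frames 7-12 are highly relevant.
scores = [0.1] * 7 + [1.0] * 6 + [0.1] * 7
print(fast_forward(20, scores))  # -> [0, 7, 8, 9, 10, 11, 12, 13]
```

The relevant middle segment is kept almost frame-by-frame, while the surrounding low-relevance frames are traversed in large jumps, which is the qualitative behavior the abstract attributes to semantic fast-forwarding.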

RIS

TY - GEN
T1 - Text-driven Video Acceleration
AU - De Souza Ramos, Washington
AU - Soriano Marcolino, Leandro
AU - Nascimento, Erickson R.
PY - 2024/9/30
Y1 - 2024/9/30
N2 - From the dawn of the digital revolution until today, data has grown exponentially, especially in images and videos. Smartphones and wearable devices with high storage and long battery life contribute to continuous recording and massive uploads to social media. This rapid increase in visual data, combined with users' limited time, demands methods to produce shorter videos that convey the same information. Semantic Fast-Forwarding reduces viewing time by adaptively accelerating videos and slowing down for relevant segments. However, current methods require predefined visual concepts or user supervision, which is costly and time-consuming. This work explores using textual data to create text-driven fast-forwarding methods that generate semantically meaningful videos without explicit user input. Our proposed approaches outperform baselines, achieving F1 Score improvements of up to 12.8 percentage points over the best competitors. Comprehensive user and ablation studies, along with quantitative and qualitative evaluations, confirm their superiority. Visual results are available at https://youtu.be/cOYqumJQOY and https://youtu.be/u6ODTv7-9C4.
AB - From the dawn of the digital revolution until today, data has grown exponentially, especially in images and videos. Smartphones and wearable devices with high storage and long battery life contribute to continuous recording and massive uploads to social media. This rapid increase in visual data, combined with users' limited time, demands methods to produce shorter videos that convey the same information. Semantic Fast-Forwarding reduces viewing time by adaptively accelerating videos and slowing down for relevant segments. However, current methods require predefined visual concepts or user supervision, which is costly and time-consuming. This work explores using textual data to create text-driven fast-forwarding methods that generate semantically meaningful videos without explicit user input. Our proposed approaches outperform baselines, achieving F1 Score improvements of up to 12.8 percentage points over the best competitors. Comprehensive user and ablation studies, along with quantitative and qualitative evaluations, confirm their superiority. Visual results are available at https://youtu.be/cOYqumJQOY and https://youtu.be/u6ODTv7-9C4.
U2 - 10.5753/sibgrapi.est.2024.31642
DO - 10.5753/sibgrapi.est.2024.31642
M3 - Conference contribution/Paper
SP - 35
EP - 41
BT - 37th Conference on Graphics, Patterns and Images (SIBGRAPI)
T2 - 2024 37th SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI)
Y2 - 30 September 2024 through 3 October 2024
ER -