

Motion Adaptive Pose Estimation from Compressed Videos

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published
Publication date: 28/02/2022
Host publication: Proceedings - 2021 IEEE/CVF International Conference on Computer Vision, ICCV 2021
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 11699-11708
Number of pages: 10
ISBN (electronic): 9781665428125
ISBN (print): 9781665428132
Original language: English
Event: 18th IEEE/CVF International Conference on Computer Vision, ICCV 2021 - Virtual, Online, Canada
Duration: 11/10/2021 – 17/10/2021

Conference

Conference: 18th IEEE/CVF International Conference on Computer Vision, ICCV 2021
Country/Territory: Canada
City: Virtual, Online
Period: 11/10/21 – 17/10/21

Publication series

Name: Proceedings of the IEEE International Conference on Computer Vision
ISSN (Print): 1550-5499


Abstract

Human pose estimation from videos has many real-world applications. Existing methods apply models with a uniform computation profile to fully decoded frames, ignoring the freely available motion signals and motion-compensation residuals in the compressed stream. We propose a novel model, Motion Adaptive Pose Net, that exploits compressed streams to decode pose sequences from videos efficiently. The model incorporates a Motion Compensated ConvLSTM to propagate spatially aligned features, together with an adaptive gate that decides, based solely on the residual errors, whether computationally expensive features should be extracted from fully decoded frames to compensate for the motion-warped features. By leveraging these informative yet readily available signals from the compressed stream, Motion Adaptive Pose Net propagates latent features efficiently. Our model outperforms state-of-the-art models in pose-estimation accuracy on two widely used datasets with only around half the computational complexity.
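The gating idea in the abstract can be sketched in a few lines: warp the cached feature map using the motion vectors already present in the compressed stream, and fall back to the expensive frame-level extractor only when the residual energy is high. The integer-shift warp, the mean-absolute-residual gate, and all function names below are illustrative assumptions for this sketch, not the paper's actual implementation.

```python
import numpy as np

def warp_features(prev_feat, motion):
    # Crude stand-in for motion compensation: shift the feature map
    # by an integer motion vector (dy, dx).
    dy, dx = motion
    return np.roll(prev_feat, shift=(dy, dx), axis=(0, 1))

def adaptive_gate(residual, threshold):
    # Gate fires (i.e. requests full feature extraction) when the
    # mean absolute residual energy exceeds the threshold.
    return float(np.mean(np.abs(residual))) > threshold

def motion_adaptive_step(prev_feat, motion, residual, decoded_frame,
                         extract_fn, threshold=0.1):
    # One step of the motion-adaptive loop: warp cached features by the
    # stream's motion vector; run the expensive extractor only if the
    # residual-based gate fires. Returns (features, used_full_extraction).
    warped = warp_features(prev_feat, motion)
    if adaptive_gate(residual, threshold):
        return extract_fn(decoded_frame), True   # expensive path
    return warped, False                          # cheap path
```

For low-residual frames this loop skips the extractor entirely, which is where the roughly 2x computation saving reported in the abstract would come from.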

Bibliographic note

Publisher Copyright: © 2021 IEEE