
Deep-learning based precoding techniques for next-generation video compression

Research output: Contribution to conference - Without ISBN/ISSN › Conference paper › peer-review

Published

Standard

Deep-learning based precoding techniques for next-generation video compression. / Chadha, Aaron; Bourtsoulatze, Eirina; Giotsas, Vasileios et al.
2019. Paper presented at International Broadcasting Convention, Amsterdam, Netherlands.


Harvard

Chadha, A, Bourtsoulatze, E, Giotsas, V, Andreopoulos, Y & Grce, S 2019, 'Deep-learning based precoding techniques for next-generation video compression', Paper presented at International Broadcasting Convention, Amsterdam, Netherlands, 13/09/19 - 17/09/19.

APA

Chadha, A., Bourtsoulatze, E., Giotsas, V., Andreopoulos, Y., & Grce, S. (2019). Deep-learning based precoding techniques for next-generation video compression. Paper presented at International Broadcasting Convention, Amsterdam, Netherlands.

Vancouver

Chadha A, Bourtsoulatze E, Giotsas V, Andreopoulos Y, Grce S. Deep-learning based precoding techniques for next-generation video compression. 2019. Paper presented at International Broadcasting Convention, Amsterdam, Netherlands.

Author

Chadha, Aaron ; Bourtsoulatze, Eirina ; Giotsas, Vasileios et al. / Deep-learning based precoding techniques for next-generation video compression. 2019. Paper presented at International Broadcasting Convention, Amsterdam, Netherlands.

BibTeX

@conference{f20b62f349cc40a283f7332a97d9a8d9,
title = "Deep-learning based precoding techniques for next-generation video compression",
abstract = "Several research groups worldwide are currently investigating how deep learning may advance the state-of-the-art in image and video coding. An open question is how to make deep neural networks work in conjunction with existing (and upcoming) video codecs, such as MPEG AVC/H.264, HEVC, VVC, Google VP9 and AOMedia AV1, as well as existing container and transport formats. Such compatibility is a crucial aspect, as the video content industry and hardware manufacturers are expected to remain committed to supporting these standards for the foreseeable future. We propose deep neural networks as precoding components for current and future codec ecosystems. In our current deployments for DASH/HLS adaptive streaming, this comprises downscaling neural networks. Precoding via deep learning allows for full compatibility with current and future codec and transport standards while providing significant savings. Our results with HD content show 23%-43% rate reduction under a range of state-of-the-art video codec implementations. The use of precoding can also lead to significant encoding complexity reduction, which is essential for the cloud deployment of complex encoders like AV1 and MPEG VVC. Therefore, beyond bitrate saving, deep-learning based precoding may reduce the required cloud resources for video transcoding and make cloud-based solutions competitive or superior to state-of-the-art captive deployments.",
author = "Aaron Chadha and Eirina Bourtsoulatze and Vasileios Giotsas and Yiannis Andreopoulos and Sergio Grce",
year = "2019",
month = sep,
day = "13",
language = "English",
note = "International Broadcasting Convention, IBC 2019 ; Conference date: 13-09-2019 Through 17-09-2019",

}
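
Illustrative sketch (not part of the published record): the abstract describes "downscaling neural networks" used as precoders ahead of unmodified standard encoders. The short PyTorch snippet below is only a minimal, hedged illustration of that general idea; the network layout, 2x scale factor and tensor shapes are assumptions for exposition and do not reproduce the authors' model.

# Minimal sketch (assumed architecture, not the authors' implementation):
# a learned downscaling "precoder" applied to HD frames before a
# conventional encoder compresses the lower-resolution output as-is.
import torch
import torch.nn as nn

class DownscalingPrecoder(nn.Module):
    """Toy CNN mapping an HD frame to a half-resolution representation."""
    def __init__(self, channels=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            # strided convolution performs the 2x spatial downscaling
            nn.Conv2d(channels, 3, 3, stride=2, padding=1),
        )

    def forward(self, frame):                    # frame: (N, 3, H, W) in [0, 1]
        return torch.clamp(self.body(frame), 0.0, 1.0)

precoder = DownscalingPrecoder().eval()
with torch.no_grad():
    hd_frame = torch.rand(1, 3, 1080, 1920)      # dummy 1080p frame
    low_res = precoder(hd_frame)                 # shape: (1, 3, 540, 960)
# The precoded frames are ordinary lower-resolution video, so an unmodified
# AVC/HEVC/VP9/AV1/VVC encoder and standard DASH/HLS packaging can be used
# downstream, which is the compatibility property the abstract emphasises.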

RIS

TY - CONF

T1 - Deep-learning based precoding techniques for next-generation video compression

AU - Chadha, Aaron

AU - Bourtsoulatze, Eirina

AU - Giotsas, Vasileios

AU - Andreopoulos, Yiannis

AU - Grce, Sergio

PY - 2019/9/13

Y1 - 2019/9/13

N2 - Several research groups worldwide are currently investigating how deep learning may advance the state-of-the-art in image and video coding. An open question is how to make deep neural networks work in conjunction with existing (and upcoming) video codecs, such as MPEG AVC/H.264, HEVC, VVC, Google VP9 and AOMedia AV1, as well as existing container and transport formats. Such compatibility is a crucial aspect, as the video content industry and hardware manufacturers are expected to remain committed to supporting these standards for the foreseeable future. We propose deep neural networks as precoding components for current and future codec ecosystems. In our current deployments for DASH/HLS adaptive streaming, this comprises downscaling neural networks. Precoding via deep learning allows for full compatibility with current and future codec and transport standards while providing significant savings. Our results with HD content show 23%-43% rate reduction under a range of state-of-the-art video codec implementations. The use of precoding can also lead to significant encoding complexity reduction, which is essential for the cloud deployment of complex encoders like AV1 and MPEG VVC. Therefore, beyond bitrate saving, deep-learning based precoding may reduce the required cloud resources for video transcoding and make cloud-based solutions competitive or superior to state-of-the-art captive deployments.

AB - Several research groups worldwide are currently investigating how deep learning may advance the state-of-the-art in image and video coding. An open question is how to make deep neural networks work in conjunction with existing (and upcoming) video codecs, such as MPEG AVC/H.264, HEVC, VVC, Google VP9 and AOMedia AV1, as well as existing container and transport formats. Such compatibility is a crucial aspect, as the video content industry and hardware manufacturers are expected to remain committed to supporting these standards for the foreseeable future. We propose deep neural networks as precoding components for current and future codec ecosystems. In our current deployments for DASH/HLS adaptive streaming, this comprises downscaling neural networks. Precoding via deep learning allows for full compatibility with current and future codec and transport standards while providing significant savings. Our results with HD content show 23%-43% rate reduction under a range of state-of-the-art video codec implementations. The use of precoding can also lead to significant encoding complexity reduction, which is essential for the cloud deployment of complex encoders like AV1 and MPEG VVC. Therefore, beyond bitrate saving, deep-learning based precoding may reduce the required cloud resources for video transcoding and make cloud-based solutions competitive or superior to state-of-the-art captive deployments.

M3 - Conference paper

T2 - International Broadcasting Convention

Y2 - 13 September 2019 through 17 September 2019

ER -