Research output: Contribution to conference - Without ISBN/ISSN › Conference paper › peer-review
TY - CONF
T1 - Deep-learning based precoding techniques for next-generation video compression
AU - Chadha, Aaron
AU - Bourtsoulatze, Eirina
AU - Giotsas, Vasileios
AU - Andreopoulos, Yiannis
AU - Grce, Sergio
PY - 2019/9/13
Y1 - 2019/9/13
N2 - Several research groups worldwide are currently investigating how deep learning may advance the state of the art in image and video coding. An open question is how to make deep neural networks work in conjunction with existing (and upcoming) video codecs, such as MPEG AVC/H.264, HEVC, VVC, Google VP9 and AOMedia AV1, as well as existing container and transport formats. Such compatibility is a crucial aspect, as the video content industry and hardware manufacturers are expected to remain committed to supporting these standards for the foreseeable future. We propose deep neural networks as precoding components for current and future codec ecosystems. In our current deployments for DASH/HLS adaptive streaming, this comprises downscaling neural networks. Precoding via deep learning allows for full compatibility with current and future codec and transport standards while providing significant savings. Our results with HD content show that a 23%-43% rate reduction is achieved under a range of state-of-the-art video codec implementations. The use of precoding can also lead to a significant reduction in encoding complexity, which is essential for the cloud deployment of complex encoders like AV1 and MPEG VVC. Therefore, beyond bitrate saving, deep-learning based precoding may reduce the cloud resources required for video transcoding and make cloud-based solutions competitive with or superior to state-of-the-art captive deployments.
AB - Several research groups worldwide are currently investigating how deep learning may advance the state of the art in image and video coding. An open question is how to make deep neural networks work in conjunction with existing (and upcoming) video codecs, such as MPEG AVC/H.264, HEVC, VVC, Google VP9 and AOMedia AV1, as well as existing container and transport formats. Such compatibility is a crucial aspect, as the video content industry and hardware manufacturers are expected to remain committed to supporting these standards for the foreseeable future. We propose deep neural networks as precoding components for current and future codec ecosystems. In our current deployments for DASH/HLS adaptive streaming, this comprises downscaling neural networks. Precoding via deep learning allows for full compatibility with current and future codec and transport standards while providing significant savings. Our results with HD content show that a 23%-43% rate reduction is achieved under a range of state-of-the-art video codec implementations. The use of precoding can also lead to a significant reduction in encoding complexity, which is essential for the cloud deployment of complex encoders like AV1 and MPEG VVC. Therefore, beyond bitrate saving, deep-learning based precoding may reduce the cloud resources required for video transcoding and make cloud-based solutions competitive with or superior to state-of-the-art captive deployments.
M3 - Conference paper
T2 - International Broadcasting Convention
Y2 - 13 September 2019 through 17 September 2019
ER -