Deep learning has the potential to advance the state of the art in image and video coding. An open question is how to make deep neural networks work in conjunction with existing (and upcoming) video codecs, such as MPEG AVC/H.264, HEVC, VVC, Google VP9 and AOMedia AV1, as well as existing container and transport formats. Such compatibility is crucial, as the video content industry and hardware manufacturers are expected to remain committed to these standards for the foreseeable future. This project investigates deep neural networks as precoding components for the current and future codec ecosystem, with a focus on DASH/HLS adaptive streaming. Precoding via deep learning retains full compatibility with current and future codec and transport standards while providing significant bitrate savings.
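
The sketch below illustrates the precoding idea under stated assumptions: it uses PyTorch and an ffmpeg binary on the PATH, and the `Precoder` network with its layer sizes is a hypothetical illustration rather than the project's actual architecture. The key point is that the learned component only transforms the frames handed to a stock H.264 encoder, so the resulting bitstream remains fully standards-compliant and can be packaged for DASH/HLS without any changes downstream.

```python
# Minimal sketch of neural precoding ahead of a standard codec.
# Assumptions: PyTorch is installed and ffmpeg (with libx264) is on the PATH.
# The "Precoder" module below is hypothetical and for illustration only.
import subprocess

import torch
import torch.nn as nn


class Precoder(nn.Module):
    """Hypothetical learned downscaler applied before a standard encoder."""

    def __init__(self, scale: int = 2):
        super().__init__()
        # Small residual CNN followed by average pooling: produces a
        # content-adaptive, lower-resolution version intended to compress better.
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 3, kernel_size=3, padding=1),
        )
        self.pool = nn.AvgPool2d(scale)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (N, 3, H, W) in [0, 1]
        return self.pool(frames + self.features(frames)).clamp(0.0, 1.0)


def encode_with_standard_codec(raw_yuv_path: str, width: int, height: int,
                               out_path: str, bitrate: str = "1500k") -> None:
    """Encode precoded raw frames with a stock H.264 encoder (libx264),
    so downstream DASH/HLS packaging needs no changes."""
    subprocess.run(
        ["ffmpeg", "-y",
         "-f", "rawvideo", "-pix_fmt", "yuv420p",
         "-s", f"{width}x{height}",
         "-i", raw_yuv_path,
         "-c:v", "libx264", "-b:v", bitrate,
         out_path],
        check=True,
    )
```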