
Electronic data

  • A Congestion Control Framework based on In-Network Resource Pooling

    Accepted author manuscript, 3.4 MB, PDF document

    Available under license: CC BY-NC: Creative Commons Attribution-NonCommercial 4.0 International License

Links

Text available via DOI:


A Congestion Control Framework Based on In-Network Resource Pooling

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published
Journal publication date: 30/04/2022
Journal: IEEE/ACM Transactions on Networking
Issue number: 2
Volume: 30
Number of pages: 15
Pages (from-to): 683-697
Publication status: Published
Early online date: 30/11/2021
Original language: English

Abstract

Congestion control has traditionally relied on monitoring packet-level performance (e.g., latency, loss) through feedback signals propagating end-to-end, together with various queue management practices (e.g., carefully setting various parameters, such as router buffer thresholds) in order to regulate traffic flow. Due to its end-to-end nature, this approach is known to transfer data according to the path's slowest link, requiring several RTTs to transmit even a few tens of KB during slow start. In this paper, we take a radically different approach to control congestion, which obviates end-to-end performance monitoring and the careful setting of network parameters. The resulting In-Network Resource Pooling Protocol (INRPP) extends the resource pooling principle to exploit in-network resources such as router storage and unused bandwidth along alternative sub-paths. In INRPP, content caches or large (possibly bloated) router buffers are used as a place of temporary custody for incoming data packets in a store-and-forward manner. Data senders push data into the network and, when it hits the bottleneck link, in-network caches at every hop store data in excess of the link capacity; nodes progressively move data (from one cache to the next) towards the destination. At the same time, alternative sub-paths are exploited to move data faster towards the destination. We demonstrate through extensive simulations that INRPP is TCP-friendly, and improves flow completion time and fairness by as much as 50% compared to RCP, MPTCP and TCP, under realistic network conditions.
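
The hop-by-hop custody idea sketched in the abstract can be illustrated with a minimal simulation (not the authors' implementation; the node names, capacities and the simple detour rule below are illustrative assumptions): each node forwards at most its downstream link capacity per time step, offloads part of the backlog onto an alternative sub-path when one exists, and keeps the remaining excess in its local cache.

```python
# Illustrative sketch of hop-by-hop store-and-forward custody with detour
# sub-paths, loosely following the INRPP idea described in the abstract.
# All names, capacities and the detour rule are assumptions for illustration,
# not the paper's actual algorithm or parameters.

from dataclasses import dataclass


@dataclass
class Node:
    name: str
    downstream_capacity: int   # packets forwardable on the main link per step
    detour_capacity: int = 0   # spare capacity on an alternative sub-path
    cache: int = 0             # packets held in temporary custody


def step(path, arriving):
    """Advance the chain of nodes by one time step.

    `arriving` is the number of packets the sender pushes into the first hop.
    Each node forwards up to its main-link capacity, diverts part of the
    excess onto its detour (if any), and caches the remainder locally.
    In this simplified sketch the detour output simply rejoins the main
    output at the next hop. Returns packets delivered to the destination.
    """
    incoming = arriving
    for node in path:
        backlog = node.cache + incoming
        sent_main = min(backlog, node.downstream_capacity)
        backlog -= sent_main
        sent_detour = min(backlog, node.detour_capacity)
        backlog -= sent_detour
        node.cache = backlog                 # excess stays in custody here
        incoming = sent_main + sent_detour   # handed to the next hop
    return incoming                          # last hop delivers to destination


if __name__ == "__main__":
    # A 3-hop path whose middle link is the bottleneck; the bottleneck node
    # has an alternative sub-path with a little spare capacity.
    path = [Node("A", downstream_capacity=10),
            Node("B", downstream_capacity=4, detour_capacity=3),
            Node("C", downstream_capacity=10)]
    total = 0
    for t in range(10):
        total += step(path, arriving=8 if t < 5 else 0)  # sender pushes a burst
        print(t, [n.cache for n in path], "delivered so far:", total)
```

Running the sketch shows the burst accumulating in the bottleneck node's cache and then draining over subsequent steps, rather than being dropped or paced down to the bottleneck rate end-to-end; the real protocol's signalling and fairness mechanisms are beyond this toy example.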

Bibliographic note

©2021 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.