
Electronic data

  • Chatzigeorgiou_and_Savostyanov_TVT_2023_05114

    Rights statement: ©2024 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.

    Accepted author manuscript, 2.05 MB, PDF document

    Available under license: CC BY: Creative Commons Attribution 4.0 International License

Links

Text available via DOI:

Guessing random additive noise decoding of network coded data transmitted over burst error channels

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Journal publication date: 1/04/2024
Journal: IEEE Transactions on Vehicular Technology
Number of pages: 16
Publication status: E-pub ahead of print
Early online date: 1/04/2024
Original language: English

Abstract

We consider a transmitter that encodes data packets using network coding and broadcasts coded packets. A receiver employing network decoding recovers the data packets if a sufficient number of error-free coded packets are gathered. The receiver does not abandon its efforts to recover the data packets if network decoding is unsuccessful; instead, it employs syndrome decoding (SD) in an effort to repair erroneous received coded packets, and then reattempts network decoding. Most decoding techniques, including SD, assume that errors are independently and identically distributed within received coded packets. Motivated by the guessing random additive noise decoding (GRAND) framework, we propose transversal GRAND (T-GRAND): an algorithm that exploits statistical dependence in the occurrence of errors, complements network decoding and recovers all data packets with a higher probability than SD. T-GRAND examines error vectors in order of their likelihood of occurring and altering the transmitted packets. Calculation and sorting of the likelihood values of all error vectors is a simple but computationally expensive process. To reduce the complexity of T-GRAND, we take advantage of the properties of the likelihood function and develop an efficient method, which identifies the most likely error vectors without computing and ordering all likelihood values.
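For readers unfamiliar with the GRAND principle the abstract builds on, the following minimal Python sketch illustrates the basic idea for a memoryless binary symmetric channel, where maximum-likelihood ordering of error vectors reduces to non-decreasing Hamming weight. It is not the paper's T-GRAND algorithm for burst-error channels; the function name `grand_decode`, the weight cap `max_weight`, and the toy (7,4) Hamming-code parity-check matrix are illustrative assumptions, not taken from the publication.

```python
import itertools

import numpy as np


def grand_decode(y, H, max_weight=3):
    """Guess putative error patterns in order of non-decreasing Hamming
    weight (the maximum-likelihood order for a memoryless binary
    symmetric channel) and return the first pattern whose removal
    yields a zero syndrome, i.e. a valid codeword."""
    n = len(y)
    for w in range(max_weight + 1):                   # weight-0 pattern first
        for positions in itertools.combinations(range(n), w):
            e = np.zeros(n, dtype=int)
            e[list(positions)] = 1                    # candidate error pattern
            c = (y + e) % 2                           # strip the guessed noise
            if not np.any((H @ c) % 2):               # syndrome check: H c = 0?
                return c, e                           # first hit = ML decision
    return None, None                                 # abandon after max_weight


# Toy example (illustrative): (7,4) Hamming code, all-zero codeword
# with a single flipped bit at position 4.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
y = np.array([0, 0, 0, 0, 1, 0, 0])
codeword, error = grand_decode(y, H)
print(codeword, error)   # recovers the all-zero codeword and the single-bit error
```

Under statistically dependent (bursty) errors, the likelihood order no longer coincides with Hamming-weight order, and computing and sorting the likelihoods of all error vectors becomes expensive; identifying the most likely error vectors without that exhaustive ordering is the gap that the paper's T-GRAND addresses.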