
Electronic data

  • TCCN-Meehong-for Pure

    Rights statement: ©2019 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.

    Accepted author manuscript, 3.1 MB, PDF document

    Available under license: CC BY-NC: Creative Commons Attribution-NonCommercial 4.0 International License

Links

Text available via DOI:


A Reinforcement Learning-based Trust Model for Cluster Size Adjustment Scheme in Distributed Cognitive Radio Networks

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published
  • Mee Hong Ling
  • Kok-lim Alvin Yau
  • Junaid Qadir
  • Qiang Ni
Journal publication date: 1/03/2019
Journal: IEEE Transactions on Cognitive Communications and Networking
Issue number: 1
Volume: 5
Number of pages: 17
Pages (from-to): 28-43
Publication status: Published
Early online date: 13/11/2018
Original language: English

Abstract

Cognitive radio enables secondary users (SUs) to explore and exploit underutilized licensed channels (or white spaces) owned by primary users. To improve network scalability, the SUs are organized into clusters. This article proposes a novel artificial intelligence-based trust model that uses reinforcement learning (RL) to improve traditional budget-based cluster size adjustment schemes. The RL-based trust model enables the clusterhead to observe and learn the behaviors of its SU member nodes and to revoke the membership of malicious SUs, ameliorating the effects of intelligent and collaborative attacks while adjusting the cluster size dynamically according to the availability of white spaces. The malicious SUs launch attacks on clusterheads that cause the cluster to become inappropriately sized, while learning to remain undetected. In any attack-and-defense scenario, both the attackers and the clusterhead adopt RL approaches. Simulation results show that single-agent RL (SARL) attackers reduce the cluster size significantly, whereas the SARL clusterhead only slightly increases it; this motivates a rule-based approach for mounting an efficient counterattack. Multi-agent RL attacks are shown to be less effective in a dynamic operating environment.
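To make the clusterhead's learning loop concrete, the sketch below shows a minimal single-agent Q-learning trust model in Python. It is an illustration under stated assumptions, not the paper's actual formulation: the per-report reward of ±1, the learning rate, discount factor, and revocation threshold (ALPHA, GAMMA, EVICT_BELOW), and the ClusterHead, observe, and adjust_cluster names are all hypothetical choices made for this example.

import random

# Illustrative Q-learning sketch (assumed design, not the paper's scheme):
# the clusterhead keeps one Q-value per member SU estimating the long-term
# benefit of keeping that member; members whose Q-value falls below a
# threshold have their membership revoked.

ALPHA = 0.1          # learning rate (assumed)
GAMMA = 0.9          # discount factor (assumed)
EVICT_BELOW = -0.5   # revocation threshold (assumed)

class ClusterHead:
    def __init__(self, members):
        # Q-value per member: expected long-term value of keeping it.
        self.q = {m: 0.0 for m in members}

    def observe(self, member, honest_report):
        """Update the member's Q-value after one sensing round.

        honest_report is True when the member's channel report agrees
        with the clusterhead's own observation (reward +1), and False
        when it conflicts (reward -1).
        """
        reward = 1.0 if honest_report else -1.0
        old = self.q[member]
        # One-step Q-learning update, using the member's own Q-value as
        # the bootstrapped estimate of future value.
        self.q[member] = old + ALPHA * (reward + GAMMA * old - old)

    def adjust_cluster(self):
        """Revoke membership of SUs whose learned value is too low."""
        evicted = [m for m, v in self.q.items() if v < EVICT_BELOW]
        for m in evicted:
            del self.q[m]
        return evicted

# Usage: member "su3" misreports in most rounds and is eventually revoked.
head = ClusterHead(["su1", "su2", "su3"])
for _ in range(200):
    for m in list(head.q):
        lying = (m == "su3" and random.random() < 0.8)
        head.observe(m, honest_report=not lying)
    head.adjust_cluster()
print(sorted(head.q))  # "su3" should be gone

Under these assumptions, a member that misreports most of the time drives its own Q-value negative and is eventually revoked, which matches the qualitative behavior the abstract describes; the paper's full scheme additionally sizes the cluster against white-space availability and models RL-equipped attackers.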
