Available under license: CC BY: Creative Commons Attribution 4.0 International License
Research output: Contribution to Journal/Magazine › Journal article › peer-review
| Journal publication date | 11/06/2024 |
|---|---|
| Journal | IEEE Transactions on Fuzzy Systems |
| Number of pages | 15 |
| Pages (from-to) | 1-15 |
| Publication status | E-pub ahead of print |
| Early online date | 11/06/2024 |
| Original language | English |
Deep neural networks (DNNs) have been widely applied in big data-driven Internet of Things (IoT) for their excellent learning ability, but the black-box nature of DNNs brings uncertainty to inference results. With higher interpretability, the convolutional fuzzy neural network (CFNN) becomes an alternative model for IoT applications. IoT applications are often latency-sensitive. By jointly utilizing the computing power of IoT devices and edge servers, end-edge collaborative CFNN inference compensates for insufficient local computing resources and reduces the latency of computation-intensive CFNN inference. However, the computational cost of fuzzy layers is hard to obtain directly, which makes CFNN partitioning difficult. Additionally, existing work on distributed inference often ignores the profit of service providers. In this paper, an end-edge collaborative inference framework of CFNNs for big data-driven IoT, named DisCFNN, is proposed. Specifically, a novel CFNN structure and a method for assessing the computational cost of fuzzy layers are designed first. Next, computing resource allocation and CFNN partition decisions are generated on each edge server based on deep reinforcement learning. Then, each IoT device either sends a CFNN inference request to a certain edge server or infers the whole CFNN locally, according to the task offloading strategy obtained through a many-to-one matching game. Finally, the effectiveness of DisCFNN is evaluated through extensive experiments.
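The abstract's task offloading step assigns many IoT devices to a limited number of edge servers via a many-to-one matching game. The paper does not give its exact algorithm here; the sketch below illustrates the general idea with a standard deferred-acceptance scheme under illustrative assumptions: devices rank servers (e.g., by expected latency), each server ranks devices and has a fixed capacity, and a device whose preference list is exhausted falls back to local inference. All names, preference lists, and capacities are hypothetical.

```python
def match_devices_to_servers(device_prefs, server_prefs, capacity):
    """Deferred-acceptance many-to-one matching (illustrative sketch).

    device_prefs: dict device -> servers in the device's preference order
    server_prefs: dict server -> devices in the server's preference order
    capacity:     dict server -> maximum number of devices it can serve
    A device that is rejected by every server on its list stays unmatched,
    i.e., it would infer the whole CFNN locally.
    """
    # Precompute each server's ranking of devices for O(1) comparisons.
    rank = {s: {d: i for i, d in enumerate(prefs)}
            for s, prefs in server_prefs.items()}
    next_choice = {d: 0 for d in device_prefs}   # next server index to try
    accepted = {s: [] for s in server_prefs}     # tentative acceptances
    free = list(device_prefs)                    # devices still proposing

    while free:
        d = free.pop()
        prefs = device_prefs[d]
        if next_choice[d] >= len(prefs):
            continue                             # list exhausted: local inference
        s = prefs[next_choice[d]]
        next_choice[d] += 1
        # Server tentatively accepts, then evicts its least-preferred
        # device if capacity is exceeded.
        accepted[s].append(d)
        accepted[s].sort(key=lambda x: rank[s][x])
        if len(accepted[s]) > capacity[s]:
            free.append(accepted[s].pop())       # worst device proposes again

    return {s: sorted(ds) for s, ds in accepted.items()}
```

For example, with two servers where `s1` can serve two devices and prefers `d3` and `d1` over `d2`, device `d2` is bounced from `s1` and lands on its second choice `s2`. Deferred acceptance guarantees a stable assignment: no device-server pair would both prefer each other over their final match.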