
Electronic data

  • mecs

    Accepted author manuscript, 1.15 MB, PDF document

    Available under license: CC BY: Creative Commons Attribution 4.0 International License


MESC: Re-thinking Algorithmic Priority and/or Criticality Inversions for Heterogeneous MCSs

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN > Conference contribution/Paper > peer-review

Published

Standard

MESC: Re-thinking Algorithmic Priority and/or Criticality Inversions for Heterogeneous MCSs. / Guan, Jiapeng; Wei, Ran; You, Dean et al.
2024 IEEE Real-Time Systems Symposium (RTSS). IEEE, 2025. p. 1-14.

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN > Conference contribution/Paper > peer-review

Harvard

Guan, J, Wei, R, You, D, Wang, Y, Yang, R, Wang, H & Jiang, Z 2025, MESC: Re-thinking Algorithmic Priority and/or Criticality Inversions for Heterogeneous MCSs. in 2024 IEEE Real-Time Systems Symposium (RTSS). IEEE, pp. 1-14.

APA

Guan, J., Wei, R., You, D., Wang, Y., Yang, R., Wang, H., & Jiang, Z. (2025). MESC: Re-thinking Algorithmic Priority and/or Criticality Inversions for Heterogeneous MCSs. In 2024 IEEE Real-Time Systems Symposium (RTSS) (pp. 1-14). IEEE.

Vancouver

Guan J, Wei R, You D, Wang Y, Yang R, Wang H et al. MESC: Re-thinking Algorithmic Priority and/or Criticality Inversions for Heterogeneous MCSs. In 2024 IEEE Real-Time Systems Symposium (RTSS). IEEE. 2025. p. 1-14. Epub 2024 Dec 10.

Author

Guan, Jiapeng ; Wei, Ran ; You, Dean et al. / MESC : Re-thinking Algorithmic Priority and/or Criticality Inversions for Heterogeneous MCSs. 2024 IEEE Real-Time Systems Symposium (RTSS). IEEE, 2025. pp. 1-14

Bibtex

@inproceedings{8d4f5c242f1848d6a9d396d6b61dce5e,
title = "MESC: Re-thinking Algorithmic Priority and/or Criticality Inversions for Heterogeneous MCSs",
abstract = "Modern Mixed-Criticality Systems (MCSs) rely on hardware heterogeneity to satisfy ever-increasing computational demands. However, most of the heterogeneous co-processors are designed to achieve high throughput, with their micro-architectures executing the workloads in a streaming manner. This streaming execution is often non-preemptive or limited-preemptive, preventing tasks{\textquoteright} prioritisation based on their importance and resulting in frequent occurrences of algorithmic priority and/or criticality inversions. Such problems present a significant barrier to guaranteeing the systems{\textquoteright} real-time predictability, especially when co-processors dominate the execution of the workloads (e.g., DNNs and transformers). In contrast to existing works that typically enable coarse-grained context switch by splitting the workloads/algorithms, we demonstrate a method that provides fine-grained context switch on a widely used open-source DNN accelerator by enabling instruction-level preemption without any workloads/algorithms modifications. As a systematic solution, we build a real system, i.e., Make Each Switch Count (MESC), from the SoC and ISA to the OS kernel. A theoretical model and analysis are also provided for timing guarantees. Experimental results reveal that, compared to conventional MCSs using non-preemptive DNN accelerators, MESC achieved a 250x and 300x speedup in resolving algorithmic priority and criticality inversions, with less than 5% overhead. To our knowledge, this is the first work investigating algorithmic priority and criticality inversions for MCSs at the instruction level.",
author = "Jiapeng Guan and Ran Wei and Dean You and Yingquan Wang and Ruizhe Yang and Hui Wang and Zhe Jiang",
year = "2025",
month = jan,
day = "21",
language = "English",
isbn = "9798331540272",
pages = "1--14",
booktitle = "2024 IEEE Real-Time Systems Symposium (RTSS)",
publisher = "IEEE",
}

RIS

TY - GEN

T1 - MESC

T2 - Re-thinking Algorithmic Priority and/or Criticality Inversions for Heterogeneous MCSs

AU - Guan, Jiapeng

AU - Wei, Ran

AU - You, Dean

AU - Wang, Yingquan

AU - Yang, Ruizhe

AU - Wang, Hui

AU - Jiang, Zhe

PY - 2025/1/21

Y1 - 2025/1/21

N2 - Modern Mixed-Criticality Systems (MCSs) rely on hardware heterogeneity to satisfy ever-increasing computational demands. However, most of the heterogeneous co-processors are designed to achieve high throughput, with their micro-architectures executing the workloads in a streaming manner. This streaming execution is often non-preemptive or limited-preemptive, preventing tasks’ prioritisation based on their importance and resulting in frequent occurrences of algorithmic priority and/or criticality inversions. Such problems present a significant barrier to guaranteeing the systems’ real-time predictability, especially when co-processors dominate the execution of the workloads (e.g., DNNs and transformers). In contrast to existing works that typically enable coarse-grained context switch by splitting the workloads/algorithms, we demonstrate a method that provides fine-grained context switch on a widely used open-source DNN accelerator by enabling instruction-level preemption without any workloads/algorithms modifications. As a systematic solution, we build a real system, i.e., Make Each Switch Count (MESC), from the SoC and ISA to the OS kernel. A theoretical model and analysis are also provided for timing guarantees. Experimental results reveal that, compared to conventional MCSs using non-preemptive DNN accelerators, MESC achieved a 250x and 300x speedup in resolving algorithmic priority and criticality inversions, with less than 5% overhead. To our knowledge, this is the first work investigating algorithmic priority and criticality inversions for MCSs at the instruction level.

AB - Modern Mixed-Criticality Systems (MCSs) rely on hardware heterogeneity to satisfy ever-increasing computational demands. However, most of the heterogeneous co-processors are designed to achieve high throughput, with their micro-architectures executing the workloads in a streaming manner. This streaming execution is often non-preemptive or limited-preemptive, preventing tasks’ prioritisation based on their importance and resulting in frequent occurrences of algorithmic priority and/or criticality inversions. Such problems present a significant barrier to guaranteeing the systems’ real-time predictability, especially when co-processors dominate the execution of the workloads (e.g., DNNs and transformers). In contrast to existing works that typically enable coarse-grained context switch by splitting the workloads/algorithms, we demonstrate a method that provides fine-grained context switch on a widely used open-source DNN accelerator by enabling instruction-level preemption without any workloads/algorithms modifications. As a systematic solution, we build a real system, i.e., Make Each Switch Count (MESC), from the SoC and ISA to the OS kernel. A theoretical model and analysis are also provided for timing guarantees. Experimental results reveal that, compared to conventional MCSs using non-preemptive DNN accelerators, MESC achieved a 250x and 300x speedup in resolving algorithmic priority and criticality inversions, with less than 5% overhead. To our knowledge, this is the first work investigating algorithmic priority and criticality inversions for MCSs at the instruction level.

M3 - Conference contribution/Paper

SN - 9798331540272

SP - 1

EP - 14

BT - 2024 IEEE Real-Time Systems Symposium (RTSS)

PB - IEEE

ER -
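
Illustration

The abstract's central argument is that instruction-level preemption shrinks the blocking a critical task suffers from a non-preemptive accelerator: instead of waiting for an entire in-flight DNN job, the task waits for roughly one accelerator instruction plus a context save. The C sketch below is not code from the paper; the instruction count, per-instruction WCET and context-save overhead are hypothetical values chosen only to show the shape of that comparison.

/*
 * Illustrative sketch only -- not from the MESC paper. Models the worst-case
 * blocking a high-criticality task suffers while a lower-criticality DNN job
 * occupies the accelerator, under two assumed regimes:
 *   (a) non-preemptive execution: blocking spans the whole in-flight job;
 *   (b) instruction-level preemption: blocking is bounded by one accelerator
 *       instruction plus an assumed context-save overhead.
 * INSTR_COUNT, instr_wcet_us and ctx_save_us are hypothetical parameters.
 */
#include <stdio.h>

#define INSTR_COUNT 5000                    /* hypothetical: instructions in one DNN job */
static const double instr_wcet_us = 40.0;   /* hypothetical per-instruction WCET (us)    */
static const double ctx_save_us   = 25.0;   /* hypothetical context-save overhead (us)   */

/* (a) Non-preemptive accelerator: a newly released critical task may have to
 * wait for the entire job currently running on the accelerator. */
static double blocking_non_preemptive(void)
{
    return INSTR_COUNT * instr_wcet_us;
}

/* (b) Instruction-level preemption: the wait is bounded by finishing the
 * in-flight instruction and saving the accelerator context. */
static double blocking_instruction_level(void)
{
    return instr_wcet_us + ctx_save_us;
}

int main(void)
{
    double b_np = blocking_non_preemptive();
    double b_il = blocking_instruction_level();

    printf("worst-case blocking, non-preemptive : %.0f us\n", b_np);
    printf("worst-case blocking, instr-level    : %.0f us\n", b_il);
    printf("reduction factor                    : %.0fx\n", b_np / b_il);
    return 0;
}

With these assumed numbers the reduction factor is purely illustrative; the 250x and 300x speedups reported in the abstract come from the authors' own hardware/OS implementation and evaluation, not from this model.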