
Overhead Measurement Noise in Different Runtime Environments

Research output: Contribution to Journal/Magazine › Conference article › peer-review

Published

Standard

Overhead Measurement Noise in Different Runtime Environments. / Reichelt, David Georg; Jung, Reiner; van Hoorn, André.
In: Softwaretechnik-Trends, Vol. 44, No. 4, 31.12.2024, p. 12-14.

Research output: Contribution to Journal/Magazine › Conference article › peer-review

Harvard

Reichelt, DG, Jung, R & van Hoorn, A 2024, 'Overhead Measurement Noise in Different Runtime Environments', Softwaretechnik-Trends, vol. 44, no. 4, pp. 12-14. <https://arxiv.org/abs/2411.05491>

APA

Reichelt, D. G., Jung, R., & van Hoorn, A. (2024). Overhead Measurement Noise in Different Runtime Environments. Softwaretechnik-Trends, 44(4), 12-14. https://arxiv.org/abs/2411.05491

Vancouver

Reichelt DG, Jung R, van Hoorn A. Overhead Measurement Noise in Different Runtime Environments. Softwaretechnik-Trends. 2024 Dec 31;44(4):12-14. Epub 2024 Nov 6.

Author

Reichelt, David Georg ; Jung, Reiner ; van Hoorn, André. / Overhead Measurement Noise in Different Runtime Environments. In: Softwaretechnik-Trends. 2024 ; Vol. 44, No. 4. pp. 12-14.

BibTeX

@article{c7403454166c4cc2bc320634c2c7ddc0,
title = "Overhead Measurement Noise in Different Runtime Environments",
abstract = "In order to detect performance changes, measurements are performed with the same execution environment. In cloud environments, the noise from different processes running on the same cluster nodes might change measurement results and thereby make performance changes hard to measure. The benchmark MooBench determines the overhead of different observability tools and is executed continuously. In this study, we compare the suitability of different execution environments to benchmark the observability overhead using MooBench. To do so, we compare the execution times and standard deviation of MooBench in a cloud execution environment to three bare-metal execution environments. We find that bare metal servers have lower runtime and standard deviation for multi-threaded MooBench execution. Nevertheless, we see that performance changes up to 4.41 % are detectable by GitHub actions, as long as only sequential workloads are examined.",
author = "Reichelt, {David Georg} and Reiner Jung and {van Hoorn}, Andr{\'e}",
year = "2024",
month = dec,
day = "31",
language = "English",
volume = "44",
pages = "12--14",
journal = "Softwaretechnik-Trends",
number = "4",
note = "15th Symposium on Software Performance 2024 ; Conference date: 06-11-2024 Through 07-11-2024",

}

RIS

TY - JOUR

T1 - Overhead Measurement Noise in Different Runtime Environments

AU - Reichelt, David Georg

AU - Jung, Reiner

AU - van Hoorn, André

PY - 2024/12/31

Y1 - 2024/12/31

N2 - In order to detect performance changes, measurements are performed with the same execution environment. In cloud environments, the noise from different processes running on the same cluster nodes might change measurement results and thereby make performance changes hard to measure. The benchmark MooBench determines the overhead of different observability tools and is executed continuously. In this study, we compare the suitability of different execution environments to benchmark the observability overhead using MooBench. To do so, we compare the execution times and standard deviation of MooBench in a cloud execution environment to three bare-metal execution environments. We find that bare metal servers have lower runtime and standard deviation for multi-threaded MooBench execution. Nevertheless, we see that performance changes up to 4.41 % are detectable by GitHub actions, as long as only sequential workloads are examined.

AB - In order to detect performance changes, measurements are performed with the same execution environment. In cloud environments, the noise from different processes running on the same cluster nodes might change measurement results and thereby make performance changes hard to measure. The benchmark MooBench determines the overhead of different observability tools and is executed continuously. In this study, we compare the suitability of different execution environments to benchmark the observability overhead using MooBench. To do so, we compare the execution times and standard deviation of MooBench in a cloud execution environment to three bare-metal execution environments. We find that bare metal servers have lower runtime and standard deviation for multi-threaded MooBench execution. Nevertheless, we see that performance changes up to 4.41 % are detectable by GitHub actions, as long as only sequential workloads are examined.

UR - https://fb-swt.gi.de/fileadmin/FB/SWT/Softwaretechnik-Trends/Verzeichnis/Band_44_Heft_4/SSP24_03_camera-ready_7783.pdf

M3 - Conference article

VL - 44

SP - 12

EP - 14

JO - Softwaretechnik-Trends

JF - Softwaretechnik-Trends

IS - 4

T2 - 15th Symposium on Software Performance 2024

Y2 - 6 November 2024 through 7 November 2024

ER -
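
Note on the comparison described in the abstract: the study contrasts mean execution times and standard deviations of MooBench runs in a cloud environment against three bare-metal environments, and reports that performance changes of up to 4.41 % remain detectable on GitHub Actions for sequential workloads. The following minimal Python sketch is not from the paper; the function names and the sample values are illustrative assumptions only, intended to show how per-environment summary statistics and a relative difference could be derived from raw execution-time samples.

import statistics

def summarize(samples):
    # Mean, standard deviation, and relative standard deviation
    # of a list of execution-time samples (e.g. in microseconds).
    mean = statistics.fmean(samples)
    stdev = statistics.stdev(samples)
    return mean, stdev, stdev / mean

def relative_change(baseline_mean, candidate_mean):
    # Relative performance change of a candidate environment
    # versus a baseline environment, in percent.
    return (candidate_mean - baseline_mean) / baseline_mean * 100

# Illustrative placeholder samples; real MooBench runs produce
# far more repetitions per configuration.
cloud = [2.61, 2.74, 2.58, 2.90, 2.66]
bare_metal = [2.41, 2.43, 2.40, 2.44, 2.42]

for name, samples in (("cloud", cloud), ("bare metal", bare_metal)):
    mean, stdev, rsd = summarize(samples)
    print(f"{name}: mean={mean:.3f}, stdev={stdev:.3f}, rsd={rsd:.2%}")

change = relative_change(summarize(bare_metal)[0], summarize(cloud)[0])
print(f"relative change (cloud vs. bare metal): {change:.2f} %")

A lower relative standard deviation indicates less measurement noise in that environment, which is the property the paper uses to judge how suitable an environment is for detecting small overhead changes.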