

More is Less in Kieker?: The Paradox of No Logging Being Slower Than Logging

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published

Standard

More is Less in Kieker? The Paradox of No Logging Being Slower Than Logging. / Reichelt, David Georg; Jung, Reiner; Hoorn, André van et al.
14th Symposium on Software Performance. 2023.


Harvard

Reichelt, DG, Jung, R & Hoorn, AV 2023, More is Less in Kieker? The Paradox of No Logging Being Slower Than Logging. in 14th Symposium on Software Performance. 14th Symposium on Software Performance, Karlsruhe, Germany, 6/11/23.

APA

Reichelt, D. G., Jung, R., & Hoorn, A. V. (2023). More is Less in Kieker? The Paradox of No Logging Being Slower Than Logging. In 14th Symposium on Software Performance.

Vancouver

Reichelt DG, Jung R, Hoorn AV. More is Less in Kieker? The Paradox of No Logging Being Slower Than Logging. In 14th Symposium on Software Performance. 2023.

Author

Reichelt, David Georg ; Jung, Reiner ; Hoorn, André van et al. / More is Less in Kieker? The Paradox of No Logging Being Slower Than Logging. 14th Symposium on Software Performance. 2023.

Bibtex

@inproceedings{bc68f90fde3e40e2a796d054d6bfd235,
title = "More is Less in Kieker?: The Paradox of No Logging Being Slower Than Logging",
abstract = "Understanding the sources of monitoring overhead is crucial for understanding the performance of a monitored application. The MooBench benchmark measures the monitoring overhead and its sources. MooBench assumes that benchmarking overhead emerges from the instrumentation, the data collection, and the writing of data. These three parts are measured through individual factorial experiments. We made the counter-intuitive observation that MooBench consistently and reproducibly reported higher overhead for Kieker and other monitoring frameworks when not writing data. Intuitively, writing should consume resources and therefore slow down (or, since it is parallelized, at least not speed up) the monitoring. In this paper, we present an investigation of this problem in Kieker. We find that lock contention at Kieker{\textquoteright}s writing queue causes the problem. Therefore, we propose to add a new queue that dumps all elements. Thereby, a realistic measurement of data collection without writing can be provided.",
author = "Reichelt, {David Georg} and Reiner Jung and Hoorn, {Andr{\'e} van}",
year = "2023",
month = nov,
day = "8",
language = "English",
booktitle = "14th Symposium on Software Performance",
note = "14th Symposium on Software Performance ; Conference date: 06-11-2023 Through 08-11-2023",
url = "https://www.performance-symposium.org/2023/",

}
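The abstract proposes replacing Kieker's writer queue with a queue that dumps all elements, so that data collection can be measured without writing cost or lock contention. As an illustrative sketch only (a hypothetical class, not Kieker's actual implementation), such a discarding queue could look like this in Java:

```java
import java.util.AbstractQueue;
import java.util.Collections;
import java.util.Iterator;

// Hypothetical sketch of a "dumping" queue: it accepts every element
// and immediately discards it, so producers never block and never
// contend on a lock. Swapping such a queue in for the writer queue
// would let a benchmark measure collection overhead without writing.
public class DumpingQueue<E> extends AbstractQueue<E> {
    @Override
    public boolean offer(E e) {
        return true; // report success, but drop the element
    }

    @Override
    public E poll() {
        return null; // nothing is ever stored
    }

    @Override
    public E peek() {
        return null;
    }

    @Override
    public Iterator<E> iterator() {
        return Collections.emptyIterator();
    }

    @Override
    public int size() {
        return 0; // dropped records never accumulate
    }

    public static void main(String[] args) {
        DumpingQueue<String> queue = new DumpingQueue<>();
        // add() delegates to offer(), which always succeeds without blocking
        queue.add("monitoring record");
        System.out.println(queue.size()); // records are discarded, size stays 0
    }
}
```

Because `offer` always returns `true` without touching shared state, no monitor lock is ever contended, which is the property the paper's proposed fix relies on.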

RIS

TY - GEN

T1 - More is Less in Kieker?: The Paradox of No Logging Being Slower Than Logging

T2 - 14th Symposium on Software Performance

AU - Reichelt, David Georg

AU - Jung, Reiner

AU - Hoorn, André van

PY - 2023/11/8

Y1 - 2023/11/8

N2 - Understanding the sources of monitoring overhead is crucial for understanding the performance of a monitored application. The MooBench benchmark measures the monitoring overhead and its sources. MooBench assumes that benchmarking overhead emerges from the instrumentation, the data collection, and the writing of data. These three parts are measured through individual factorial experiments. We made the counter-intuitive observation that MooBench consistently and reproducibly reported higher overhead for Kieker and other monitoring frameworks when not writing data. Intuitively, writing should consume resources and therefore slow down (or, since it is parallelized, at least not speed up) the monitoring. In this paper, we present an investigation of this problem in Kieker. We find that lock contention at Kieker's writing queue causes the problem. Therefore, we propose to add a new queue that dumps all elements. Thereby, a realistic measurement of data collection without writing can be provided.

AB - Understanding the sources of monitoring overhead is crucial for understanding the performance of a monitored application. The MooBench benchmark measures the monitoring overhead and its sources. MooBench assumes that benchmarking overhead emerges from the instrumentation, the data collection, and the writing of data. These three parts are measured through individual factorial experiments. We made the counter-intuitive observation that MooBench consistently and reproducibly reported higher overhead for Kieker and other monitoring frameworks when not writing data. Intuitively, writing should consume resources and therefore slow down (or, since it is parallelized, at least not speed up) the monitoring. In this paper, we present an investigation of this problem in Kieker. We find that lock contention at Kieker's writing queue causes the problem. Therefore, we propose to add a new queue that dumps all elements. Thereby, a realistic measurement of data collection without writing can be provided.

UR - https://dl.gi.de/items/217bcab8-f2ee-49a7-b961-8bf7ed05d068

M3 - Conference contribution/Paper

BT - 14th Symposium on Software Performance

Y2 - 6 November 2023 through 8 November 2023

ER -