
Assessing the state and improving the art of parallel testing for C

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published

Standard

Assessing the state and improving the art of parallel testing for C. / Schwahn, Oliver; Coppik, Nicolas; Winter, Stefan; Suri, Neeraj.

ISSTA 2019 Proceedings of the 28th ACM SIGSOFT International Symposium on Software Testing and Analysis. ACM, 2019. p. 123-133.


Harvard

Schwahn, O, Coppik, N, Winter, S & Suri, N 2019, Assessing the state and improving the art of parallel testing for C. in ISSTA 2019 Proceedings of the 28th ACM SIGSOFT International Symposium on Software Testing and Analysis. ACM, pp. 123-133. https://doi.org/10.1145/3293882.3330573

APA

Schwahn, O., Coppik, N., Winter, S., & Suri, N. (2019). Assessing the state and improving the art of parallel testing for C. In ISSTA 2019 Proceedings of the 28th ACM SIGSOFT International Symposium on Software Testing and Analysis (pp. 123-133). ACM. https://doi.org/10.1145/3293882.3330573

Vancouver

Schwahn O, Coppik N, Winter S, Suri N. Assessing the state and improving the art of parallel testing for C. In ISSTA 2019 Proceedings of the 28th ACM SIGSOFT International Symposium on Software Testing and Analysis. ACM. 2019. p. 123-133 https://doi.org/10.1145/3293882.3330573

Author

Schwahn, Oliver ; Coppik, Nicolas ; Winter, Stefan ; Suri, Neeraj. / Assessing the state and improving the art of parallel testing for C. ISSTA 2019 Proceedings of the 28th ACM SIGSOFT International Symposium on Software Testing and Analysis. ACM, 2019. pp. 123-133

Bibtex

@inproceedings{eca97f8d84234db6a6a1a888171e5d13,
title = "Assessing the state and improving the art of parallel testing for C",
abstract = "The execution latency of a test suite strongly depends on the degree of concurrency with which test cases are executed. However, if test cases are not designed for concurrent execution, they may interfere, causing result deviations compared to sequential execution. To prevent this, each test case can be provided with an isolated execution environment, but the resulting overheads diminish the merit of parallel testing. Our large-scale analysis of the Debian Buster package repository shows that existing test suites in C projects make limited use of parallelization. We present an approach to (a) analyze the potential of C test suites for safe concurrent execution, i.e., result invariance compared to sequential execution, and (b) execute tests concurrently with different parallelization strategies using processes or threads if it is found to be safe. Applying our approach to 9 C projects, we find that most of them cannot safely execute tests in parallel due to unsafe test code or unsafe usage of shared variables or files within the program code. Parallel test execution shows a significant acceleration over sequential execution for most projects. We find that multi-threading rarely outperforms multi-processing. Finally, we observe that the lack of a common test framework for C leaves make as the standard driver for running tests, which introduces unnecessary performance overheads for test execution.",
author = "Oliver Schwahn and Nicolas Coppik and Stefan Winter and Neeraj Suri",
year = "2019",
month = jul,
day = "15",
doi = "10.1145/3293882.3330573",
language = "English",
isbn = "9781450362245",
pages = "123--133",
booktitle = "ISSTA 2019 Proceedings of the 28th ACM SIGSOFT International Symposium on Software Testing and Analysis",
publisher = "ACM",
}

RIS

TY - GEN

T1 - Assessing the state and improving the art of parallel testing for C

AU - Schwahn, Oliver

AU - Coppik, Nicolas

AU - Winter, Stefan

AU - Suri, Neeraj

PY - 2019/7/15

Y1 - 2019/7/15

N2 - The execution latency of a test suite strongly depends on the degree of concurrency with which test cases are executed. However, if test cases are not designed for concurrent execution, they may interfere, causing result deviations compared to sequential execution. To prevent this, each test case can be provided with an isolated execution environment, but the resulting overheads diminish the merit of parallel testing. Our large-scale analysis of the Debian Buster package repository shows that existing test suites in C projects make limited use of parallelization. We present an approach to (a) analyze the potential of C test suites for safe concurrent execution, i.e., result invariance compared to sequential execution, and (b) execute tests concurrently with different parallelization strategies using processes or threads if it is found to be safe. Applying our approach to 9 C projects, we find that most of them cannot safely execute tests in parallel due to unsafe test code or unsafe usage of shared variables or files within the program code. Parallel test execution shows a significant acceleration over sequential execution for most projects. We find that multi-threading rarely outperforms multi-processing. Finally, we observe that the lack of a common test framework for C leaves make as the standard driver for running tests, which introduces unnecessary performance overheads for test execution.

AB - The execution latency of a test suite strongly depends on the degree of concurrency with which test cases are executed. However, if test cases are not designed for concurrent execution, they may interfere, causing result deviations compared to sequential execution. To prevent this, each test case can be provided with an isolated execution environment, but the resulting overheads diminish the merit of parallel testing. Our large-scale analysis of the Debian Buster package repository shows that existing test suites in C projects make limited use of parallelization. We present an approach to (a) analyze the potential of C test suites for safe concurrent execution, i.e., result invariance compared to sequential execution, and (b) execute tests concurrently with different parallelization strategies using processes or threads if it is found to be safe. Applying our approach to 9 C projects, we find that most of them cannot safely execute tests in parallel due to unsafe test code or unsafe usage of shared variables or files within the program code. Parallel test execution shows a significant acceleration over sequential execution for most projects. We find that multi-threading rarely outperforms multi-processing. Finally, we observe that the lack of a common test framework for C leaves make as the standard driver for running tests, which introduces unnecessary performance overheads for test execution.

U2 - 10.1145/3293882.3330573

DO - 10.1145/3293882.3330573

M3 - Conference contribution/Paper

SN - 9781450362245

SP - 123

EP - 133

BT - ISSTA 2019 Proceedings of the 28th ACM SIGSOFT International Symposium on Software Testing and Analysis

PB - ACM

ER -