Electronic data

  • 2212.09807

    Accepted author manuscript, 14.5 MB, PDF document

    Available under license: CC BY: Creative Commons Attribution 4.0 International License


Highly-parallelized simulation of a pixelated LArTPC on a GPU

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published
Article number: P04034
Journal publication date: 26/04/2023
Journal: Journal of Instrumentation
Volume: 18
Number of pages: 34
Publication status: Published
Original language: English

Abstract

The rapid development of general-purpose computing on graphics processing units (GPGPU) is allowing the implementation of highly-parallelized Monte Carlo simulation chains for particle physics experiments. This technique is particularly suitable for the simulation of a pixelated charge readout for time projection chambers, given the large number of channels that this technology employs. Here we present the first implementation of a full microphysical simulator of a liquid argon time projection chamber (LArTPC) equipped with light readout and pixelated charge readout, developed for the DUNE Near Detector. The software is implemented with an end-to-end set of GPU-optimized algorithms. The algorithms have been written in Python and translated into CUDA kernels using Numba, a just-in-time compiler for a subset of Python and NumPy instructions. The GPU implementation achieves a speed-up of four orders of magnitude compared with the equivalent CPU version. The simulation of the current induced on $10^3$ pixels takes around 1 ms on the GPU, compared with approximately 10 s on the CPU. The results of the simulation are compared against data from a pixel-readout LArTPC prototype.
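
The Numba-based approach described above can be illustrated with a minimal sketch, not code from the paper: a plain Python function is compiled into a CUDA kernel with the @cuda.jit decorator, and the pixel-level parallelism is expressed by assigning one GPU thread to each (track segment, pixel) combination. All names here (deposit_charge, segment_charge, pixel_weights, pixel_signal) are hypothetical placeholders chosen for illustration, not the simulator's actual API.

    import numpy as np
    from numba import cuda

    @cuda.jit
    def deposit_charge(segment_charge, pixel_weights, pixel_signal):
        # One GPU thread handles one (track segment, pixel) combination.
        i, j = cuda.grid(2)
        if i < segment_charge.size and j < pixel_signal.size:
            # Many segments can contribute to the same pixel, so the
            # accumulation is done with an atomic add.
            cuda.atomic.add(pixel_signal, j,
                            segment_charge[i] * pixel_weights[i, j])

    # Example launch at roughly the scale quoted in the abstract
    # (~10^3 pixels), with an arbitrary number of segments.
    n_segments, n_pixels = 256, 1000
    segment_charge = cuda.to_device(
        np.random.rand(n_segments).astype(np.float32))
    pixel_weights = cuda.to_device(
        np.random.rand(n_segments, n_pixels).astype(np.float32))
    pixel_signal = cuda.to_device(np.zeros(n_pixels, dtype=np.float32))

    threads = (16, 16)
    blocks = ((n_segments + 15) // 16, (n_pixels + 15) // 16)
    deposit_charge[blocks, threads](segment_charge, pixel_weights, pixel_signal)
    result = pixel_signal.copy_to_host()

Because the kernel body is ordinary Python compiled just-in-time, the same logic can also be exercised on a CPU for testing, which is one reason the Numba workflow suits an end-to-end GPU port of an existing Python simulation chain.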