Memory handling in the ATLAS submission system from job definition to sites limits

Research output: Contribution to Journal/Magazine › Journal article › peer-review

  • A. C. Forti
  • R. Walker
  • T. Maeno
  • Peter Allan Love
  • N. Rauschmayr
  • A. Filipcic
  • A. Di Girolamo
Article number: 052004
Journal publication date: 23/11/2017
Journal: Journal of Physics: Conference Series
Issue number: 3
Volume: 898
Number of pages: 9
Publication status: Published
Original language: English

Abstract

In the past few years, the increased luminosity of the LHC, changes in the Linux kernel, and the move to a 64-bit architecture have affected the memory usage of ATLAS jobs, and the ATLAS workload management system had to be adapted to be more flexible and to pass memory parameters to the batch systems, which in the past was not necessary. This paper describes the steps required to add the capability to better handle memory requirements, including a review of how each component's definition and parametrization of memory maps to the other components, and the changes that had to be applied to make the submission chain work. These changes range from the definition of tasks and the way task memory requirements are set using scout jobs, through the new memory tool developed for that purpose, to how these values are used by the submission component of the system, and finally to how the jobs are treated by the sites through the CEs, batch systems and, ultimately, the kernel.
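
As an illustration of the chain the abstract describes, here is a minimal sketch of how a per-task memory figure measured by scout jobs might be translated into a batch-system request. The names (ScoutResult, memory_request_mb, to_condor_classad) and the 10% safety margin are illustrative assumptions for this note, not the actual PanDA interfaces; the only real interface used is the HTCondor RequestMemory submit attribute.

    # Hypothetical sketch, not the ATLAS code: turn the peak memory
    # observed by a task's scout jobs into a batch-system request.
    from dataclasses import dataclass

    @dataclass
    class ScoutResult:
        max_pss_mb: int   # peak PSS (MB) observed across the scout jobs
        n_cores: int      # number of cores the payload runs on

    def memory_request_mb(scout: ScoutResult, safety_margin: float = 1.1) -> int:
        """Per-core memory request: scouted peak PSS per core plus a margin."""
        per_core = scout.max_pss_mb / scout.n_cores
        return int(per_core * safety_margin)

    def to_condor_classad(scout: ScoutResult) -> str:
        """Render the total request as an HTCondor submit-file line (MB)."""
        total_mb = memory_request_mb(scout) * scout.n_cores
        return f"RequestMemory = {total_mb}"

    if __name__ == "__main__":
        scout = ScoutResult(max_pss_mb=16000, n_cores=8)
        print(to_condor_classad(scout))  # -> RequestMemory = 17600

Downstream, a value like this would be consumed by the CE and batch system and could end up enforced by the kernel (for example via a cgroup memory limit), which is the end of the chain the paper traces.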

Bibliographic note

Content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.