Analysis Grand Challenge
Research conducted by Ludwig Maximilian University of Munich (LMU).
The IRIS-HEP team has launched a sustained effort to showcase the feature completeness and scalability of scikit-HEP tools. Central to this initiative is the coffea framework, which provides a high-level interface for efficient columnar analysis. Comprehensive resources, including documentation and source code, are available on GitHub and ReadTheDocs.
The ttbar analysis is a comprehensive workflow that includes single-lepton event selection, top-quark reconstruction, a cross-section measurement, and on-the-fly evaluation of systematic uncertainties. Using 3.44 TB of CMS open data, the analysis efficiently reads only about 138 GB of data (just 4% of the total dataset), covering 948 million events and 10 variables. The workload is distributed across multiple workers using dask-jobqueue, all managed from a single Jupyter notebook, making the analysis code both user-friendly and scalable. This setup serves as a potential model for future HL-LHC analyses, exemplifying how advanced analyses could be conducted with speed and efficiency.
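The quoted I/O figures can be cross-checked with a few lines of arithmetic. The numbers below are taken directly from the text; the script itself is only an illustration:

```python
# Figures quoted above for the ttbar analysis on CMS open data
total_dataset_tb = 3.44      # full input dataset
data_read_gb = 138           # data actually fetched by the columnar reads
n_events = 948_000_000       # events covered

# Fraction of the dataset that is actually read (decimal units, 1 TB = 1000 GB)
read_fraction = data_read_gb / (total_dataset_tb * 1000)

# Average payload per event for the branches that are read
bytes_per_event = data_read_gb * 1e9 / n_events

print(f"read fraction:   {read_fraction:.1%}")      # ~4% of the dataset
print(f"bytes per event: {bytes_per_event:.0f} B")  # ~146 bytes/event
```

The small per-event payload reflects the columnar access pattern: only the 10 needed variables are deserialized, not the full event content.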
Benchmarks
Currently, benchmarks are performed at three different sites:
- LMU institute cluster at LMU Munich. The cluster consists of one powerful node plus desktop computers and uses the SLURM job scheduler. The cluster reads the data via XRootD from LRZ.
- LRZ WLCG Tier-2 site in Munich. The SLURM job scheduler is used here as well. The data is stored both on regular Grid storage (HDD) and on an XCache server (SSD).
- Vispa analysis facility at RWTH Aachen. It provides a web-based terminal, a code editor, and JupyterHub. HTCondor is used as the job scheduler. The data is stored locally on SSDs and read directly. Vispa also has a dedicated caching system (arXiv) that has not yet been tested with the AGC.
Measuring runtime
For benchmarking, we focus exclusively on the distributed portion of the analysis, which involves fetching metadata and reading and processing the data. This allows us to directly measure the key performance metrics: the total runtime, the overall processing time, and the cumulative processing time across all workers (represented by the sum of all blue rectangles). Coffea's built-in tools provide a straightforward way to capture these metrics.
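The relation between these quantities can be illustrated with a small stand-alone sketch. Here `ThreadPoolExecutor` stands in for the Dask workers and `process_chunk` for one work item; both names are ours and not part of the AGC code:

```python
import time
from concurrent.futures import ThreadPoolExecutor

N_WORKERS = 4

def process_chunk(n):
    """Stand-in for one work item; returns the time it spent processing."""
    t0 = time.perf_counter()
    sum(i * i for i in range(n))          # dummy columnar work
    return time.perf_counter() - t0

t_start = time.perf_counter()
with ThreadPoolExecutor(max_workers=N_WORKERS) as pool:
    worker_times = list(pool.map(process_chunk, [100_000] * 8))
total_runtime = time.perf_counter() - t_start

# Cumulative processing time across all workers
# ("the sum of all blue rectangles")
cumulative = sum(worker_times)
print(f"total runtime: {total_runtime:.3f} s, cumulative: {cumulative:.3f} s")
```

The ratio cumulative / (N_WORKERS * total_runtime) gives a rough job efficiency; the gap to 1 corresponds to the overhead discussed under Investigation.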
XCache measurements
Runtimes at LRZ, with and without XCache enabled, show no significant difference. This indicates that, in the current setup, the analysis is not significantly I/O-limited.
Investigation
The following points should be investigated further:
- the cause of the 2-3 minute offset
- the cause of the large overhead / job inefficiency
- bottlenecks of the analysis
- deployment of different, more I/O heavy algorithms to test the effect of caches
Future work
We plan to work in the following directions during FIDIUM 1.5:
- Analysis tests with heterogeneous CPU/GPU resources. We will conduct analysis tests using a mix of CPU and GPU resources, leveraging existing open-data applications from ATLAS and CMS. Dask will be used for parallel execution, and we will explore integrating COBalD/TARDIS where applicable.
- Extended tests of data formats. We will conduct extended tests on various data formats, focusing on ROOT's new RNTuple format. This involves comparing it with established and alternative formats such as ROOT TTree and Parquet. The tests will cover a broad range of parameters, including file size, compression, I/O speed for streaming, performance in sparse reading, and differences between remote and local access with caching. These evaluations will be based on open data samples.
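To give a flavor of the size/speed trade-offs such a parameter scan probes, the toy sketch below times different compression levels on a synthetic byte payload using Python's zlib; the real tests will of course use the actual formats (RNTuple, TTree, Parquet) and open data files:

```python
import time
import zlib

# Synthetic, highly repetitive payload standing in for one data column
payload = b"\x00\x01\x02\x03" * 250_000

for level in (0, 1, 9):                    # none / fast / maximum compression
    t0 = time.perf_counter()
    blob = zlib.compress(payload, level)
    dt = time.perf_counter() - t0
    ratio = len(blob) / len(payload)
    print(f"level {level}: ratio {ratio:.3f}, {dt * 1e3:.1f} ms")
```

Level 0 stores the data uncompressed (ratio slightly above 1 due to framing), so the scan exposes the trade-off between CPU cost and bytes moved, which is exactly the quantity that decides whether remote reads or caches pay off.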