Advancing BESIII workloads on MOGON II and exploring ad-hoc file systems

Research conducted by Johannes Gutenberg University Mainz.


We are making significant progress in analyzing BESIII workloads on the MOGON II cluster at JGU. Our efforts focus on optimizing data processing and system performance. Below, we outline our achievements with BESIII workflows, contributions to the I/O trace initiative, and exploration of ad-hoc file systems for better data management in high-performance computing (HPC).

Progress on BESIII Workload Analysis on MOGON II

Our work on analyzing BESIII workloads on MOGON II is progressing well. Here’s what we’ve achieved so far:

  • Successfully ran Beijing Spectrometer III (BESIII) workloads on the MOGON II cluster.
  • Used the BESIII Offline Software System (BOSS) to process and analyze BESIII data.
  • Established a complete workflow for BOSS simulations and analyses.
  • Deployed BOSS within a Singularity container to streamline operations (see the sketch after this list).
  • Collected initial I/O profiles, providing valuable insights into system performance.
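
To illustrate the containerized setup, the Python sketch below launches a BOSS job through Singularity. The image name, job-options file, and bind paths are placeholders rather than our actual site configuration.

    # Sketch of launching a BOSS job inside a Singularity container.
    # The image, job-options file, and bind paths are placeholders.
    import subprocess

    def run_boss(image, job_options, bind_dirs):
        """Run boss.exe with a Gaudi job-options file inside the container."""
        cmd = ["singularity", "exec"]
        for host_dir in bind_dirs:
            cmd += ["--bind", host_dir]   # make host data visible inside
        cmd += [image, "boss.exe", job_options]
        subprocess.run(cmd, check=True)

    run_boss("boss.sif", "jobOptions_sim.txt", ["/scratch", "/project"])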

Advancing the I/O Trace Initiative and Exploring Ad-Hoc File Systems

We are actively contributing to the I/O trace initiative, a cross-project and international collaboration within the EuroHPC Joint Undertaking. This initiative aims to build and curate a repository of citable I/O traces, providing valuable data for the HPC community. You can learn more about our recent accomplishments at https://hpcioanalysis.zdv.uni-mainz.de/.

Looking ahead, we plan to apply the tracing framework specifically to BESIII workloads, enabling us to compare runs across different parameter settings and further refine our analysis techniques.
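
Once such traces are collected, a first comparison across parameter settings can be as simple as aggregating per-run I/O counters. The sketch below assumes each traced run was exported as a CSV file with one I/O operation per row; this layout is a hypothetical example, not a fixed format of the initiative.

    # Sketch: compare aggregate I/O between two traced runs. Assumes each
    # trace is a CSV with one operation per row (hypothetical layout).
    import csv
    from collections import Counter

    def summarize(trace_csv):
        """Sum transferred bytes per operation type (read, write, ...)."""
        totals = Counter()
        with open(trace_csv, newline="") as f:
            for row in csv.DictReader(f):
                totals[row["op"]] += int(row["size"])
        return totals

    base = summarize("boss_run_default.csv")
    tuned = summarize("boss_run_bigbuf.csv")
    for op in sorted(set(base) | set(tuned)):
        print(op, base[op], "->", tuned[op])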

In parallel, we are exploring the compatibility of HPC applications and workflows with ad-hoc file systems. These systems are designed to:

  • Minimize uncoordinated usage of parallel file systems (PFS).
  • Reduce redundant data movement.
  • Schedule data transfers efficiently to alleviate PFS contention.
  • Improve data locality by performing actions where the data resides and by utilizing node-local SSDs (see the sketch after this list).
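
As a rough illustration of the last point, the sketch below starts an ad-hoc file system daemon on a node-local SSD and runs an application against its mount point. The binary and library names follow a typical GekkoFS-style setup and should be read as assumptions, not as a verified site configuration.

    # Sketch: run an application against an ad-hoc file system backed by
    # a node-local SSD. Names follow a typical GekkoFS-style setup and
    # are assumptions, not a verified configuration.
    import os
    import subprocess
    import time

    ROOT = "/localscratch/gkfs_root"   # data directory on the local SSD
    MOUNT = "/tmp/gkfs_mount"          # virtual mount point seen by the app

    daemon = subprocess.Popen(["gkfs_daemon", "--rootdir", ROOT,
                               "--mountdir", MOUNT])
    try:
        time.sleep(2)                  # give the daemon a moment to start
        env = dict(os.environ, LD_PRELOAD="libgkfs_intercept.so")
        subprocess.run(["./my_app", MOUNT + "/input.dat"],
                       env=env, check=True)
    finally:
        daemon.terminate()             # stop the ad-hoc file system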

During our work with ad-hoc file systems, we encountered some challenges that limited their effectiveness:

  • Lack of transparency: usage is not seamless, which makes the file system harder to manage.
  • Manual intervention: users have to start and stop the file system and move data (staging) by hand, as the sketch after this list illustrates.
  • Risk of data inconsistency, since the data is stored in two locations.
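
In practice, the manual staging pattern looks roughly like the following: every job wraps its actual work in explicit copy steps, and skipping or interrupting one of them leaves two diverging copies of the data. All paths are illustrative.

    # Sketch of the manual staging pattern users currently have to
    # follow; the explicit copies are the error-prone part.
    import shutil
    import subprocess

    PFS_DIR = "/lustre/project/run42"    # authoritative copy on the PFS
    LOCAL_DIR = "/localscratch/run42"    # fast copy on the node-local SSD

    shutil.copytree(PFS_DIR, LOCAL_DIR)              # stage in (manual)
    subprocess.run(["./my_app", LOCAL_DIR], check=True)
    shutil.copytree(LOCAL_DIR, PFS_DIR,              # stage out (manual)
                    dirs_exist_ok=True)
    shutil.rmtree(LOCAL_DIR)   # if skipped, two copies silently diverge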

To address these issues, we’ve decided to:

  • Integrate ad-hoc file systems into the PFS through hierarchical storage management (HSM), creating a more unified and efficient system.
  • Implement a staging service (e.g., Cargo) to handle data movement automatically and transparently (a sketch follows this list).
  • Resolve conflicts with flush-back strategies, ensuring data consistency even during simultaneous access.
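
To make the intended contrast with manual staging concrete, a job using such a staging service might look like the sketch below. StagingClient and its methods are purely hypothetical stand-ins; we do not reproduce the actual Cargo API here.

    # Hypothetical sketch of a job once a staging service handles data
    # movement; StagingClient and its methods are illustrative stand-ins,
    # not an actual Cargo API.
    import subprocess

    class StagingClient:
        """Placeholder for a staging-service client."""
        def stage_in(self, pfs_path, local_path):
            ...  # service moves PFS data into the ad-hoc file system
        def flush_back(self, local_path, pfs_path):
            ...  # service writes results back, resolving conflicts

    staging = StagingClient()
    staging.stage_in("/lustre/project/run42", "/adhoc/run42")
    subprocess.run(["./my_app", "/adhoc/run42"], check=True)
    staging.flush_back("/adhoc/run42", "/lustre/project/run42")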

In addition, we are exploring client-side Lustre caching with ad-hoc file systems, aiming to further enhance performance and efficiency.

Future Work

During our ongoing work, we’ve found that ad-hoc file systems perform well when integrated directly into application jobs. However, this approach ties the file system to a single job’s context: all data must be moved off the compute nodes when the job ends and moved back again for similar follow-up workflows. This leads to redundant data transfers, which can strain system resources.

To address these challenges, we propose focusing on how file systems interact with applications and exploring NVMe over Fabrics (NVMe-oF). Our plan includes:

  • Evaluating file system and application I/O interference to identify and resolve bottlenecks.
  • Utilizing the NVMe-oF protocol to reduce CPU utilization on the compute nodes during data transfers.
  • Implementing intelligent data movement strategies that keep data in node-local storage, minimizing unnecessary transfers (a toy sketch follows this list).
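
As a toy illustration of such a strategy, the heuristic below keeps a dataset on node-local storage whenever a queued job on the same node will reuse it. The job records are hypothetical and stand in for information a scheduler integration would provide.

    # Toy sketch of a data-placement heuristic: keep a dataset on the
    # node-local SSD if a queued job on the same node will reuse it.
    # The job records are hypothetical.

    def keep_local(dataset, queued_jobs, node):
        """Return True if the dataset should stay in node-local storage."""
        return any(job["node"] == node and dataset in job["inputs"]
                   for job in queued_jobs)

    queue = [{"node": "n017", "inputs": {"rhopi_sim.root"}},
             {"node": "n023", "inputs": {"calib.db"}}]
    print(keep_local("rhopi_sim.root", queue, "n017"))  # True: skip stage-out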

As part of our broader research, the I/O trace initiative has highlighted the need for a central I/O trace repository, as HPC I/O behavior remains fragmented and poorly understood. To tackle this, we aim to:

  • Study I/O behavior in data lakes using existing tools and open formats.
  • Develop a comprehensive monitoring framework that protects sensitive data while tracking application behavior (see the sketch after this list).
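
One way to protect data while still capturing behavior is to pseudonymize identifying fields before a trace record leaves the system, keeping only the information needed for I/O analysis. The record layout below is a hypothetical example.

    # Sketch: pseudonymize identifying fields of an I/O trace record
    # while keeping the behavioral data (operation, size, offset, time).
    # The record layout is a hypothetical example.
    import hashlib

    def anonymize(record):
        out = dict(record)
        for field in ("user", "path"):
            digest = hashlib.sha256(record[field].encode()).hexdigest()
            out[field] = digest[:12]   # stable pseudonym; a salted hash
                                       # would resist dictionary attacks
        return out

    rec = {"user": "jdoe", "path": "/lustre/project/run42/hits.root",
           "op": "read", "size": 1048576, "offset": 0, "t": 12.481}
    print(anonymize(rec))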

Additionally, we’ve identified that traditional file systems remain rigid in their I/O protocols, whereas ad-hoc file systems offer flexibility. To further explore I/O malleability in the FIDIUM 1.5 project, we propose:

  • Building on the I/O behavior study of data lakes to better understand their unique requirements.
  • Extending GekkoFS’s I/O protocols so the file system can adapt to the specific needs of different applications (a hypothetical configuration sketch follows).
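
The kind of malleability we have in mind could, for example, expose per-application protocol profiles like those below. The configuration format and profile names are purely hypothetical and do not correspond to an existing GekkoFS interface.

    # Hypothetical per-application I/O protocol profiles illustrating the
    # malleability we aim for; not an existing GekkoFS interface.

    PROFILES = {
        # checkpoint-style writer: no sharing, so relax consistency
        "boss_sim": {"consistency": "eventual", "write_mode": "append-only"},
        # concurrent analysis: readers must see writes in order
        "analysis": {"consistency": "close-to-open", "write_mode": "shared"},
    }

    def protocol_for(app_name):
        """Pick a profile for an application (default: strict semantics)."""
        return PROFILES.get(app_name,
                            {"consistency": "strict", "write_mode": "shared"})

    print(protocol_for("boss_sim"))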