HPC, Big Data and Data Science

Flux: Solving Exascale Workflow and Resource Challenges

Plus - How Open-Source Drives Our Project Design
D.hpc
Stephen Herbein
Many emerging scientific workflows that target high-end HPC systems require a complex interplay with resource and job management software (RJMS). However, portable, efficient, and easy-to-use scheduling of these workflows remains an unsolved problem. In this talk, we present Flux, a next-generation RJMS designed specifically to address the key scheduling challenges of modern workflows in a scalable, easy-to-use, and portable manner. At the heart of Flux lies its ability to be seamlessly nested within batch allocations created by itself as well as by other system schedulers (e.g., SLURM, MOAB, LSF), serving the target workflows as their “personal RJMS instances”. In particular, Flux’s consistent and rich set of well-defined APIs portably and efficiently supports workflows that feature non-traditional patterns such as complex co-scheduling, massive ensembles of small jobs, and coordination among jobs in an ensemble. We will also cover how the Flux-Framework project is structured around open-source development, including our use of the Collective Code Construction Contract (C4), RFCs, the LGPL, and various online open-source platforms. We discuss how these choices of open-source processes have influenced the repo structure, the code, our collaborations, and even the sub-teams within the project.
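To make the “personal RJMS instance” idea concrete, here is a minimal sketch of driving a Flux instance from a workflow via the flux-core Python bindings, e.g. from inside an allocation where a nested instance was started with `flux start`. It assumes the bindings are installed and that `FLUX_URI` points at the enclosing instance; exact API details may vary across Flux releases.

```python
#!/usr/bin/env python3
# Sketch: submit a small ensemble of jobs to the enclosing Flux
# instance and coordinate on their completion. Illustrative only;
# assumes flux-core Python bindings.
import flux
import flux.job
from flux.job import JobspecV1

h = flux.Flux()  # connect to the instance named by $FLUX_URI

# Submit an ensemble of single-core tasks.
jobids = []
for i in range(16):
    spec = JobspecV1.from_command(
        command=["echo", f"member-{i}"],  # placeholder workload
        num_tasks=1,
        cores_per_task=1,
    )
    jobids.append(flux.job.submit(h, spec, waitable=True))

# Coordinate across the ensemble: wait for every member to finish.
for jobid in jobids:
    result = flux.job.wait(h, jobid)
    print(jobid, "succeeded" if result.success else "failed")
```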
Expected prior knowledge / intended audience: Audience should have basic knowledge of batch job systems; knowledge of or experience with running scientific workflows is a plus. The talk will include some background on common workflows. It will be of interest to HPC users, workflow developers, and admins.

Speaker bio: Stephen Herbein is a computer scientist in Livermore Computing at Lawrence Livermore National Laboratory. His research interests include batch job scheduling, parallel I/O, and data analytics. He is part of the Flux team, developing next-generation I/O-aware and multi-level schedulers for HPC.

Links to previous talks by the speaker:
- http://flux-framework.org/papers/Flux-DevDay-2018-Slides.pdf
- https://github.com/flux-framework/tutorials

See https://herbein.net/Herbein_CV.pdf for more (including papers on Flux).

Additional information

Type: devroom

More sessions

2/6/21
HPC, Big Data and Data Science
Ali Hajiabadi
D.hpc
With the end of Moore’s law, improving single-core processor performance in an energy-efficient manner has become extremely difficult. One alternative is to rethink conventional processor design methodologies and propose innovative ideas that unlock additional performance and efficiency. In an attempt to overcome these difficulties, we propose a compiler-informed, non-speculative out-of-order commit processor that attacks the limitations of in-order commit in current out-of-order cores to ...
2/6/21
HPC, Big Data and Data Science
Christian Kniep
D.hpc
The container ecosystem spans from spawning a process into an isolated and constrained region of the kernel at the bottom layer, through building and distributing images just above, to discussions of how to schedule a fleet of containers around the world at the very top. While the top layers get all the attention and buzz, this session will baseline the audience's understanding of how containers are executed.
2/6/21
HPC, Big Data and Data Science
Nicolas Poggi
D.hpc
Over the years, there have been extensive efforts to improve Apache Spark SQL performance. This talk will introduce the new Adaptive Query Execution (AQE) framework and how it can automatically improve user query performance. AQE leverages runtime statistics to dynamically guide Spark's execution as queries run. The talk will go over the main features of AQE and provide examples of how it improves on the previous static query plans. Finally, we'll present the significant ...
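For context, AQE is driven by standard Spark 3.x configuration switches. A minimal PySpark sketch is below; the configuration keys are standard Spark settings, while the app name and the toy aggregation are illustrative placeholders.

```python
# Sketch: enabling Adaptive Query Execution (AQE) in Spark 3.x.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = (
    SparkSession.builder.appName("aqe-demo")
    # Master switch for Adaptive Query Execution.
    .config("spark.sql.adaptive.enabled", "true")
    # Coalesce small shuffle partitions at runtime.
    .config("spark.sql.adaptive.coalescePartitions.enabled", "true")
    # Split skewed partitions in sort-merge joins at runtime.
    .config("spark.sql.adaptive.skewJoin.enabled", "true")
    .getOrCreate()
)

# A small aggregation whose shuffle AQE can re-optimize using
# runtime statistics instead of a static plan.
df = (
    spark.range(1_000_000)
    .withColumn("bucket", col("id") % 10)
    .groupBy("bucket")
    .count()
)
df.show()
```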
2/6/21
HPC, Big Data and Data Science
Mohammad Norouzi
D.hpc
This talk introduces DiscoPoP, a tool that identifies parallelization opportunities in sequential programs and suggests to programmers how to parallelize them using OpenMP. The tool first identifies computational units (CUs), which, in our terminology, are the atoms of parallelization. Then it profiles memory accesses in the source code to detect data dependencies. Mapping dependencies to CUs, we create a data structure we call the program execution tree (PET). Further, DiscoPoP inspects the ...
2/6/21
HPC, Big Data and Data Science
Alaina Edwards
D.hpc
In this talk we explore two programming models for GPU-accelerated computing in a Fortran application: OpenMP with target directives and CUDA. We use the Riemann problem, a common problem in fluid dynamics, as our example application and testing ground. This example is implemented in GenASiS, a code being developed for astrophysics simulations. While both OpenMP and CUDA are supported on the Summit supercomputer, its successor, the exascale supercomputer Frontier, will support OpenMP and ...
2/6/21
HPC, Big Data and Data Science
Bob Dröge
D.hpc
The European Environment for Scientific Software Installations (EESSI, pronounced “easy”) is a collaboration between different HPC sites and industry partners with the common goal of setting up a shared repository of scientific software installations that can be used on a variety of systems, regardless of which flavor/version of Linux distribution or processor architecture is used, or whether it is a full-size HPC cluster, a cloud environment, or a personal workstation. The EESSI codebase ...
2/6/21
HPC, Big Data and Data Science
Robert McLay
D.hpc
XALT is a tool that runs on clusters to find out which programs and libraries are used. XALT uses the LD_PRELOAD environment variable to attach a shared library that executes code before and after main(). This means the XALT shared library becomes part of every dynamically linked program run under Linux. This talk will discuss various lessons learned about routine names and memory usage. Adding XALT to track container usage presents new issues because of what shared ...
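To illustrate the mechanism the abstract describes, here is a minimal sketch of the LD_PRELOAD technique XALT relies on: the dynamic linker loads the named shared library into every dynamically linked program launched with that environment, so the library's constructor and destructor code runs before and after main(). The library path below is a hypothetical placeholder, not XALT's actual install location.

```python
# Sketch: launching a program with LD_PRELOAD set, as XALT does
# site-wide, so a shared library rides along with every run.
import os
import subprocess

env = dict(os.environ)
# Hypothetical path to an XALT-style preload library.
env["LD_PRELOAD"] = "/opt/xalt/lib64/libxalt_init.so"

# Every dynamically linked program started with this environment
# carries the preloaded library, which can record the execution.
subprocess.run(["hostname"], env=env, check=True)
```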