HPC, Big Data & Data Science

ROCm™ on TheRock(s)

H.1308 (Rolin)
Jan-Patrick Lehr
ROCm™ has been AMD's software foundation for both high-performance computing (HPC) and AI workloads and continues to support the distinct needs of each domain. As these domains increasingly converge, ROCm™ is evolving into a more modular and flexible platform. Soon, the distribution model will shift to a core SDK with domain-specific add-ons, such as one for HPC, allowing users to select only the components they need. This reduces unnecessary overhead while maintaining a cohesive and interoperable stack.

To support this modularity, AMD is transitioning to TheRock, an open-source build system that enables component-level integration, nightly and weekly builds, and streamlined delivery across the ROCm™ stack. TheRock is designed to handle the complexity of building and packaging ROCm™ in a way that is scalable and transparent for developers. It plays a central role in how ROCm™ is assembled and delivered, especially as the platform moves toward more frequent and flexible release cycles.

In this talk, we'll cover the entire development and delivery pipeline: from the consolidation into three super-repos to how ROCm™ is built, tested, and shipped. This includes an overview of the development process, the delivery mechanism, TheRock's implementation, and the testing infrastructure. We'll also explain how contributors can engage with ROCm™, whether through code, documentation, or domain-specific enhancements, making it easier for developers to help shape the platform.

Online resources
TheRock: https://github.com/ROCm/TheRock
rocm-libraries: https://github.com/ROCm/rocm-libraries
rocm-systems: https://github.com/ROCm/rocm-systems

Most projects are available under the MIT license.

Speaker
JP Lehr, Senior Member of Technical Staff, ROCm™ GPU Compiler, AMD

© 2026 Advanced Micro Devices, Inc. All rights reserved. AMD, the AMD Arrow logo, ROCm, and combinations thereof are trademarks of Advanced Micro Devices, Inc. Other product names used in this publication are for identification purposes only and may be trademarks of their respective companies. LLVM is a trademark of LLVM Foundation. The OpenMP name and the OpenMP logo are registered trademarks of the OpenMP Architecture Review Board.
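As a minimal sketch of what a core-SDK build from TheRock should support, consider a generic HIP vector-add like the one below. This is an illustrative example rather than material from the talk; it assumes a ROCm™ install with hipcc on the PATH and a supported AMD GPU.

    // Generic HIP vector-add: a small smoke test for a ROCm(TM) install.
    #include <hip/hip_runtime.h>
    #include <cstdio>
    #include <cstdlib>
    #include <vector>

    // Abort with a readable message if a HIP runtime call fails.
    #define HIP_CHECK(expr)                                                   \
      do {                                                                    \
        hipError_t err_ = (expr);                                             \
        if (err_ != hipSuccess) {                                             \
          std::fprintf(stderr, "HIP error: %s\n", hipGetErrorString(err_));   \
          std::exit(EXIT_FAILURE);                                            \
        }                                                                     \
      } while (0)

    // Each thread adds one pair of elements.
    __global__ void vadd(const float* a, const float* b, float* c, int n) {
      int i = blockIdx.x * blockDim.x + threadIdx.x;
      if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
      const int n = 1 << 20;
      std::vector<float> ha(n, 1.0f), hb(n, 2.0f), hc(n, 0.0f);

      float *da, *db, *dc;
      HIP_CHECK(hipMalloc(&da, n * sizeof(float)));
      HIP_CHECK(hipMalloc(&db, n * sizeof(float)));
      HIP_CHECK(hipMalloc(&dc, n * sizeof(float)));
      HIP_CHECK(hipMemcpy(da, ha.data(), n * sizeof(float), hipMemcpyHostToDevice));
      HIP_CHECK(hipMemcpy(db, hb.data(), n * sizeof(float), hipMemcpyHostToDevice));

      // Launch one thread per element, 256 threads per block.
      vadd<<<(n + 255) / 256, 256>>>(da, db, dc, n);
      HIP_CHECK(hipGetLastError());
      HIP_CHECK(hipMemcpy(hc.data(), dc, n * sizeof(float), hipMemcpyDeviceToHost));

      std::printf("c[0] = %.1f (expected 3.0)\n", hc[0]);
      HIP_CHECK(hipFree(da));
      HIP_CHECK(hipFree(db));
      HIP_CHECK(hipFree(dc));
      return 0;
    }

Compiling this with hipcc (e.g., hipcc vadd.cpp -o vadd) and running the binary is a quick way to check that the compiler, runtime, and device stack from a given build work together.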

Additional information

Live Stream: https://live.fosdem.org/watch/h1308
Type: devroom
Language: English

More sessions

2/1/26
HPC, Big Data & Data Science
H.1308 (Rolin)
Scientific models today are limited by compute resources, forcing approximations driven by feasibility rather than theory. As a consequence, they miss important physical processes and decision-relevant regional details. Advances in AI-driven supercomputing, such as specialized tensor accelerators, AI compiler stacks, and novel distributed systems, offer unprecedented computational power. Yet scientific applications such as ocean models, often written in Fortran, C++, or Julia and built for ...
2/1/26
HPC, Big Data & Data Science
Thomas Breuer
H.1308 (Rolin)
Wherever research software is developed and used, it needs to be installed, tested in various ways, benchmarked, and set up within complex workflows. Typically, such tasks are handled either by individual solutions, which impose significant restrictions due to their lack of portability, or by manual steps performed by developers or users, a time-consuming and highly error-prone process. Furthermore, particularly in the field of high-performance ...
2/1/26
HPC, Big Data & Data Science
Boris Martin
H.1308 (Rolin)
Content

High-frequency wave simulations in 3D (with, e.g., Finite Elements) involve systems with hundreds of millions of unknowns (up to 600M in our runs), prompting the use of massively parallel algorithms. In the harmonic regime, we favor Domain Decomposition Methods (DDMs), where local problems are solved in smaller regions (subdomains) and the full solution of the PDE is recovered iteratively. This requires each rank to own a portion of the mesh and to have a view on neighboring ...
2/1/26
HPC, Big Data & Data Science
Jade Abraham
H.1308 (Rolin)
As the computing needs of the world have grown, the need for parallel systems has grown to match. However, the programming languages used to target those systems have not seen the same growth. General parallel programming targeting distributed CPUs and GPUs is frequently locked behind low-level and unfriendly programming languages and frameworks. Programmers must choose between parallel performance with low-level programming or productivity with high-level languages. ...
2/1/26
HPC, Big Data & Data Science
Mahendra Paipuri
H.1308 (Rolin)
With the rapid acceleration of ML/AI research over the last couple of years, already energy-hungry HPC platforms have become even more demanding. A major share of this energy consumption comes from users' workloads, and only with the participation of end users can the overall energy consumption of the platforms be reduced. However, most HPC platforms provide neither energy-consumption nor performance metrics out of the box, which ...
2/1/26
HPC, Big Data & Data Science
Tobias Kremer
H.1308 (Rolin)
ECMWF (www.ecmwf.int) manages petabytes of meteorological data critical for weather and climate research. But traditional storage formats pose challenges for machine learning, big-data analytics, and on-demand workflows.

We propose a solution that introduces a Zarr store implementation for creating virtual views of ECMWF's Fields Database (FDB), enabling users to access GRIB data as if it were a native Zarr dataset. Unlike existing approaches such as VirtualiZarr or ...
2/1/26
HPC, Big Data & Data Science
H.1308 (Rolin)
Over the last five years, we ran an HPC system for life sciences on top of OpenStack, with a deployment pipeline built from Ansible and manual steps (see our FOSDEM 2020 talk: https://archive.fosdem.org/2020/schedule/event/hpc_openstack/). It worked, but it wasn't something we could easily rebuild from scratch or apply consistently to other parts of our infrastructure.

As we designed our new HPC system (coming online in early 2026), we set ourselves a goal: treat the cluster ...