LLVM

Building Interactive C/C++ Workflows in Jupyter through clang-repl

<p>C++ remains central to high-performance and scientific computing, yet interactive workflows for the language have historically been fragmented or unavailable. Developers rely on REPL-driven exploration, rapid iteration, rich visualisation, and debugging, but C++ long lacked incremental execution, notebook integration, browser-based execution, and JIT debugging. With the introduction of <a href="https://clang.llvm.org/docs/ClangRepl.html">clang-repl</a>, LLVM now provides an upstream incremental compilation engine built on Clang, the IncrementalParser, and the ORC JIT.</p>

<p>This talk presents how the Project Jupyter, Clang/clang-repl, and Emscripten communities collaborated to build a complete, upstream-aligned interactive C++ environment. <a href="https://github.com/compiler-research/xeus-cpp">Xeus-Cpp</a> embeds clang-repl as a native C/C++ Jupyter kernel across Linux, macOS, and Windows, enabling widgets, plots, inline documentation, and even CUDA/OpenMP use cases. <a href="https://compiler-research.org/xeus-cpp-wasm/lab/index.html">Xeus-Cpp-Lite</a> extends this model to the browser via WebAssembly and JupyterLite, compiling LLVM and Clang to WASM and using wasm-ld to dynamically link shared wasm modules generated per cell at runtime.</p>

<p>To complete the workflow, Xeus-Cpp integrates LLDB-DAP through clang-repl’s out-of-process execution model, enabling breakpoints, stepping, variable inspection, and full debugging of JIT-generated code directly in JupyterLab.</p>

<p>The talk will detail how clang-repl, ORC JIT, wasm-ld, LLDB, and LLDB-DAP come together to deliver a modern, sustainable interactive C++ workflow on both desktop and browser platforms, with live demonstrations of native and WebAssembly execution along the way.</p>

<p><strong>LLVM Components Involved:</strong> clang, clang-repl, ORC JIT, wasm-ld, LLDB, LLDB-DAP.</p>

<p><strong>Target Audience:</strong> Researchers, Educators, Students, C/C++ Practitioners</p>

<p><strong>Note:</strong> Please make sure to check out the demos/links added to the Resource section. These demos will be shown live during the talk.</p>

More information

Live stream: https://live.fosdem.org/watch/ud6215
Format: devroom
Language: English

More sessions

31.01.26
LLVM
UD6.215
<p>A word of welcome by the LLVM Dev room organizers.</p>
31.01.26
LLVM
Peter Smith
UD6.215
<p>LLVM has recently gained support for an ELF implementation of the AArch64 Pointer Authentication ABI (PAuthABI) for a Linux Musl target. This talk will cover:</p> <ul> <li>An introduction to the PAuthABI and its LLVM support.</li> <li>How to experiment with it on any Linux machine using qemu-aarch64 emulation.</li> <li>How to adapt the Linux Musl target to a bare-metal target using LLVM libc.</li> </ul> <p>The AArch64 Pointer Authentication Code instructions are currently deployed on Linux to protect the return address on ...
31.01.26
LLVM
Pablo Marcos
UD6.215
<p>Ever been debugging a production issue and wished you'd added just one more log statement? Now you have to rebuild, wait for CI, deploy... all that time wasted. We've all been there, cursing our past selves.</p> <p>We've integrated LLVM's XRay into ClickHouse to solve this. It lets us hot-patch running production systems to inject logging, profiling, and even deliberate delays into any function. No rebuild required.</p> <p>XRay reserves space at function entry/exit that can be atomically ...
31.01.26
LLVM
Jan-Patrick Lehr
UD6.215
<p>Over the past two years, the LLVM community has been building a general-purpose GPU offloading library. While still in its early stages, this library aims to provide a unified interface for launching kernels across different GPU vendors. The long-term vision is to enable diverse projects—ranging from OpenMP® to SYCL™ and beyond—to leverage a common GPU offloading infrastructure.</p> <p>Developing this new library alongside the existing OpenMP® offloading infrastructure has introduced ...
31.01.26
LLVM
Stefan Gränitz
UD6.215
<p>LLVM’s ORC JIT [1] is a powerful framework for just-in-time compilation of LLVM IR. However, when applied to large codebases, ORC often exhibits a surprisingly high front-load ratio: we have to parse all IR modules before execution even reaches main(). This diminishes the benefits of JITing and contributes to phenomena such as the “time to first plot” latency in Julia, one of ORC’s large-scale users [2].</p> <p>The llvm-autojit plugin [3] is a new experimental compiler extension for ...
31.01.26
LLVM
Josse Van Delm
UD6.215
<p>Every new AI workload seems to need new hardware. Companies spend months designing NPUs (neural processing units), then more months building compilers for them—only to discover the hardware doesn't efficiently run their target workloads. By the time they iterate, the algorithm has moved on.</p> <p>We present a work-in-progress approach that generates NPU hardware directly from algorithm specifications using MLIR and CIRCT. Starting from a computation expressed in MLIR's Linalg dialect, our ...
31.01.26
LLVM
Jonas Devlieghere
UD6.215
<p>WebAssembly support in Swift started as a community project and became an official part of Swift 6.2. As Swift on WebAssembly matures, developers need robust debugging tools to match. This talk presents our work adding native debugging support for Swift targeting Wasm in LLDB. WebAssembly has some unique characteristics, such as its segmented memory address space, and we'll explore how we made that work with LLDB's architecture. Additionally, we'll cover how extensions to the GDB remote ...