| Live Stream | https://live.fosdem.org/watch/h1308 |
|---|---|
| Type | devroom |
| Language | English |
| Date | 1/31/26 |
<p>eBPF programs often behave differently than developers expect, not because of incorrect logic, but because of subtle behaviours of the hook points themselves. In this talk, we focus on a small set of high-impact, commonly misunderstood attachment types (kprobes/fentry, tracepoints, and uprobes) and expose the internal kernel mechanics that cause surprising edge cases.</p> <p>Rather than attempting to cover all eBPF hooks, this session distills a practical set of real-world gotchas that ...
|
| 1/31/26 |
<p>Training large models requires significant resources, and the failure of any GPU or host can significantly prolong training times. At Meta, we observed that 17% of our jobs fail due to RDMA-related syscall errors that arise from bugs in the RDMA driver code. Unlike other parts of the kernel, RDMA-related syscalls are opaque, and the errors create a mismatched application/kernel view of hardware resources. As a result of this opacity and mismatch, existing observability tools provided limited ...
|
| 1/31/26 |
<p>Any eBPF project started in the last couple of years is most likely written to take advantage of CO-RE (Compile Once, Run Everywhere): eBPF programs are compiled ahead of time and can then run on a wide range of kernels and machines.</p> <p>Before CO-RE it was common to ship the whole toolchain and compile on the target. This is what Cilium currently still does. Compiling on the target empowered a core value of Cilium: "you do not pay for what you do not use". But it turns out that with CO-RE ...
|
| 1/31/26 |
<p><a href="http://github.com/parca-dev/oomprof">OOMProf</a> is a Go library that installs eBPF programs that listen to the Linux kernel tracepoints involved in OOM killing and record a memory profile before your Go program is dead and gone. The memory profile can be logged as a pprof file or sent to a <a href="http://parca.dev/">Parca</a> server for storage and analysis. This talk will be a deep dive into the implementation, its limitations, and possible future directions.</p>
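<p>The abstract notes that the captured memory profile can be logged as a pprof file. As a point of reference only (this is not OOMProf's eBPF capture path, and <code>captureHeapProfile</code> is a hypothetical helper name), a minimal sketch of producing a pprof-format heap profile from inside a Go program using just the standard library:</p>

```go
package main

import (
	"bytes"
	"runtime"
	"runtime/pprof"
)

// captureHeapProfile returns the current heap profile in pprof's
// gzip-compressed protobuf format. Hypothetical helper for illustration.
func captureHeapProfile() ([]byte, error) {
	runtime.GC() // flush recent allocations into the heap profile
	var buf bytes.Buffer
	if err := pprof.WriteHeapProfile(&buf); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}

func main() {
	data, err := captureHeapProfile()
	if err != nil {
		panic(err)
	}
	// The bytes can be saved as heap.pprof and inspected with
	// `go tool pprof`, or shipped to a Parca server.
	println("profile bytes:", len(data))
}
```

<p>The difference, of course, is that OOMProf captures the profile from outside the process via kernel tracepoints, which is what makes a profile available even when the process is about to be OOM-killed.</p>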
|
| 1/31/26 |
<p>XDP and AF_XDP provide a high-performance mechanism for driver-layer packet processing and zero-copy delivery of packets into userspace, while maintaining access to standard kernel networking constructs — capabilities that distinguish them from full kernel-bypass frameworks such as DPDK. However, the current AF_XDP implementation offers no efficient in-kernel mechanism for forwarding packets between AF_XDP sockets in a zero-copy manner. Because AF_XDP operates without the conventional full ...
|
| 1/31/26 |
<p>XDP (eXpress Data Path) enables high-speed packet-processing applications with eBPF. Achieving high throughput requires careful design and profiling of XDP applications. However, existing profiling tools lack eBPF support. We introduce InXpect, a lightweight monitoring framework that profiles eBPF programs at fine granularity with minimal overhead, making it suitable for in-production XDP-based systems. We demonstrate how InXpect outperforms existing tools in profiling overhead and ...
|
| 1/31/26 |
<h1>XDP Virtual Server: An eBPF Load Balancer library</h1> <p>Faced with the looming retirement of our traditional load-balancer appliances, we decided to give XDP a try. Facebook's Katran library did not support layer 2 switching, which was still a requirement, so we built an eBPF application in C and a supporting library in Go.</p> <p>We came across a few issues along the way - driver support for network cards gave us headaches - but on the whole eBPF has made what would have been ...
|