The Large Hadron Collider at CERN is one of the largest and most precise machines humankind has built. As a particle accelerator, it enables us to study proton collisions in large detector experiments such as ATLAS and CMS. These detectors essentially work like huge cameras with millions of channels, taking up to one billion snapshots of the collisions per second. In a large fraction of these collisions, reactions take place that have been studied and understood for decades. The very rare processes, however, are the ones that are especially interesting, but at the same time challenging to extract. The probabilities of the processes studied and confirmed so far span a range of 14 orders of magnitude. Finding the needle in the haystack of events we record at the LHC is like trying to score a field goal in basketball from space. The rareness of the interesting phenomena not only calls for a thorough understanding of the physics, but also requires advanced data mining techniques to find as many events of interest as possible while keeping the number of incorrectly accepted events low. Hence, data handling in high energy physics means digging into petabytes of data to filter out and catch a handful of interesting reactions.