FOSDEM 2021 Schedule
HPC, Big Data and Data Science

Make life easier for big data users on Arm platform

D.hpc
Zhenyu Zheng
Currently, more and more Arm-based datacenter hardware options are appearing on the market, and their performance has been improving continuously, so a growing number of users and customers are considering them for their business. Big Data is one of the most important areas. In contrast, the open source Big Data ecosystem on Arm is far from mature: most projects in the ecosystem were not designed with Arm in mind, developers have not officially tested their code on Arm, and many problems remain unsolved. To get this software running on Arm, users have to search and read piles of articles, apply many patches, and build a number of dependencies on their own. And whenever upstream changes or upgrades, new problems may appear, since upstream does not test on Arm. All of these challenges make users hesitant to adopt Arm for their business.

To change this situation and make the open source Big Data ecosystem friendlier to the Arm platform and its users, our team started by proposing to add Arm CI to these projects. With Arm CI in place, a project is fully tested on Arm, and all future changes are tested on Arm as well. In the process, we fixed many problems directly upstream, which benefits all users. We then ran performance comparisons between Arm and x86 to give users an overview of the current status, and a large number of TODO items remain for the future.

In this session, you will learn the current status of Arm CI for Big Data ecosystem projects such as Hadoop, Spark, HBase, Flink, Storm, Kudu, and Impala, see our efforts to fix Arm-related problems, and hear about our future plans.

Additional information

Type devroom

More sessions

2/6/21
HPC, Big Data and Data Science
Ali Hajiabadi
D.hpc
With the end of Moore’s law, improving single-core processor performance is extremely difficult to do in an energy-efficient manner. One alternative is to rethink conventional processor design methodologies and propose innovative ideas to unlock additional performance and efficiency. In an attempt to overcome these difficulties, we propose a compiler-informed non-speculative out-of-order commit processor that attacks the limitations of in-order commit in current out-of-order cores to ...
2/6/21
HPC, Big Data and Data Science
Christian Kniep
D.hpc
The container ecosystem spans from spawning a process into an isolated and constrained region of the kernel at the bottom layer, to building and distributing images just above it, to discussions at the very top on how to schedule a fleet of containers around the world. While the top layers get all the attention and buzz, this session will baseline the audience's understanding of how to execute containers.
2/6/21
HPC, Big Data and Data Science
Nicolas Poggi
D.hpc
Over the years, there has been extensive and continuous effort on improving Spark SQL's query optimizer and planner in order to generate high-quality query execution plans. One of the biggest improvements is the cost-based optimization framework, which collects and leverages a variety of data statistics (e.g., row count, number of distinct values, NULL values, max/min values, etc.) to help Spark make better decisions in picking the optimal query plan.
2/6/21
HPC, Big Data and Data Science
Mohammad Norouzi
D.hpc
This talk introduces DiscoPoP, a tool which identifies parallelization opportunities in sequential programs and shows programmers how to parallelize them using OpenMP. The tool first identifies computational units (CUs), which, in our terminology, are the atoms of parallelization. Then, it profiles memory accesses in the source code to detect data dependencies. By mapping dependencies to CUs, we create a data structure which we call the program execution tree (PET). Further, DiscoPoP inspects the ...
2/6/21
HPC, Big Data and Data Science
Alaina Edwards
D.hpc
In this talk we explore two programming models for GPU-accelerated computing in a Fortran application: OpenMP with target directives, and CUDA. We use an example application solving the Riemann problem, a common problem in fluid dynamics, as our testing ground. This example application is implemented in GenASiS, a code being developed for astrophysics simulations. While OpenMP and CUDA are supported on the Summit supercomputer, its successor, the exascale supercomputer Frontier, will support OpenMP and ...
2/6/21
HPC, Big Data and Data Science
Bob Dröge
D.hpc
The European Environment for Scientific Software Installations (EESSI, pronounced as “easy”) is a collaboration between different HPC sites and industry partners, with the common goal of setting up a shared repository of scientific software installations that can be used on a variety of systems, regardless of which flavor/version of Linux distribution or processor architecture is used, or whether it is a full-size HPC cluster, a cloud environment, or a personal workstation. The EESSI codebase ...
2/6/21
HPC, Big Data and Data Science
Robert McLay
D.hpc
XALT is a tool run on clusters to find out what programs and libraries are executed. XALT uses the LD_PRELOAD environment variable to attach a shared library that executes code before and after main(), so the XALT shared library becomes part of every program run under Linux. This talk will discuss various lessons learned about routine names and memory usage. Extending XALT to track container usage presents new issues because of what shared ...