FOSDEM 2020 Schedule
Testing and Automation

Automated Performance Testing for Virtualization with MMTests

The Tools, the Challenges and also some War-Stories about Performance Testing Hypervisors and VMs
UB2.147
Dario Faggioli
What benchmark? How many VMs? How big is each VM? Are they all equal, or are they different? What is the host OS? What are the guest OSes? In other words, when doing virtualization performance testing, the matrix of test cases tends to explode pretty quickly. This talk will show how we enhanced an existing benchmarking suite, MMTests, to deal a little better with such complexity, and what our further activities and plans are for even more and better automation.

Functional testing is already hard enough in virtualization: for instance, we need to make sure that things work with different combinations of OS versions in hosts and guests. Performance testing is even harder. There are many more things to consider, such as how many VMs we use, how big they are, whether they are all equally big or different, what to run in them, and how to partition the host resources among them. And this is true both when you have a specific (virtualized) workload and some KPI to meet, in which case you need testing and benchmarking to figure out whether the tuning you have done has gotten you there, and when you wonder how well (or how badly) a certain configuration of your host and your guests works for a number of workloads.
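To get a feel for how quickly those dimensions multiply, here is a small sketch (with made-up example values for each axis; real MMTests configurations define their own) that counts the test cases produced by combining just a handful of configuration axes:

```python
# Sketch of the "virtualization performance testing matrix" explosion:
# each configuration axis multiplies the number of cases to run.
# The axis names and values below are illustrative, not from MMTests.
from itertools import product

axes = {
    "benchmark": ["kernbench", "hackbench", "netperf"],
    "num_vms":   [1, 2, 4, 8],
    "vm_size":   ["small", "large", "mixed"],
    "host_os":   ["host-A", "host-B"],
    "guest_os":  ["guest-A", "guest-B", "guest-C"],
}

# Cartesian product of all axis values = one entry per test case.
cases = list(product(*axes.values()))
print(len(cases))  # 3 * 4 * 3 * 2 * 3 = 216 cases, from only five axes
```

Adding just one more axis (say, three ways of pinning vCPUs to host CPUs) triples the total again, which is exactly why automation becomes essential.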

This talk will introduce the problem, showing how the size and complexity of a typical "virtualization performance testing matrix" really tend to explode. As an example, we will show how some specific characteristics of a virtualized system were, despite tuning, preventing us from achieving the desired performance levels. Then we will illustrate how we do automated performance benchmarking at SUSE, how we enhanced the tool we used the most for bare-metal benchmarks (the MMTests suite) to make it much more useful on virtualized systems, and how we are integrating it with other tools to push automation even further and achieve something that really resembles a Virtualization Performance CI system.

Additional information

Type: devroom

More sessions

2020-02-01
Testing and Automation
Alexandros Frantzis
UB2.147
In this talk we will explore some of the FOSS-specific mentalities and practices that may discourage the adoption of comprehensive automated testing, and present advice for promoting and sustaining automated testing in FOSS projects.
2020-02-01
Testing and Automation
Guillaume Tucker
UB2.147
KernelCI is a project dedicated to testing the upstream Linux kernel. Originally created by Linaro in 2014, it started a new chapter by becoming a Linux Foundation project in October 2019. Its future looks bright, with plenty of opportunities for new contributors to join.
2020-02-01
Testing and Automation
Nikolai Kondrashov
UB2.147
See how Red Hat’s CKI project uses GitLab CI to test kernel patches as soon as they are posted to mailing lists.
2020-02-01
Testing and Automation
Richard Palethorpe
UB2.147
An overview of SUSE's Linux kernel testing in OpenQA: how we keep track of known issues, explore test results, and use other features of JDP. The JDP framework is written in Julia, and uses Redis as a distributed data cache and Jupyter for interactive reporting. OpenQA is a large application used for testing operating systems and displaying the results.
2020-02-01
Testing and Automation
Rajat Singh
UB2.147
OCS stands for OpenShift Container Storage. It provides container-based storage for OCP (OpenShift Container Platform), and scales easily across bare metal, VMs, and cloud platforms. Auto-healing is a property of an OCS cluster by which a cluster component is healed automatically when it passes through an unexpected condition. A component can be a node, a network interface, a service, etc. To make sure auto-healing works just fine, we introduced negative testing. Negative testing is defined as a testing type ...
2020-02-01
Testing and Automation
Rolf Madsen
UB2.147
OpenTAP is a project aimed at automation in the test and measurement space. It is designed for the test and measurement of hardware in R&D and manufacturing, but is moving more towards software testing, e.g. with usage in cloud infrastructure testing. The project started as an internal Keysight Technologies product and is used as the core of many products and solutions deployed around the world. As of 2019, we have released OpenTAP under the Mozilla Public License v2 and are working on building a ...
2020-02-01
Testing and Automation
Boris Feld
UB2.147
Over the past several years, software quality tools have evolved: CI systems are more and more scalable, there are more testing libraries than ever, they are more mature than ever, and we have seen the rise of new tools to improve the quality of the code we craft. Unfortunately, most of our CI systems still just launch a script and check the return code, most testing libraries don't allow fine-grained selection of which tests to launch, and most advanced CI innovations, such as parallel running and remote execution, ...