HPC, Big Data, and Data Science

SCIP: scalable cytometry image processing using Dask in a high performance computing environment

Software for distributed processing of bioimaging datasets
D.hpc
Maxim Lippeveld
Bioimage analysis is the process of extracting novel insights from microscopy images of tissues, cells, or other biological entities. Tools such as ImageJ, QuPath, and CellProfiler are widely used by researchers to quantitatively interpret these complex images, performing tasks such as normalization, image segmentation, image masking, and feature extraction. However, these tools are designed for use on local workstations with a GUI and, for the most part, only allow vertical scaling to cope with growing dataset sizes. This limited scalability poses a problem as bioimaging datasets keep growing in volume.

Here, we introduce Scalable Cytometry Image Processing (SCIP). SCIP is an open-source tool that implements single-cell bioimage processing on top of Dask, a framework for distributed computing written in Python. Because all computations are executed through Dask, scalability is integral to the software, allowing it to run on high performance computing clusters. Dask's smart task scheduling ensures that computational resources are used efficiently, and SCIP also takes advantage of Dask features such as fault tolerance, load balancing, and data locality. This allows SCIP to process large datasets more efficiently and more robustly than other tools.
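The snippet below is a minimal sketch of the Dask pattern described above, not SCIP's actual pipeline: per-image work is expressed as lazy tasks in a partitioned collection, and Dask's scheduler distributes those tasks over the available workers. The segment and extract_features functions and the file pattern are hypothetical placeholders.

    import glob

    import dask.bag as db
    from dask.distributed import Client

    def segment(image_path):
        # placeholder: load the image and compute a cell mask
        return {"path": image_path, "mask": None}

    def extract_features(record):
        # placeholder: compute per-cell intensity and shape features
        return {"path": record["path"], "features": []}

    if __name__ == "__main__":
        # local cluster here; on an HPC system this would connect to a
        # scheduler running across several compute nodes
        client = Client()
        images = db.from_sequence(glob.glob("images/*.tiff"), npartitions=80)
        features = images.map(segment).map(extract_features)
        results = features.compute()  # Dask schedules the tasks over all workers

Because the collection is split into partitions, Dask can balance them over workers and re-run failed tasks, which is where the fault tolerance and load balancing mentioned above come from.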
SCIP is written in Python, and all code is freely available. It can run in local (single-node) or distributed mode; in the latter, Dask's components communicate efficiently using the MPI standard. SCIP can be used as a stand-alone command-line tool or integrated into existing Python scripts through its API. In a comparable setup, SCIP showed a 4-fold decrease in runtime compared to CellProfiler, and it required considerably less manual work to prepare the dataset for processing thanks to more efficient data input. Comparisons were executed on the Flemish Supercomputer Center Tier-1 high performance cluster. We processed two cytometry datasets containing images of human blood cells: an imaging flow cytometry dataset of 270,000 6-channel images of 90 by 90 pixels, and a confocal microscopy dataset of 869 5-channel images of 1600 by 900 pixels. SCIP processed the imaging flow cytometry dataset in 484 seconds using 80 workers on 3 compute nodes, each with 24 cores (Intel Xeon E5-2680v4 @ 2.4GHz) and 120GB of memory divided over the workers. It processed the microscopy dataset in 2187 seconds using 16 workers on 1 compute node with 32 cores (Intel Xeon Silver 4110 @ 2.10GHz) and 364GB of memory divided over the workers. This workflow also used a GPU (Nvidia GeForce RTX 3090) to identify the cells in the microscopy images.
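As an illustration of the distributed mode, the sketch below shows how a Dask cluster is typically bootstrapped over MPI with the dask-mpi package. This is generic dask-mpi usage under the assumptions noted in the comments, not SCIP's actual entry point or command-line interface.

    # Launched with, e.g., `mpirun -n 82 python run_pipeline.py` (hypothetical
    # script name): rank 0 becomes the Dask scheduler, rank 1 runs this client
    # code, and the remaining 80 ranks become workers.
    from dask_mpi import initialize
    from dask.distributed import Client

    import dask.array as da

    def main():
        initialize()       # start scheduler and workers on the MPI ranks
        client = Client()  # connect to the scheduler created by initialize()

        # Stand-in workload: any Dask computation submitted here is spread
        # over the MPI-launched workers; SCIP would submit its image-processing
        # tasks instead.
        x = da.random.random((50_000, 50_000), chunks=(2_000, 2_000))
        print(x.mean().compute())

    if __name__ == "__main__":
        main()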

Additional information

Type: devroom

More sessions

2/5/22
HPC, Big Data, and Data Science
Olena Kutsenko
D.hpc
Working with Big Data means that we need tools to organise and understand the data. And you don't have to be a developer to search, aggregate and visualise your data. Whether you need an affordable business analytics tool or you want to analyse log data in near real time, OpenSearch can help you. And all of it through the visual interface of OpenSearch Dashboards.

After listening to this talk you'll understand the basics of working with an OpenSearch cluster and different use cases ...
2/5/22
HPC, Big Data, and Data Science
Max Meldrum
D.hpc
In this talk, I will present Arcon, a Rust-native streaming runtime that integrates seamlessly with the Apache Arrow ecosystem. The Arcon philosophy is streaming first, similarly to systems such as Apache Flink and Timely Dataflow. However, unlike all existing systems, Arcon features great flexibility when it comes to its application state. Arcon's TSS query language allows extracting and operating on state snapshots consistently based on application-time constraints and interfacing with ...
2/5/22
HPC, Big Data, and Data Science
D.hpc
Any conversation about Big Data would be incomplete without talking about Apache Kafka and Apache Flink: the winning open source combination for high-volume streaming data pipelines.

In this talk we'll explore how moving from long-running batches to streaming data changes the game completely. We'll show how to build a streaming data pipeline, starting with Apache Kafka for storing and transmitting high throughput and low latency messages. Then we'll add Apache Flink, a distributed ...
2/5/22
HPC, Big Data, and Data Science
John Garbutt
D.hpc
Why build #4 on the Green500 using OpenStack? It makes it easier to manage. Cambridge University started using OpenStack in 2015. Since mid-2020, all new hardware has been controlled using OpenStack: compute nodes, GPU nodes, Lustre nodes, Ceph nodes, almost everything. OpenStack allows large bare-metal Slurm clusters and dedicated TREs (trusted research environments) to share the same images. Is this a cloud native supercomputer?
2/5/22
HPC, Big Data, and Data Science
Christian Kniep
D.hpc
This short talk will dissect the container ecosystem for HPC into four segments and discuss what to look out for, what is already settled, and how to navigate containers in 2022.
2/5/22
HPC, Big Data, and Data Science
D.hpc
Optimizing CPU management improves cluster performance and security, but it is daunting to almost everyone. CPU management may seem complex, but it can be explained in such a way that even your inner toddler will comprehend. With this talk, we will give a path to success.

You may have a multi-socket node cluster where your AI/ML workloads care about the proximity of your CPUs to GPUs. You may be running scientific workloads where you want to pin cores within containers instead of just ...
2/5/22
HPC, Big Data, and Data Science
Trevor Grant
D.hpc
Working with big data matrices is challenging. Kubernetes allows users to scale elastically, but a pod can only be as large as a node, which may not be large enough to fit the matrix in memory. While Kubernetes supports other paradigms on top of it that allow pods to coordinate on individual jobs, setting them up and making them play nice with ML platforms is not straightforward. Using Apache Spark and Apache Mahout we can work with matrices of any dimension and distribute them across ...