HPC, Big Data, and Data Science

Snorkel Beambell - Real-time Weak Supervision on Apache Flink

UB5.132
Suneel Marthi
The advent of Deep Learning models has led to massive growth in real-world machine learning. Deep Learning allows practitioners to reach state-of-the-art scores on benchmarks without any hand-engineered features. These models, however, rely on massive hand-labeled training datasets, and producing those datasets is a bottleneck in developing and modifying machine learning models. Most large-scale machine learning systems today, such as Google’s DryBell, use some form of Weak Supervision to construct lower-quality but large-scale training datasets that can be used to continuously retrain and deploy models in real-world scenarios.

The challenge with continuous retraining is that one needs to maintain prior state (e.g., the labeling functions in the case of Weak Supervision, or a pre-trained model like BERT or Word2Vec for Transfer Learning) that is shared across multiple streams while continuously updating the model. Apache Beam’s stateful stream processing capabilities are a perfect match here, including support for scalable Weak Supervision. Prior work using Beam’s state, coupled with Flink’s dynamic processing capabilities, to store and update word embeddings for real-time online topic modeling of text was presented at Flink Forward Berlin 2018. Similar streaming pipelines also work for real-time model updates using Weak Supervision and Transfer Learning.

In this talk, we’ll look at Snorkel BeamBell, a framework that leverages Stanford’s Snorkel library for Weak Supervision and Apache Beam for large-scale online labeling of data, and that can continuously learn new classification models based on stateful labeling functions and user feedback. The audience will come away with a better understanding of how Weak Supervision combined with Apache Beam’s stateful stream processing can be used to accelerate the labeling of training data and the real-time training and updating of machine learning models.
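
The sketch below illustrates the general idea only, not the Snorkel BeamBell code presented in the talk: Snorkel labeling functions applied to a keyed stream inside an Apache Beam stateful DoFn, with the accumulated weak-label votes kept in per-key state and a Snorkel LabelModel periodically refit from them. The class name WeakLabelFn, the toy heuristics, and the refit threshold are hypothetical choices for illustration.

```python
# Minimal sketch (assumption: Snorkel's labeling-function API and Beam's
# stateful DoFn; NOT the Snorkel BeamBell implementation from the talk).
import apache_beam as beam
from apache_beam.transforms.userstate import ReadModifyWriteStateSpec
import numpy as np
from snorkel.labeling import labeling_function
from snorkel.labeling.model import LabelModel

SPAM, HAM, ABSTAIN = 1, 0, -1

@labeling_function()
def lf_contains_link(x):
    # Weak heuristic: messages containing URLs are likely spam.
    return SPAM if "http" in x["text"].lower() else ABSTAIN

@labeling_function()
def lf_short_message(x):
    # Weak heuristic: very short messages are likely not spam.
    return HAM if len(x["text"]) < 20 else ABSTAIN

LFS = [lf_contains_link, lf_short_message]

class WeakLabelFn(beam.DoFn):
    """Applies labeling functions to each element and periodically refits
    a Snorkel LabelModel from the weak-label matrix kept in per-key state."""
    BUFFER = ReadModifyWriteStateSpec("lf_matrix", beam.coders.PickleCoder())

    def process(self, element, buffer=beam.DoFn.StateParam(BUFFER)):
        _, record = element
        votes = [lf(record) for lf in LFS]          # one vote per labeling fn
        rows = (buffer.read() or []) + [votes]
        buffer.write(rows)
        if len(rows) >= 100:                        # refit once enough votes accumulate
            label_model = LabelModel(cardinality=2, verbose=False)
            label_model.fit(np.array(rows))         # in a real pipeline, the refit
            buffer.write([])                        # model would also be kept in state
        yield record, votes

with beam.Pipeline() as p:
    (p
     | beam.Create([("k", {"text": "visit http://example.com now"}),
                    ("k", {"text": "ok thanks"})])
     | beam.ParDo(WeakLabelFn()))
```
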

Additional information

Type devroom

More sessions

2/2/20
HPC, Big Data, and Data Science
Colin Sauze
UB5.132
This talk will discuss the development of a Raspberry Pi cluster for teaching an introduction to HPC. The motivation for this was to overcome four key problems faced by new HPC users: the availability of a real HPC system and the effect running training courses can have on that system (conversely, the availability of spare resources on the real system can cause problems for the training course); a fear of using a large and expensive HPC system for the first time and worries that doing something ...
2/2/20
HPC, Big Data, and Data Science
Adrian Woodhead
UB5.132
This presentation will give an overview of the various tools, software, patterns and approaches that Expedia Group uses to operate a number of large-scale data lakes in the cloud and on premises. The data journey undertaken by the Expedia Group is probably similar to that of many others who have been operating in this space over the past two decades - scaling out from relational databases to on-premises Hadoop clusters to a much wider ecosystem in the cloud. This talk will give an overview of that journey ...
2/2/20
HPC, Big Data, and Data Science
Félix-Antoine Fortin
UB5.132
Compute Canada provides HPC infrastructure and support to every academic research institution in Canada. In recent years, Compute Canada has started distributing research software to its HPC clusters using CERN's software distribution service, CVMFS. This opened the possibility of accessing the software from almost any location and therefore allows the replication of the Compute Canada experience outside of its physical infrastructure. From these new possibilities emerged an open-source ...
2/2/20
HPC, Big Data, and Data Science
Moritz Meister
UB5.132
Maggy is an open-source framework built on Apache Spark for the asynchronous parallel execution of trials in machine learning experiments. In this talk, we will present our work to tackle search as a general-purpose method efficiently with Maggy, focusing on hyperparameter optimization. We show that an asynchronous system enables state-of-the-art optimization algorithms and allows extensive early stopping in order to increase the number of trials that can be performed in a given period of time on ...
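
As a rough illustration of why asynchrony helps (a conceptual sketch only, not Maggy's actual API): trials run concurrently and report intermediate scores, and an early-stopping rule terminates trials whose scores fall below the median at the same step, freeing resources for new trials instead of waiting for a synchronous batch to finish. Maggy runs trials on Spark executors; a thread pool stands in here.

```python
# Conceptual sketch of asynchronous parallel trials with early stopping.
import random
import threading
from concurrent.futures import ThreadPoolExecutor, as_completed

history = {}              # step -> scores reported by all trials at that step
lock = threading.Lock()

def should_stop(score, step):
    # Median-based early stopping: stop a trial whose intermediate score
    # falls below the median of scores reported by other trials at this step.
    with lock:
        seen = history.setdefault(step, [])
        seen.append(score)
        median = sorted(seen)[len(seen) // 2]
        return len(seen) > 3 and score < median

def run_trial(lr):
    """Hypothetical training loop that reports intermediate scores."""
    score = 0.0
    for step in range(10):
        score += random.random() * lr   # stand-in for one epoch of training
        if should_stop(score, step):
            return score, True          # worker freed for the next trial
    return score, False

# Trials start as soon as a worker is free rather than in fixed synchronous
# batches, so stragglers and stopped trials don't block the others.
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = {pool.submit(run_trial, random.uniform(1e-4, 1e-1)): i
               for i in range(16)}
    for fut in as_completed(futures):
        score, stopped = fut.result()
        print(f"trial {futures[fut]:2d}: score={score:.3f} stopped_early={stopped}")
```
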
2/2/20
HPC, Big Data, and Data Science
Frank McQuillan
UB5.132
In this session we will present an efficient way to train many deep learning model configurations at the same time with Greenplum, a free and open source massively parallel database based on PostgreSQL. The implementation involves distributing data to the workers that have GPUs available and hopping model state between those workers, without sacrificing reproducibility or accuracy. Then we apply optimization algorithms to generate and prune the set of model configurations to try.
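
A conceptual sketch of the "model hopping" idea (not the Greenplum/MADlib implementation): the data stays partitioned in place while each model configuration rotates across the segments, so every model sees every segment once per pass and no training data is moved between workers. The segment layout and logistic-regression update below are illustrative assumptions.

```python
import numpy as np

def train_step(weights, X, y, lr):
    # One gradient-descent step of plain logistic regression on a segment.
    preds = 1.0 / (1.0 + np.exp(-X @ weights))
    grad = X.T @ (preds - y) / len(y)
    return weights - lr * grad

rng = np.random.default_rng(0)
# Four fixed data segments, standing in for data resident on database workers.
segments = [(rng.normal(size=(100, 5)), rng.integers(0, 2, 100)) for _ in range(4)]

# Several model configurations (here, just different learning rates).
configs = [0.01, 0.1, 0.5]
models = [np.zeros(5) for _ in configs]

for _epoch in range(3):
    for step in range(len(segments)):
        # Each model "hops" to a different segment every step; data stays put.
        for m, (w, lr) in enumerate(zip(models, configs)):
            X, y = segments[(m + step) % len(segments)]
            models[m] = train_step(w, X, y, lr)

for lr, w in zip(configs, models):
    print(f"lr={lr}: ||w|| = {np.linalg.norm(w):.3f}")
```
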
2/2/20
HPC, Big Data, and Data Science
UB5.132
Predictive maintenance and condition monitoring for remote heavy machinery are compelling endeavors to reduce maintenance cost and increase availability. Beneficial factors for such endeavors include the degree of interconnectedness, availability of low cost sensors, and advances in predictive analytics. This work presents a condition monitoring platform built entirely from open-source software. A real world industry example for an escalator use case from Deutsche Bahn underlines the advantages ...
2/2/20
HPC, Big Data, and Data Science
Ludovic Courtès
UB5.132
Jupyter has become a tool of choice for researchers who want to share a narrative and the supporting code that their peers can re-run. This talk is about Jupyter's Achilles' heel: software deployment. I will present Guix-Jupyter, which aims to make notebooks self-contained and to support reproducible deployment.