
Available On Demand

Your ML model makes it into production, but the job isn't done. The world changes, new data starts to look different from the old, and eventually the model needs to be retrained. How can you tell when your model is no longer performing well, and what can you do about it?


During this webinar we will explore how to detect model drift with MLflow and Apache Spark Streaming on Databricks, working with IoT sensor data from glassware manufacturing to select products for manual quality inspection. We'll also highlight some subtle problems in online model evaluation, such as connecting ground-truth labels that arrive later to the new data they describe.
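As a taste of the streaming-scoring piece, here is a minimal sketch (not the webinar's exact code) of loading a registered MLflow model as a Spark UDF and applying it to a stream of sensor readings on Databricks. The model name, Delta paths, column names, and the inspection threshold are illustrative assumptions.

```python
# Minimal sketch, assuming a Databricks notebook where `spark` is predefined.
# Model name, Delta paths, column names, and the threshold are hypothetical.
import mlflow.pyfunc
from pyspark.sql import functions as F

# Wrap the Production-stage model from the MLflow Model Registry as a Spark UDF.
predict = mlflow.pyfunc.spark_udf(
    spark, model_uri="models:/glassware_quality/Production")

# Read IoT sensor readings as a stream from a (hypothetical) Delta table.
sensors = spark.readStream.format("delta").load("/mnt/iot/glassware_sensors")

# Score each record and flag likely defects for manual quality inspection.
scored = (sensors
          .withColumn("prediction", predict(*sensors.columns))
          .withColumn("inspect", F.col("prediction") > 0.8))

# Persist scored records so ground-truth labels can be joined in later.
query = (scored.writeStream
         .format("delta")
         .option("checkpointLocation", "/mnt/iot/checkpoints/scoring")
         .start("/mnt/iot/glassware_scored"))
```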


Specifically, we will cover:

  • A brief overview of how Databricks helps you build end-to-end ML pipelines at scale
  • Best practices to deploy models for batch or real-time inference using MLflow
  • How to score models on a stream of data and detect model drift, with live demos (a simplified drift-check sketch follows this list)
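To make the drift-detection idea concrete, here is a small, hypothetical sketch of the evaluation step: once manual inspection results arrive, join them back to the scored records and watch an error-rate signal against the level observed at training time. Table paths, column names, and the baseline value are assumptions, not the webinar's code.

```python
# Minimal sketch of the drift signal, assuming the scored stream from above and a
# (hypothetical) Delta table of manual inspection outcomes; values are illustrative.
from pyspark.sql import functions as F

scored = spark.read.format("delta").load("/mnt/iot/glassware_scored")
labels = spark.read.format("delta").load("/mnt/iot/inspection_labels")

# Connect later-arriving ground-truth labels back to the predictions they evaluate.
evaluated = scored.join(labels, on="product_id", how="inner")

# Daily disagreement between the model's flag and the inspector's verdict.
daily_error = (evaluated
               .groupBy(F.window("event_time", "1 day"))
               .agg(F.avg((F.col("inspect") != F.col("defective")).cast("double"))
                    .alias("error_rate")))

# Alert when the error rate drifts well past the rate observed at training time.
BASELINE_ERROR = 0.05  # assumed training-time error rate
drift_alerts = daily_error.filter(F.col("error_rate") > 2 * BASELINE_ERROR)
```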

Presenters
Clemens Mewald, Director of Product Management, Databricks
Joel Thomas, Senior Solutions Architect, Databricks

Watch Now