NannyML Cloud
v0.24.2

Introduction

Monitor what matters, find what is broken, and fix it.


What is NannyML Cloud?

NannyML Cloud is a platform to monitor, analyze, and improve machine learning models in production.

To monitor today's and tomorrow's ML models properly, we need solutions that frame the monitoring problem better: solutions with full coverage that monitor the model itself, not just its features. That is why NannyML Cloud focuses on model performance estimation and takes a performance-centric approach to monitoring. That approach works as follows.

The NannyML way

1. Monitor what matters

Focus on a single metric to know how your model is performing, and get alerted when performance drops.

  • Estimate model performance: Know the performance of your ML models 24/7. NannyML estimates the performance of your ML models even if the ground truth is delayed or absent.

  • Measure the business impact of your models: Tie the performance of your model to monetary or business-oriented outcomes, so that you always know what your ML brings to the table.

  • Avoid alert fatigue: Traditional ML monitoring tends to overwhelm teams with many false alarms. By focusing on what matters, NannyML alerts are always meaningful.
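The intuition behind label-free performance estimation can be sketched in a few lines: if a binary classifier's scores are well calibrated, the probability that each individual prediction is correct can be read off the scores themselves, with no ground truth needed. The toy function below illustrates that idea in plain Python; it is a deliberate simplification for illustration, not NannyML's CBPE or PAPE implementation.

```python
# Toy sketch of confidence-based performance estimation: for a calibrated
# binary classifier, the chance that a prediction is correct is max(p, 1 - p),
# so expected accuracy can be estimated without labels. Illustration only.

def estimate_accuracy(scores):
    """Estimate accuracy from calibrated positive-class probabilities."""
    if not scores:
        raise ValueError("need at least one score")
    return sum(max(p, 1.0 - p) for p in scores) / len(scores)

confident = [0.95, 0.05, 0.9, 0.1]  # model is sure of itself
uncertain = [0.55, 0.45, 0.6, 0.5]  # model is hedging

print(round(estimate_accuracy(confident), 3))  # 0.925
print(round(estimate_accuracy(uncertain), 3))  # 0.55
```

When the score distribution in production shifts toward the uncertain end, the estimated accuracy drops, and that is what triggers an alert before any delayed labels arrive.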

2. Find what is broken

If something is broken, understand the underlying causes by correlating performance issues with data drift alerts and concept drift variations. This approach provides an actionable path for solving model performance issues.

  • Measure the impact of concept drift on model performance: Our concept drift algorithms use the latest ground-truth data to validate whether a performance change was caused by a change in concept.

  • Uncover the most subtle changes in the data structure: Leverage our multivariate drift detection method to uncover changes that univariate approaches cannot detect.

  • Go on a granular investigation: Apply univariate drift detection methods to perform a granular investigation across the model's features, and easily find which ones correlate with the performance changes.

  • Intelligent alert ranking: NannyML links data drift alerts back to changes in model performance, so you can easily detect which features are causing the performance issues.

  • Segment your data for better interpretability: Use segmentation to divide a dataset into meaningful subgroups, allowing you to monitor and analyze specific subsets of the population more effectively.
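As an illustration of the univariate idea above, the sketch below computes a two-sample Kolmogorov-Smirnov statistic, one common way to quantify how far a feature's analysis-period distribution has moved away from its reference distribution. It is a plain-Python toy for intuition, not NannyML's implementation (which offers several univariate methods).

```python
# Minimal univariate drift check: the KS statistic is the largest gap
# between the empirical CDFs of the reference and analysis samples.
# A gap near 0 means the distributions overlap; near 1 means full separation.

def ks_statistic(reference, analysis):
    """Largest absolute gap between the two empirical CDFs."""
    points = sorted(set(reference) | set(analysis))
    max_gap = 0.0
    for x in points:
        cdf_ref = sum(v <= x for v in reference) / len(reference)
        cdf_ana = sum(v <= x for v in analysis) / len(analysis)
        max_gap = max(max_gap, abs(cdf_ref - cdf_ana))
    return max_gap

reference = [1, 2, 3, 4, 5, 6, 7, 8]
shifted = [v + 4 for v in reference]  # simulated covariate shift

print(ks_statistic(reference, reference))  # 0.0 -> no drift
print(ks_statistic(reference, shifted))    # 0.5 -> clear drift
```

Running such a check per feature, then ranking features by how strongly their drift correlates with the performance drop, is the granular-investigation workflow described above.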

3. Fix it

Once you understand what went wrong, it becomes possible to tell whether the model is worth retraining or refactoring, or whether another issue-resolution strategy is needed.

How to set up NannyML Cloud?

NannyML Cloud is available on both the Azure and AWS marketplaces, and you can deploy it in two different ways, depending on your needs.

NannyML OSS vs NannyML Cloud

| Feature | NannyML OSS | NannyML Cloud |
| --- | --- | --- |
| Performance estimation (CBPE and DLE) | ✅ | ✅ |
| PAPE (10% better performance estimations than CBPE) | ❌ | ✅ |
| Concept shift detection | ❌ | ✅ |
| Covariate shift detection | ✅ | ✅ |
| Data quality checks | ✅ | ✅ |
| Intelligent alerting | ✅ | ✅ |
| Interactive visualizations | ✅ | ✅ |
| Slack and email notifications | ❌ | ✅ |
| Customizable dashboards | ❌ | ✅ |
| Programmatic data collection | ❌ | ✅ |
| Metric storage | ❌ | ✅ |
| Scheduling monitoring runs | ❌ | ✅ |
| Segmentation | ❌ | ✅ |


Where to go next?

Here, you can find several helpful guides to aid with onboarding.

Retrain only when necessary: Leverage the NannyML Cloud SDK to automate monitoring data ingestion, and webhooks to trigger retraining pipelines when the estimated performance drops.

Managed Application: With the Managed Application, no data leaves your environment. This option provisions the NannyML Cloud components and the required infrastructure within your Azure or AWS subscription. To learn more, check out the docs on how to set up NannyML Cloud on Azure and AWS.

Software-as-a-Service (SaaS): With the SaaS option, you send us the monitoring data, and the monitoring happens in our infrastructure. To learn more, check out the docs on how to set up NannyML Cloud on Azure and AWS.

NannyML Cloud, built on our open-source NannyML library, adds new features and algorithms into an all-in-one monitoring platform.
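The "retrain only when necessary" pattern can be sketched as a small decision function applied to incoming webhook events. The payload field names and the threshold below are illustrative assumptions for the sketch, not the actual NannyML Cloud webhook schema.

```python
# Hypothetical webhook consumer logic: kick off a retraining pipeline only
# when an alert is about estimated performance and the value has fallen
# below an acceptable threshold. Field names ("category", "value") are
# assumptions for illustration, not the real NannyML Cloud payload.

def should_retrain(event, threshold=0.85):
    """Decide whether an incoming alert warrants retraining."""
    is_perf_alert = event.get("category") == "performance"  # assumed field
    estimated = event.get("value")                          # assumed field
    return bool(is_perf_alert and estimated is not None and estimated < threshold)

alert = {"category": "performance", "metric": "roc_auc", "value": 0.78}
print(should_retrain(alert))  # True -> trigger the retraining pipeline
```

Gating retraining on estimated performance, rather than on every drift alert, is what keeps pipelines from retraining on harmless covariate shift.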


  • 🏃‍♂️ Quickstart: Get started by investigating a simple use case
  • 🧭 Product tour: Find out how to use NannyML Cloud
  • 🧑‍💻 Tutorials: Explore using NannyML Cloud with tabular, text, and image data
  • 🚀 Deployment: Learn how to deploy NannyML Cloud on Azure and AWS
  • </> SDK: Learn how to interact with NannyML Cloud via code
  • 👷‍♂️ Miscellaneous: Learn how NannyML Cloud model monitoring works under the hood
[Screenshot: Estimated Accuracy using Probabilistic Adaptive Performance Estimation (PAPE)]
[Screenshot: Concept drift detection panel]

Azure Marketplace: NannyML Cloud ↗

AWS Marketplace: NannyML Cloud ↗
