Model overview


Whether you've just added a new model or you're checking existing ones, you can see them all on the model overview page. Here, you'll find basic information about each model: performance on key metrics, any issues with concept drift, covariate shift, or data quality, and whether monitoring runs were successful.

Here is our video guide explaining how to use the model overview page:

The model overview page is made up of six components:

  1. Search model by name

    When you have many monitored models in production, you can type in their names to quickly find the one that you're looking for.

  2. Main performance metric

    This is a summary of the model's most critical performance metric. It shows the name of the metric and its realized and estimated values.

  3. Summary performance metrics

    These summarise the performance status of the model across the realized and estimated metrics. When available, hovering shows more information about other metrics.

  4. Summary of data shift and quality results

    These summarise the status of the concept drift, covariate shift, and data quality results. Hovering and clicking reveals more information about these results.

  5. Monitoring status

    This indicates whether the last run of NannyML was successful. These are the possible states:

    • Successful: when the most recent run didn't have any errors.

    • Error: when the most recent run did have errors.

    • Empty: when the model was newly created and there are no results yet.

    • Skipped run: when no new data was added, so the run was skipped. The previous results (Alerts/No Alerts/Empty) remain unchanged.

    What an 'error' means:

    • There was an exception when running NannyML calculators.

    • Something else went wrong that prevented NannyML from running, e.g., a timeout when starting a new job.

  6. Model name

    To get a more in-depth analysis of the model, click on its name. You can change this name in the Model settings.

Model overview page.