NannyML Cloud
v0.20.2

Performance


Last updated 1 year ago

The Performance dashboard lets you analyze changes in model performance over time in depth. If you prefer a video walkthrough, here is our guide to the performance dashboard page:

Here, you can find detailed descriptions of various elements on the performance page:

There are three main components to the Performance dashboard:

1. Filters:

You can select which performance type to display: estimated performance, realized performance, or a comparison plot of realized vs estimated performance.

Keep in mind that realized performance depends on the availability of targets in production, so it cannot always be calculated.
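This is the core distinction between the two performance types: realized performance can only be computed over rows whose targets have already arrived, while estimation algorithms (such as PAPE, described in the "How it works" section) work from predictions alone. A minimal sketch of why realized performance is sometimes unavailable, using illustrative data and a hypothetical helper rather than the NannyML Cloud API:

```python
# Hypothetical sketch, not the NannyML Cloud API: realized accuracy
# over a chunk of (prediction, target) pairs, where targets may not
# have arrived yet (represented here as None).

def realized_accuracy(chunk):
    """Accuracy over rows whose ground-truth label has arrived."""
    labeled = [(pred, target) for pred, target in chunk if target is not None]
    if not labeled:
        return None  # no targets yet -> realized performance unavailable
    return sum(pred == target for pred, target in labeled) / len(labeled)

# A chunk with targets, and a more recent chunk whose labels are delayed.
chunk_with_targets = [(1, 1), (0, 0), (1, 0), (0, 0)]
chunk_without_targets = [(1, None), (0, None)]

print(realized_accuracy(chunk_with_targets))     # 0.75
print(realized_accuracy(chunk_without_targets))  # None
```

For chunks like the second one, only the estimated performance line can be drawn, which is why the realized plot may end earlier than the estimated one.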

Choose which charts to display based on alert status: charts with no alerts, alerts in any chunk, alerts in the last chunk only, alerts on performance metrics carrying the main tag, or all charts regardless of whether and when alerts occurred.

Filter visualizations by the previously specified tags.

2. Visualisations

You can change the order of charts by metric name, or by the number or recency of alerts.

You can select a specific period of interest, which applies to all charts.

To reset a previously set date period, whether using the date range or slider, simply press the "Reset" button.

Similar to selecting a date range, you can choose a specific period of interest by simply moving the date slider.

The charts are interactive, allowing you to hover over them for more details. Red dotted lines indicate the thresholds, the blue line shows the metric during the reference period, and the light blue line shows it during the analysis period. The lightly shaded area around the estimated results is the metric's confidence band.
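The threshold lines and alerts follow from the reference period. The open-source NannyML library defaults to mean ± 3 standard deviations of the metric over the reference chunks; treat that exact rule, and the sample values below, as assumptions for illustration rather than the NannyML Cloud implementation:

```python
# Illustrative sketch: deriving alert thresholds (the red dotted lines)
# from the reference period and flagging analysis chunks outside them.
# The mean +/- 3 stdev rule is the open-source NannyML default, assumed
# here for NannyML Cloud; the metric values are made up.
from statistics import mean, stdev

reference_metric = [0.91, 0.93, 0.92, 0.94, 0.92, 0.93]  # e.g. ROC AUC per chunk
mu, sigma = mean(reference_metric), stdev(reference_metric)
lower, upper = mu - 3 * sigma, mu + 3 * sigma

analysis_metric = [0.92, 0.90, 0.85, 0.88]
alerts = [not (lower <= m <= upper) for m in analysis_metric]
print(alerts)  # the last two chunks fall below the lower threshold
```

Chunks whose metric crosses either threshold are the ones highlighted as alerts on the charts and picked up by the alert status filter described above.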

You can also zoom in on any part of a chart. Press and hold your mouse button, then drag a rectangle over your area of interest. To reset the zoom, just double-click on the chart.

3. Plots

There are two types of plot formats: line and step. A line plot smoothly connects points with straight lines to show trends, while a step plot uses sharp vertical and horizontal lines to show exact changes between points clearly.

Select which datasets to show: zoom in on the reference period, the analysis period, or display both in separate subplots.

Toggle chart components on or off, such as alerts, confidence bands, thresholds, and legends.

You can select which metrics to display. Metrics that are not calculated or estimated do not appear in the filter; you can choose which metrics to calculate or estimate under model settings.
Screenshots on this page: Performance dashboard; Performance metrics filter; Performance type filter; Alert status filter; Tags filter; Sort by window; Date range selection; Date range reset; Date slider; Chart zoom; Datasets plots; Plot elements.