Model settings



Under model settings, you can find all the monitoring parameters of a selected model. These settings are specific to a single model and should not be confused with the general NannyML settings available in the navbar.

On the left side, you can navigate through the different configuration groups. There is also a "Run now" button to trigger a new NannyML run, which can be useful after updating parameters.
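The "Run now" action is also available programmatically. A minimal sketch using the NannyML Cloud SDK, following the pattern from its Getting Started guide; the URL, API token, and model name here are placeholders:

```python
import nannyml_cloud_sdk as nml_sdk

nml_sdk.url = "https://your-nannyml-instance.example.com"  # placeholder instance URL
nml_sdk.api_token = "your-api-token"  # placeholder token, created in account settings

# Look up the model by name, then trigger a new run,
# equivalent to pressing "Run now" in the model settings.
model, = nml_sdk.monitoring.Model.list(name="Example model")
nml_sdk.monitoring.Run.trigger(model["id"])
```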

General details

Here, you can change the name of your model.

Datasets

Under datasets, you can manually add more analysis and target data.
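Adding data is also possible through the NannyML Cloud SDK. A minimal sketch, assuming the SDK is configured as in the previous snippet and that the parquet files (placeholders) match the model's schema:

```python
import pandas as pd
import nannyml_cloud_sdk as nml_sdk

# Placeholder files; in practice this data comes from your production pipeline.
new_inferences = pd.read_parquet("new_inferences.pq")
delayed_targets = pd.read_parquet("delayed_ground_truth.pq")

model, = nml_sdk.monitoring.Model.list(name="Example model")

# Append fresh analysis data and delayed target data to the model's datasets.
nml_sdk.monitoring.Model.add_analysis_data(model["id"], new_inferences)
nml_sdk.monitoring.Model.add_analysis_target_data(model["id"], delayed_targets)
```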

Schedule

Under schedule, you can define when to run the drift and metric calculators.

Chunking

Here, you can choose how results are grouped into chunks, either by time interval or by size. For example, choosing "monthly" groups all predictions made in the same month and calculates one set of results per month.

💡 We currently only support time-based and size-based chunking; if you need support for number-based chunking, contact us.
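These options mirror the chunkers in the open-source nannyml library, which can help build intuition for what each setting does. A small sketch, with the data file and column names as assumptions:

```python
import pandas as pd
from nannyml.chunk import PeriodBasedChunker, SizeBasedChunker

# Assumed analysis data with a timestamp column.
data = pd.read_parquet("analysis.pq")

# Time-based chunking: one chunk per calendar month ('M' is a pandas period alias).
monthly = PeriodBasedChunker(offset="M", timestamp_column_name="timestamp")

# Size-based chunking: one chunk per 5,000 consecutive rows.
by_size = SizeBasedChunker(chunk_size=5000)

for chunk in monthly.split(data):
    print(chunk.key, len(chunk.data))
```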

Performance

Here, you can select the metrics you want to monitor, with the option to configure each one further. Depending on the selected performance types, metrics are calculated, estimated, or both. Calculating metrics, and thus measuring realized performance, is only possible if targets are supplied.

Each metric's configuration lets you specify whether that metric should be calculated and/or estimated. NannyML automatically derives thresholds from the supplied reference data, but you can configure a custom threshold here. All metrics follow this type of configuration except business value.

There are two types of thresholds: constant thresholds, which use fixed lower and upper boundaries, and standard deviation-based thresholds, which place the boundaries a chosen number of standard deviations above and below the mean of the reference results.
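These correspond to the threshold types in the open-source nannyml library, sketched below for intuition (the numbers are arbitrary):

```python
from nannyml.thresholds import ConstantThreshold, StandardDeviationThreshold

# Constant threshold: alert whenever the metric leaves the fixed band [0.7, 0.9].
constant = ConstantThreshold(lower=0.7, upper=0.9)

# Standard deviation-based threshold: the band is the mean of the reference
# results plus/minus 3 standard deviations of those results.
std_based = StandardDeviationThreshold(std_lower_multiplier=3, std_upper_multiplier=3)
```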

For business value estimation or calculation, a cost/benefit matrix has to be supplied. This matrix specifies the value that a single observation in each cell of the confusion matrix brings in or costs. For example, a true positive prediction brings in X, while a false positive prediction costs Y.
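In the open-source library, the same idea is expressed as a 2x2 business value matrix laid out like the confusion matrix. A hedged sketch, with illustrative values and an assumed binary classification setup:

```python
import nannyml as nml

# Layout follows the confusion matrix:
# [[value of a true negative,  value of a false positive],
#  [value of a false negative, value of a true positive]]
business_value_matrix = [[0, -100], [-25, 200]]

estimator = nml.CBPE(
    y_pred_proba="y_pred_proba",
    y_pred="y_pred",
    y_true="y_true",
    problem_type="classification_binary",
    metrics=["business_value"],
    business_value_matrix=business_value_matrix,
)
```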

Concept shift

Here, you can specify which concept shift calculations to run and configure their threshold values.

Covariate shift

In the covariate shift settings, you can specify which drift detection methods to run and configure their threshold values.

Some methods work on both categorical and continuous columns; for those, you can select which column types they should run on. Their thresholds can also be configured manually.
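For comparison, the open-source library's univariate drift calculator exposes the same per-method, per-column-type choice. A minimal sketch with assumed column names:

```python
import nannyml as nml

calculator = nml.UnivariateDriftCalculator(
    column_names=["loan_amount", "repaid_loan_on_prev_car"],
    timestamp_column_name="timestamp",
    # Jensen-Shannon handles both column types; the other two are type-specific.
    continuous_methods=["kolmogorov_smirnov", "jensen_shannon"],
    categorical_methods=["chi2", "jensen_shannon"],
)
```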

Data quality

Here, you can select which data quality checks to run and configure their threshold values.

Results for both missing values and unseen values can be normalized, and both checks come with default thresholds.
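The corresponding open-source calculators show what the normalization option does: with normalize=True, results are reported as rates per chunk rather than absolute counts. A small sketch with assumed column names:

```python
import nannyml as nml

# Missing values reported as a rate per chunk rather than an absolute count.
missing = nml.MissingValuesCalculator(
    column_names=["loan_amount", "repaid_loan_on_prev_car"],
    normalize=True,
)

# Unseen values only make sense for categorical columns.
unseen = nml.UnseenValuesCalculator(
    column_names=["repaid_loan_on_prev_car"],
    normalize=True,
)
```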

[Screenshot: Model settings page]