NannyML Cloud v0.22.0
Probabilistic Model Evaluation > Tutorials

Data Preparation

Preparing your model data for NannyML


What data does NannyML need in order to perform model evaluation? We have a detailed tutorial on evaluating a binary classification model with NannyML. NannyML's model evaluation module only assesses whether the model's performance when deployed meets expectations, using as little data as possible given a required statistical confidence. For more comprehensive model monitoring over time, NannyML's model monitoring module should be used instead. Conceptually, NannyML needs the following information:

  • The model's predicted probabilities.

  • The actual target values for the model's predictions during the reference period.

  • The classification threshold for the model's predicted probabilities.

How should that information be encoded for NannyML to consume it? The classification threshold is simply a number provided in the Add new evaluation model wizard. The predicted probabilities and targets are presented in two columns, with each row representing a single model prediction. Let's see an example:

| predicted_probability | target |
| --- | --- |
| 0.32 | 0 |
| 0.62 | 1 |
| 0.83 | 1 |

However, when providing additional evaluation data, the target column is optional: NannyML's model evaluation algorithm can use predictions without targets to increase its confidence in the evaluated model's performance.
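For instance, additional rows without labels can be encoded with pandas' nullable `Int64` dtype, leaving `target` empty where no label is available (a sketch: the column names follow the example table above, the values are made up):

```python
import pandas as pd

# Additional evaluation data: targets are only known for some rows.
extra = pd.DataFrame({
    "predicted_probability": [0.41, 0.77, 0.15],
    # The nullable Int64 dtype keeps the column integer-typed
    # while allowing missing labels (pd.NA).
    "target": pd.array([pd.NA, 1, pd.NA], dtype="Int64"),
})

print(extra["target"].isna().sum())  # 2 unlabeled rows
```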

We recommend storing your data as parquet files.

NannyML Cloud supports both parquet and CSV files, but CSV files don't store data type information, so incorrect data types may be inferred from them. If you later add more data to the model using the SDK or in parquet format, a data type conflict may occur.
