Version 0.23.0


Release Notes

We're proud to bring you our latest release, 0.23.0!

We've worked hard on a new product feature that unlocks huge potential within NannyML Cloud: custom metrics!

Custom metrics support

NannyML Cloud includes a suite of performance metrics, but sometimes your use case calls for a very specific one. The newly added support for custom metrics lets you provide an implementation of your very own metric, which we then plug into our algorithms for performance calculation and estimation. We support custom metrics for binary classification, multiclass classification, and regression models.

If you are already monitoring models with NannyML Cloud, you can easily add new custom metrics to your monitoring workflow! In the custom metrics overview, you can create a new custom metric.

For custom classification metrics, you'll have to provide a function that calculates the realized performance based on your model predictions, targets, or any other column available in NannyML Cloud. You can optionally provide an estimation function as well, which will then be plugged into our estimation algorithms.
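To make that concrete, here is a minimal sketch of what a calculation function for a custom F2 metric could look like. The exact signature NannyML Cloud expects is documented in the Creating Custom Metrics guide; the keyword-argument catch-all and the column names below are illustrative assumptions, not the definitive interface.

```python
import pandas as pd
from sklearn.metrics import fbeta_score

# Illustrative sketch only: the exact arguments NannyML Cloud passes to a
# custom metric function are described in the Creating Custom Metrics guide.
# Here we assume a DataFrame holding the target and predicted-label columns.
def calculate(data: pd.DataFrame, **kwargs) -> float:
    """Realized F2 score: recall weighted twice as heavily as precision."""
    y_true = data["y_true"]  # assumed target column name
    y_pred = data["y_pred"]  # assumed predicted-label column name
    return fbeta_score(y_true, y_pred, beta=2)
```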

Once the custom metric has been created, we'll assign it to our model.

The next time we calculate the model metrics, the custom metric will be included. You'll be able to see the results in the performance pane of the web application. You can also add it to your model dashboard or use it as a key performance metric!

We've written a lot of documentation to get you started with custom metrics easily!

Check out the Creating Custom Metrics guide to help you set up custom metrics for binary classification, multiclass classification, or regression models. For a more advanced, real-life example, you can read the Advanced Tutorial: Creating a MTBF Custom Metric.

[Screenshot: Creating a new custom F2 metric for binary classification]
[Screenshot: Custom metrics overview]
[Screenshot: Assigning a custom metric to our model]

Enjoy your custom metrics in the performance pane!