
How it works

This section describes the core algorithms behind Probabilistic Model Evaluation, that is, how the probability distributions of performance metrics are estimated. A brief illustrative sketch of the underlying idea follows the chapter list below.

  • HDI+ROPE (with minimum precision)
  • Getting Probability Distribution of a Performance Metric with targets
  • Getting Probability Distribution of Performance Metric without targets
  • Getting Probability Distribution of Performance Metric when some observations have labels
  • Defaults for ROPE and estimation precision
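
The chapters above share a common theme: a performance metric is treated as a random variable with a probability distribution, and decisions are made by comparing a highest density interval (HDI) against a region of practical equivalence (ROPE). The sketch below is a minimal illustration of that idea, not NannyML's implementation (the actual algorithms are described in the chapters above). It assumes a Beta posterior over accuracy computed from labeled data; the `hdi` helper, the ROPE bounds, and all data are hypothetical choices made for illustration.

```python
# Illustrative sketch: posterior distribution of accuracy and an HDI+ROPE check.
# This is NOT NannyML's implementation; see the chapters above for the real algorithms.
import numpy as np
from scipy import stats


def accuracy_posterior(y_true, y_pred, prior_a=1.0, prior_b=1.0):
    """Beta posterior over accuracy given labeled predictions (uniform prior by default)."""
    correct = int(np.sum(np.asarray(y_true) == np.asarray(y_pred)))
    n = len(y_true)
    return stats.beta(prior_a + correct, prior_b + n - correct)


def hdi(dist, mass=0.95, grid_size=10_000):
    """Narrowest interval containing `mass` of the distribution's probability."""
    # Scan candidate lower-tail probabilities and keep the shortest interval.
    lowers = np.linspace(0, 1 - mass, grid_size)
    intervals = np.column_stack([dist.ppf(lowers), dist.ppf(lowers + mass)])
    widths = intervals[:, 1] - intervals[:, 0]
    return tuple(intervals[np.argmin(widths)])


# Hypothetical example: ~87% of 100 labeled observations predicted correctly.
rng = np.random.default_rng(42)
y_true = rng.integers(0, 2, size=100)
y_pred = np.where(rng.random(100) < 0.87, y_true, 1 - y_true)

posterior = accuracy_posterior(y_true, y_pred)
low, high = hdi(posterior, mass=0.95)
rope = (0.85, 1.0)  # hypothetical region of practical equivalence for accuracy

print(f"95% HDI for accuracy: [{low:.3f}, {high:.3f}]")
if low >= rope[0] and high <= rope[1]:
    print("HDI entirely inside ROPE: model accepted.")
elif high < rope[0] or low > rope[1]:
    print("HDI entirely outside ROPE: model rejected.")
else:
    print("HDI overlaps the ROPE boundary: no decision yet.")
```

With targets available, the posterior can be computed directly as above; the chapters on estimation without (or with partial) targets describe how NannyML handles the harder cases where labels are missing.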