Estimating Performance for Multiclass Classification

This tutorial explains how to use NannyML to estimate the performance of multiclass classification models in the absence of target data. To find out how CBPE estimates performance, read the explanation of Confidence-based Performance Estimation.

Note

The following example uses timestamps. These are optional but have an impact on the way data is chunked and results are plotted. You can read more about them in the data requirements.

Just The Code

>>> import nannyml as nml
>>> from IPython.display import display

>>> reference_df, analysis_df, _ = nml.load_synthetic_multiclass_classification_dataset()

>>> display(reference_df.head(3))

>>> estimator = nml.CBPE(
...     y_pred_proba={
...         'prepaid_card': 'y_pred_proba_prepaid_card',
...         'highstreet_card': 'y_pred_proba_highstreet_card',
...         'upmarket_card': 'y_pred_proba_upmarket_card'},
...     y_pred='y_pred',
...     y_true='y_true',
...     timestamp_column_name='timestamp',
...     problem_type='classification_multiclass',
...     metrics=['roc_auc', 'f1'],
...     chunk_size=6000,
... )
>>> estimator.fit(reference_df)

>>> results = estimator.estimate(analysis_df)
>>> display(results.filter(period='analysis').to_df())

>>> metric_fig = results.plot()
>>> metric_fig.show()

Advanced configuration
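The chunking specification and the alert thresholds can both be customized when the estimator is constructed. A minimal sketch, assuming the chunk_period argument and the ConstantThreshold class described in the chunking and thresholds tutorials:

>>> # Sketch: calendar-month chunks and a custom constant alert threshold.
>>> # chunk_period and nml.thresholds.ConstantThreshold are assumptions here;
>>> # see the chunking and thresholds tutorials for the supported options.
>>> estimator = nml.CBPE(
...     y_pred_proba={
...         'prepaid_card': 'y_pred_proba_prepaid_card',
...         'highstreet_card': 'y_pred_proba_highstreet_card',
...         'upmarket_card': 'y_pred_proba_upmarket_card'},
...     y_pred='y_pred',
...     y_true='y_true',
...     timestamp_column_name='timestamp',
...     problem_type='classification_multiclass',
...     metrics=['roc_auc', 'f1'],
...     chunk_period='M',
...     thresholds={'roc_auc': nml.thresholds.ConstantThreshold(lower=0.85)},
... )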

Walkthrough

For simplicity, this guide is based on a synthetic dataset where the monitored model predicts which type of credit card product new customers should be assigned to. Check out the Credit Card Dataset to learn more about this dataset.

In order to monitor a model, NannyML needs to learn about it and set expectations from a reference dataset. Then it can monitor the data that is subject to actual analysis, provided as the analysis dataset. You can read more about this in our section on data periods.

>>> import nannyml as nml
>>> from IPython.display import display

>>> reference_df, analysis_df, _ = nml.load_synthetic_multiclass_classification_dataset()

>>> display(reference_df.head(3))

|   | id | acq_channel | app_behavioral_score | requested_credit_limit | app_channel | credit_bureau_score | stated_income | is_customer | timestamp | y_pred_proba_prepaid_card | y_pred_proba_highstreet_card | y_pred_proba_upmarket_card | y_pred | y_true |
|---|----|-------------|----------------------|------------------------|-------------|---------------------|---------------|-------------|-----------|---------------------------|------------------------------|----------------------------|--------|--------|
| 0 | 0 | Partner3 | 1.80823 | 350 | web | 309 | 15000 | True | 2020-05-02 02:01:30 | 0.97 | 0.03 | 0 | prepaid_card | prepaid_card |
| 1 | 1 | Partner2 | 4.38257 | 500 | mobile | 418 | 23000 | True | 2020-05-02 02:03:33 | 0.87 | 0.13 | 0 | prepaid_card | prepaid_card |
| 2 | 2 | Partner2 | -0.787575 | 400 | web | 507 | 24000 | False | 2020-05-02 02:04:49 | 0.47 | 0.35 | 0.18 | prepaid_card | upmarket_card |

Next we create the Confidence-based Performance Estimation (CBPE) estimator with a list of metrics, and an optional chunking specification. For more information about chunking, check out the chunking tutorial and its advanced guide.

Note

The list of metrics specifies which performance metrics of the monitored model will be estimated. The following metrics are currently supported (an illustration follows the list):

  • roc_auc - one-vs-the-rest, macro-averaged

  • f1 - macro-averaged

  • precision - macro-averaged

  • recall - macro-averaged

  • specificity - macro-averaged

  • accuracy

  • average_precision - macro-averaged
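For orientation, these definitions match scikit-learn's macro-averaged, one-vs-rest conventions. A minimal illustrative sketch; the scikit-learn calls are an assumption for illustration only, not part of NannyML's API:

>>> # Illustration only: how the macro-averaged multiclass metrics above are
>>> # defined, using scikit-learn's conventions (assumed equivalent).
>>> from sklearn.metrics import roc_auc_score, f1_score
>>> classes = ['prepaid_card', 'highstreet_card', 'upmarket_card']
>>> proba = reference_df[[f'y_pred_proba_{c}' for c in classes]]
>>> # one-vs-the-rest ROC AUC, averaged over the three classes
>>> roc_auc_score(reference_df['y_true'], proba,
...               multi_class='ovr', average='macro', labels=classes)
>>> # macro-averaged F1 over the predicted labels
>>> f1_score(reference_df['y_true'], reference_df['y_pred'], average='macro')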

>>> estimator = nml.CBPE(
...     y_pred_proba={
...         'prepaid_card': 'y_pred_proba_prepaid_card',
...         'highstreet_card': 'y_pred_proba_highstreet_card',
...         'upmarket_card': 'y_pred_proba_upmarket_card'},
...     y_pred='y_pred',
...     y_true='y_true',
...     timestamp_column_name='timestamp',
...     problem_type='classification_multiclass',
...     metrics=['roc_auc', 'f1'],
...     chunk_size=6000,
... )
>>> estimator.fit(reference_df)

The CBPE estimator is then fitted using the fit() method on the reference data.

The fitted estimator can be used to estimate performance on other data, for which realized performance cannot be calculated. Typically, this would be used on the latest production data where targets are missing. In our example this is the analysis_df data.

NannyML can then output a dataframe that contains all the results. Let's have a look at the results for the analysis period only.

>>> results = estimator.estimate(analysis_df)
>>> display(results.filter(period='analysis').to_df())

The columns of the returned dataframe form a two-level index: a chunk group, plus one group per estimated metric. For readability they are shown below as separate tables, joined on the chunk key.

chunk

| key | chunk_index | start_index | end_index | start_date | end_date | period |
|-----|-------------|-------------|-----------|------------|----------|--------|
| [0:5999] | 0 | 0 | 5999 | 2020-09-01 03:10:01 | 2020-09-13 16:15:10 | analysis |
| [6000:11999] | 1 | 6000 | 11999 | 2020-09-13 16:15:32 | 2020-09-25 19:48:42 | analysis |
| [12000:17999] | 2 | 12000 | 17999 | 2020-09-25 19:50:04 | 2020-10-08 02:53:47 | analysis |
| [18000:23999] | 3 | 18000 | 23999 | 2020-10-08 02:57:34 | 2020-10-20 15:48:19 | analysis |
| [24000:29999] | 4 | 24000 | 29999 | 2020-10-20 15:49:06 | 2020-11-01 22:04:40 | analysis |
| [30000:35999] | 5 | 30000 | 35999 | 2020-11-01 22:04:59 | 2020-11-14 03:55:33 | analysis |
| [36000:41999] | 6 | 36000 | 41999 | 2020-11-14 03:55:49 | 2020-11-26 09:19:06 | analysis |
| [42000:47999] | 7 | 42000 | 47999 | 2020-11-26 09:19:22 | 2020-12-08 14:33:56 | analysis |
| [48000:53999] | 8 | 48000 | 53999 | 2020-12-08 14:34:25 | 2020-12-20 18:30:30 | analysis |
| [54000:59999] | 9 | 54000 | 59999 | 2020-12-20 18:31:09 | 2021-01-01 22:57:55 | analysis |

roc_auc

| key | value | sampling_error | realized | upper_confidence_boundary | lower_confidence_boundary | upper_threshold | lower_threshold | alert |
|-----|-------|----------------|----------|---------------------------|---------------------------|-----------------|-----------------|-------|
| [0:5999] | 0.906962 | 0.00214318 | nan | 0.913392 | 0.900533 | 0.913516 | 0.900902 | False |
| [6000:11999] | 0.909877 | 0.00214318 | nan | 0.916306 | 0.903447 | 0.913516 | 0.900902 | False |
| [12000:17999] | 0.909887 | 0.00214318 | nan | 0.916317 | 0.903458 | 0.913516 | 0.900902 | False |
| [18000:23999] | 0.909033 | 0.00214318 | nan | 0.915463 | 0.902604 | 0.913516 | 0.900902 | False |
| [24000:29999] | 0.907116 | 0.00214318 | nan | 0.913546 | 0.900687 | 0.913516 | 0.900902 | False |
| [30000:35999] | 0.819359 | 0.00214318 | nan | 0.825788 | 0.812929 | 0.913516 | 0.900902 | True |
| [36000:41999] | 0.820096 | 0.00214318 | nan | 0.826526 | 0.813667 | 0.913516 | 0.900902 | True |
| [42000:47999] | 0.818966 | 0.00214318 | nan | 0.825396 | 0.812537 | 0.913516 | 0.900902 | True |
| [48000:53999] | 0.819245 | 0.00214318 | nan | 0.825674 | 0.812815 | 0.913516 | 0.900902 | True |
| [54000:59999] | 0.821429 | 0.00214318 | nan | 0.827859 | 0.815 | 0.913516 | 0.900902 | True |

f1

| key | value | sampling_error | realized | upper_confidence_boundary | lower_confidence_boundary | upper_threshold | lower_threshold | alert |
|-----|-------|----------------|----------|---------------------------|---------------------------|-----------------|-----------------|-------|
| [0:5999] | 0.753301 | 0.00565227 | nan | 0.770258 | 0.736345 | 0.764944 | 0.741254 | False |
| [6000:11999] | 0.756422 | 0.00565227 | nan | 0.773378 | 0.739465 | 0.764944 | 0.741254 | False |
| [12000:17999] | 0.758166 | 0.00565227 | nan | 0.775122 | 0.741209 | 0.764944 | 0.741254 | False |
| [18000:23999] | 0.756557 | 0.00565227 | nan | 0.773514 | 0.739601 | 0.764944 | 0.741254 | False |
| [24000:29999] | 0.753618 | 0.00565227 | nan | 0.770575 | 0.736661 | 0.764944 | 0.741254 | False |
| [30000:35999] | 0.630985 | 0.00565227 | nan | 0.647941 | 0.614028 | 0.764944 | 0.741254 | True |
| [36000:41999] | 0.631482 | 0.00565227 | nan | 0.648439 | 0.614525 | 0.764944 | 0.741254 | True |
| [42000:47999] | 0.630552 | 0.00565227 | nan | 0.647509 | 0.613595 | 0.764944 | 0.741254 | True |
| [48000:53999] | 0.631736 | 0.00565227 | nan | 0.648692 | 0.614779 | 0.764944 | 0.741254 | True |
| [54000:59999] | 0.633443 | 0.00565227 | nan | 0.6504 | 0.616487 | 0.764944 | 0.741254 | True |

Apart from chunk-related data, the results data contain the following columns for each metric that was estimated (a short access sketch follows this list):

  • value - the estimate of a metric for a specific chunk.

  • sampling_error - the estimate of the Sampling Error.

  • realized - when target values are available for a chunk, the realized performance metric will also be calculated and included within the results.

  • upper_confidence_boundary and lower_confidence_boundary - These values show the Confidence Band of the relevant metric and are equal to estimated value +/- 3 times the estimated Sampling Error.

  • upper_threshold and lower_threshold - crossing these thresholds will raise an alert on significant performance change. The thresholds are calculated based on the actual performance of the monitored model on chunks in the reference partition. By default, the thresholds are 3 standard deviations away from the mean performance calculated on chunks. They are calculated during the fit phase. You can also set up custom thresholds using constant or standard-deviation thresholds; to learn more, check out our tutorial on thresholds.

  • alert - flag indicating potentially significant performance change. True if estimated performance crosses upper or lower threshold.
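To work with these columns directly, the flattened dataframe can be indexed by its two-level (metric, property) column index. A short sketch, assuming the pandas MultiIndex layout shown in the table above:

>>> # Access the roc_auc columns via the two-level column index
>>> results_df = results.filter(period='analysis').to_df()
>>> roc = results_df['roc_auc']
>>> # confidence band = estimated value +/- 3 * sampling error (difference ~0)
>>> (roc['upper_confidence_boundary']
...  - (roc['value'] + 3 * roc['sampling_error'])).abs().max()
>>> # chunks whose estimated roc_auc crossed a threshold
>>> results_df.loc[roc['alert'], ('chunk', 'key')]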

These results can also be plotted. Our plot contains several key elements.

  • The purple dashed step plot shows the estimated performance in each chunk of the provided data. Thick squared point markers indicate the middle of these chunks.

  • The black vertical line splits the reference and analysis periods.

  • The low-saturated purple area around the estimated performance in the analysis period corresponds to the confidence band which is calculated as the estimated performance +/- 3 times the estimated Sampling Error.

  • The red horizontal dashed lines show upper and lower thresholds that indicate the range of expected performance values.

  • The red diamond-shaped point markers in the middle of a chunk indicate that an alert has been raised. Alerts are caused by the estimated performance crossing the upper or lower threshold.

The description of the tabular results above explains how the confidence bands and thresholds are calculated. Additional information is shown on hover (these are interactive plots, though only static views are included here).

>>> metric_fig = results.plot()
>>> metric_fig.show()
../../../_images/multiclass_synthetic.svg
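A single metric can also be plotted on its own by filtering the results first, in the same way the analysis period was filtered above. A sketch; passing metrics to filter() is an assumption mirroring the period filter shown earlier:

>>> # Sketch: show only the estimated roc_auc
>>> roc_fig = results.filter(metrics=['roc_auc']).plot()
>>> roc_fig.show()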

Insights

After reviewing the performance estimation results, we should be able to see any indications of performance change that NannyML has detected based upon the model’s inputs and outputs alone.

What’s next

The Data Drift functionality can help us understand whether data drift is causing the performance problem. When the target values become available, we can compare the realized and estimated performance results.
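As a sketch of that comparison, realized performance could be computed with NannyML's PerformanceCalculator once the targets are joined back onto the analysis data; analysis_with_targets_df below is a hypothetical dataframe name for illustration:

>>> # Sketch: calculate realized performance once y_true becomes available.
>>> # analysis_with_targets_df is a hypothetical dataframe with targets joined in.
>>> calculator = nml.PerformanceCalculator(
...     y_pred_proba={
...         'prepaid_card': 'y_pred_proba_prepaid_card',
...         'highstreet_card': 'y_pred_proba_highstreet_card',
...         'upmarket_card': 'y_pred_proba_upmarket_card'},
...     y_pred='y_pred',
...     y_true='y_true',
...     timestamp_column_name='timestamp',
...     problem_type='classification_multiclass',
...     metrics=['roc_auc', 'f1'],
...     chunk_size=6000,
... ).fit(reference_df)
>>> realized_results = calculator.calculate(analysis_with_targets_df)
>>> display(realized_results.filter(period='analysis').to_df())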