Monitoring Realized Performance for Binary Classification

Note

The following example uses timestamps. These are optional but have an impact on the way data is chunked and results are plotted. You can read more about them in the data requirements.

Just The Code

>>> import nannyml as nml
>>> from IPython.display import display

>>> reference_df = nml.load_synthetic_binary_classification_dataset()[0]
>>> analysis_df = nml.load_synthetic_binary_classification_dataset()[1]
>>> analysis_target_df = nml.load_synthetic_binary_classification_dataset()[2]
>>> analysis_df = analysis_df.merge(analysis_target_df, on='identifier')

>>> display(reference_df.head(3))

>>> calc = nml.PerformanceCalculator(
...     y_pred_proba='y_pred_proba',
...     y_pred='y_pred',
...     y_true='work_home_actual',
...     timestamp_column_name='timestamp',
...     problem_type='classification_binary',
...     metrics=['roc_auc', 'f1', 'precision', 'recall', 'specificity', 'accuracy'],
...     chunk_size=5000)

>>> calc.fit(reference_df)

>>> results = calc.calculate(analysis_df)
>>> display(results.filter(period='analysis').to_df())

>>> display(results.filter(period='reference').to_df())

>>> figure = results.plot()
>>> figure.show()

Walkthrough

For simplicity this guide is based on a synthetic dataset included in the library, where the monitored model predicts whether an employee will work from home. You can read more about this synthetic dataset.

In order to monitor a model, NannyML needs to learn about it from a reference dataset. Then it can monitor the data that is subject to actual analysis, provided as the analysis dataset. You can read more about this in our section on data periods.

The analysis_targets dataframe contains the target values for the analysis period. It is kept separate in the synthetic data because it is not used during performance estimation. It is, however, required to calculate realized performance, so the first thing we need to do in this case is set up the right data in the right dataframes. The analysis target values are joined onto the analysis frame using the identifier column.

>>> import nannyml as nml
>>> from IPython.display import display

>>> reference_df = nml.load_synthetic_binary_classification_dataset()[0]
>>> analysis_df = nml.load_synthetic_binary_classification_dataset()[1]
>>> analysis_target_df = nml.load_synthetic_binary_classification_dataset()[2]
>>> analysis_df = analysis_df.merge(analysis_target_df, on='identifier')

>>> display(reference_df.head(3))

|   | distance_from_office | salary_range | gas_price_per_litre | public_transportation_cost | wfh_prev_workday | workday | tenure | identifier | work_home_actual | timestamp | y_pred_proba | period | y_pred |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 5.96225 | 40K - 60K € | 2.11948 | 8.56806 | False | Friday | 0.212653 | 0 | 1 | 2014-05-09 22:27:20 | 0.99 | reference | 1 |
| 1 | 0.535872 | 40K - 60K € | 2.3572 | 5.42538 | True | Tuesday | 4.92755 | 1 | 0 | 2014-05-09 22:59:32 | 0.07 | reference | 0 |
| 2 | 1.96952 | 40K - 60K € | 2.36685 | 8.24716 | False | Monday | 0.520817 | 2 | 1 | 2014-05-09 23:48:25 | 1 | reference | 1 |

Next, a PerformanceCalculator is created using a list of metrics to calculate (or just one metric), the data columns required for these metrics, and an optional chunking specification.

The list of metrics specifies which performance metrics of the monitored model will be calculated. The following metrics are currently supported:

  • roc_auc

  • f1

  • precision

  • recall

  • specificity

  • accuracy

For more information on metrics, check the metrics module.
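
This example uses a size-based chunking specification (chunk_size=5000). As a rough sketch only, and assuming the chunk_number and chunk_period keyword arguments behave as described in the chunking documentation, the same calculator could also be chunked by count or by calendar period:

>>> # Sketch of alternative chunking specifications (assumed from the chunking
>>> # docs, not shown in this tutorial); pick exactly one chunking argument.
>>> calc_by_count = nml.PerformanceCalculator(
...     y_pred_proba='y_pred_proba',
...     y_pred='y_pred',
...     y_true='work_home_actual',
...     timestamp_column_name='timestamp',
...     problem_type='classification_binary',
...     metrics=['roc_auc'],
...     chunk_number=10)  # split the data into 10 equally sized chunks
>>> calc_by_month = nml.PerformanceCalculator(
...     y_pred_proba='y_pred_proba',
...     y_pred='y_pred',
...     y_true='work_home_actual',
...     timestamp_column_name='timestamp',
...     problem_type='classification_binary',
...     metrics=['roc_auc'],
...     chunk_period='M')  # one chunk per calendar month; requires timestamps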

>>> calc = nml.PerformanceCalculator(
...     y_pred_proba='y_pred_proba',
...     y_pred='y_pred',
...     y_true='work_home_actual',
...     timestamp_column_name='timestamp',
...     problem_type='classification_binary',
...     metrics=['roc_auc', 'f1', 'precision', 'recall', 'specificity', 'accuracy'],
...     chunk_size=5000)

>>> calc.fit(reference_df)

The new PerformanceCalculator is fitted on the reference data using its fit() method.

The fitted PerformanceCalculator can then be used with the calculate() method to calculate realized performance metrics on all data that has target values available. NannyML can output a dataframe that contains all the results for the analysis data.

>>> results = calc.calculate(analysis_df)
>>> display(results.filter(period='analysis').to_df())

|   | ('chunk', 'key') | ('chunk', 'chunk_index') | ('chunk', 'start_index') | ('chunk', 'end_index') | ('chunk', 'start_date') | ('chunk', 'end_date') | ('chunk', 'period') | ('chunk', 'targets_missing_rate') | ('roc_auc', 'sampling_error') | ('roc_auc', 'value') | ('roc_auc', 'upper_threshold') | ('roc_auc', 'lower_threshold') | ('roc_auc', 'alert') | ('f1', 'sampling_error') | ('f1', 'value') | ('f1', 'upper_threshold') | ('f1', 'lower_threshold') | ('f1', 'alert') | ('precision', 'sampling_error') | ('precision', 'value') | ('precision', 'upper_threshold') | ('precision', 'lower_threshold') | ('precision', 'alert') | ('recall', 'sampling_error') | ('recall', 'value') | ('recall', 'upper_threshold') | ('recall', 'lower_threshold') | ('recall', 'alert') | ('specificity', 'sampling_error') | ('specificity', 'value') | ('specificity', 'upper_threshold') | ('specificity', 'lower_threshold') | ('specificity', 'alert') | ('accuracy', 'sampling_error') | ('accuracy', 'value') | ('accuracy', 'upper_threshold') | ('accuracy', 'lower_threshold') | ('accuracy', 'alert') |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | [0:4999] | 0 | 0 | 4999 | 2017-08-31 04:20:00 | 2018-01-02 00:45:44 | analysis | 0 | 0.00181072 | 0.970962 | 0.97866 | 0.963317 | False | 0.00610429 | 0.949549 | 0.961094 | 0.935047 | False | 0.00461594 | 0.942139 | 0.961131 | 0.924741 | False | 0.00422251 | 0.957077 | 0.965726 | 0.940831 | False | 0.00465744 | 0.937034 | 0.960113 | 0.924741 | False | 0.0031445 | 0.9474 | 0.960601 | 0.935079 | False |
| 1 | [5000:9999] | 1 | 5000 | 9999 | 2018-01-02 01:13:11 | 2018-05-01 13:10:10 | analysis | 0 | 0.00181072 | 0.970248 | 0.97866 | 0.963317 | False | 0.00610429 | 0.946686 | 0.961094 | 0.935047 | False | 0.00461594 | 0.943434 | 0.961131 | 0.924741 | False | 0.00422251 | 0.949959 | 0.965726 | 0.940831 | False | 0.00465744 | 0.944925 | 0.960113 | 0.924741 | False | 0.0031445 | 0.9474 | 0.960601 | 0.935079 | False |
| 2 | [10000:14999] | 2 | 10000 | 14999 | 2018-05-01 14:25:25 | 2018-09-01 15:40:40 | analysis | 0 | 0.00181072 | 0.976282 | 0.97866 | 0.963317 | False | 0.00610429 | 0.950459 | 0.961094 | 0.935047 | False | 0.00461594 | 0.941438 | 0.961131 | 0.924741 | False | 0.00422251 | 0.959654 | 0.965726 | 0.940831 | False | 0.00465744 | 0.943602 | 0.960113 | 0.924741 | False | 0.0031445 | 0.9514 | 0.960601 | 0.935079 | False |
| 3 | [15000:19999] | 3 | 15000 | 19999 | 2018-09-01 16:19:07 | 2018-12-31 10:11:21 | analysis | 0 | 0.00181072 | 0.967721 | 0.97866 | 0.963317 | False | 0.00610429 | 0.945968 | 0.961094 | 0.935047 | False | 0.00461594 | 0.946731 | 0.961131 | 0.924741 | False | 0.00422251 | 0.945205 | 0.965726 | 0.940831 | False | 0.00465744 | 0.947577 | 0.960113 | 0.924741 | False | 0.0031445 | 0.9464 | 0.960601 | 0.935079 | False |
| 4 | [20000:24999] | 4 | 20000 | 24999 | 2018-12-31 10:38:45 | 2019-04-30 11:01:30 | analysis | 0 | 0.00181072 | 0.969886 | 0.97866 | 0.963317 | False | 0.00610429 | 0.944136 | 0.961094 | 0.935047 | False | 0.00461594 | 0.940039 | 0.961131 | 0.924741 | False | 0.00422251 | 0.948269 | 0.965726 | 0.940831 | False | 0.00465744 | 0.938882 | 0.960113 | 0.924741 | False | 0.0031445 | 0.9436 | 0.960601 | 0.935079 | False |
| 5 | [25000:29999] | 5 | 25000 | 29999 | 2019-04-30 11:02:00 | 2019-09-01 00:24:27 | analysis | 0 | 0.00181072 | 0.96005 | 0.97866 | 0.963317 | True | 0.00610429 | 0.915794 | 0.961094 | 0.935047 | True | 0.00461594 | 0.88822 | 0.961131 | 0.924741 | True | 0.00422251 | 0.945134 | 0.965726 | 0.940831 | False | 0.00465744 | 0.881342 | 0.960113 | 0.924741 | True | 0.0031445 | 0.9132 | 0.960601 | 0.935079 | True |
| 6 | [30000:34999] | 6 | 30000 | 34999 | 2019-09-01 00:28:54 | 2019-12-31 09:09:12 | analysis | 0 | 0.00181072 | 0.95853 | 0.97866 | 0.963317 | True | 0.00610429 | 0.920015 | 0.961094 | 0.935047 | True | 0.00461594 | 0.898152 | 0.961131 | 0.924741 | True | 0.00422251 | 0.94297 | 0.965726 | 0.940831 | False | 0.00465744 | 0.890909 | 0.960113 | 0.924741 | True | 0.0031445 | 0.9172 | 0.960601 | 0.935079 | True |
| 7 | [35000:39999] | 7 | 35000 | 39999 | 2019-12-31 10:07:15 | 2020-04-30 11:46:53 | analysis | 0 | 0.00181072 | 0.959041 | 0.97866 | 0.963317 | True | 0.00610429 | 0.915063 | 0.961094 | 0.935047 | True | 0.00461594 | 0.890992 | 0.961131 | 0.924741 | True | 0.00422251 | 0.940471 | 0.965726 | 0.940831 | True | 0.00465744 | 0.884662 | 0.960113 | 0.924741 | True | 0.0031445 | 0.9126 | 0.960601 | 0.935079 | True |
| 8 | [40000:44999] | 8 | 40000 | 44999 | 2020-04-30 12:04:32 | 2020-09-01 02:46:02 | analysis | 0 | 0.00181072 | 0.963094 | 0.97866 | 0.963317 | True | 0.00610429 | 0.922835 | 0.961094 | 0.935047 | True | 0.00461594 | 0.902232 | 0.961131 | 0.924741 | True | 0.00422251 | 0.9444 | 0.965726 | 0.940831 | False | 0.00465744 | 0.899126 | 0.960113 | 0.924741 | True | 0.0031445 | 0.9216 | 0.960601 | 0.935079 | True |
| 9 | [45000:49999] | 9 | 45000 | 49999 | 2020-09-01 02:46:13 | 2021-01-01 04:29:32 | analysis | 0 | 0.00181072 | 0.957556 | 0.97866 | 0.963317 | True | 0.00610429 | 0.914221 | 0.961094 | 0.935047 | True | 0.00461594 | 0.886848 | 0.961131 | 0.924741 | True | 0.00422251 | 0.943337 | 0.965726 | 0.940831 | False | 0.00465744 | 0.873822 | 0.960113 | 0.924741 | True | 0.0031445 | 0.9094 | 0.960601 | 0.935079 | True |

The results from the reference data are also available.

>>> display(results.filter(period='reference').to_df())

|   | ('chunk', 'key') | ('chunk', 'chunk_index') | ('chunk', 'start_index') | ('chunk', 'end_index') | ('chunk', 'start_date') | ('chunk', 'end_date') | ('chunk', 'period') | ('chunk', 'targets_missing_rate') | ('roc_auc', 'sampling_error') | ('roc_auc', 'value') | ('roc_auc', 'upper_threshold') | ('roc_auc', 'lower_threshold') | ('roc_auc', 'alert') | ('f1', 'sampling_error') | ('f1', 'value') | ('f1', 'upper_threshold') | ('f1', 'lower_threshold') | ('f1', 'alert') | ('precision', 'sampling_error') | ('precision', 'value') | ('precision', 'upper_threshold') | ('precision', 'lower_threshold') | ('precision', 'alert') | ('recall', 'sampling_error') | ('recall', 'value') | ('recall', 'upper_threshold') | ('recall', 'lower_threshold') | ('recall', 'alert') | ('specificity', 'sampling_error') | ('specificity', 'value') | ('specificity', 'upper_threshold') | ('specificity', 'lower_threshold') | ('specificity', 'alert') | ('accuracy', 'sampling_error') | ('accuracy', 'value') | ('accuracy', 'upper_threshold') | ('accuracy', 'lower_threshold') | ('accuracy', 'alert') |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | [0:4999] | 0 | 0 | 4999 | 2014-05-09 22:27:20 | 2014-09-09 08:18:27 | reference | 0 | 0.00181072 | 0.976253 | 0.97866 | 0.963317 | False | 0.00610429 | 0.953803 | 0.961094 | 0.935047 | False | 0.00461594 | 0.951308 | 0.961131 | 0.924741 | False | 0.00422251 | 0.956311 | 0.965726 | 0.940831 | False | 0.00465744 | 0.952136 | 0.960113 | 0.924741 | False | 0.0031445 | 0.9542 | 0.960601 | 0.935079 | False |
| 1 | [5000:9999] | 1 | 5000 | 9999 | 2014-09-09 09:13:35 | 2015-01-09 00:02:51 | reference | 0 | 0.00181072 | 0.969045 | 0.97866 | 0.963317 | False | 0.00610429 | 0.940963 | 0.961094 | 0.935047 | False | 0.00461594 | 0.934748 | 0.961131 | 0.924741 | False | 0.00422251 | 0.947262 | 0.965726 | 0.940831 | False | 0.00465744 | 0.9357 | 0.960113 | 0.924741 | False | 0.0031445 | 0.9414 | 0.960601 | 0.935079 | False |
| 2 | [10000:14999] | 2 | 10000 | 14999 | 2015-01-09 00:04:43 | 2015-05-09 15:54:26 | reference | 0 | 0.00181072 | 0.971742 | 0.97866 | 0.963317 | False | 0.00610429 | 0.954483 | 0.961094 | 0.935047 | False | 0.00461594 | 0.949804 | 0.961131 | 0.924741 | False | 0.00422251 | 0.959208 | 0.965726 | 0.940831 | False | 0.00465744 | 0.948283 | 0.960113 | 0.924741 | False | 0.0031445 | 0.9538 | 0.960601 | 0.935079 | False |
| 3 | [15000:19999] | 3 | 15000 | 19999 | 2015-05-09 16:02:08 | 2015-09-07 07:14:37 | reference | 0 | 0.00181072 | 0.971642 | 0.97866 | 0.963317 | False | 0.00610429 | 0.946237 | 0.961094 | 0.935047 | False | 0.00461594 | 0.941363 | 0.961131 | 0.924741 | False | 0.00422251 | 0.951161 | 0.965726 | 0.940831 | False | 0.00465744 | 0.940847 | 0.960113 | 0.924741 | False | 0.0031445 | 0.946 | 0.960601 | 0.935079 | False |
| 4 | [20000:24999] | 4 | 20000 | 24999 | 2015-09-07 07:27:47 | 2016-01-08 16:02:05 | reference | 0 | 0.00181072 | 0.969085 | 0.97866 | 0.963317 | False | 0.00610429 | 0.944324 | 0.961094 | 0.935047 | False | 0.00461594 | 0.942285 | 0.961131 | 0.924741 | False | 0.00422251 | 0.946372 | 0.965726 | 0.940831 | False | 0.00465744 | 0.940341 | 0.960113 | 0.924741 | False | 0.0031445 | 0.9434 | 0.960601 | 0.935079 | False |
| 5 | [25000:29999] | 5 | 25000 | 29999 | 2016-01-08 17:22:00 | 2016-05-09 11:09:39 | reference | 0 | 0.00181072 | 0.967364 | 0.97866 | 0.963317 | False | 0.00610429 | 0.945286 | 0.961094 | 0.935047 | False | 0.00461594 | 0.937525 | 0.961131 | 0.924741 | False | 0.00422251 | 0.953176 | 0.965726 | 0.940831 | False | 0.00465744 | 0.938679 | 0.960113 | 0.924741 | False | 0.0031445 | 0.9458 | 0.960601 | 0.935079 | False |
| 6 | [30000:34999] | 6 | 30000 | 34999 | 2016-05-09 11:19:36 | 2016-09-04 03:30:35 | reference | 0 | 0.00181072 | 0.968692 | 0.97866 | 0.963317 | False | 0.00610429 | 0.94885 | 0.961094 | 0.935047 | False | 0.00461594 | 0.939168 | 0.961131 | 0.924741 | False | 0.00422251 | 0.958734 | 0.965726 | 0.940831 | False | 0.00465744 | 0.938099 | 0.960113 | 0.924741 | False | 0.0031445 | 0.9484 | 0.960601 | 0.935079 | False |
| 7 | [35000:39999] | 7 | 35000 | 39999 | 2016-09-04 04:09:35 | 2017-01-03 18:48:21 | reference | 0 | 0.00181072 | 0.970205 | 0.97866 | 0.963317 | False | 0.00610429 | 0.948262 | 0.961094 | 0.935047 | False | 0.00461594 | 0.940831 | 0.961131 | 0.924741 | False | 0.00422251 | 0.955812 | 0.965726 | 0.940831 | False | 0.00465744 | 0.939309 | 0.960113 | 0.924741 | False | 0.0031445 | 0.9476 | 0.960601 | 0.935079 | False |
| 8 | [40000:44999] | 8 | 40000 | 44999 | 2017-01-03 19:00:51 | 2017-05-03 02:34:24 | reference | 0 | 0.00181072 | 0.974096 | 0.97866 | 0.963317 | False | 0.00610429 | 0.953456 | 0.961094 | 0.935047 | False | 0.00461594 | 0.953645 | 0.961131 | 0.924741 | False | 0.00422251 | 0.953267 | 0.965726 | 0.940831 | False | 0.00465744 | 0.952727 | 0.960113 | 0.924741 | False | 0.0031445 | 0.953 | 0.960601 | 0.935079 | False |
| 9 | [45000:49999] | 9 | 45000 | 49999 | 2017-05-03 02:49:38 | 2017-08-31 03:10:29 | reference | 0 | 0.00181072 | 0.971757 | 0.97866 | 0.963317 | False | 0.00610429 | 0.945042 | 0.961094 | 0.935047 | False | 0.00461594 | 0.938687 | 0.961131 | 0.924741 | False | 0.00422251 | 0.951484 | 0.965726 | 0.940831 | False | 0.00465744 | 0.938148 | 0.960113 | 0.924741 | False | 0.0031445 | 0.9448 | 0.960601 | 0.935079 | False |

Apart from the chunk- and period-related columns, the results data contains a set of columns for each calculated metric. Taking roc_auc as an example:

  • targets_missing_rate - The fraction of missing target data.

  • <metric> - The value of the metric for a specific chunk.

  • <metric>_lower_threshold and <metric>_upper_threshold - Lower and upper thresholds for the performance metric. Crossing them raises an alert that there is a significant metric change. The thresholds are calculated during the fit phase, based on the realized performance of the chunks in the reference period: they lie 3 standard deviations away from the mean performance calculated on the reference chunks (see the sketch after this list).

  • <metric>_alert - A flag indicating a potentially significant performance change. True if the realized performance crosses the upper or lower threshold.

  • <metric>_sampling_error - The estimated sampling error for the relevant metric.
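
As an illustration, here is a minimal sketch of how these columns can be used, based on the multi-index layout shown in the tables above. The threshold reconstruction only approximates the rule described above (mean plus or minus 3 standard deviations over the reference chunks) and may not reproduce the stored values to the last decimal:

>>> analysis_results_df = results.filter(period='analysis').to_df()
>>> reference_results_df = results.filter(period='reference').to_df()

>>> # Approximate the ROC AUC thresholds from the per-chunk reference values
>>> # (mean +/- 3 standard deviations, as described above).
>>> ref_roc_auc = reference_results_df[('roc_auc', 'value')]
>>> lower_threshold = ref_roc_auc.mean() - 3 * ref_roc_auc.std()
>>> upper_threshold = ref_roc_auc.mean() + 3 * ref_roc_auc.std()

>>> # List the analysis chunks whose ROC AUC triggered an alert.
>>> roc_auc_alerts = analysis_results_df[analysis_results_df[('roc_auc', 'alert')]]
>>> display(roc_auc_alerts[[('chunk', 'key'), ('roc_auc', 'value')]])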

The results can be plotted for visual inspection.

>>> figure = results.plot()
>>> figure.show()
[Image: tutorial-performance-calculation-binary.svg]
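
To focus on a single metric, the results can be narrowed down before plotting. The snippet below assumes that filter() also accepts a metrics argument alongside period:

>>> # Plot only ROC AUC for the analysis period
>>> # (assumes filter() also accepts a `metrics` argument).
>>> roc_auc_figure = results.filter(period='analysis', metrics=['roc_auc']).plot()
>>> roc_auc_figure.show()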

Insights

After reviewing the performance calculation results, we should be able to clearly see how the model is performing against the targets, according to whatever metrics we wish to track.

What Next

If we decide further investigation is needed, the Data Drift functionality can help us to see what feature changes may be contributing to any performance changes.

It is also wise to check whether the model’s performance is satisfactory according to business requirements. This is an ad-hoc investigation that is not covered by NannyML.