Estimating Confusion Matrix Elements for Binary Classification
This tutorial explains how to use NannyML to estimate the confusion matrix for binary classification models in the absence of target data. To find out how CBPE estimates performance, read the explanation of Confidence-based Performance Estimation.
Note
The following example uses timestamps. These are optional but have an impact on the way data is chunked and results are plotted. You can read more about them in the data requirements.
Just The Code
>>> import nannyml as nml
>>> from IPython.display import display
>>> reference_df = nml.load_synthetic_car_loan_dataset()[0]
>>> analysis_df = nml.load_synthetic_car_loan_dataset()[1]
>>> display(reference_df.head(3))
>>> estimator = nml.CBPE(
... y_pred_proba='y_pred_proba',
... y_pred='y_pred',
... y_true='repaid',
... timestamp_column_name='timestamp',
... metrics=['confusion_matrix'],
... chunk_size=5000,
... problem_type='classification_binary',
... normalize_confusion_matrix="all",
... )
>>> estimator.fit(reference_df)
>>> results = estimator.estimate(analysis_df)
>>> display(results.filter(period='analysis').to_df())
>>> metric_fig = results.plot()
>>> metric_fig.show()
Walkthrough
For simplicity this guide is based on a synthetic dataset included in the library, where the monitored model predicts whether a customer will repay a loan to buy a car. Check out Car Loan Dataset to learn more about this dataset.
In order to monitor a model, NannyML needs to learn about it from a reference dataset. Then it can monitor the data that is subject to actual analysis, provided as the analysis dataset. You can read more about this in our section on data periods.
We start by loading the dataset we’ll be using:
>>> import nannyml as nml
>>> from IPython.display import display
>>> reference_df = nml.load_synthetic_car_loan_dataset()[0]
>>> analysis_df = nml.load_synthetic_car_loan_dataset()[1]
>>> display(reference_df.head(3))
|   | id | car_value | salary_range | debt_to_income_ratio | loan_length | repaid_loan_on_prev_car | size_of_downpayment | driver_tenure | repaid | timestamp | y_pred_proba | y_pred |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 0 | 39811 | 40K - 60K € | 0.63295 | 19 | False | 40% | 0.212653 | 1 | 2018-01-01 00:00:00.000 | 0.99 | 1 |
| 1 | 1 | 12679 | 40K - 60K € | 0.718627 | 7 | True | 10% | 4.92755 | 0 | 2018-01-01 00:08:43.152 | 0.07 | 0 |
| 2 | 2 | 19847 | 40K - 60K € | 0.721724 | 17 | False | 0% | 0.520817 | 1 | 2018-01-01 00:17:26.304 | 1 | 1 |
Next we create the Confidence-based Performance Estimation (CBPE) estimator. To initialize an estimator that estimates the confusion_matrix, we specify the following parameters:
y_pred_proba: the name of the column in the reference data that contains the predicted probabilities.
y_pred: the name of the column in the reference data that contains the predicted classes.
y_true: the name of the column in the reference data that contains the true classes.
timestamp_column_name (Optional): the name of the column in the reference data that contains timestamps.
metrics: a list of metrics to estimate. In this example we will estimate the confusion_matrix metric.
chunk_size (Optional): the number of observations in each chunk of data used to estimate performance. For more information about chunking configurations check out the chunking tutorial.
problem_type: the type of problem being monitored. In this example we will monitor a binary classification problem.
normalize_confusion_matrix (Optional): how to normalize the confusion matrix. The normalization options are:
None: return counts for each cell.
"true": normalize over the true class of observations.
"pred": normalize over the predicted class of observations.
"all": normalize over all observations.
thresholds (Optional): the thresholds used to calculate the alert flag. For more information about thresholds, check out the thresholds tutorial.
Note
Since we are estimating the confusion matrix, the count values in each cell of the confusion matrix are estimates. We normalize the estimates just as if they were true counts. This means that when we normalize over the true class, the estimates in each row will sum to 1. When we normalize over the predicted class, the estimates in each column will sum to 1. When we normalize over all observations, the estimates in the entire matrix will sum to 1.
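To make these options concrete, here is a minimal sketch (using made-up numbers and plain NumPy, not NannyML's internals) of what each normalization does to a matrix of estimated cell counts:

>>> import numpy as np

>>> # Hypothetical estimated cell counts for a single chunk, laid out with
>>> # rows = true class and columns = predicted class (illustrative numbers only).
>>> estimated_counts = np.array([[2300.0, 106.0],    # true negative, false positive
...                              [185.0, 2409.0]])   # false negative, true positive

>>> estimated_counts / estimated_counts.sum(axis=1, keepdims=True)  # "true": each row sums to 1
>>> estimated_counts / estimated_counts.sum(axis=0, keepdims=True)  # "pred": each column sums to 1
>>> estimated_counts / estimated_counts.sum()                       # "all": the whole matrix sums to 1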
>>> estimator = nml.CBPE(
... y_pred_proba='y_pred_proba',
... y_pred='y_pred',
... y_true='repaid',
... timestamp_column_name='timestamp',
... metrics=['confusion_matrix'],
... chunk_size=5000,
... problem_type='classification_binary',
... normalize_confusion_matrix="all",
... )
The CBPE estimator is then fitted using the fit() method on the reference data.
>>> estimator.fit(reference_df)
The fitted estimator can be used to estimate performance on other data, for which performance cannot be calculated. Typically, this would be used on the latest production data where targets are missing. In our example this is the analysis_df data.
NannyML can then output a dataframe that contains all the results. Let's have a look at the results for the analysis period only.
>>> results = estimator.estimate(analysis_df)
>>> display(results.filter(period='analysis').to_df())
chunk

|   | key | chunk_index | start_index | end_index | start_date | end_date | period |
|---|---|---|---|---|---|---|---|
| 0 | [0:4999] | 0 | 0 | 4999 | 2018-10-30 18:00:00 | 2018-11-30 00:27:16.848000 | analysis |
| 1 | [5000:9999] | 1 | 5000 | 9999 | 2018-11-30 00:36:00 | 2018-12-30 07:03:16.848000 | analysis |
| 2 | [10000:14999] | 2 | 10000 | 14999 | 2018-12-30 07:12:00 | 2019-01-29 13:39:16.848000 | analysis |
| 3 | [15000:19999] | 3 | 15000 | 19999 | 2019-01-29 13:48:00 | 2019-02-28 20:15:16.848000 | analysis |
| 4 | [20000:24999] | 4 | 20000 | 24999 | 2019-02-28 20:24:00 | 2019-03-31 02:51:16.848000 | analysis |
| 5 | [25000:29999] | 5 | 25000 | 29999 | 2019-03-31 03:00:00 | 2019-04-30 09:27:16.848000 | analysis |
| 6 | [30000:34999] | 6 | 30000 | 34999 | 2019-04-30 09:36:00 | 2019-05-30 16:03:16.848000 | analysis |
| 7 | [35000:39999] | 7 | 35000 | 39999 | 2019-05-30 16:12:00 | 2019-06-29 22:39:16.848000 | analysis |
| 8 | [40000:44999] | 8 | 40000 | 44999 | 2019-06-29 22:48:00 | 2019-07-30 05:15:16.848000 | analysis |
| 9 | [45000:49999] | 9 | 45000 | 49999 | 2019-07-30 05:24:00 | 2019-08-29 11:51:16.848000 | analysis |

true_positive

|   | value | sampling_error | realized | upper_confidence_boundary | lower_confidence_boundary | upper_threshold | lower_threshold | alert |
|---|---|---|---|---|---|---|---|---|
| 0 | 0.481766 | 0.00705286 | nan | 0.502925 | 0.460608 | 0.478879 | 0.449401 | True |
| 1 | 0.454646 | 0.00705286 | nan | 0.475804 | 0.433487 | 0.478879 | 0.449401 | False |
| 2 | 0.455756 | 0.00705286 | nan | 0.476914 | 0.434597 | 0.478879 | 0.449401 | False |
| 3 | 0.457828 | 0.00705286 | nan | 0.478987 | 0.43667 | 0.478879 | 0.449401 | False |
| 4 | 0.468372 | 0.00705286 | nan | 0.489531 | 0.447213 | 0.478879 | 0.449401 | False |
| 5 | 0.461246 | 0.00705286 | nan | 0.482404 | 0.440087 | 0.478879 | 0.449401 | False |
| 6 | 0.459067 | 0.00705286 | nan | 0.480225 | 0.437908 | 0.478879 | 0.449401 | False |
| 7 | 0.458246 | 0.00705286 | nan | 0.479404 | 0.437087 | 0.478879 | 0.449401 | False |
| 8 | 0.453561 | 0.00705286 | nan | 0.47472 | 0.432403 | 0.478879 | 0.449401 | False |
| 9 | 0.473578 | 0.00705286 | nan | 0.494737 | 0.45242 | 0.478879 | 0.449401 | False |

true_negative

|   | value | sampling_error | realized | upper_confidence_boundary | lower_confidence_boundary | upper_threshold | lower_threshold | alert |
|---|---|---|---|---|---|---|---|---|
| 0 | 0.460026 | 0.00706512 | nan | 0.481221 | 0.43883 | 0.494119 | 0.464881 | True |
| 1 | 0.488676 | 0.00706512 | nan | 0.509871 | 0.46748 | 0.494119 | 0.464881 | False |
| 2 | 0.489736 | 0.00706512 | nan | 0.510931 | 0.46854 | 0.494119 | 0.464881 | False |
| 3 | 0.486988 | 0.00706512 | nan | 0.508183 | 0.465793 | 0.494119 | 0.464881 | False |
| 4 | 0.476273 | 0.00706512 | nan | 0.497468 | 0.455078 | 0.494119 | 0.464881 | False |
| 5 | 0.449469 | 0.00706512 | nan | 0.470664 | 0.428273 | 0.494119 | 0.464881 | True |
| 6 | 0.452083 | 0.00706512 | nan | 0.473278 | 0.430888 | 0.494119 | 0.464881 | True |
| 7 | 0.452947 | 0.00706512 | nan | 0.474142 | 0.431752 | 0.494119 | 0.464881 | True |
| 8 | 0.460828 | 0.00706512 | nan | 0.482024 | 0.439633 | 0.494119 | 0.464881 | True |
| 9 | 0.438153 | 0.00706512 | nan | 0.459349 | 0.416958 | 0.494119 | 0.464881 | True |

false_positive

|   | value | sampling_error | realized | upper_confidence_boundary | lower_confidence_boundary | upper_threshold | lower_threshold | alert |
|---|---|---|---|---|---|---|---|---|
| 0 | 0.0212337 | 0.00202397 | nan | 0.0273056 | 0.0151617 | 0.025818 | 0.016022 | False |
| 1 | 0.0199543 | 0.00202397 | nan | 0.0260262 | 0.0138824 | 0.025818 | 0.016022 | False |
| 2 | 0.0198442 | 0.00202397 | nan | 0.0259161 | 0.0137723 | 0.025818 | 0.016022 | False |
| 3 | 0.0205719 | 0.00202397 | nan | 0.0266438 | 0.0145 | 0.025818 | 0.016022 | False |
| 4 | 0.020428 | 0.00202397 | nan | 0.0264999 | 0.014356 | 0.025818 | 0.016022 | False |
| 5 | 0.0287544 | 0.00202397 | nan | 0.0348263 | 0.0226825 | 0.025818 | 0.016022 | True |
| 6 | 0.0283335 | 0.00202397 | nan | 0.0344054 | 0.0222616 | 0.025818 | 0.016022 | True |
| 7 | 0.0295542 | 0.00202397 | nan | 0.0356261 | 0.0234823 | 0.025818 | 0.016022 | True |
| 8 | 0.0272388 | 0.00202397 | nan | 0.0333107 | 0.0211669 | 0.025818 | 0.016022 | True |
| 9 | 0.0296219 | 0.00202397 | nan | 0.0356938 | 0.02355 | 0.025818 | 0.016022 | True |

false_negative

|   | value | sampling_error | realized | upper_confidence_boundary | lower_confidence_boundary | upper_threshold | lower_threshold | alert |
|---|---|---|---|---|---|---|---|---|
| 0 | 0.0369745 | 0.00261473 | nan | 0.0448186 | 0.0291303 | 0.0416915 | 0.0291885 | False |
| 1 | 0.0367245 | 0.00261473 | nan | 0.0445687 | 0.0288803 | 0.0416915 | 0.0291885 | False |
| 2 | 0.0346643 | 0.00261473 | nan | 0.0425084 | 0.0268201 | 0.0416915 | 0.0291885 | False |
| 3 | 0.0346121 | 0.00261473 | nan | 0.0424563 | 0.0267679 | 0.0416915 | 0.0291885 | False |
| 4 | 0.034927 | 0.00261473 | nan | 0.0427712 | 0.0270829 | 0.0416915 | 0.0291885 | False |
| 5 | 0.0605314 | 0.00261473 | nan | 0.0683756 | 0.0526873 | 0.0416915 | 0.0291885 | True |
| 6 | 0.060517 | 0.00261473 | nan | 0.0683612 | 0.0526729 | 0.0416915 | 0.0291885 | True |
| 7 | 0.0592531 | 0.00261473 | nan | 0.0670972 | 0.0514089 | 0.0416915 | 0.0291885 | True |
| 8 | 0.0583718 | 0.00261473 | nan | 0.066216 | 0.0505277 | 0.0416915 | 0.0291885 | True |
| 9 | 0.0586468 | 0.00261473 | nan | 0.066491 | 0.0508026 | 0.0416915 | 0.0291885 | True |
Apart from chunk-related data, the results data have the following columns for each metric that was estimated:
value - the estimate of a metric for a specific chunk.
sampling_error - the estimate of the Sampling Error.
realized - when target values are available for a chunk, the realized performance metric will also be calculated and included within the results.
upper_confidence_boundary and lower_confidence_boundary - These values show the confidence band of the relevant metric and are equal to estimated value +/- 3 times the estimated sampling error.
upper_threshold and lower_threshold - crossing these thresholds will raise an alert on significant performance change. The thresholds are calculated based on the actual performance of the monitored model on chunks in the reference partition. The thresholds are 3 standard deviations away from the mean performance calculated on chunks. They are calculated during the fit phase (see the sketch after this list).
alert - flag indicating potentially significant performance change. True if estimated performance crosses the upper or lower threshold.
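For intuition on how such a band can be derived, here is a minimal sketch (with made-up per-chunk values; NannyML computes the actual thresholds internally when fit() is called) of the mean plus or minus 3 standard deviations rule:

>>> import numpy as np

>>> # Hypothetical per-chunk performance values calculated on the reference data.
>>> reference_chunk_values = np.array([0.459, 0.463, 0.468, 0.461, 0.470, 0.458])

>>> mean, std = reference_chunk_values.mean(), reference_chunk_values.std()
>>> upper_threshold = mean + 3 * std  # estimates above this raise an alert
>>> lower_threshold = mean - 3 * std  # estimates below this raise an alert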
These results can also be plotted. Our plot contains several key elements.
The purple dashed step plot shows the estimated performance in each chunk of the provided data. Thick squared point markers indicate the middle of these chunks.
The black vertical line splits the reference and analysis periods.
The low-saturated purple area around the estimated performance in the analysis period corresponds to the confidence band which is calculated as the estimated performance +/- 3 times the estimated Sampling Error.
The red horizontal dashed lines show upper and lower thresholds that indicate the range of expected performance values.
The red diamond-shaped point markers in the middle of a chunk indicate that an alert has been raised. Alerts are caused by the estimated performance crossing the upper or lower threshold.
>>> metric_fig = results.plot()
>>> metric_fig.show()
Additional information such as the chunk index range and chunk date range (if timestamps were provided) is shown in the hover for each chunk (these are interactive plots, though only static views are included here).
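As a small optional step, the figure returned by plot() can be kept in its interactive form by writing it out to a standalone HTML file (this assumes the figure is a Plotly figure, which the interactive plots and the show() call suggest):

>>> # Assuming metric_fig is a Plotly figure, save the interactive version to disk.
>>> metric_fig.write_html('estimated_confusion_matrix.html')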
Insights
After reviewing the performance estimation results, we should be able to see any indications of performance change that NannyML has detected based upon the model’s inputs and outputs alone.
What’s next
The Data Drift functionality can help us understand whether data drift is causing the performance problem. When the target values become available, we can compare the realized and estimated confusion matrix results.
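The sketch below shows one way that comparison could be set up once targets arrive. It assumes (as an assumption rather than a documented guarantee) that the third dataframe returned by load_synthetic_car_loan_dataset() holds the analysis targets, including the repaid column, and lines up row by row with analysis_df; the realized values are then calculated with NannyML's PerformanceCalculator using the same settings as the estimator above.

>>> # Sketch: compute the realized confusion matrix once targets become available.
>>> # Assumes the third dataframe returned by the loader holds the analysis targets
>>> # and is aligned with analysis_df row by row.
>>> analysis_targets_df = nml.load_synthetic_car_loan_dataset()[2]
>>> analysis_with_targets_df = analysis_df.merge(analysis_targets_df, left_index=True, right_index=True)

>>> calculator = nml.PerformanceCalculator(
...     y_pred_proba='y_pred_proba',
...     y_pred='y_pred',
...     y_true='repaid',
...     timestamp_column_name='timestamp',
...     metrics=['confusion_matrix'],
...     chunk_size=5000,
...     problem_type='classification_binary',
...     normalize_confusion_matrix="all",
... )
>>> calculator.fit(reference_df)
>>> realized_results = calculator.calculate(analysis_with_targets_df)
>>> display(realized_results.filter(period='analysis').to_df())

The realized values can then be placed side by side with the estimated ones from the results above to see how well the estimation tracked actual performance.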