Calculating Business Value for Multiclass Classification
This tutorial explains how to use NannyML to calculate business value for multiclass classification models.
Note
The following example uses timestamps. These are optional but have an impact on the way data is chunked and results are plotted. You can read more about them in the data requirements.
Just The Code
>>> import nannyml as nml
>>> from IPython.display import display
>>> reference_df, analysis_df, analysis_target_df = nml.load_synthetic_multiclass_classification_dataset()
>>> analysis_df = analysis_df.merge(analysis_target_df, on='id', how='left')
>>> display(reference_df.head(3))
>>> # matrix can be provided as a list of lists or a numpy array
>>> business_value_matrix = [
...     [1, 0, -1],
...     [0, 1, 0],
...     [-1, 0, 1]
... ]
>>> calc = nml.PerformanceCalculator(
...     y_pred_proba={
...         'prepaid_card': 'y_pred_proba_prepaid_card',
...         'highstreet_card': 'y_pred_proba_highstreet_card',
...         'upmarket_card': 'y_pred_proba_upmarket_card'
...     },
...     y_pred='y_pred',
...     y_true='y_true',
...     timestamp_column_name='timestamp',
...     problem_type='classification_multiclass',
...     metrics=['business_value'],
...     business_value_matrix=business_value_matrix,
...     normalize_business_value='per_prediction',
...     chunk_size=6000
... )
>>> calc.fit(reference_df)
>>> results = calc.calculate(analysis_df)
>>> display(results.filter(period='analysis').to_df())
>>> display(results.filter(period='reference').to_df())
>>> figure = results.plot()
>>> figure.show()
Walkthrough
For simplicity this guide is based on a synthetic dataset where the monitored model predicts which type of credit card product new customers should be assigned to. Check out Credit Card Dataset to learn more about this dataset.
In order to monitor a model, NannyML needs to learn about it from a reference dataset. Then it can monitor the data that is subject to actual analysis, provided as the analysis dataset. You can read more about this in our section on data periods.
The analysis_targets dataframe contains the target values for the analysis period. It is kept separate in the synthetic data because it is not used during performance estimation, but it is required to calculate realized performance. So the first thing we need to do in this case is set up the right data in the right dataframes.
The analysis target values are joined onto the analysis frame using the id column. Your dataset may already contain the target column, in which case you can skip this join.
>>> import nannyml as nml
>>> from IPython.display import display
>>> reference_df, analysis_df, analysis_target_df = nml.load_synthetic_multiclass_classification_dataset()
>>> analysis_df = analysis_df.merge(analysis_target_df, on='id', how='left')
>>> display(reference_df.head(3))
|   | id | acq_channel | app_behavioral_score | requested_credit_limit | app_channel | credit_bureau_score | stated_income | is_customer | timestamp | y_pred_proba_prepaid_card | y_pred_proba_highstreet_card | y_pred_proba_upmarket_card | y_pred | y_true |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 0 | Partner3 | 1.80823 | 350 | web | 309 | 15000 | True | 2020-05-02 02:01:30 | 0.97 | 0.03 | 0 | prepaid_card | prepaid_card |
| 1 | 1 | Partner2 | 4.38257 | 500 | mobile | 418 | 23000 | True | 2020-05-02 02:03:33 | 0.87 | 0.13 | 0 | prepaid_card | prepaid_card |
| 2 | 2 | Partner2 | -0.787575 | 400 | web | 507 | 24000 | False | 2020-05-02 02:04:49 | 0.47 | 0.35 | 0.18 | prepaid_card | upmarket_card |
Next a PerformanceCalculator is created with the following parameter specifications:
y_pred_proba: a dictionary that maps each class name to the name of the column in the reference data that contains the predicted probabilities for that class.
y_pred: the name of the column in the reference data that contains the predicted classes.
y_true: the name of the column in the reference data that contains the true classes.
timestamp_column_name (Optional): the name of the column in the reference data that contains timestamps.
problem_type: the type of problem being monitored. In this example we monitor a multiclass classification problem.
metrics: a list of metrics to calculate. In this example we calculate the business_value metric.
business_value_matrix: a matrix that specifies the value of each corresponding cell in the confusion matrix.
normalize_business_value (Optional): how to normalize the business value. The normalization options are:
None: returns the total value per chunk.
'per_prediction': returns the total value of the chunk divided by the number of observations in that chunk.
chunk_size (Optional): the number of observations in each chunk of data used to calculate performance. For more information about chunking and other chunking options, check out the chunking tutorial.
thresholds (Optional): the thresholds used to calculate the alert flag. For more information about thresholds, check out the thresholds tutorial.
>>> # matrix can be provided as a list of lists or a numpy array
>>> business_value_matrix = [
...     [1, 0, -1],
...     [0, 1, 0],
...     [-1, 0, 1]
... ]
>>> calc = nml.PerformanceCalculator(
...     y_pred_proba={
...         'prepaid_card': 'y_pred_proba_prepaid_card',
...         'highstreet_card': 'y_pred_proba_highstreet_card',
...         'upmarket_card': 'y_pred_proba_upmarket_card'
...     },
...     y_pred='y_pred',
...     y_true='y_true',
...     timestamp_column_name='timestamp',
...     problem_type='classification_multiclass',
...     metrics=['business_value'],
...     business_value_matrix=business_value_matrix,
...     normalize_business_value='per_prediction',
...     chunk_size=6000
... )
Note
When calculating business_value, the business_value_matrix parameter is required.
A business value matrix is an n×n matrix (where n is the number of classes) that specifies the value of each cell in the confusion matrix. Each element of the business value matrix represents the business value of its respective confusion matrix element: the element in the i-th row and j-th column of the business value matrix gives the value of cases where the i-th class is the target and the j-th class is predicted. The target values that the rows and columns refer to are sorted alphanumerically for both the confusion matrix and the business value matrix.
The business value matrix can be provided as a list of lists or a numpy array. For more information about the business value matrix, check out the Business Value "How it Works" page.
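To build intuition for how this matrix is used, the following is a minimal sketch of the underlying arithmetic, applied by hand to the full reference set rather than per chunk. It reuses reference_df and business_value_matrix from above, assumes scikit-learn is available for the confusion matrix, and is for illustration only; it is not NannyML's internal implementation.
>>> import numpy as np
>>> from sklearn.metrics import confusion_matrix
>>> # Class labels in alphanumeric order, matching the row/column ordering described above
>>> labels = sorted(['prepaid_card', 'highstreet_card', 'upmarket_card'])
>>> # Confusion matrix on the reference data: rows are true classes, columns are predicted classes
>>> cm = confusion_matrix(reference_df['y_true'], reference_df['y_pred'], labels=labels)
>>> # Total business value: element-wise product of the two matrices, summed over all cells
>>> total_value = (cm * np.asarray(business_value_matrix)).sum()
>>> # With normalize_business_value='per_prediction', the total is divided by the number of observations
>>> value_per_prediction = total_value / cm.sum()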
The new PerformanceCalculator is fitted using the fit() method on the reference data.
>>> calc.fit(reference_df)
The fitted PerformanceCalculator can then be used to calculate realized performance metrics on all data which has target values available, using the calculate() method.
NannyML can output a dataframe that contains all the results of the analysis data.
>>> results = calc.calculate(analysis_df)
>>> display(results.filter(period='analysis').to_df())
|   | key | chunk_index | start_index | end_index | start_date | end_date | period | targets_missing_rate | sampling_error | value | upper_threshold | lower_threshold | alert |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | [0:5999] | 0 | 0 | 5999 | 2020-09-01 03:10:01 | 2020-09-13 16:15:10 | analysis | 0 | 0.00804747 | 2.00122 | 2.05032 | 1.9632 | False |
| 1 | [6000:11999] | 1 | 6000 | 11999 | 2020-09-13 16:15:32 | 2020-09-25 19:48:42 | analysis | 0 | 0.00804747 | 2.04414 | 2.05032 | 1.9632 | False |
| 2 | [12000:17999] | 2 | 12000 | 17999 | 2020-09-25 19:50:04 | 2020-10-08 02:53:47 | analysis | 0 | 0.00804747 | 2.01853 | 2.05032 | 1.9632 | False |
| 3 | [18000:23999] | 3 | 18000 | 23999 | 2020-10-08 02:57:34 | 2020-10-20 15:48:19 | analysis | 0 | 0.00804747 | 2.01854 | 2.05032 | 1.9632 | False |
| 4 | [24000:29999] | 4 | 24000 | 29999 | 2020-10-20 15:49:06 | 2020-11-01 22:04:40 | analysis | 0 | 0.00804747 | 2.01693 | 2.05032 | 1.9632 | False |
| 5 | [30000:35999] | 5 | 30000 | 35999 | 2020-11-01 22:04:59 | 2020-11-14 03:55:33 | analysis | 0 | 0.00804747 | 1.28921 | 2.05032 | 1.9632 | True |
| 6 | [36000:41999] | 6 | 36000 | 41999 | 2020-11-14 03:55:49 | 2020-11-26 09:19:06 | analysis | 0 | 0.00804747 | 1.31007 | 2.05032 | 1.9632 | True |
| 7 | [42000:47999] | 7 | 42000 | 47999 | 2020-11-26 09:19:22 | 2020-12-08 14:33:56 | analysis | 0 | 0.00804747 | 1.32972 | 2.05032 | 1.9632 | True |
| 8 | [48000:53999] | 8 | 48000 | 53999 | 2020-12-08 14:34:25 | 2020-12-20 18:30:30 | analysis | 0 | 0.00804747 | 1.32404 | 2.05032 | 1.9632 | True |
| 9 | [54000:59999] | 9 | 54000 | 59999 | 2020-12-20 18:31:09 | 2021-01-01 22:57:55 | analysis | 0 | 0.00804747 | 1.31623 | 2.05032 | 1.9632 | True |
The results from the reference data are also available.
>>> display(results.filter(period='reference').to_df())
|   | key | chunk_index | start_index | end_index | start_date | end_date | period | targets_missing_rate | sampling_error | value | upper_threshold | lower_threshold | alert |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | [0:5999] | 0 | 0 | 5999 | 2020-05-02 02:01:30 | 2020-05-14 12:25:35 | reference | 0 | 0.00804747 | 2.00926 | 2.05032 | 1.9632 | False |
| 1 | [6000:11999] | 1 | 6000 | 11999 | 2020-05-14 12:29:25 | 2020-05-26 18:27:42 | reference | 0 | 0.00804747 | 2.005 | 2.05032 | 1.9632 | False |
| 2 | [12000:17999] | 2 | 12000 | 17999 | 2020-05-26 18:31:06 | 2020-06-07 19:55:45 | reference | 0 | 0.00804747 | 2.01476 | 2.05032 | 1.9632 | False |
| 3 | [18000:23999] | 3 | 18000 | 23999 | 2020-06-07 19:58:39 | 2020-06-19 19:42:20 | reference | 0 | 0.00804747 | 1.98918 | 2.05032 | 1.9632 | False |
| 4 | [24000:29999] | 4 | 24000 | 29999 | 2020-06-19 19:44:14 | 2020-07-02 01:58:05 | reference | 0 | 0.00804747 | 2.02437 | 2.05032 | 1.9632 | False |
| 5 | [30000:35999] | 5 | 30000 | 35999 | 2020-07-02 02:06:56 | 2020-07-14 08:14:04 | reference | 0 | 0.00804747 | 1.99098 | 2.05032 | 1.9632 | False |
| 6 | [36000:41999] | 6 | 36000 | 41999 | 2020-07-14 08:14:08 | 2020-07-26 12:55:42 | reference | 0 | 0.00804747 | 1.99226 | 2.05032 | 1.9632 | False |
| 7 | [42000:47999] | 7 | 42000 | 47999 | 2020-07-26 12:57:37 | 2020-08-07 16:32:15 | reference | 0 | 0.00804747 | 2.02454 | 2.05032 | 1.9632 | False |
| 8 | [48000:53999] | 8 | 48000 | 53999 | 2020-08-07 16:33:44 | 2020-08-20 00:06:08 | reference | 0 | 0.00804747 | 1.99082 | 2.05032 | 1.9632 | False |
| 9 | [54000:59999] | 9 | 54000 | 59999 | 2020-08-20 00:07:58 | 2020-09-01 03:03:23 | reference | 0 | 0.00804747 | 2.02641 | 2.05032 | 1.9632 | False |
Apart from chunk and period-related columns, the results data have a set of columns for each calculated metric.
targets_missing_rate - the fraction of missing target data.
value - the realized metric value for a specific chunk.
sampling_error - the estimate of the Sampling Error.
upper_threshold and lower_threshold - crossing these thresholds will raise an alert on significant performance change. The thresholds are calculated based on the realized performance of the monitored model on chunks in the reference partition: they are set 3 standard deviations above and below the mean performance calculated on the reference chunks, and are computed during the fit phase (see the sketch after this list).
alert - flag indicating potentially significant performance change. True if the calculated performance crosses the upper or lower threshold.
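For intuition on where the upper_threshold and lower_threshold values in the tables come from, the sketch below applies the mean ± 3 standard deviations rule to the per-chunk reference values shown above. This is only an illustration of the default standard-deviation thresholding, not NannyML's internal code; the hard-coded values are copied from the reference results table.
>>> import numpy as np
>>> # Per-chunk business value on the reference period, copied from the reference results above
>>> reference_chunk_values = np.array([
...     2.00926, 2.005, 2.01476, 1.98918, 2.02437,
...     1.99098, 1.99226, 2.02454, 1.99082, 2.02641
... ])
>>> # Default thresholds: three standard deviations around the mean reference performance
>>> mean, std = reference_chunk_values.mean(), reference_chunk_values.std()
>>> lower_threshold, upper_threshold = mean - 3 * std, mean + 3 * std
>>> # Yields roughly 1.9632 and 2.0503, matching the thresholds reported in the tables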
The results can be plotted for visual inspection. Our plot contains several key elements.
The blue step plot shows the performance in each chunk of the provided data. Thick squared point markers indicate the middle of these chunks.
The gray vertical line splits the reference and analysis data periods.
The red horizontal dashed lines show upper and lower thresholds that indicate the range of expected performance values.
The red diamond-shaped point markers in the middle of a chunk indicate that an alert has been raised. Alerts are caused by the performance crossing the upper or lower threshold.
>>> figure = results.plot()
>>> figure.show()
Additional information such as the chunk index range and chunk date range (if timestamps were provided) is shown in the hover for each chunk (these are interactive plots, though only static views are included here).
Insights
After reviewing the performance calculation results, we should be able to clearly see the business value provided by the model while it is in use. Depending on the results, we may report them or need to investigate further.
What’s Next
If we decide further investigation is needed, the Data Drift functionality can help us to see what feature changes may be contributing to any performance changes.