Estimating Performance for Regression

This tutorial explains how to use NannyML to estimate the performance of regression models in the absence of target data. To find out how DLE estimates performance, read the explanation of how Direct Loss Estimation works.

Note

The following example uses timestamps. These are optional but have an impact on the way data is chunked and results are plotted. You can read more about them in the data requirements.

Just The Code

>>> import nannyml as nml
>>> from IPython.display import display

>>> reference_df, analysis_df, _ = nml.load_synthetic_car_price_dataset()
>>> display(reference_df.head(3))

>>> estimator = nml.DLE(
...     feature_column_names=['car_age', 'km_driven', 'price_new', 'accident_count', 'door_count', 'fuel', 'transmission'],
...     y_pred='y_pred',
...     y_true='y_true',
...     timestamp_column_name='timestamp',
...     metrics=['rmse', 'rmsle'],
...     chunk_size=6000,
...     tune_hyperparameters=False
... )

>>> estimator.fit(reference_df)
>>> results = estimator.estimate(analysis_df)
>>> display(results.filter(period='analysis').to_df())

>>> display(results.filter(period='reference').to_df())

>>> metric_fig = results.plot()
>>> metric_fig.show()

Walkthrough

For simplicity this guide is based on a synthetic dataset included in the library, where the monitored model predicts the market price of a used car. Check out Car Price Dataset to learn more about this dataset.

In order to monitor a model, NannyML needs to learn about it and set expectations from a reference dataset. Then it can monitor the data that is subject to actual analysis, provided as the analysis dataset. You can read more about this in our section on data periods.

We start by loading the dataset we'll be using:

>>> import nannyml as nml
>>> from IPython.display import display

>>> reference_df, analysis_df, _ = nml.load_synthetic_car_price_dataset()
>>> display(reference_df.head(3))

   id  car_age  km_driven  price_new  accident_count  door_count      fuel transmission  y_true  y_pred                timestamp
0   0       15     144020      42810               4           3    diesel    automatic     569    1246  2017-01-24 08:00:00.000
1   1       12      57078      31835               3           3  electric    automatic    4277    4924  2017-01-24 08:00:33.600
2   2        2      76288      31851               3           5    diesel    automatic    7011    5744  2017-01-24 08:01:07.200

The next step is to instantiate the Direct Loss Estimation (DLE) estimator. For the instantiation we need to provide:

  • The list of column names for the features our model uses.

  • The column name for the model output.

  • The column name for the model targets.

  • The list of regression performance metrics we are interested in estimating. Currently, the supported metrics are:

    • mae - mean absolute error

    • mape - mean absolute percentage error

    • mse - mean squared error

    • rmse - root mean squared error

    • msle - mean squared logarithmic error

    • rmsle - root mean squared logarithmic error

  • Optionally we can provide a chunking specification; otherwise the NannyML default will be used. For more information about chunking check out the chunking tutorial and its advanced guide.

  • Optionally we can provide selected hyperparameters for the model that performs the error estimation. If none are given, the LGBMRegressor defaults will be used.

  • Optionally we can tell the estimator to use FLAML to perform hyperparameter tuning. By default, no hyperparameter tuning is performed.

  • Optionally we can provide our own configuration options for the hyperparameter tuning instead of using the ones set by NannyML.

More information can be found in the API documentation for the DLE estimator. This tutorial uses the NannyML default settings for hyperparameter tuning.

>>> estimator = nml.DLE(
...     feature_column_names=['car_age', 'km_driven', 'price_new', 'accident_count', 'door_count', 'fuel', 'transmission'],
...     y_pred='y_pred',
...     y_true='y_true',
...     timestamp_column_name='timestamp',
...     metrics=['rmse', 'rmsle'],
...     chunk_size=6000,
...     tune_hyperparameters=False
... )
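
If you do want to customize the chunking or the internal model, the same constructor accepts those options. The sketch below is illustrative only; it assumes the chunk_period, hyperparameters, and tune_hyperparameters arguments exposed by recent NannyML versions, so check the DLE API documentation for the exact signatures before using it.

>>> # Illustrative alternative configuration (not used in this tutorial):
>>> # chunk_period replaces chunk_size to chunk by calendar week, and the
>>> # hyperparameters dict is passed through to the internal LGBMRegressor.
>>> custom_estimator = nml.DLE(
...     feature_column_names=['car_age', 'km_driven', 'price_new', 'accident_count', 'door_count', 'fuel', 'transmission'],
...     y_pred='y_pred',
...     y_true='y_true',
...     timestamp_column_name='timestamp',
...     metrics=['mae', 'rmse'],
...     chunk_period='W',  # weekly chunks based on the timestamp column
...     hyperparameters={'n_estimators': 200, 'learning_rate': 0.1},  # assumed LGBMRegressor kwargs
...     tune_hyperparameters=False,
... )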

The new DLE estimator is fitted on the reference data using the fit() method.

The fitted estimator can then be used to estimate performance with the estimate() method on any data that contains the feature columns and model predictions, even when target values are not available. NannyML outputs a dataframe containing all the results for the analysis data.

>>> estimator.fit(reference_df)
>>> results = estimator.estimate(analysis_df)
>>> display(results.filter(period='analysis').to_df())

chunk columns:

key            chunk_index  start_index  end_index  start_date           end_date                    period
[0:5999]                 0            0       5999  2017-02-16 16:00:00  2017-02-18 23:59:26.400000  analysis
[6000:11999]             1         6000      11999  2017-02-19 00:00:00  2017-02-21 07:59:26.400000  analysis
[12000:17999]            2        12000      17999  2017-02-21 08:00:00  2017-02-23 15:59:26.400000  analysis
[18000:23999]            3        18000      23999  2017-02-23 16:00:00  2017-02-25 23:59:26.400000  analysis
[24000:29999]            4        24000      29999  2017-02-26 00:00:00  2017-02-28 07:59:26.400000  analysis
[30000:35999]            5        30000      35999  2017-02-28 08:00:00  2017-03-02 15:59:26.400000  analysis
[36000:41999]            6        36000      41999  2017-03-02 16:00:00  2017-03-04 23:59:26.400000  analysis
[42000:47999]            7        42000      47999  2017-03-05 00:00:00  2017-03-07 07:59:26.400000  analysis
[48000:53999]            8        48000      53999  2017-03-07 08:00:00  2017-03-09 15:59:26.400000  analysis
[54000:59999]            9        54000      59999  2017-03-09 16:00:00  2017-03-11 23:59:26.400000  analysis

rmse columns:

key            sampling_error  realized    value  upper_confidence_boundary  lower_confidence_boundary  upper_threshold  lower_threshold  alert
[0:5999]               10.348       nan  1067.21                    1098.26                    1036.17          1103.31          1014.28  False
[6000:11999]           10.348       nan  1063.13                    1094.17                    1032.09          1103.31          1014.28  False
[12000:17999]          10.348       nan  1054.16                    1085.2                     1023.11          1103.31          1014.28  False
[18000:23999]          10.348       nan  1060.67                    1091.71                    1029.62          1103.31          1014.28  False
[24000:29999]          10.348       nan  1055.23                    1086.27                    1024.19          1103.31          1014.28  False
[30000:35999]          10.348       nan  928.513                    959.557                    897.469          1103.31          1014.28  True
[36000:41999]          10.348       nan  928.916                    959.96                     897.872          1103.31          1014.28  True
[42000:47999]          10.348       nan  927.916                    958.96                     896.872          1103.31          1014.28  True
[48000:53999]          10.348       nan  930.426                    961.47                     899.382          1103.31          1014.28  True
[54000:59999]          10.348       nan  919.398                    950.442                    888.354          1103.31          1014.28  True

rmsle columns:

key            sampling_error  realized     value  upper_confidence_boundary  lower_confidence_boundary  upper_threshold  lower_threshold  alert
[0:5999]             0.002239       nan  0.265798                   0.272515                   0.259081         0.271511         0.263948  False
[6000:11999]         0.002239       nan  0.266868                   0.273585                   0.260151         0.271511         0.263948  False
[12000:17999]        0.002239       nan  0.267884                   0.274601                   0.261167         0.271511         0.263948  False
[18000:23999]        0.002239       nan  0.265695                   0.272412                   0.258978         0.271511         0.263948  False
[24000:29999]        0.002239       nan  0.268354                   0.275071                   0.261637         0.271511         0.263948  False
[30000:35999]        0.002239       nan  0.307401                   0.314118                   0.300684         0.271511         0.263948  True
[36000:41999]        0.002239       nan  0.308669                   0.315386                   0.301952         0.271511         0.263948  True
[42000:47999]        0.002239       nan  0.308781                   0.315498                   0.302064         0.271511         0.263948  True
[48000:53999]        0.002239       nan  0.308513                   0.31523                    0.301796         0.271511         0.263948  True
[54000:59999]        0.002239       nan  0.310696                   0.317413                   0.303979         0.271511         0.263948  True

The results from the reference data are also available.

>>> display(results.filter(period='reference').to_df())

chunk columns:

key            chunk_index  start_index  end_index  start_date           end_date                    period
[0:5999]                 0            0       5999  2017-01-24 08:00:00  2017-01-26 15:59:26.400000  reference
[6000:11999]             1         6000      11999  2017-01-26 16:00:00  2017-01-28 23:59:26.400000  reference
[12000:17999]            2        12000      17999  2017-01-29 00:00:00  2017-01-31 07:59:26.400000  reference
[18000:23999]            3        18000      23999  2017-01-31 08:00:00  2017-02-02 15:59:26.400000  reference
[24000:29999]            4        24000      29999  2017-02-02 16:00:00  2017-02-04 23:59:26.400000  reference
[30000:35999]            5        30000      35999  2017-02-05 00:00:00  2017-02-07 07:59:26.400000  reference
[36000:41999]            6        36000      41999  2017-02-07 08:00:00  2017-02-09 15:59:26.400000  reference
[42000:47999]            7        42000      47999  2017-02-09 16:00:00  2017-02-11 23:59:26.400000  reference
[48000:53999]            8        48000      53999  2017-02-12 00:00:00  2017-02-14 07:59:26.400000  reference
[54000:59999]            9        54000      59999  2017-02-14 08:00:00  2017-02-16 15:59:26.400000  reference

rmse columns:

key            sampling_error  realized    value  upper_confidence_boundary  lower_confidence_boundary  upper_threshold  lower_threshold  alert
[0:5999]               10.348   1086.31  1073.91                    1104.95                    1042.86          1103.31          1014.28  False
[6000:11999]           10.348   1060.22  1056.62                    1087.66                    1025.58          1103.31          1014.28  False
[12000:17999]          10.348   1038.42  1054.93                    1085.97                    1023.88          1103.31          1014.28  False
[18000:23999]          10.348   1038.4   1054.43                    1085.47                    1023.38          1103.31          1014.28  False
[24000:29999]          10.348   1072.02  1066.54                    1097.58                    1035.49          1103.31          1014.28  False
[30000:35999]          10.348   1074.97  1064.8                     1095.85                    1033.76          1103.31          1014.28  False
[36000:41999]          10.348   1058.48  1057.22                    1088.26                    1026.17          1103.31          1014.28  False
[42000:47999]          10.348   1050.7   1055.1                     1086.15                    1024.06          1103.31          1014.28  False
[48000:53999]          10.348   1048.4   1052.11                    1083.15                    1021.07          1103.31          1014.28  False
[54000:59999]          10.348   1060.04  1053.17                    1084.21                    1022.12          1103.31          1014.28  False

rmsle columns:

key            sampling_error  realized     value  upper_confidence_boundary  lower_confidence_boundary  upper_threshold  lower_threshold  alert
[0:5999]             0.002239  0.267475  0.266546                   0.273263                   0.259829         0.271511         0.263948  False
[6000:11999]         0.002239  0.268573  0.269103                   0.27582                    0.262386         0.271511         0.263948  False
[12000:17999]        0.002239  0.266343  0.268745                   0.275462                   0.262028         0.271511         0.263948  False
[18000:23999]        0.002239  0.266362  0.26708                    0.273797                   0.260363         0.271511         0.263948  False
[24000:29999]        0.002239  0.269812  0.266766                   0.273483                   0.260049         0.271511         0.263948  False
[30000:35999]        0.002239  0.266937  0.26642                    0.273137                   0.259703         0.271511         0.263948  False
[36000:41999]        0.002239  0.267517  0.267556                   0.274273                   0.260839         0.271511         0.263948  False
[42000:47999]        0.002239  0.270036  0.268793                   0.27551                    0.262076         0.271511         0.263948  False
[48000:53999]        0.002239  0.266767  0.267823                   0.27454                    0.261106         0.271511         0.263948  False
[54000:59999]        0.002239  0.267471  0.268474                   0.275191                   0.261757         0.271511         0.263948  False

Apart from the chunk-related columns, the results contain the following columns for each estimated metric (a short post-processing sketch follows the list):

  • value - the estimate of a metric for a specific chunk.

  • sampling_error - the estimate of the Sampling Error.

  • realized - when target values are available for a chunk, the realized performance metric will also be calculated and included within the results.

  • upper_confidence_boundary and lower_confidence_boundary - these values show the confidence band of the relevant metric and are equal to the estimated value +/- 3 times the estimated sampling error.

  • upper_threshold and lower_threshold - crossing these thresholds will raise an alert on significant performance change. The thresholds are calculated based on the actual performance of the monitored model on chunks in the reference partition. By default, they are set 3 standard deviations away from the mean performance calculated on those chunks, and they are computed during the fit phase. You can also set up custom thresholds using constant or standard-deviation thresholds; to learn more, check out our tutorial on thresholds.

  • alert - a flag indicating a potentially significant performance change. True if the estimated performance crosses the upper or lower threshold.
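
To make these columns concrete, here is a minimal post-processing sketch. It assumes the two-level column layout shown above, with (metric, property) pairs such as ('rmse', 'alert'); adjust the names if your NannyML version exports a different layout.

>>> # Work with the exported results dataframe (column names assumed).
>>> results_df = results.filter(period='analysis').to_df()
>>> # Reconstruct the confidence band: estimated value +/- 3 * sampling error.
>>> band = 3 * results_df[('rmse', 'sampling_error')]
>>> upper_band = results_df[('rmse', 'value')] + band
>>> lower_band = results_df[('rmse', 'value')] - band
>>> # List the chunks where the estimated RMSE raised an alert.
>>> display(results_df.loc[results_df[('rmse', 'alert')], ('chunk', 'key')])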

These results can also be plotted. Our plot contains several key elements.

  • The purple dashed step plot shows the estimated performance in each chunk of the analysis period. Thick square point markers indicate the middle of these chunks.

  • The black vertical line splits the reference and analysis periods.

  • The low-saturated colored area around the estimated performance indicates the sampling error.

  • The red horizontal dashed lines show upper and lower thresholds for alerting purposes.

  • If the estimated performance crosses the upper or lower threshold, an alert is raised, indicated by a red diamond-shaped point marker in the middle of the chunk.

Additional information is shown in the hover (these are interactive plots, though only static views are included here). The plots can be created with the following code:

>>> metric_fig = results.plot()
>>> metric_fig.show()
[Image: tutorial-perf-est-regression.svg — estimated RMSE and RMSLE over the reference and analysis periods]
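
The figure returned by results.plot() is a Plotly figure in current NannyML versions, so the standard Plotly export methods should apply. A minimal sketch (the static export assumes the optional kaleido package is installed):

>>> metric_fig.write_html('dle_regression_metrics.html')  # interactive copy
>>> metric_fig.write_image('dle_regression_metrics.svg')  # static copy, requires kaleido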

Insights

Looking at the RMSE and RMSLE results we can observe an interesting effect. RMSE penalizes over- and underprediction symmetrically, while RMSLE penalizes underprediction more heavily than overprediction. The performance estimator therefore tells us that while the model is expected to become slightly more accurate according to RMSE, the increase in RMSLE suggests the model will be underpredicting more than it was before!
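
The asymmetry between the two metrics is easy to verify numerically. The following standalone snippet (not part of the NannyML workflow) compares an overpredicting and an underpredicting model with the same absolute error:

>>> import numpy as np

>>> y_true = np.array([1000.0, 2000.0, 3000.0])
>>> over = y_true + 500   # every prediction 500 too high
>>> under = y_true - 500  # every prediction 500 too low

>>> def rmse(y, p):
...     return np.sqrt(np.mean((y - p) ** 2))

>>> def rmsle(y, p):
...     return np.sqrt(np.mean((np.log1p(y) - np.log1p(p)) ** 2))

>>> bool(rmse(y_true, over) == rmse(y_true, under))   # RMSE treats both the same
True
>>> bool(rmsle(y_true, under) > rmsle(y_true, over))  # RMSLE penalizes underprediction more
True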

What’s next

The Data Drift functionality can help us understand whether data drift is causing the performance problem. When the target values become available, we can compare the realized and estimated performance results.
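
As a sketch of that comparison: the synthetic dataset loader also returns a third dataframe holding the analysis-period targets, so realized performance can be computed with NannyML's PerformanceCalculator. The join below assumes the targets align row-for-row with the analysis dataframe; verify the join key and argument names against your installed version.

>>> # Load the analysis-period targets (third dataframe returned by the loader).
>>> _, _, analysis_targets_df = nml.load_synthetic_car_price_dataset()
>>> # Assumed to align row-for-row with the analysis dataframe.
>>> analysis_with_targets_df = analysis_df.join(analysis_targets_df)

>>> calc = nml.PerformanceCalculator(
...     problem_type='regression',
...     y_true='y_true',
...     y_pred='y_pred',
...     timestamp_column_name='timestamp',
...     metrics=['rmse', 'rmsle'],
...     chunk_size=6000,
... )
>>> calc.fit(reference_df)
>>> realized_results = calc.calculate(analysis_with_targets_df)
>>> realized_results.plot().show()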