Binary Classification: California Housing Dataset

This example outlines a typical workflow for estimating the performance of a model without access to ground truth, detecting performance issues, and identifying potential root causes for those issues. In this example, we use NannyML on the modified California Housing Prices dataset.

You can see what modifications were made to the data to make it suitable for the use case in California Housing Dataset.

Load and prepare data

Let’s load the dataset from NannyML’s included datasets.

>>> import pandas as pd
>>> import nannyml as nml
>>> reference, analysis, analysis_targets = nml.datasets.load_modified_california_housing_dataset()
>>> reference.head(3)

   MedInc  HouseAge  AveRooms  AveBedrms  Population  AveOccup  Latitude  Longitude            timestamp     period  clf_target  y_pred_proba  y_pred  identifier
0  9.8413        32   7.17004    1.01484        4353   2.93725     34.22    -118.19  2020-10-01 00:00:00  reference           1          0.99       1           0
1  8.3695        37   7.45875    1.06271         941   3.10561     34.22    -118.21  2020-10-01 01:00:00  reference           1          1          1           1
2    8.72        44   6.16318    1.04603         668   2.79498      34.2    -118.18  2020-10-01 02:00:00  reference           1          1          1           2

Performance Estimation

We first want to estimate performance for the analysis period, using the reference period as our performance baseline.

>>> # fit performance estimator and estimate for combined reference and analysis
>>> cbpe = nml.CBPE(
...     y_pred='y_pred',
...     y_pred_proba='y_pred_proba',
...     y_true='clf_target',
...     timestamp_column_name='timestamp',
...     problem_type='classification_binary',
...     chunk_period='M',
...     metrics=['roc_auc'])
>>> cbpe.fit(reference_data=reference)
>>> est_perf = cbpe.estimate(pd.concat([reference, analysis]))
UserWarning: The resulting list of chunks contains 1 underpopulated chunks. They contain too few records to be statistically relevant and might negatively influence the quality of calculations. Please consider splitting your data in a different way or continue at your own risk.

We get a warning that some chunks are too small. Let’s quickly check what’s going on here.

>>> est_perf_data = est_perf.to_df()
>>> est_perf_data[('chunk', 'end_index')] - est_perf_data[('chunk', 'start_index')]
0     743
1     719
2     743
3     743
4     671
5     743
6     719
7     743
8     743
9     719
10    743
11    743
12    671
13    743
14    719
15    743
16    719
17    743
18    743
19    719
20    743
21    719
22    743
23    743
24    671
25    743
26    719
27    215
dtype: int64

The last chunk is smaller than the others due to the selected chunking method. Let’s remove it to make sure everything we visualise is reliable.
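The effect of period-based chunking can be reproduced on synthetic timestamps (a sketch of the general mechanism, not NannyML internals; the timestamps here are hypothetical): chunks are cut at calendar boundaries, so the final chunk only covers whatever part of the last period the data reaches.

```python
import pandas as pd

# Hypothetical hourly timestamps spanning roughly 2.5 months: monthly chunking
# leaves the final month only partially filled.
ts = pd.date_range("2020-10-01", periods=1800, freq="h")
chunk_sizes = pd.Series(1, index=ts).resample("MS").sum()
print(chunk_sizes)  # full months have 744/720 rows; the last month has fewer
```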

>>> est_perf.data = est_perf.data[:-1].copy()
>>> est_perf_data = est_perf_data[:-1]
>>> est_perf_data.tail(2)

                                          25                             26
('chunk', 'key')                          2022-03                        2022-04
('chunk', 'chunk_index')                  17                             18
('chunk', 'start_index')                  12384                          13128
('chunk', 'end_index')                    13127                          13847
('chunk', 'start_date')                   2022-03-01 00:00:00            2022-04-01 00:00:00
('chunk', 'end_date')                     2022-03-31 23:59:59.999999999  2022-04-30 23:59:59.999999999
('chunk', 'period')                       analysis                       analysis
('roc_auc', 'sampling_error')             0.00778098                     0.0079096
('roc_auc', 'realized')                   nan                            nan
('roc_auc', 'value')                      0.829077                       0.910661
('roc_auc', 'upper_confidence_boundary')  0.85242                        0.934389
('roc_auc', 'lower_confidence_boundary')  0.805734                       0.886932
('roc_auc', 'upper_threshold')            1                              1
('roc_auc', 'lower_threshold')            0.708336                       0.708336
('roc_auc', 'alert')                      False                          False

Now we can plot the estimated performance confidently.

>>> fig = est_perf.plot(kind='performance', metric='roc_auc')
>>> fig.show()
../_images/example_california_performance.svg

CBPE estimates a significant performance drop in the chunk corresponding to the month of September.
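CBPE can estimate performance without labels because calibrated scores carry information about how likely each prediction is to be correct. The following is a minimal sketch of that intuition on synthetic calibrated scores, using expected accuracy for simplicity; it is an illustration only, not NannyML's implementation, which estimates the elements of the confusion matrix to support metrics such as ROC AUC.

```python
import numpy as np

# Synthetic calibrated scores: p is the model's P(y=1) for each observation.
rng = np.random.default_rng(0)
p = rng.uniform(0, 1, 10_000)
y_pred = (p >= 0.5).astype(int)

# With calibrated scores, the probability that each prediction is correct is
# known without labels: p when predicting 1, (1 - p) when predicting 0.
expected_acc = np.where(y_pred == 1, p, 1 - p).mean()

# Simulate labels consistent with the calibrated scores and compare.
y_true = (rng.uniform(0, 1, 10_000) < p).astype(int)
realized_acc = (y_pred == y_true).mean()
print(round(expected_acc, 3), round(realized_acc, 3))  # close to each other
```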

Comparison with the actual performance

Because we have the ground truth for our dataset, we can use it to calculate ROC AUC on the relevant chunks, and compare it to the estimated values.

>>> from sklearn.metrics import roc_auc_score
>>> import matplotlib.pyplot as plt
>>> # add ground truth to analysis
>>> analysis_full = pd.merge(analysis, analysis_targets, on='identifier')
>>> df_all = pd.concat([reference, analysis_full]).reset_index(drop=True)
>>> df_all['timestamp'] = pd.to_datetime(df_all['timestamp'])
>>> # calculate actual ROC AUC
>>> target_col = cbpe.y_true
>>> pred_score_col = 'y_pred_proba'
>>> actual_performance = []
>>> for idx in est_perf_data.index:
...     start_date, end_date = est_perf_data.loc[idx, ('chunk', 'start_date')], est_perf_data.loc[idx, ('chunk', 'end_date')]
...     sub = df_all[df_all['timestamp'].between(start_date, end_date)]
...     actual_perf = roc_auc_score(sub[target_col], sub[pred_score_col])
...     est_perf_data.loc[idx, ('roc_auc', 'realized')] = actual_perf
>>> # plot
>>> first_analysis = est_perf_data[('chunk', 'key')].values[0]
>>> plt.figure(figsize=(10,5))
>>> plt.plot(est_perf_data[('chunk', 'key')], est_perf_data[('roc_auc', 'value')], label='estimated AUC')
>>> plt.plot(est_perf_data[('chunk', 'key')], est_perf_data[('roc_auc', 'realized')], label='actual ROC AUC')
>>> plt.xticks(rotation=90)
>>> plt.axvline(x=first_analysis, label='First analysis chunk', linestyle=':', color='grey')
>>> plt.ylabel('ROC AUC')
>>> plt.legend()
>>> plt.show()
../_images/example_california_performance_estimation_tmp.svg

We can see that the significant drop in the first few chunks of the analysis period was estimated accurately, and after that the overall trend is well represented. The estimated performance also shows lower variance than the realized performance.

Drift detection

The next step is to find out what might be responsible for this drop in ROC AUC. Let’s try using univariate drift detection, and see what we discover.

>>> feature_column_names = [col for col in reference if col not in ['y_pred', 'y_pred_proba', 'clf_target', 'timestamp', 'period', 'identifier']]
>>> univariate_calculator = nml.UnivariateDriftCalculator(column_names=feature_column_names,
...                                                       timestamp_column_name='timestamp',
...                                                       chunk_period='M').fit(reference_data=reference)
>>> univariate_results = univariate_calculator.calculate(analysis)
>>> nml.Ranker.by('alert_count').rank(univariate_results)

   number_of_alerts  column_name  rank
0                12    Longitude     1
1                12     Latitude     2
2                12     HouseAge     3
3                12     AveOccup     4
4                11       MedInc     5
5                11     AveRooms     6
6                 8   Population     7
7                 8    AveBedrms     8

It looks like there is a lot of drift in this dataset. Since we have 12 chunks in the analysis period, we can see that the top 4 features drifted in all analyzed chunks. Let’s look at the magnitude of this drift by examining the KS distance statistics.
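The Kolmogorov-Smirnov D-statistic is the largest vertical gap between two empirical CDFs, so it ranges from 0 (identical distributions) to 1 (fully separated). A self-contained sketch on synthetic data (an illustration of the statistic only; NannyML computes it per chunk against the reference distribution):

```python
import numpy as np

def ks_d(a, b):
    # KS D-statistic: the maximum gap between the two empirical CDFs,
    # evaluated over the combined sample.
    grid = np.sort(np.concatenate([a, b]))
    cdf_a = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    cdf_b = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return np.abs(cdf_a - cdf_b).max()

rng = np.random.default_rng(42)
reference_sample = rng.normal(0.0, 1.0, 5_000)
drifted_sample = rng.normal(1.0, 1.0, 5_000)  # mean shifted by one std

print(round(ks_d(reference_sample, drifted_sample), 3))  # large D: strong drift
```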

>>> # get the mean KS D-statistic per feature across the analysis chunks
>>> univariate_results.to_df().loc[:, (slice(None), 'kolmogorov_smirnov', 'value')].mean().sort_values(ascending=False)

(Longitude, kolmogorov_smirnov, value)     0.712709
(Latitude, kolmogorov_smirnov, value)      0.672904
(HouseAge, kolmogorov_smirnov, value)      0.201638
(MedInc, kolmogorov_smirnov, value)        0.154952
(AveOccup, kolmogorov_smirnov, value)      0.14389
(AveRooms, kolmogorov_smirnov, value)      0.129277
(AveBedrms, kolmogorov_smirnov, value)     0.0891403
(Population, kolmogorov_smirnov, value)    0.0735623
dtype: float64

The mean D-statistic over the analysis chunks is largest for Longitude and Latitude. Let's plot their distributions for the analysis period.

>>> for col in ['Longitude', 'Latitude']:
...     fig = univariate_results.plot(
...         kind='distribution',
...         method='kolmogorov_smirnov',
...         column_name=col)
...     fig.show()
../_images/example_california_performance_distribution_Longitude.svg../_images/example_california_performance_distribution_Latitude.svg

Indeed, we can see the distributions of these variables are completely different in each chunk. This was expected, as the original dataset has observations from nearby locations. Let’s see it on a scatter plot:

>>> analysis_res = est_perf_data[est_perf_data[('chunk', 'period')] == 'analysis']
>>> plt.figure(figsize=(8,6))
>>> for idx in analysis_res.index[:10]:
...     start_date, end_date = analysis_res.loc[idx, ('chunk', 'start_date')], analysis_res.loc[idx, ('chunk', 'end_date')]
...     sub = df_all[df_all['timestamp'].between(start_date, end_date)]
...     plt.scatter(sub['Latitude'], sub['Longitude'], s=5, label="Chunk {}".format(str(idx)))

>>> plt.legend()
>>> plt.xlabel('Latitude')
>>> plt.ylabel('Longitude')
>>> plt.show()
../_images/example_california_latitude_longitude_scatter.svg

In this example, NannyML estimated the performance (ROC AUC) of a model without access to the target data, and our comparison with the targets shows that the estimate is quite accurate. The potential root causes of the drop in performance were then identified through data drift detection, using univariate methods to find the features that drifted the most.