Ranking

NannyML uses ranking to order columns in univariate drift results. The resulting order can help you prioritize what to investigate further in order to fully address any issues with the monitored model.

There are currently two ranking methods in NannyML: alert count ranking and correlation ranking.

Just The Code

>>> import nannyml as nml
>>> from IPython.display import display

>>> reference_df, analysis_df, analysis_target_df = nml.load_synthetic_binary_classification_dataset()
>>> analysis_df = analysis_df.merge(analysis_target_df, on='identifier')

>>> column_names = ['distance_from_office', 'salary_range', 'gas_price_per_litre', 'public_transportation_cost', 'wfh_prev_workday', 'workday', 'tenure', 'y_pred_proba', 'y_pred']
>>> univ_calc = nml.UnivariateDriftCalculator(
...     column_names=column_names,
...     timestamp_column_name='timestamp',
...     continuous_methods=['kolmogorov_smirnov', 'jensen_shannon'],
...     categorical_methods=['chi2', 'jensen_shannon'],
...     chunk_size=5000
... )

>>> univ_calc.fit(reference_df)
>>> univariate_results = univ_calc.calculate(analysis_df)
>>> display(univariate_results.filter(period='analysis', column_names=['distance_from_office']).to_df())

>>> alert_count_ranker = nml.AlertCountRanker()
>>> alert_count_ranked_features = alert_count_ranker.rank(
...     univariate_results.filter(methods=['jensen_shannon']),
...     only_drifting=False)
>>> display(alert_count_ranked_features)

>>> estimated_calc = nml.CBPE(
...     y_pred_proba='y_pred_proba',
...     y_pred='y_pred',
...     y_true='work_home_actual',
...     timestamp_column_name='timestamp',
...     metrics=['roc_auc', 'recall'],
...     chunk_size=5000,
...     problem_type='classification_binary',
... )
>>> estimated_calc.fit(reference_df)
>>> estimated_perf_results = estimated_calc.estimate(analysis_df)
>>> display(estimated_perf_results.filter(period='analysis').to_df())

>>> realized_calc = nml.PerformanceCalculator(
...     y_pred_proba='y_pred_proba',
...     y_pred='y_pred',
...     y_true='work_home_actual',
...     timestamp_column_name='timestamp',
...     problem_type='classification_binary',
...     metrics=['roc_auc', 'recall'],
...     chunk_size=5000)
>>> realized_calc.fit(reference_df)
>>> realized_perf_results = realized_calc.calculate(analysis_df)
>>> display(realized_perf_results.filter(period='analysis').to_df())

>>> ranker1 = nml.CorrelationRanker()
>>> # ranker fits on one metric and reference period data only
>>> ranker1.fit(
...     estimated_perf_results.filter(period='reference', metrics=['roc_auc']))
>>> # ranker ranks on one drift method and one performance metric
>>> correlation_ranked_features1 = ranker1.rank(
...     univariate_results.filter(methods=['jensen_shannon']),
...     estimated_perf_results.filter(metrics=['roc_auc']),
...     only_drifting=False)
>>> display(correlation_ranked_features1)

>>> ranker2 = nml.CorrelationRanker()
>>> # ranker fits on one metric and reference period data only
>>> ranker2.fit(
...     estimated_perf_results.filter(period='reference', metrics=['recall']))
>>> # ranker ranks on one drift method and one performance metric
>>> correlation_ranked_features2 = ranker2.rank(
...     univariate_results.filter(period='analysis', methods=['jensen_shannon']),
...     realized_perf_results.filter(period='analysis', metrics=['recall']),
...     only_drifting=False)
>>> display(correlation_ranked_features2)

Walkthrough

Ranking methods use univariate drift calculation results together with estimated or realized performance results to rank features.

Note

The univariate drift results need to be created or filtered so that only one drift method is present for each feature. Similarly, the estimated or realized performance results need to be created or filtered so that they contain only one performance metric.

Below we show in more detail how to use each ranking method.

Alert Count Ranking

Let’s take a closer look at our first ranking method. Alert count ranking ranks features according to the number of alerts they generated within the ranking period. It is based on the univariate drift results of the features, or data columns, considered.

The first thing we need before using the alert count ranker is to create the univariate drift results.

>>> import nannyml as nml
>>> from IPython.display import display

>>> reference_df, analysis_df, analysis_target_df = nml.load_synthetic_binary_classification_dataset()
>>> analysis_df = analysis_df.merge(analysis_target_df, on='identifier')

>>> column_names = ['distance_from_office', 'salary_range', 'gas_price_per_litre', 'public_transportation_cost', 'wfh_prev_workday', 'workday', 'tenure', 'y_pred_proba', 'y_pred']
>>> univ_calc = nml.UnivariateDriftCalculator(
...     column_names=column_names,
...     timestamp_column_name='timestamp',
...     continuous_methods=['kolmogorov_smirnov', 'jensen_shannon'],
...     categorical_methods=['chi2', 'jensen_shannon'],
...     chunk_size=5000
... )

>>> univ_calc.fit(reference_df)
>>> univariate_results = univ_calc.calculate(analysis_df)
>>> display(univariate_results.filter(period='analysis', column_names=['distance_from_office']).to_df())

chunk key      start_date           end_date             KS value  KS alert  JS value   JS alert
[0:4999]       2017-08-31 04:20:00  2018-01-02 00:45:44  0.0131    False     0.0261007  False
[5000:9999]    2018-01-02 01:13:11  2018-05-01 13:10:10  0.01124   False     0.0202971  False
[10000:14999]  2018-05-01 14:25:25  2018-09-01 15:40:40  0.01682   False     0.0210957  False
[15000:19999]  2018-09-01 16:19:07  2018-12-31 10:11:21  0.01436   False     0.0362101  False
[20000:24999]  2018-12-31 10:38:45  2019-04-30 11:01:30  0.01116   False     0.0287082  False
[25000:29999]  2019-04-30 11:02:00  2019-09-01 00:24:27  0.43548   True      0.464732   True
[30000:34999]  2019-09-01 00:28:54  2019-12-31 09:09:12  0.43032   True      0.460044   True
[35000:39999]  2019-12-31 10:07:15  2020-04-30 11:46:53  0.43786   True      0.466746   True
[40000:44999]  2020-04-30 12:04:32  2020-09-01 02:46:02  0.43608   True      0.4663     True
[45000:49999]  2020-09-01 02:46:13  2021-01-01 04:29:32  0.43852   True      0.467798   True

(Table condensed: KS is the kolmogorov_smirnov statistic, JS the jensen_shannon distance; all rows belong to the analysis period. The jensen_shannon upper alert threshold is 0.1 in every chunk, and empty threshold columns are omitted.)

To illustrate the results, we filter and display the analysis period results for the distance_from_office feature. The next step is to instantiate the ranker and instruct it to rank() the provided results. Notice that the univariate results are filtered to ensure they contain only one drift method per categorical and continuous feature, as required.

>>> alert_count_ranker = nml.AlertCountRanker()
>>> alert_count_ranked_features = alert_count_ranker.rank(
...     univariate_results.filter(methods=['jensen_shannon']),
...     only_drifting=False)
>>> display(alert_count_ranked_features)

   number_of_alerts  column_name                 rank
0  5                 y_pred_proba                1
1  5                 wfh_prev_workday            2
2  5                 salary_range                3
3  5                 public_transportation_cost  4
4  5                 distance_from_office        5
5  0                 y_pred                      6
6  0                 workday                     7
7  0                 tenure                      8
8  0                 gas_price_per_litre         9

The alert count ranker results give a simple, concise view of which features tend to breach univariate drift thresholds more often than others.
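
Conceptually, the ranker just counts the True values in each column's per-chunk alert series and sorts the columns by that count. A minimal sketch of that logic in plain pandas, using an invented alerts table rather than NannyML's internal API:

```python
import pandas as pd

# Hypothetical per-chunk drift alert flags for three columns (True = alert raised)
alerts = pd.DataFrame({
    'salary_range':         [False, False, True, True, True],
    'tenure':               [False, False, False, False, False],
    'distance_from_office': [False, True, True, True, True],
})

# Count alerts per column, sort descending, and assign 1-based ranks
ranking = (
    alerts.sum()                      # number of True values per column
    .sort_values(ascending=False)
    .rename('number_of_alerts')
    .reset_index()
    .rename(columns={'index': 'column_name'})
)
ranking['rank'] = ranking.index + 1
print(ranking)
```

With these invented flags, distance_from_office (4 alerts) ranks first and tenure (0 alerts) last, mirroring the shape of the table above.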

Correlation Ranking

Let’s continue to the second ranking method. Correlation ranking ranks features according to how strongly their drift results correlate with absolute changes in the selected performance metric.
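
To make the idea concrete, here is a minimal NumPy sketch of that correlation, using invented per-chunk numbers (this is an illustration of the concept, not NannyML's implementation):

```python
import numpy as np

# Hypothetical per-chunk Jensen-Shannon distances for one feature:
# small in early chunks, large once the feature drifts
drift_values = np.array([0.02, 0.03, 0.02, 0.46, 0.47, 0.46])

# Hypothetical per-chunk performance metric (e.g. estimated roc_auc)
performance = np.array([0.97, 0.97, 0.97, 0.96, 0.96, 0.96])

# Absolute performance change relative to an assumed reference-period mean
reference_mean = 0.97
abs_perf_change = np.abs(performance - reference_mean)

# Pearson correlation between drift values and absolute performance change
corr = np.corrcoef(drift_values, abs_perf_change)[0, 1]
print(corr)
```

Because the performance drop coincides with the jump in drift values, the correlation comes out close to 1, so this feature would rank near the top.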

Therefore we first need to create the performance results we will use in our ranking. The estimated performance results are created below.

>>> estimated_calc = nml.CBPE(
...     y_pred_proba='y_pred_proba',
...     y_pred='y_pred',
...     y_true='work_home_actual',
...     timestamp_column_name='timestamp',
...     metrics=['roc_auc', 'recall'],
...     chunk_size=5000,
...     problem_type='classification_binary',
... )
>>> estimated_calc.fit(reference_df)
>>> estimated_perf_results = estimated_calc.estimate(analysis_df)
>>> display(estimated_perf_results.filter(period='analysis').to_df())

roc_auc (sampling error 0.00181072 and thresholds 0.963317 to 0.97866 in every chunk):

chunk key      realized  estimate  upper CI  lower CI  alert
[0:4999]       0.970962  0.968631  0.974063  0.963198  False
[5000:9999]    0.970248  0.969044  0.974476  0.963612  False
[10000:14999]  0.976282  0.969444  0.974876  0.964012  False
[15000:19999]  0.967721  0.969047  0.974479  0.963615  False
[20000:24999]  0.969886  0.968873  0.974305  0.963441  False
[25000:29999]  0.96005   0.960478  0.96591   0.955046  True
[30000:34999]  0.95853   0.961134  0.966566  0.955701  True
[35000:39999]  0.959041  0.960536  0.965968  0.955104  True
[40000:44999]  0.963094  0.961869  0.967301  0.956437  True
[45000:49999]  0.957556  0.960537  0.965969  0.955104  True

recall (sampling error 0.00422251 and thresholds 0.940831 to 0.965726 in every chunk):

chunk key      realized  estimate  upper CI  lower CI  alert
[0:4999]       0.957077  0.954644  0.967311  0.941976  False
[5000:9999]    0.949959  0.950074  0.962742  0.937407  False
[10000:14999]  0.959654  0.953431  0.966098  0.940763  False
[15000:19999]  0.945205  0.950695  0.963363  0.938028  False
[20000:24999]  0.948269  0.952322  0.96499   0.939655  False
[25000:29999]  0.945134  0.931746  0.944414  0.919078  True
[30000:34999]  0.94297   0.933032  0.945699  0.920364  True
[35000:39999]  0.940471  0.932623  0.94529   0.919955  True
[40000:44999]  0.9444    0.931093  0.94376   0.918425  True
[45000:49999]  0.943337  0.935494  0.948162  0.922827  True

(Tables condensed: all rows belong to the analysis period; "estimate" is the CBPE value column and the CI columns are its upper and lower confidence boundaries.)

The analysis period estimations are shown.

The realized performance results are also created, since either estimated or realized performance can be used, depending on the use case being addressed.

>>> realized_calc = nml.PerformanceCalculator(
...     y_pred_proba='y_pred_proba',
...     y_pred='y_pred',
...     y_true='work_home_actual',
...     timestamp_column_name='timestamp',
...     problem_type='classification_binary',
...     metrics=['roc_auc', 'recall'],
...     chunk_size=5000)
>>> realized_calc.fit(reference_df)
>>> realized_perf_results = realized_calc.calculate(analysis_df)
>>> display(realized_perf_results.filter(period='analysis').to_df())

chunk key      roc_auc   roc_auc alert  recall    recall alert
[0:4999]       0.970962  False          0.957077  False
[5000:9999]    0.970248  False          0.949959  False
[10000:14999]  0.976282  False          0.959654  False
[15000:19999]  0.967721  False          0.945205  False
[20000:24999]  0.969886  False          0.948269  False
[25000:29999]  0.96005   True           0.945134  False
[30000:34999]  0.95853   True           0.94297   False
[35000:39999]  0.959041  True           0.940471  True
[40000:44999]  0.963094  True           0.9444    False
[45000:49999]  0.957556  True           0.943337  False

(Table condensed: all rows belong to the analysis period with targets_missing_rate 0 in every chunk; sampling errors are 0.00181072 for roc_auc and 0.00422251 for recall, with thresholds 0.963317 to 0.97866 and 0.940831 to 0.965726 respectively.)

The analysis period results are shown.

We can now proceed to correlation ranking. Let’s correlate the drift results with the estimated roc_auc. A key difference here is that after instantiation we need to fit() the ranker on results from the reference period, filtered so that they contain only the performance metric we want the correlation ranker to use. You can read more about why this is needed on the Correlation Ranking, How it Works page. After fitting, we can rank() by providing appropriately filtered univariate and performance results.

>>> ranker1 = nml.CorrelationRanker()
>>> # ranker fits on one metric and reference period data only
>>> ranker1.fit(
...     estimated_perf_results.filter(period='reference', metrics=['roc_auc']))
>>> # ranker ranks on one drift method and one performance metric
>>> correlation_ranked_features1 = ranker1.rank(
...     univariate_results.filter(methods=['jensen_shannon']),
...     estimated_perf_results.filter(metrics=['roc_auc']),
...     only_drifting=False)
>>> display(correlation_ranked_features1)

   column_name                 pearsonr_correlation  pearsonr_pvalue  has_drifted  rank
0  y_pred_proba                0.998566              1.84763e-11      True         1
1  wfh_prev_workday            0.998399              2.86596e-11      True         2
2  salary_range                0.996189              9.18768e-10      True         3
3  public_transportation_cost  0.995207              2.29534e-09      True         4
4  distance_from_office        0.99459               3.72322e-09      True         5
5  y_pred                      0.779594              0.00783668       False        6
6  workday                     0.307878              0.386807         False        7
7  gas_price_per_litre         0.0756047             0.835557         False        8
8  tenure                      -0.645446             0.0438467        False        9

Depending on circumstances, it may be appropriate to consider the correlation of drift results on just the analysis period, or with a different metric. Below we see the correlation of the same drift results with the realized recall results.

>>> ranker2 = nml.CorrelationRanker()
>>> # ranker fits on one metric and reference period data only
>>> ranker2.fit(
...     estimated_perf_results.filter(period='reference', metrics=['recall']))
>>> # ranker ranks on one drift method and one performance metric
>>> correlation_ranked_features2 = ranker2.rank(
...     univariate_results.filter(period='analysis', methods=['jensen_shannon']),
...     realized_perf_results.filter(period='analysis', metrics=['recall']),
...     only_drifting=False)
>>> display(correlation_ranked_features2)

   column_name                 pearsonr_correlation  pearsonr_pvalue  has_drifted  rank
0  public_transportation_cost  0.826804              0.00317605       True         1
1  distance_from_office        0.821073              0.00359132       True         2
2  y_pred_proba                0.819702              0.00369612       True         3
3  wfh_prev_workday            0.817814              0.00384407       True         4
4  salary_range                0.804391              0.00502082       True         5
5  y_pred                      0.566548              0.0877154        False        6
6  gas_price_per_litre         0.109797              0.762693         False        7
7  workday                     -0.0495088            0.891965         False        8
8  tenure                      -0.565362             0.0885301        False        9

Insights

Ranking results are intended to suggest how to prioritize further investigation of drift results.

If other information is available, such as feature importance, it can also be used to prioritize which drifted features to investigate.
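
One way to do this, sketched below with invented importance values and a deliberately naive combination rule (neither is part of NannyML), is to weight each column's drift rank by its importance to the model:

```python
import pandas as pd

# Hypothetical ranker output: columns ordered by drift severity
ranked = pd.DataFrame({
    'column_name': ['salary_range', 'distance_from_office', 'tenure'],
    'rank': [1, 2, 3],
})

# Hypothetical feature importances from the monitored model
importance = {'salary_range': 0.15, 'distance_from_office': 0.55, 'tenure': 0.30}
ranked['feature_importance'] = ranked['column_name'].map(importance)

# Naive priority score: importance divided by drift rank (higher = look sooner)
ranked['priority'] = ranked['feature_importance'] / ranked['rank']
ranked = ranked.sort_values('priority', ascending=False, ignore_index=True)
print(ranked)
```

Here the highly important distance_from_office overtakes the more-drifted but less important salary_range; any real prioritization rule would of course depend on the model and use case.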

What’s Next

More information about the specifics of how ranking works can be found on the How it Works, Ranking page.