Median

Just The Code

>>> import nannyml as nml
>>> from IPython.display import display

>>> reference_df, analysis_df, analysis_targets_df = nml.load_synthetic_car_loan_dataset()
>>> display(reference_df.head())

>>> feature_column_names = [
...     'car_value', 'debt_to_income_ratio', 'driver_tenure'
... ]
>>> calc = nml.SummaryStatsMedianCalculator(
...     column_names=feature_column_names,
... )

>>> calc.fit(reference_df)
>>> results = calc.calculate(analysis_df)
>>> display(results.filter(period='all').to_df())

>>> for column_name in results.column_names:
...     results.filter(column_names=column_name).plot().show()

Walkthrough

The median calculation is straightforward. For each chunk, NannyML calculates the median of every selected numerical column. The values from the reference data chunks are used to calculate the alert thresholds. The median values from the analysis chunks are then compared against those thresholds, raising alerts where they fall outside.
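This flow can be sketched with plain NumPy, assuming the default standard-deviation threshold strategy (mean of the reference chunk medians plus or minus three standard deviations); the arrays here are synthetic stand-ins, not the car loan dataset:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-ins for a reference period and a shifted analysis period of one column.
reference = rng.normal(loc=22_000, scale=3_000, size=50_000)
analysis = rng.normal(loc=44_000, scale=3_000, size=50_000)

def chunk_medians(values, chunk_size=5_000):
    # Median per fixed-size chunk, one value per chunk.
    return np.array([np.median(values[i:i + chunk_size])
                     for i in range(0, len(values), chunk_size)])

# Thresholds are derived from the reference chunk medians only.
ref_medians = chunk_medians(reference)
upper = ref_medians.mean() + 3 * ref_medians.std()
lower = ref_medians.mean() - 3 * ref_medians.std()

# Analysis chunk medians are compared against those fixed thresholds.
ana_medians = chunk_medians(analysis)
alerts = (ana_medians > upper) | (ana_medians < lower)
print(alerts.all())  # every shifted analysis chunk triggers an alert: True
```

The key point is that the thresholds are fit once on reference data and then held fixed while scoring analysis chunks.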

We begin by loading the synthetic car loan dataset provided by the NannyML package.

>>> import nannyml as nml
>>> from IPython.display import display

>>> reference_df, analysis_df, analysis_targets_df = nml.load_synthetic_car_loan_dataset()
>>> display(reference_df.head())

|   | id | car_value | salary_range | debt_to_income_ratio | loan_length | repaid_loan_on_prev_car | size_of_downpayment | driver_tenure | repaid | timestamp | y_pred_proba | y_pred |
|---|----|-----------|--------------|----------------------|-------------|-------------------------|---------------------|---------------|--------|-----------|--------------|--------|
| 0 | 0 | 39811 | 40K - 60K € | 0.63295  | 19 | False | 40% | 0.212653 | 1 | 2018-01-01 00:00:00.000 | 0.99 | 1 |
| 1 | 1 | 12679 | 40K - 60K € | 0.718627 | 7  | True  | 10% | 4.92755  | 0 | 2018-01-01 00:08:43.152 | 0.07 | 0 |
| 2 | 2 | 19847 | 40K - 60K € | 0.721724 | 17 | False | 0%  | 0.520817 | 1 | 2018-01-01 00:17:26.304 | 1    | 1 |
| 3 | 3 | 22652 | 20K - 20K € | 0.705992 | 16 | False | 10% | 0.453649 | 1 | 2018-01-01 00:26:09.456 | 0.98 | 1 |
| 4 | 4 | 21268 | 60K+ €      | 0.671888 | 21 | True  | 30% | 5.69526  | 1 | 2018-01-01 00:34:52.608 | 0.99 | 1 |

The SummaryStatsMedianCalculator class implements the functionality needed for the median value calculation. We need to instantiate it with appropriate parameters:

  • column_names: A list with the names of columns to be evaluated.

  • timestamp_column_name (Optional): The name of the column in the reference data that contains timestamps.

  • chunk_size (Optional): The number of observations in each chunk of data used. Only one chunking argument needs to be provided. For more information about chunking configurations check out the chunking tutorial.

  • chunk_number (Optional): The number of chunks to be created out of data provided for each period.

  • chunk_period (Optional): The time period based on which we aggregate the provided data in order to create chunks.

  • chunker (Optional): A NannyML Chunker object that will handle the aggregation of the provided data in order to create chunks.

  • threshold (Optional): The threshold strategy used to calculate the alert threshold limits. For more information about thresholds, check out the thresholds tutorial.

>>> feature_column_names = [
...     'car_value', 'debt_to_income_ratio', 'driver_tenure'
... ]
>>> calc = nml.SummaryStatsMedianCalculator(
...     column_names=feature_column_names,
... )
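The chunking arguments are mutually exclusive ways of arriving at the same partitioning; a rough sketch of how chunk_size and chunk_number relate on the 50,000-row periods of this dataset (exact remainder handling is up to the Chunker):

```python
import math

n_rows = 50_000  # rows in one period of this dataset

# chunk_size: you fix the rows per chunk, the chunk count follows.
chunk_size = 5_000
n_chunks = math.ceil(n_rows / chunk_size)

# chunk_number: you fix the chunk count, the rows per chunk follow.
chunk_number = 10
rows_per_chunk = n_rows // chunk_number

print(n_chunks, rows_per_chunk)  # 10 chunks of 5000 rows: keys [0:4999] ... [45000:49999]
```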

Next, the fit() method needs to be called on the reference data, which provides the baseline that the analysis data will be compared with for alert generation. The calculate() method then computes the summary statistics results on the data provided to it.

The results can be filtered to only include a certain data period or column by using the filter method. You can then examine the result data by converting it into a DataFrame with the to_df() method. By default this returns a DataFrame with a multi-level column index: the first level is the monitored column, and the second level holds the resulting information, such as the statistic's value, the alert thresholds, and the associated sampling error.
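As a sketch of that multi-level layout, here is a toy frame built with plain pandas (one row, values borrowed from the first reference chunk of this tutorial); individual metrics are selected with a tuple key:

```python
import pandas as pd

# Level 0 = monitored column, level 1 = resulting metric.
columns = pd.MultiIndex.from_product(
    [['car_value', 'driver_tenure'], ['value', 'sampling_error', 'alert']]
)
results_df = pd.DataFrame(
    [[21985.5, 285.398, False, 5.64235, 0.0403319, False]],
    columns=columns,
)

# Select one metric for one monitored column with a (column, metric) tuple.
print(results_df[('car_value', 'value')].iloc[0])  # 21985.5
```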

>>> calc.fit(reference_df)
>>> results = calc.calculate(analysis_df)
>>> display(results.filter(period='all').to_df())

The chunk-level information is shown first, followed by one table per monitored column. (The start_date and end_date columns are empty in this run, since no timestamp column was configured.)

chunk

| chunk | key | chunk_index | start_index | end_index | period |
|---|---|---|---|---|---|
| 0 | [0:4999] | 0 | 0 | 4999 | reference |
| 1 | [5000:9999] | 1 | 5000 | 9999 | reference |
| 2 | [10000:14999] | 2 | 10000 | 14999 | reference |
| 3 | [15000:19999] | 3 | 15000 | 19999 | reference |
| 4 | [20000:24999] | 4 | 20000 | 24999 | reference |
| 5 | [25000:29999] | 5 | 25000 | 29999 | reference |
| 6 | [30000:34999] | 6 | 30000 | 34999 | reference |
| 7 | [35000:39999] | 7 | 35000 | 39999 | reference |
| 8 | [40000:44999] | 8 | 40000 | 44999 | reference |
| 9 | [45000:49999] | 9 | 45000 | 49999 | reference |
| 10 | [0:4999] | 0 | 0 | 4999 | analysis |
| 11 | [5000:9999] | 1 | 5000 | 9999 | analysis |
| 12 | [10000:14999] | 2 | 10000 | 14999 | analysis |
| 13 | [15000:19999] | 3 | 15000 | 19999 | analysis |
| 14 | [20000:24999] | 4 | 20000 | 24999 | analysis |
| 15 | [25000:29999] | 5 | 25000 | 29999 | analysis |
| 16 | [30000:34999] | 6 | 30000 | 34999 | analysis |
| 17 | [35000:39999] | 7 | 35000 | 39999 | analysis |
| 18 | [40000:44999] | 8 | 40000 | 44999 | analysis |
| 19 | [45000:49999] | 9 | 45000 | 49999 | analysis |

car_value

| chunk | value | sampling_error | upper_confidence_boundary | lower_confidence_boundary | upper_threshold | lower_threshold | alert |
|---|---|---|---|---|---|---|---|
| 0 | 21985.5 | 285.398 | 22841.7 | 21129.3 | 22490.8 | 21291.6 | False |
| 1 | 21970.5 | 285.398 | 22826.7 | 21114.3 | 22490.8 | 21291.6 | False |
| 2 | 21932 | 285.398 | 22788.2 | 21075.8 | 22490.8 | 21291.6 | False |
| 3 | 21621 | 285.398 | 22477.2 | 20764.8 | 22490.8 | 21291.6 | False |
| 4 | 21642 | 285.398 | 22498.2 | 20785.8 | 22490.8 | 21291.6 | False |
| 5 | 21886.5 | 285.398 | 22742.7 | 21030.3 | 22490.8 | 21291.6 | False |
| 6 | 22212.5 | 285.398 | 23068.7 | 21356.3 | 22490.8 | 21291.6 | False |
| 7 | 21681.5 | 285.398 | 22537.7 | 20825.3 | 22490.8 | 21291.6 | False |
| 8 | 22191 | 285.398 | 23047.2 | 21334.8 | 22490.8 | 21291.6 | False |
| 9 | 21789.5 | 285.398 | 22645.7 | 20933.3 | 22490.8 | 21291.6 | False |
| 10 | 22009.5 | 285.398 | 22865.7 | 21153.3 | 22490.8 | 21291.6 | False |
| 11 | 22053 | 285.398 | 22909.2 | 21196.8 | 22490.8 | 21291.6 | False |
| 12 | 22026 | 285.398 | 22882.2 | 21169.8 | 22490.8 | 21291.6 | False |
| 13 | 21994 | 285.398 | 22850.2 | 21137.8 | 22490.8 | 21291.6 | False |
| 14 | 21952.5 | 285.398 | 22808.7 | 21096.3 | 22490.8 | 21291.6 | False |
| 15 | 43820.5 | 285.398 | 44676.7 | 42964.3 | 22490.8 | 21291.6 | True |
| 16 | 45036.5 | 285.398 | 45892.7 | 44180.3 | 22490.8 | 21291.6 | True |
| 17 | 45215.5 | 285.398 | 46071.7 | 44359.3 | 22490.8 | 21291.6 | True |
| 18 | 44852.5 | 285.398 | 45708.7 | 43996.3 | 22490.8 | 21291.6 | True |
| 19 | 44438 | 285.398 | 45294.2 | 43581.8 | 22490.8 | 21291.6 | True |

debt_to_income_ratio

| chunk | value | sampling_error | upper_confidence_boundary | lower_confidence_boundary | upper_threshold | lower_threshold | alert |
|---|---|---|---|---|---|---|---|
| 0 | 0.657796 | 0.00237049 | 0.664908 | 0.650685 | 0.665299 | 0.654625 | False |
| 1 | 0.657437 | 0.00237049 | 0.664549 | 0.650326 | 0.665299 | 0.654625 | False |
| 2 | 0.661076 | 0.00237049 | 0.668188 | 0.653965 | 0.665299 | 0.654625 | False |
| 3 | 0.658017 | 0.00237049 | 0.665128 | 0.650905 | 0.665299 | 0.654625 | False |
| 4 | 0.660874 | 0.00237049 | 0.667986 | 0.653763 | 0.665299 | 0.654625 | False |
| 5 | 0.660811 | 0.00237049 | 0.667923 | 0.6537 | 0.665299 | 0.654625 | False |
| 6 | 0.661339 | 0.00237049 | 0.66845 | 0.654228 | 0.665299 | 0.654625 | False |
| 7 | 0.663304 | 0.00237049 | 0.670416 | 0.656193 | 0.665299 | 0.654625 | False |
| 8 | 0.659974 | 0.00237049 | 0.667086 | 0.652863 | 0.665299 | 0.654625 | False |
| 9 | 0.658992 | 0.00237049 | 0.666103 | 0.651881 | 0.665299 | 0.654625 | False |
| 10 | 0.663227 | 0.00237049 | 0.670339 | 0.656116 | 0.665299 | 0.654625 | False |
| 11 | 0.658009 | 0.00237049 | 0.66512 | 0.650897 | 0.665299 | 0.654625 | False |
| 12 | 0.655704 | 0.00237049 | 0.662815 | 0.648592 | 0.665299 | 0.654625 | False |
| 13 | 0.662317 | 0.00237049 | 0.669429 | 0.655206 | 0.665299 | 0.654625 | False |
| 14 | 0.662998 | 0.00237049 | 0.67011 | 0.655887 | 0.665299 | 0.654625 | False |
| 15 | 0.660507 | 0.00237049 | 0.667619 | 0.653396 | 0.665299 | 0.654625 | False |
| 16 | 0.662112 | 0.00237049 | 0.669224 | 0.655001 | 0.665299 | 0.654625 | False |
| 17 | 0.661659 | 0.00237049 | 0.66877 | 0.654547 | 0.665299 | 0.654625 | False |
| 18 | 0.662502 | 0.00237049 | 0.669614 | 0.655391 | 0.665299 | 0.654625 | False |
| 19 | 0.661043 | 0.00237049 | 0.668154 | 0.653931 | 0.665299 | 0.654625 | False |

driver_tenure

| chunk | value | sampling_error | upper_confidence_boundary | lower_confidence_boundary | upper_threshold | lower_threshold | alert |
|---|---|---|---|---|---|---|---|
| 0 | 5.64235 | 0.0403319 | 5.76335 | 5.52136 | 5.73436 | 5.43698 | False |
| 1 | 5.64975 | 0.0403319 | 5.77075 | 5.52875 | 5.73436 | 5.43698 | False |
| 2 | 5.51535 | 0.0403319 | 5.63634 | 5.39435 | 5.73436 | 5.43698 | False |
| 3 | 5.60735 | 0.0403319 | 5.72835 | 5.48636 | 5.73436 | 5.43698 | False |
| 4 | 5.59006 | 0.0403319 | 5.71106 | 5.46907 | 5.73436 | 5.43698 | False |
| 5 | 5.59278 | 0.0403319 | 5.71378 | 5.47179 | 5.73436 | 5.43698 | False |
| 6 | 5.52495 | 0.0403319 | 5.64595 | 5.40396 | 5.73436 | 5.43698 | False |
| 7 | 5.55446 | 0.0403319 | 5.67545 | 5.43346 | 5.73436 | 5.43698 | False |
| 8 | 5.64926 | 0.0403319 | 5.77026 | 5.52827 | 5.73436 | 5.43698 | False |
| 9 | 5.5304 | 0.0403319 | 5.65139 | 5.4094 | 5.73436 | 5.43698 | False |
| 10 | 5.47825 | 0.0403319 | 5.59925 | 5.35726 | 5.73436 | 5.43698 | False |
| 11 | 5.54058 | 0.0403319 | 5.66157 | 5.41958 | 5.73436 | 5.43698 | False |
| 12 | 5.67845 | 0.0403319 | 5.79944 | 5.55745 | 5.73436 | 5.43698 | False |
| 13 | 5.61623 | 0.0403319 | 5.73723 | 5.49524 | 5.73436 | 5.43698 | False |
| 14 | 5.57528 | 0.0403319 | 5.69627 | 5.45428 | 5.73436 | 5.43698 | False |
| 15 | 5.59118 | 0.0403319 | 5.71218 | 5.47019 | 5.73436 | 5.43698 | False |
| 16 | 5.5786 | 0.0403319 | 5.6996 | 5.45761 | 5.73436 | 5.43698 | False |
| 17 | 5.53342 | 0.0403319 | 5.65442 | 5.41243 | 5.73436 | 5.43698 | False |
| 18 | 5.58562 | 0.0403319 | 5.70662 | 5.46462 | 5.73436 | 5.43698 | False |
| 19 | 5.58789 | 0.0403319 | 5.70889 | 5.4669 | 5.73436 | 5.43698 | False |

More information on accessing the information contained in the Result can be found on the Working with results page.
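In this output the confidence boundaries are the chunk value plus or minus three times the sampling error, which we can check against the first car_value chunk of the table above:

```python
# Chunk 0 of car_value from the results table.
value = 21985.5
sampling_error = 285.398

upper_cb = value + 3 * sampling_error
lower_cb = value - 3 * sampling_error
print(round(upper_cb, 1), round(lower_cb, 1))  # 22841.7 21129.3
```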

The next step is visualizing the results, which is done using the plot() method. It is recommended to filter the results per column and plot each one separately.

>>> for column_name in results.column_names:
...     results.filter(column_names=column_name).plot().show()

[Plots: median per chunk for car_value, debt_to_income_ratio and driver_tenure, with confidence bands, thresholds and alerts]

Insights

We see that only the car_value column exhibits a change in median value, raising alerts on the last five analysis chunks.

What Next

We can inspect the dataset for other Summary Statistics, such as the Standard Deviation, or look for any Data Drift present in the dataset using the Detecting Data Drift functionality of NannyML.