Standard Deviation

Just The Code

>>> import nannyml as nml
>>> from IPython.display import display

>>> reference_df, analysis_df, analysis_targets_df = nml.load_synthetic_car_loan_dataset()
>>> display(reference_df.head())

>>> feature_column_names = [
...     'car_value', 'debt_to_income_ratio', 'driver_tenure'
... ]
>>> calc = nml.SummaryStatsStdCalculator(
...     column_names=feature_column_names,
... )

>>> calc.fit(reference_df)
>>> results = calc.calculate(analysis_df)
>>> display(results.filter(period='all').to_df())

>>> for column_name in results.column_names:
...     results.filter(column_names=column_name).plot().show()

Walkthrough

The standard deviation calculation is straightforward. For each chunk, NannyML calculates the standard deviation of every selected numerical column. The values from the reference data chunks are used to calculate the alert thresholds. The standard deviation values from the analysis chunks are then compared against those thresholds, and alerts are raised where a threshold is breached.
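The per-chunk computation and the threshold logic can be sketched in plain pandas. This is a toy illustration, not NannyML's implementation; it assumes a threshold band built from the mean of the reference chunk values plus or minus a multiple (three here) of their spread.

```python
import numpy as np
import pandas as pd

def chunk_std(series: pd.Series, chunk_size: int) -> pd.Series:
    """Standard deviation of each contiguous chunk of `chunk_size` rows."""
    return series.groupby(np.arange(len(series)) // chunk_size).std(ddof=1)

def std_thresholds(reference_values: pd.Series, multiplier: float = 3.0):
    """Alert band: mean of the reference chunk values +/- multiplier * their spread."""
    center, spread = reference_values.mean(), reference_values.std(ddof=1)
    return center - multiplier * spread, center + multiplier * spread

rng = np.random.default_rng(0)
reference = pd.Series(rng.normal(0.0, 1.0, 5_000))  # stable spread
analysis = pd.Series(rng.normal(0.0, 1.6, 5_000))   # drifted spread

ref_stds = chunk_std(reference, chunk_size=500)     # ten reference chunks
lower, upper = std_thresholds(ref_stds)
ana_stds = chunk_std(analysis, chunk_size=500)
alerts = (ana_stds < lower) | (ana_stds > upper)    # True where a threshold is breached
```

Because the analysis data was generated with a larger spread, its per-chunk standard deviations fall outside the band derived from the reference chunks.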

We begin by loading the synthetic car loan dataset provided by the NannyML package.

>>> import nannyml as nml
>>> from IPython.display import display

>>> reference_df, analysis_df, analysis_targets_df = nml.load_synthetic_car_loan_dataset()
>>> display(reference_df.head())

|   | id | car_value | salary_range | debt_to_income_ratio | loan_length | repaid_loan_on_prev_car | size_of_downpayment | driver_tenure | repaid | timestamp | y_pred_proba | y_pred |
|---|----|-----------|--------------|----------------------|-------------|-------------------------|---------------------|---------------|--------|-----------|--------------|--------|
| 0 | 0 | 39811 | 40K - 60K € | 0.63295 | 19 | False | 40% | 0.212653 | 1 | 2018-01-01 00:00:00.000 | 0.99 | 1 |
| 1 | 1 | 12679 | 40K - 60K € | 0.718627 | 7 | True | 10% | 4.92755 | 0 | 2018-01-01 00:08:43.152 | 0.07 | 0 |
| 2 | 2 | 19847 | 40K - 60K € | 0.721724 | 17 | False | 0% | 0.520817 | 1 | 2018-01-01 00:17:26.304 | 1 | 1 |
| 3 | 3 | 22652 | 20K - 40K € | 0.705992 | 16 | False | 10% | 0.453649 | 1 | 2018-01-01 00:26:09.456 | 0.98 | 1 |
| 4 | 4 | 21268 | 60K+ € | 0.671888 | 21 | True | 30% | 5.69526 | 1 | 2018-01-01 00:34:52.608 | 0.99 | 1 |

The SummaryStatsStdCalculator class implements the functionality needed for standard deviation calculations. We instantiate it with the appropriate parameters:

  • column_names: A list with the names of columns to be evaluated.

  • timestamp_column_name (Optional): The name of the column in the reference data that contains timestamps.

  • chunk_size (Optional): The number of observations in each chunk of data used. Only one chunking argument needs to be provided. For more information about chunking configurations check out the chunking tutorial.

  • chunk_number (Optional): The number of chunks to be created out of data provided for each period.

  • chunk_period (Optional): The time period based on which we aggregate the provided data in order to create chunks.

  • chunker (Optional): A NannyML Chunker object that will handle the aggregation of the provided data in order to create chunks.

  • threshold (Optional): The threshold strategy used to calculate the alert threshold limits. For more information about thresholds, check out the thresholds tutorial.
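To make the chunking options concrete, here is a toy sketch of count-based chunking along the lines of the chunk_number parameter. This only illustrates the idea of contiguous, near-equal chunks; it is not NannyML's Chunker.

```python
import numpy as np
import pandas as pd

def split_into_chunks(df: pd.DataFrame, chunk_number: int) -> list:
    """Split a frame into `chunk_number` contiguous, near-equal chunks (toy sketch)."""
    edges = np.linspace(0, len(df), chunk_number + 1, dtype=int)
    return [df.iloc[start:end] for start, end in zip(edges[:-1], edges[1:])]

df = pd.DataFrame({"car_value": np.arange(50_000)})
chunks = split_into_chunks(df, chunk_number=10)  # ten chunks of 5000 rows each
```

A chunk_size-based strategy works the same way but fixes the rows per chunk instead of the number of chunks, and a chunk_period strategy groups rows by a time window on the timestamp column.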

>>> feature_column_names = [
...     'car_value', 'debt_to_income_ratio', 'driver_tenure'
... ]
>>> calc = nml.SummaryStatsStdCalculator(
...     column_names=feature_column_names,
... )

Next, the fit() method needs to be called on the reference data, which provides the baseline that the analysis data will be compared against for alert generation. Then the calculate() method will calculate the summary statistics results on the data provided to it.

The results can be filtered to only include a certain data period or column by using the filter() method. You can evaluate the result data by converting the results into a DataFrame with the to_df() method. By default this returns a DataFrame with a multi-level column index: the first level represents the column, and the second level holds the resulting information, such as the statistic values, the alert thresholds, and the associated sampling error.
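The multi-level layout is easiest to see on a toy frame. The numbers below are made up to mimic the shape described above, with the monitored column on the first level and the metric on the second.

```python
import pandas as pd

# Toy frame mimicking the two-level column layout of to_df();
# the values are made up for illustration.
columns = pd.MultiIndex.from_product(
    [["car_value", "driver_tenure"], ["value", "upper_threshold", "alert"]]
)
results_df = pd.DataFrame(
    [
        [20403.1, 20978.6, False, 2.29725, 2.3309, False],
        [22160.7, 20978.6, True, 2.33962, 2.3309, True],
    ],
    columns=columns,
)

car_values = results_df[("car_value", "value")]              # one metric as a Series
alert_rows = results_df[results_df[("car_value", "alert")]]  # chunks that alerted
```

Indexing with a tuple selects one (column, metric) pair, which is convenient for pulling out just the values or just the alert flags of a single monitored column.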

>>> calc.fit(reference_df)
>>> results = calc.calculate(analysis_df)
>>> display(results.filter(period='all').to_df())

car_value

| key | chunk_index | start_index | end_index | period | value | sampling_error | upper_confidence_boundary | lower_confidence_boundary | upper_threshold | lower_threshold | alert |
|-----|-------------|-------------|-----------|--------|-------|----------------|---------------------------|---------------------------|-----------------|-----------------|-------|
| [0:4999] | 0 | 0 | 4999 | reference | 20403.1 | 271.992 | 21219.1 | 19587.1 | 20978.6 | 19816.9 | False |
| [5000:9999] | 1 | 5000 | 9999 | reference | 20527.4 | 271.992 | 21343.4 | 19711.5 | 20978.6 | 19816.9 | False |
| [10000:14999] | 2 | 10000 | 14999 | reference | 20114.8 | 271.992 | 20930.8 | 19298.9 | 20978.6 | 19816.9 | False |
| [15000:19999] | 3 | 15000 | 19999 | reference | 20434.1 | 271.992 | 21250.1 | 19618.1 | 20978.6 | 19816.9 | False |
| [20000:24999] | 4 | 20000 | 24999 | reference | 20212.7 | 271.992 | 21028.7 | 19396.7 | 20978.6 | 19816.9 | False |
| [25000:29999] | 5 | 25000 | 29999 | reference | 20714.8 | 271.992 | 21530.7 | 19898.8 | 20978.6 | 19816.9 | False |
| [30000:34999] | 6 | 30000 | 34999 | reference | 20481.3 | 271.992 | 21297.3 | 19665.3 | 20978.6 | 19816.9 | False |
| [35000:39999] | 7 | 35000 | 39999 | reference | 20657.2 | 271.992 | 21473.2 | 19841.2 | 20978.6 | 19816.9 | False |
| [40000:44999] | 8 | 40000 | 44999 | reference | 20243.4 | 271.992 | 21059.4 | 19427.4 | 20978.6 | 19816.9 | False |
| [45000:49999] | 9 | 45000 | 49999 | reference | 20188.5 | 271.992 | 21004.5 | 19372.5 | 20978.6 | 19816.9 | False |
| [0:4999] | 0 | 0 | 4999 | analysis | 20614.9 | 271.992 | 21430.9 | 19798.9 | 20978.6 | 19816.9 | False |
| [5000:9999] | 1 | 5000 | 9999 | analysis | 20589.5 | 271.992 | 21405.5 | 19773.6 | 20978.6 | 19816.9 | False |
| [10000:14999] | 2 | 10000 | 14999 | analysis | 20463.2 | 271.992 | 21279.2 | 19647.2 | 20978.6 | 19816.9 | False |
| [15000:19999] | 3 | 15000 | 19999 | analysis | 20667 | 271.992 | 21483 | 19851 | 20978.6 | 19816.9 | False |
| [20000:24999] | 4 | 20000 | 24999 | analysis | 19758.5 | 271.992 | 20574.5 | 18942.5 | 20978.6 | 19816.9 | True |
| [25000:29999] | 5 | 25000 | 29999 | analysis | 21804.3 | 271.992 | 22620.3 | 20988.4 | 20978.6 | 19816.9 | True |
| [30000:34999] | 6 | 30000 | 34999 | analysis | 22160.7 | 271.992 | 22976.6 | 21344.7 | 20978.6 | 19816.9 | True |
| [35000:39999] | 7 | 35000 | 39999 | analysis | 21644.4 | 271.992 | 22460.4 | 20828.4 | 20978.6 | 19816.9 | True |
| [40000:44999] | 8 | 40000 | 44999 | analysis | 22013.2 | 271.992 | 22829.1 | 21197.2 | 20978.6 | 19816.9 | True |
| [45000:49999] | 9 | 45000 | 49999 | analysis | 22013.7 | 271.992 | 22829.7 | 21197.7 | 20978.6 | 19816.9 | True |

debt_to_income_ratio

| key | chunk_index | start_index | end_index | period | value | sampling_error | upper_confidence_boundary | lower_confidence_boundary | upper_threshold | lower_threshold | alert |
|-----|-------------|-------------|-----------|--------|-------|----------------|---------------------------|---------------------------|-----------------|-----------------|-------|
| [0:4999] | 0 | 0 | 4999 | reference | 0.154082 | 0.00124756 | 0.157824 | 0.150339 | 0.159073 | 0.151493 | False |
| [5000:9999] | 1 | 5000 | 9999 | reference | 0.157558 | 0.00124756 | 0.161301 | 0.153816 | 0.159073 | 0.151493 | False |
| [10000:14999] | 2 | 10000 | 14999 | reference | 0.15577 | 0.00124756 | 0.159513 | 0.152028 | 0.159073 | 0.151493 | False |
| [15000:19999] | 3 | 15000 | 19999 | reference | 0.156043 | 0.00124756 | 0.159786 | 0.152301 | 0.159073 | 0.151493 | False |
| [20000:24999] | 4 | 20000 | 24999 | reference | 0.155773 | 0.00124756 | 0.159515 | 0.15203 | 0.159073 | 0.151493 | False |
| [25000:29999] | 5 | 25000 | 29999 | reference | 0.156099 | 0.00124756 | 0.159842 | 0.152356 | 0.159073 | 0.151493 | False |
| [30000:34999] | 6 | 30000 | 34999 | reference | 0.15381 | 0.00124756 | 0.157553 | 0.150068 | 0.159073 | 0.151493 | False |
| [35000:39999] | 7 | 35000 | 39999 | reference | 0.153576 | 0.00124756 | 0.157319 | 0.149833 | 0.159073 | 0.151493 | False |
| [40000:44999] | 8 | 40000 | 44999 | reference | 0.156162 | 0.00124756 | 0.159904 | 0.152419 | 0.159073 | 0.151493 | False |
| [45000:49999] | 9 | 45000 | 49999 | reference | 0.153955 | 0.00124756 | 0.157697 | 0.150212 | 0.159073 | 0.151493 | False |
| [0:4999] | 0 | 0 | 4999 | analysis | 0.152418 | 0.00124756 | 0.156161 | 0.148675 | 0.159073 | 0.151493 | False |
| [5000:9999] | 1 | 5000 | 9999 | analysis | 0.155663 | 0.00124756 | 0.159405 | 0.15192 | 0.159073 | 0.151493 | False |
| [10000:14999] | 2 | 10000 | 14999 | analysis | 0.154717 | 0.00124756 | 0.158459 | 0.150974 | 0.159073 | 0.151493 | False |
| [15000:19999] | 3 | 15000 | 19999 | analysis | 0.15608 | 0.00124756 | 0.159823 | 0.152337 | 0.159073 | 0.151493 | False |
| [20000:24999] | 4 | 20000 | 24999 | analysis | 0.153575 | 0.00124756 | 0.157318 | 0.149832 | 0.159073 | 0.151493 | False |
| [25000:29999] | 5 | 25000 | 29999 | analysis | 0.155871 | 0.00124756 | 0.159613 | 0.152128 | 0.159073 | 0.151493 | False |
| [30000:34999] | 6 | 30000 | 34999 | analysis | 0.155253 | 0.00124756 | 0.158995 | 0.15151 | 0.159073 | 0.151493 | False |
| [35000:39999] | 7 | 35000 | 39999 | analysis | 0.155762 | 0.00124756 | 0.159505 | 0.152019 | 0.159073 | 0.151493 | False |
| [40000:44999] | 8 | 40000 | 44999 | analysis | 0.156886 | 0.00124756 | 0.160629 | 0.153143 | 0.159073 | 0.151493 | False |
| [45000:49999] | 9 | 45000 | 49999 | analysis | 0.155866 | 0.00124756 | 0.159609 | 0.152123 | 0.159073 | 0.151493 | False |

driver_tenure

| key | chunk_index | start_index | end_index | period | value | sampling_error | upper_confidence_boundary | lower_confidence_boundary | upper_threshold | lower_threshold | alert |
|-----|-------------|-------------|-----------|--------|-------|----------------|---------------------------|---------------------------|-----------------|-----------------|-------|
| [0:4999] | 0 | 0 | 4999 | reference | 2.29725 | 0.0173422 | 2.34928 | 2.24523 | 2.3309 | 2.27314 | False |
| [5000:9999] | 1 | 5000 | 9999 | reference | 2.29714 | 0.0173422 | 2.34916 | 2.24511 | 2.3309 | 2.27314 | False |
| [10000:14999] | 2 | 10000 | 14999 | reference | 2.29814 | 0.0173422 | 2.35016 | 2.24611 | 2.3309 | 2.27314 | False |
| [15000:19999] | 3 | 15000 | 19999 | reference | 2.28299 | 0.0173422 | 2.33502 | 2.23097 | 2.3309 | 2.27314 | False |
| [20000:24999] | 4 | 20000 | 24999 | reference | 2.31656 | 0.0173422 | 2.36859 | 2.26453 | 2.3309 | 2.27314 | False |
| [25000:29999] | 5 | 25000 | 29999 | reference | 2.30991 | 0.0173422 | 2.36194 | 2.25789 | 2.3309 | 2.27314 | False |
| [30000:34999] | 6 | 30000 | 34999 | reference | 2.3132 | 0.0173422 | 2.36522 | 2.26117 | 2.3309 | 2.27314 | False |
| [35000:39999] | 7 | 35000 | 39999 | reference | 2.3062 | 0.0173422 | 2.35822 | 2.25417 | 2.3309 | 2.27314 | False |
| [40000:44999] | 8 | 40000 | 44999 | reference | 2.30548 | 0.0173422 | 2.35751 | 2.25346 | 2.3309 | 2.27314 | False |
| [45000:49999] | 9 | 45000 | 49999 | reference | 2.29335 | 0.0173422 | 2.34537 | 2.24132 | 2.3309 | 2.27314 | False |
| [0:4999] | 0 | 0 | 4999 | analysis | 2.33962 | 0.0173422 | 2.39165 | 2.2876 | 2.3309 | 2.27314 | True |
| [5000:9999] | 1 | 5000 | 9999 | analysis | 2.30781 | 0.0173422 | 2.35984 | 2.25579 | 2.3309 | 2.27314 | False |
| [10000:14999] | 2 | 10000 | 14999 | analysis | 2.30841 | 0.0173422 | 2.36044 | 2.25639 | 2.3309 | 2.27314 | False |
| [15000:19999] | 3 | 15000 | 19999 | analysis | 2.31285 | 0.0173422 | 2.36488 | 2.26083 | 2.3309 | 2.27314 | False |
| [20000:24999] | 4 | 20000 | 24999 | analysis | 2.31019 | 0.0173422 | 2.36222 | 2.25817 | 2.3309 | 2.27314 | False |
| [25000:29999] | 5 | 25000 | 29999 | analysis | 2.31176 | 0.0173422 | 2.36379 | 2.25974 | 2.3309 | 2.27314 | False |
| [30000:34999] | 6 | 30000 | 34999 | analysis | 2.31126 | 0.0173422 | 2.36329 | 2.25923 | 2.3309 | 2.27314 | False |
| [35000:39999] | 7 | 35000 | 39999 | analysis | 2.31125 | 0.0173422 | 2.36328 | 2.25923 | 2.3309 | 2.27314 | False |
| [40000:44999] | 8 | 40000 | 44999 | analysis | 2.31088 | 0.0173422 | 2.36291 | 2.25885 | 2.3309 | 2.27314 | False |
| [45000:49999] | 9 | 45000 | 49999 | analysis | 2.30833 | 0.0173422 | 2.36035 | 2.2563 | 2.3309 | 2.27314 | False |

More information on accessing the information contained in the Result can be found on the Working with results page.

The next step is visualizing the results, which is done using the plot() method. It is recommended to filter results for each column and plot separately.

>>> for column_name in results.column_names:
...     results.filter(column_names=column_name).plot().show()
[Plots: std-car_value.svg, std-debt_to_income_ratio.svg, std-driver_tenure.svg]

Insights

We see that only the car_value column exhibits a sustained change in standard deviation. However, both car_value and driver_tenure appear to have one-off events where they show slightly abnormal values.

What Next

We can also inspect the dataset for other summary statistics, such as the Average, or look for any data drift present in the dataset using the Detecting Data Drift functionality of NannyML.