Unseen Values Detection
Just The Code
>>> import nannyml as nml
>>> from IPython.display import display
>>> reference_df, analysis_df, analysis_targets_df = nml.load_titanic_dataset()
>>> display(reference_df.head())
>>> feature_column_names = [
...     'Sex', 'Ticket', 'Cabin', 'Embarked',
... ]
>>> calc = nml.UnseenValuesCalculator(
...     column_names=feature_column_names,
... )
>>> calc.fit(reference_df)
>>> results = calc.calculate(analysis_df)
>>> display(results.filter(period='all').to_df())
>>> for column_name in results.column_names:
...     results.filter(column_names=column_name).plot().show()
Walkthrough
NannyML defines unseen values as categorical feature values that are not present in the reference period.

NannyML's approach to unseen values detection is simple. The reference period is used to create a set of expected values for each categorical feature. For each chunk in the analysis period, NannyML calculates the number of unseen values. There is an option, called normalize, to convert the count of unseen values to a relative ratio if needed. If unseen values are detected in a chunk, an alert is raised for the relevant feature.
We begin by loading the Titanic dataset provided by the NannyML package.
>>> import nannyml as nml
>>> from IPython.display import display
>>> reference_df, analysis_df, analysis_targets_df = nml.load_titanic_dataset()
>>> display(reference_df.head())
| | PassengerId | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked | boat | body | home.dest | Survived |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 1 | 3 | Braund, Mr. Owen Harris | male | 22 | 1 | 0 | A/5 21171 | 7.25 | nan | S | nan | nan | Bridgerule, Devon | 0 |
| 1 | 2 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Thayer) | female | 38 | 1 | 0 | PC 17599 | 71.2833 | C85 | C | 4 | nan | New York, NY | 1 |
| 2 | 3 | 3 | Heikkinen, Miss. Laina | female | 26 | 0 | 0 | STON/O2. 3101282 | 7.925 | nan | S | nan | nan | nan | 1 |
| 3 | 4 | 1 | Futrelle, Mrs. Jacques Heath (Lily May Peel) | female | 35 | 1 | 0 | 113803 | 53.1 | C123 | S | D | nan | Scituate, MA | 1 |
| 4 | 5 | 3 | Allen, Mr. William Henry | male | 35 | 0 | 0 | 373450 | 8.05 | nan | S | nan | nan | Lower Clapton, Middlesex or Erdington, Birmingham | 0 |
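To make the definition above concrete, the snippet below sketches the underlying idea in plain pandas. It is only an illustration of the computation, not NannyML's actual implementation; the column chosen and the slice used as a stand-in chunk are arbitrary.

>>> # values observed for one categorical column during the reference period
>>> expected_values = set(reference_df['Embarked'].unique())
>>> # take an arbitrary slice of the analysis data as a stand-in for one chunk
>>> analysis_chunk = analysis_df.head(100)
>>> # rows whose value was never seen in the reference period
>>> unseen_mask = ~analysis_chunk['Embarked'].isin(expected_values)
>>> unseen_count = unseen_mask.sum()                   # what normalize=False reports
>>> unseen_ratio = unseen_count / len(analysis_chunk)  # what normalize=True reports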
The UnseenValuesCalculator class implements the functionality needed for unseen values calculations. We need to instantiate it with appropriate parameters:
column_names: A list with the names of the columns to be evaluated. They need to be categorical columns.
normalize (Optional): A boolean indicating whether to report the absolute count of unseen value instances or their relative ratio. By default it is set to True, so ratios are reported.
timestamp_column_name (Optional): The name of the column in the reference data that contains timestamps.
chunk_size (Optional): The number of observations in each chunk of data used. Only one chunking argument needs to be provided. For more information about chunking configurations check out the chunking tutorial.
chunk_number (Optional): The number of chunks to be created out of the data provided for each period.
chunk_period (Optional): The time period based on which we aggregate the provided data in order to create chunks.
chunker (Optional): A NannyML Chunker object that will handle the aggregation of the provided data in order to create chunks.
thresholds (Optional): The threshold strategy used to calculate the alert threshold limits. For more information about thresholds, check out the thresholds tutorial.
Warning
Note that, because of how unseen values are defined, they will always be 0 for the reference period. Hence the StandardDeviationThreshold option is not really applicable for this calculator.
>>> feature_column_names = [
...     'Sex', 'Ticket', 'Cabin', 'Embarked',
... ]
>>> calc = nml.UnseenValuesCalculator(
...     column_names=feature_column_names,
... )
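The optional arguments can be combined as needed. For instance, a configuration that reports absolute counts rather than ratios and chunks the data by a fixed number of observations could look like the following; the chunk size chosen here is purely illustrative.

>>> calc = nml.UnseenValuesCalculator(
...     column_names=feature_column_names,
...     normalize=False,  # report absolute counts instead of ratios
...     chunk_size=100,   # 100 observations per chunk
... )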
Next, the fit() method needs to be called on the reference data, which provides the baseline that the analysis data will be compared with for alert generation. Then the calculate() method will calculate the data quality results on the data provided to it.

The results can be filtered to only include a certain data period, method or column by using the filter method. You can evaluate the result data by converting the results into a DataFrame, by calling the to_df() method. By default this will return a DataFrame with a multi-level index. The first level represents the column, the second level represents resulting information such as the data quality metric values and the alert thresholds.
>>> calc.fit(reference_df)
>>> results = calc.calculate(analysis_df)
>>> display(results.filter(period='all').to_df())
| | key | chunk_index | start_index | end_index | start_date | end_date | period | Sex value | Sex upper_threshold | Sex lower_threshold | Sex alert | Ticket value | Ticket upper_threshold | Ticket lower_threshold | Ticket alert | Cabin value | Cabin upper_threshold | Cabin lower_threshold | Cabin alert | Embarked value | Embarked upper_threshold | Embarked lower_threshold | Embarked alert |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | [0:88] | 0 | 0 | 88 | | | reference | 0 | 0 | | False | 0 | 0 | | False | 0 | 0 | | False | 0 | 0 | | False |
| 1 | [89:177] | 1 | 89 | 177 | | | reference | 0 | 0 | | False | 0 | 0 | | False | 0 | 0 | | False | 0 | 0 | | False |
| 2 | [178:266] | 2 | 178 | 266 | | | reference | 0 | 0 | | False | 0 | 0 | | False | 0 | 0 | | False | 0 | 0 | | False |
| 3 | [267:355] | 3 | 267 | 355 | | | reference | 0 | 0 | | False | 0 | 0 | | False | 0 | 0 | | False | 0 | 0 | | False |
| 4 | [356:444] | 4 | 356 | 444 | | | reference | 0 | 0 | | False | 0 | 0 | | False | 0 | 0 | | False | 0 | 0 | | False |
| 5 | [445:533] | 5 | 445 | 533 | | | reference | 0 | 0 | | False | 0 | 0 | | False | 0 | 0 | | False | 0 | 0 | | False |
| 6 | [534:622] | 6 | 534 | 622 | | | reference | 0 | 0 | | False | 0 | 0 | | False | 0 | 0 | | False | 0 | 0 | | False |
| 7 | [623:711] | 7 | 623 | 711 | | | reference | 0 | 0 | | False | 0 | 0 | | False | 0 | 0 | | False | 0 | 0 | | False |
| 8 | [712:800] | 8 | 712 | 800 | | | reference | 0 | 0 | | False | 0 | 0 | | False | 0 | 0 | | False | 0 | 0 | | False |
| 9 | [801:889] | 9 | 801 | 889 | | | reference | 0 | 0 | | False | 0 | 0 | | False | 0 | 0 | | False | 0 | 0 | | False |
| 10 | [890:890] | 10 | 890 | 890 | | | reference | 0 | 0 | | False | 0 | 0 | | False | 0 | 0 | | False | 0 | 0 | | False |
| 11 | [0:40] | 0 | 0 | 40 | | | analysis | 0 | 0 | | False | 0.609756 | 0 | | True | 0.0731707 | 0 | | True | 0 | 0 | | False |
| 12 | [41:81] | 1 | 41 | 81 | | | analysis | 0 | 0 | | False | 0.634146 | 0 | | True | 0.219512 | 0 | | True | 0 | 0 | | False |
| 13 | [82:122] | 2 | 82 | 122 | | | analysis | 0 | 0 | | False | 0.731707 | 0 | | True | 0.146341 | 0 | | True | 0 | 0 | | False |
| 14 | [123:163] | 3 | 123 | 163 | | | analysis | 0 | 0 | | False | 0.634146 | 0 | | True | 0.0731707 | 0 | | True | 0 | 0 | | False |
| 15 | [164:204] | 4 | 164 | 204 | | | analysis | 0 | 0 | | False | 0.536585 | 0 | | True | 0.097561 | 0 | | True | 0 | 0 | | False |
| 16 | [205:245] | 5 | 205 | 245 | | | analysis | 0 | 0 | | False | 0.658537 | 0 | | True | 0.0731707 | 0 | | True | 0 | 0 | | False |
| 17 | [246:286] | 6 | 246 | 286 | | | analysis | 0 | 0 | | False | 0.756098 | 0 | | True | 0.0731707 | 0 | | True | 0 | 0 | | False |
| 18 | [287:327] | 7 | 287 | 327 | | | analysis | 0 | 0 | | False | 0.707317 | 0 | | True | 0.097561 | 0 | | True | 0 | 0 | | False |
| 19 | [328:368] | 8 | 328 | 368 | | | analysis | 0 | 0 | | False | 0.536585 | 0 | | True | 0.0487805 | 0 | | True | 0 | 0 | | False |
| 20 | [369:409] | 9 | 369 | 409 | | | analysis | 0 | 0 | | False | 0.560976 | 0 | | True | 0.195122 | 0 | | True | 0 | 0 | | False |
| 21 | [410:417] | 10 | 410 | 417 | | | analysis | 0 | 0 | | False | 0.625 | 0 | | True | 0.125 | 0 | | True | 0 | 0 | | False |
More information on accessing the information contained in the Result can be found on the Working with results page.
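For example, because the returned DataFrame has a two-level column index, a single feature's results can be selected with tuple keys. The snippet below is a small usage sketch based on the results computed above, assuming the column and metric names shown in the table:

>>> results_df = results.filter(period='analysis').to_df()
>>> # unseen value ratio and alert flag for the 'Ticket' column
>>> display(results_df[[('Ticket', 'value'), ('Ticket', 'alert')]])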
The next step is visualizing the results, which is done using the plot() method. It is recommended to filter the results for each column and plot them separately.
>>> for column_name in results.column_names:
...     results.filter(column_names=column_name).plot().show()
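The figures returned by plot() are Plotly figures, so instead of showing them interactively they can also be written to disk. The sketch below saves one static image per column; it assumes the kaleido package is installed for static image export, and the file names are arbitrary.

>>> for column_name in results.column_names:
...     fig = results.filter(column_names=column_name).plot()
...     fig.write_image(f'unseen_values_{column_name}.png')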
Insights
We see that most of the dataset columns don't have unseen values. The Ticket and Cabin columns are the most interesting with regard to unseen values.
What Next
We can also inspect the dataset for missing values in the Missing Values Tutorial. Then we can look for any data drift present in the dataset using the Detecting Data Drift functionality of NannyML.