nannyml.performance_calculation.result module

Contains the results of the realized performance calculation and provides filtering and plotting functionality.

class nannyml.performance_calculation.result.Result(results_data: pandas.core.frame.DataFrame, problem_type: nannyml._typing.ProblemType, y_pred: str, y_pred_proba: Optional[Union[str, Dict[str, str]]], y_true: str, metrics: List[nannyml.performance_calculation.metrics.base.Metric], timestamp_column_name: Optional[str] = None, reference_data: Optional[pandas.core.frame.DataFrame] = None, analysis_data: Optional[pandas.core.frame.DataFrame] = None)[source]

Bases: nannyml.base.PerMetricResult[nannyml.performance_calculation.metrics.base.Metric], nannyml.plots.blueprints.comparisons.ResultCompareMixin

Wraps performance calculation results and provides filtering and plotting functionality.

Creates a new Result instance.

Parameters
  • results_data (pd.DataFrame) – Results data returned by a performance calculator.

  • problem_type (ProblemType) –

    The type of machine learning problem, which determines how metrics are calculated. Allowed values are:

    • 'regression'

    • 'classification_binary'

    • 'classification_multiclass'

  • y_pred (str) – The name of the column containing your model predictions.

  • y_pred_proba (Union[str, Dict[str, str]]) –

    Name(s) of the column(s) containing your model output.

    • For binary classification, pass a single string referring to the model output column.

    • For multiclass classification, pass a dictionary that maps a class string to the column name containing model outputs for that class.

  • y_true (str) – The name of the column containing target values (these must be provided in the reference data during fitting).

  • metrics (List[nannyml.performance_calculation.metrics.base.Metric]) – List of metrics to evaluate.

  • timestamp_column_name (str, default=None) – The name of the column containing the timestamp of the model prediction. If not given, plots will not use a time-based x-axis but will use the index of the chunks instead.

  • reference_data (pd.DataFrame, default=None) – The reference data used for fitting. Must have target data available.

  • analysis_data (pd.DataFrame, default=None) – The data on which NannyML calculates the performance.

keys() List[nannyml._typing.Key][source]

Creates a list of keys, where each key is a namedtuple('Key', 'properties display_names').
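The Key structure can be sketched with a plain namedtuple. This is illustrative only: the actual class is defined in nannyml._typing, and the exact property and display-name values depend on the metrics configured.

```python
from collections import namedtuple

# Illustrative stand-in for nannyml._typing.Key, which pairs internal
# property names with human-readable display names.
Key = namedtuple('Key', 'properties display_names')

# A hypothetical key such as keys() might return for an F1 metric.
key = Key(properties=('f1',), display_names=('F1',))
print(key.properties)      # ('f1',)
print(key.display_names)   # ('F1',)
```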

metrics: List[nannyml.performance_calculation.metrics.base.Metric]
plot(kind: str = 'performance', *args, **kwargs) plotly.graph_objs._figure.Figure[source]

Render realized performance metrics. This function returns a plotly.graph_objects.Figure object.

Parameters

kind (str, default='performance') – The kind of plot to render. Only the 'performance' plot is currently available.

Raises

InvalidArgumentsException – When an unknown plot kind is provided.

Returns

fig – A Figure object containing the requested performance plot.

Can be saved to disk using the write_image() method or rendered on screen using the show() method.

Return type

plotly.graph_objs._figure.Figure

Examples

>>> import nannyml as nml
>>>
>>> reference_df, analysis_df, target_df = nml.load_synthetic_binary_classification_dataset()
>>>
>>> calc = nml.PerformanceCalculator(y_true='work_home_actual', y_pred='y_pred', y_pred_proba='y_pred_proba',
>>>                                  problem_type='classification_binary', timestamp_column_name='timestamp',
>>>                                  metrics=['f1', 'roc_auc'])
>>>
>>> calc.fit(reference_df)
>>>
>>> results = calc.calculate(analysis_df.merge(target_df, on='identifier'))
>>> print(results.data)
             key  start_index  ...  roc_auc_upper_threshold roc_auc_alert
0       [0:4999]            0  ...                  0.97866         False
1    [5000:9999]         5000  ...                  0.97866         False
2  [10000:14999]        10000  ...                  0.97866         False
3  [15000:19999]        15000  ...                  0.97866         False
4  [20000:24999]        20000  ...                  0.97866         False
5  [25000:29999]        25000  ...                  0.97866          True
6  [30000:34999]        30000  ...                  0.97866          True
7  [35000:39999]        35000  ...                  0.97866          True
8  [40000:44999]        40000  ...                  0.97866          True
9  [45000:49999]        45000  ...                  0.97866          True
>>> for metric in calc.metrics:
>>>     results.plot(metric=metric, plot_reference=True).show()
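
Results can also be narrowed down before plotting. The snippet below is a sketch continuing the example above; it assumes the filter() method inherited from PerMetricResult and the 'analysis' period label:

>>> filtered_results = results.filter(period='analysis', metrics=['f1'])
>>> filtered_results.plot().show()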