nannyml.performance_estimation.confidence_based.results module

Module containing CBPE estimation results and plotting implementations.

class nannyml.performance_estimation.confidence_based.results.Result(results_data: DataFrame, metrics: List[Metric], y_pred: str, y_pred_proba: Union[str, Dict[str, str]], y_true: str, chunker: Chunker, problem_type: ProblemType, timestamp_column_name: Optional[str] = None)[source]

Bases: AbstractEstimatorResult

Contains results for CBPE estimation and adds plotting functionality.

Creates a new Result instance.

Parameters:

results_data (pd.DataFrame) – The result data of the performed calculation.
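
A Result is typically obtained from a fitted estimator rather than constructed directly. A minimal sketch, assuming the synthetic binary classification dataset bundled with nannyml (see also the full example under plot() below):

>>> import nannyml as nml
>>>
>>> reference_df, analysis_df, _ = nml.load_synthetic_binary_classification_dataset()
>>> estimator = nml.CBPE(
...     y_true='work_home_actual',
...     y_pred='y_pred',
...     y_pred_proba='y_pred_proba',
...     timestamp_column_name='timestamp',
...     metrics=['roc_auc']
... )
>>> estimator.fit(reference_df)
>>> results = estimator.estimate(analysis_df)  # returns a Result instance
>>> results.data.head()  # the underlying results DataFrame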

plot(kind: str = 'performance', metric: Optional[Union[str, Metric]] = None, plot_reference: bool = False, *args, **kwargs) → Figure[source]

Render plots based on CBPE estimation results.

This function will return a plotly.graph_objects.Figure object. The following kinds of plots are available:

  • performance: a line plot rendering the estimated performance per Chunk after applying the estimate() method on a chunked dataset.

Parameters:
  • kind (str, default='performance') – The kind of plot to render. Only the ‘performance’ plot is currently available.

  • metric (Union[str, nannyml.performance_estimation.confidence_based.metrics.Metric]) – The metric to plot when rendering a plot of kind ‘performance’.

  • plot_reference (bool, default=False) – Indicates whether to include the reference period in the plot or not. Defaults to False.

Returns:

fig – A Figure object containing the requested performance estimation plot.

It can be saved to disk using the write_image() method or rendered on screen using the show() method.

Return type:

plotly.graph_objs._figure.Figure

Examples

>>> import nannyml as nml
>>>
>>> reference_df, analysis_df, target_df = nml.load_synthetic_binary_classification_dataset()
>>>
>>> estimator = nml.CBPE(
...     y_true='work_home_actual',
...     y_pred='y_pred',
...     y_pred_proba='y_pred_proba',
...     timestamp_column_name='timestamp',
...     metrics=['f1', 'roc_auc']
... )
>>>
>>> estimator.fit(reference_df)
>>>
>>> results = estimator.estimate(analysis_df)
>>> print(results.data)
             key  start_index  ...  lower_threshold_roc_auc alert_roc_auc
0       [0:4999]            0  ...                  0.97866         False
1    [5000:9999]         5000  ...                  0.97866         False
2  [10000:14999]        10000  ...                  0.97866         False
3  [15000:19999]        15000  ...                  0.97866         False
4  [20000:24999]        20000  ...                  0.97866         False
5  [25000:29999]        25000  ...                  0.97866          True
6  [30000:34999]        30000  ...                  0.97866          True
7  [35000:39999]        35000  ...                  0.97866          True
8  [40000:44999]        40000  ...                  0.97866          True
9  [45000:49999]        45000  ...                  0.97866          True
>>> for metric in estimator.metrics:
...     results.plot(metric=metric, plot_reference=True).show()
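
A single metric can also be requested by name, and since the returned object is a regular plotly Figure it can be persisted to disk. A minimal sketch, assuming the estimator above was fitted with the 'roc_auc' metric and that the kaleido package is installed for static image export; the output path is illustrative:

>>> fig = results.plot(kind='performance', metric='roc_auc', plot_reference=True)
>>> fig.show()  # render in a browser or notebook
>>> fig.write_image('estimated_roc_auc.svg')  # illustrative path; requires kaleido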