nannyml.performance_calculation.result module
Contains the results of the realized performance calculation and provides plotting functionality.
- class nannyml.performance_calculation.result.Result(results_data: DataFrame, problem_type: ProblemType, y_pred: str, y_pred_proba: Optional[Union[str, Dict[str, str]]], y_true: str, metrics: List[Metric], timestamp_column_name: Optional[str] = None, reference_data: Optional[DataFrame] = None, analysis_data: Optional[DataFrame] = None)[source]
Bases: AbstractCalculatorResult
Contains the results of the realized performance calculation and provides plotting functionality.
Creates a new Result instance.
- plot(kind: str = 'performance', plot_reference: bool = False, *args, **kwargs) Optional[Figure] [source]
Render realized performance metrics.
The following kinds of plots are available:

- performance: a step plot showing the realized performance metric per Chunk for a given metric.
- Parameters:
kind (str, default='performance') – The kind of plot to render. Only the ‘performance’ plot is currently available.
metric (Union[str, nannyml.performance_calculation.metrics.base.Metric], default=None) – The name of the metric to plot. Value should be one of: 'roc_auc', 'f1', 'precision', 'recall', 'specificity', 'accuracy'.
plot_reference (bool, default=False) – Indicates whether to include the reference period in the plot. Defaults to False.
- Returns:
fig – A Figure object containing the requested performance plot. It can be saved to disk using the write_image() method or rendered on screen using the show() method.
- Return type:
plotly.graph_objs._figure.Figure
Examples
>>> import nannyml as nml
>>>
>>> reference_df, analysis_df, target_df = nml.load_synthetic_binary_classification_dataset()
>>>
>>> calc = nml.PerformanceCalculator(y_true='work_home_actual', y_pred='y_pred', y_pred_proba='y_pred_proba',
>>>                                  timestamp_column_name='timestamp', metrics=['f1', 'roc_auc'])
>>>
>>> calc.fit(reference_df)
>>>
>>> results = calc.calculate(analysis_df.merge(target_df, on='identifier'))
>>> print(results.data)
             key  start_index  ...  roc_auc_upper_threshold  roc_auc_alert
0       [0:4999]            0  ...                  0.97866          False
1    [5000:9999]         5000  ...                  0.97866          False
2  [10000:14999]        10000  ...                  0.97866          False
3  [15000:19999]        15000  ...                  0.97866          False
4  [20000:24999]        20000  ...                  0.97866          False
5  [25000:29999]        25000  ...                  0.97866           True
6  [30000:34999]        30000  ...                  0.97866           True
7  [35000:39999]        35000  ...                  0.97866           True
8  [40000:44999]        40000  ...                  0.97866           True
9  [45000:49999]        45000  ...                  0.97866           True
>>> for metric in calc.metrics:
>>>     results.plot(metric=metric, plot_reference=True).show()