nannyml.performance_calculation.result module

Contains the results of the realized performance calculation and provides filtering and plotting functionality.

class nannyml.performance_calculation.result.Result(results_data: DataFrame, problem_type: ProblemType, y_pred: str, y_pred_proba: Optional[Union[str, Dict[str, str]]], y_true: str, metrics: List[Metric], timestamp_column_name: Optional[str] = None, reference_data: Optional[DataFrame] = None, analysis_data: Optional[DataFrame] = None)[source]

Bases: PerMetricResult[Metric], ResultCompareMixin

Wraps performance calculation results and provides filtering and plotting functionality.

Creates a new Result instance.

Parameters:
  • results_data (pd.DataFrame) – Results data returned by a PerformanceCalculator.

  • problem_type (ProblemType) –

    Determines which method to use. Allowed values are:

    • 'regression'

    • 'classification_binary'

    • 'classification_multiclass'

  • y_pred (str) – The name of the column containing your model predictions.

  • y_pred_proba (Union[str, Dict[str, str]]) –

    Name(s) of the column(s) containing your model output.

    • For binary classification, pass a single string referring to the model output column.

    • For multiclass classification, pass a dictionary that maps each class string to the name of the column containing the model outputs for that class (see the sketch after this parameter list).

  • y_true (str) – The name of the column containing the target values (provided in the reference data during fitting).

  • metrics (List[nannyml.performance_calculation.metrics.base.Metric]) – List of metrics to evaluate.

  • timestamp_column_name (str, default=None) – The name of the column containing the timestamp of the model prediction. If not given, plots will not use a time-based x-axis but will use the index of the chunks instead.

  • reference_data (pd.DataFrame, default=None) – The reference data used for fitting. Must have target data available.

  • analysis_data (pd.DataFrame, default=None) – The data on which NannyML calculates the performance.
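
The two accepted y_pred_proba forms track the problem type. A minimal sketch of both, using hypothetical class labels and probability column names (only the binary column names match the example further below):

>>> import nannyml as nml
>>> # Binary classification: a single column holds the predicted probability.
>>> binary_calc = nml.PerformanceCalculator(
...     y_pred_proba='y_pred_proba',
...     y_pred='y_pred',
...     y_true='repaid',
...     problem_type='classification_binary',
...     metrics=['roc_auc'])
>>> # Multiclass classification: map each class label to its probability column.
>>> # The class labels and column names below are hypothetical.
>>> multiclass_calc = nml.PerformanceCalculator(
...     y_pred_proba={
...         'prepaid': 'y_pred_proba_prepaid',
...         'defaulted': 'y_pred_proba_defaulted',
...     },
...     y_pred='y_pred',
...     y_true='loan_status',
...     problem_type='classification_multiclass',
...     metrics=['roc_auc'])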

keys() List[Key][source]

Creates a list of keys, where each Key is a namedtuple('Key', 'properties display_names').
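
A short sketch of inspecting the returned keys, assuming results produced as in the Examples further below; each Key exposes .properties and .display_names fields:

>>> # One Key per calculated metric.
>>> for key in results.keys():
...     print(key.properties, key.display_names)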

metrics: List[Metric]

plot(kind: str = 'performance', *args, **kwargs) Figure[source]

Render realized performance metrics.

This function will return a plotly.graph_objects.Figure object.

Parameters:

kind (str, default='performance') – The kind of plot to render. Only the 'performance' plot is currently available.

Raises:

InvalidArgumentsException – Raised when an unknown plot kind is provided.

Returns:

fig – A Figure object containing the requested performance plot.

It can be saved to disk using the write_image() method or rendered on screen using the show() method.

Return type:

plotly.graph_objs._figure.Figure

Examples

>>> import nannyml as nml
>>> from IPython.display import display
>>> reference_df, analysis_df, analysis_targets_df = nml.load_synthetic_car_loan_dataset()
>>> analysis_df = analysis_df.merge(analysis_targets_df, left_index=True, right_index=True)
>>> display(reference_df.head(3))
>>> calc = nml.PerformanceCalculator(
...     y_pred_proba='y_pred_proba',
...     y_pred='y_pred',
...     y_true='repaid',
...     timestamp_column_name='timestamp',
...     problem_type='classification_binary',
...     metrics=['roc_auc', 'f1', 'precision', 'recall', 'specificity', 'accuracy'],
...     chunk_size=5000)
>>> calc.fit(reference_df)
>>> results = calc.calculate(analysis_df)
>>> display(results.filter(period='analysis').to_df())
>>> display(results.filter(period='reference').to_df())
>>> figure = results.plot()
>>> figure.show()
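
As noted above, the returned Figure supports plotly's standard export methods. A follow-up sketch, assuming filter() also accepts a metrics keyword as shown with period above (the metric choice and output file name are arbitrary; write_image() requires the kaleido package):

>>> # Restrict the results to one metric before plotting, then save to disk.
>>> roc_auc_figure = results.filter(metrics=['roc_auc']).plot(kind='performance')
>>> roc_auc_figure.write_image('roc_auc_performance.png')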