ddop.metrics.prescriptiveness_score

ddop.metrics.prescriptiveness_score(y_true, y_pred, y_pred_saa, cu, co, multioutput='uniform_average')

Compute the coefficient of prescriptiveness, defined as (1 - u/v), where u is the average newsvendor cost incurred by the model's predictions (y_true, y_pred), and v is the average cost incurred by the predictions obtained via sample average approximation (y_true, y_pred_saa). The best possible score is 1.0, and it can be negative (because the model can perform arbitrarily worse than SAA).
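The score can be reproduced by hand from this definition. Below is a minimal sketch (the helper names are illustrative, not part of ddop's API), assuming the standard newsvendor cost with per-unit underage cost cu and per-unit overage cost co:

```python
import numpy as np

def avg_newsvendor_cost(y_true, y_pred, cu, co):
    """Average newsvendor cost per output: underage at cu, overage at co."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    underage = np.maximum(y_true - y_pred, 0.0)
    overage = np.maximum(y_pred - y_true, 0.0)
    return np.mean(np.asarray(cu) * underage + np.asarray(co) * overage, axis=0)

def coefficient_of_prescriptiveness(y_true, y_pred, y_pred_saa, cu, co):
    u = avg_newsvendor_cost(y_true, y_pred, cu, co)       # model cost
    v = avg_newsvendor_cost(y_true, y_pred_saa, cu, co)   # SAA benchmark cost
    return 1.0 - u / v

# Data taken from the Examples section below
y_true = [[2, 2], [2, 4], [3, 6]]
y_pred = [[1, 2], [3, 3], [4, 7]]
y_pred_saa = [[4, 5], [4, 5], [4, 5]]
score = coefficient_of_prescriptiveness(y_true, y_pred, y_pred_saa,
                                        cu=[2, 4], co=[1, 1])
# score ≈ [0.2, 0.375]
```

A score near 1 means the model's decisions cost almost nothing relative to the SAA benchmark; a score of 0 means the model does no better than SAA.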

Parameters
  • y_true (array-like) – The true values

  • y_pred (array-like) – The predicted values

  • y_pred_saa (array-like) – The predictions obtained by SAA

  • cu (int or float) – the underage costs per unit.

  • co (int or float) – the overage costs per unit.

  • multioutput ({"raw_values", "uniform_average"}, default="uniform_average") –

    Defines how to aggregate scores for multiple outputs.

    'raw_values' :

    Returns a full set of scores in case of multioutput input.

    'uniform_average' :

    Scores of all outputs are averaged with uniform weight.

Returns

score – The prescriptiveness score, or an ndarray of scores if 'multioutput' is 'raw_values'.

Return type

float or ndarray of floats

Examples

>>> from ddop.metrics import prescriptiveness_score
>>> y_true = [[2,2], [2,4], [3,6]]
>>> y_pred = [[1,2], [3,3], [4,7]]
>>> y_pred_saa = [[4,5],[4,5],[4,5]]
>>> cu = [2,4]
>>> co = [1,1]
>>> prescriptiveness_score(y_true, y_pred, y_pred_saa, cu, co, multioutput="raw_values")
array([0.2, 0.375])
>>> prescriptiveness_score(y_true, y_pred, y_pred_saa, cu, co, multioutput="uniform_average")
0.2875