pytorch_lightning.metrics.functional.classification module
pytorch_lightning.metrics.functional.classification._binary_clf_curve(pred, target, sample_weight=None, pos_label=1.0)[source]
Adapted from https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_ranking.py
pytorch_lightning.metrics.functional.classification.accuracy(pred, target, num_classes=None, reduction='elementwise_mean')[source]
Computes the accuracy classification score.
- Parameters
reduction – a method for reducing accuracies over labels (default: takes the mean). Available reduction methods:
elementwise_mean: takes the mean
none: pass array
sum: add elements
- Returns
A Tensor with the classification score.
Example
>>> x = torch.tensor([0, 1, 2, 3])
>>> y = torch.tensor([0, 1, 2, 2])
>>> accuracy(x, y)
tensor(0.7500)
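The reduction argument recurs on several metrics in this module. A minimal pure-Python sketch of its semantics (the helper name reduce_scores is illustrative, not part of the library):

```python
def reduce_scores(scores, reduction="elementwise_mean"):
    """Combine per-class scores the way the `reduction` argument describes."""
    if reduction == "elementwise_mean":
        return sum(scores) / len(scores)  # takes the mean
    if reduction == "none":
        return scores                     # pass array through unchanged
    if reduction == "sum":
        return sum(scores)                # add elements
    raise ValueError(f"unknown reduction: {reduction}")

per_class_accuracy = [1.0, 1.0, 0.5, 0.0]
print(reduce_scores(per_class_accuracy))         # 0.625
print(reduce_scores(per_class_accuracy, "sum"))  # 2.5
```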
pytorch_lightning.metrics.functional.classification.auc(x, y, reorder=True)[source]
Computes the Area Under the Curve (AUC) using the trapezoidal rule.
- Returns
A Tensor containing the AUC score (float)
Example
>>> x = torch.tensor([0, 1, 2, 3])
>>> y = torch.tensor([0, 1, 2, 2])
>>> auc(x, y)
tensor(4.)
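As a rough sketch of the trapezoidal rule applied to the (x, y) points above (pure Python; the helper name trapezoid_auc is illustrative):

```python
def trapezoid_auc(xs, ys):
    """Area under the piecewise-linear curve through the points (xs[i], ys[i])."""
    area = 0.0
    for i in range(1, len(xs)):
        width = xs[i] - xs[i - 1]
        area += width * (ys[i] + ys[i - 1]) / 2.0  # area of one trapezoid segment
    return area

print(trapezoid_auc([0, 1, 2, 3], [0, 1, 2, 2]))  # 4.0, matching the example above
```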
pytorch_lightning.metrics.functional.classification.auc_decorator(reorder=True)[source]
pytorch_lightning.metrics.functional.classification.auroc(pred, target, sample_weight=None, pos_label=1.0)[source]
Computes the Area Under the Receiver Operating Characteristic Curve (ROC AUC) from prediction scores.
- Returns
A Tensor containing the ROC AUC score
Example
>>> x = torch.tensor([0, 1, 2, 3])
>>> y = torch.tensor([0, 1, 2, 2])
>>> auroc(x, y)
tensor(0.3333)
pytorch_lightning.metrics.functional.classification.average_precision(pred, target, sample_weight=None, pos_label=1.0)[source]
Computes the average precision from prediction scores.
- Returns
A Tensor containing the average precision score
Example
>>> x = torch.tensor([0, 1, 2, 3])
>>> y = torch.tensor([0, 1, 2, 2])
>>> average_precision(x, y)
tensor(0.3333)
pytorch_lightning.metrics.functional.classification.confusion_matrix(pred, target, normalize=False)[source]
Computes the confusion matrix C, where each entry C_{i,j} is the number of observations in group i that were predicted in group j.
- Returns
The confusion matrix C, a Tensor of shape [num_classes, num_classes]
Example
>>> x = torch.tensor([1, 2, 3])
>>> y = torch.tensor([0, 2, 3])
>>> confusion_matrix(x, y)
tensor([[0., 1., 0., 0.],
        [0., 0., 0., 0.],
        [0., 0., 1., 0.],
        [0., 0., 0., 1.]])
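The counting rule can be sketched in pure Python (the helper name confusion is illustrative; row index = true class, column index = predicted class, as in the definition above):

```python
def confusion(pred, target, num_classes):
    """Count matrix C where C[i][j] = observations of true class i predicted as j."""
    C = [[0] * num_classes for _ in range(num_classes)]
    for p, t in zip(pred, target):
        C[t][p] += 1  # row = true class, column = predicted class
    return C

print(confusion(pred=[1, 2, 3], target=[0, 2, 3], num_classes=4))
```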
pytorch_lightning.metrics.functional.classification.dice_score(pred, target, bg=False, nan_score=0.0, no_fg_score=0.0, reduction='elementwise_mean')[source]
Computes the dice score from prediction scores.
- Parameters
bg (bool) – whether to also compute the dice score for the background
nan_score (float) – score to return if a NaN occurs during computation
no_fg_score (float) – score to return if no foreground pixel was found in target
reduction – a method for reducing dice scores over labels (default: takes the mean). Available reduction methods:
elementwise_mean: takes the mean
none: pass array
sum: add elements
- Returns
A Tensor containing the dice score
Example
>>> pred = torch.tensor([[0.85, 0.05, 0.05, 0.05],
...                      [0.05, 0.85, 0.05, 0.05],
...                      [0.05, 0.05, 0.85, 0.05],
...                      [0.05, 0.05, 0.05, 0.85]])
>>> target = torch.tensor([0, 1, 3, 2])
>>> dice_score(pred, target)
tensor(0.3333)
pytorch_lightning.metrics.functional.classification.f1_score(pred, target, num_classes=None, reduction='elementwise_mean')[source]
Computes the F1-score (a.k.a. F-measure), the harmonic mean of precision and recall. It ranges between 0 and 1, where 1 is perfect and 0 is the worst value.
- Returns
A Tensor containing the F1-score
Example
>>> x = torch.tensor([0, 1, 2, 3])
>>> y = torch.tensor([0, 1, 2, 2])
>>> f1_score(x, y)
tensor(0.6667)
pytorch_lightning.metrics.functional.classification.fbeta_score(pred, target, beta, num_classes=None, reduction='elementwise_mean')[source]
Computes the F-beta score, a weighted harmonic mean of precision and recall. It ranges between 0 and 1, where 1 is perfect and 0 is the worst value.
- Parameters
beta (float) – weights recall when combining the score: beta < 1 gives more weight to precision, beta > 1 gives more weight to recall; beta = 0 yields only precision, and beta -> inf yields only recall
reduction – method for reducing the F-score (default: takes the mean). Available reduction methods:
elementwise_mean: takes the mean
none: pass array
sum: add elements
- Returns
A Tensor with the F-score, a value between 0 and 1.
Example
>>> x = torch.tensor([0, 1, 2, 3])
>>> y = torch.tensor([0, 1, 2, 2])
>>> fbeta_score(x, y, 0.2)
tensor(0.7407)
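The role of beta follows from the standard F-beta formula, F_beta = (1 + beta^2) · P · R / (beta^2 · P + R). A pure-Python sketch on scalar precision P and recall R (the helper name fbeta is illustrative):

```python
def fbeta(p, r, beta):
    """F-beta score from scalar precision p and recall r."""
    return (1 + beta**2) * p * r / (beta**2 * p + r)

p, r = 0.5, 1.0
print(fbeta(p, r, beta=1.0))  # the harmonic mean of p and r (F1)
print(fbeta(p, r, beta=0.5))  # smaller beta pulls the score toward precision
print(fbeta(p, r, beta=2.0))  # larger beta pulls the score toward recall
```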
pytorch_lightning.metrics.functional.classification.get_num_classes(pred, target, num_classes=None)[source]
Calculates the number of classes for a given prediction and target tensor.
- Parameters
pred – predicted values
target – true labels
num_classes – number of classes, if known
- Returns
An integer that represents the number of classes.
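When num_classes is not given, a natural inference is the largest label seen in either tensor plus one. A sketch under that assumption (0-based integer labels; the helper name infer_num_classes is illustrative, not the library implementation):

```python
def infer_num_classes(pred, target, num_classes=None):
    """Return num_classes if given, else infer it from the largest label seen."""
    if num_classes is not None:
        return num_classes
    return max(max(pred), max(target)) + 1  # labels assumed to start at 0

print(infer_num_classes([1, 2, 3], [0, 2, 3]))  # 4
```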
pytorch_lightning.metrics.functional.classification.iou(pred, target, num_classes=None, remove_bg=False, reduction='elementwise_mean')[source]
Computes the intersection over union, or Jaccard index.
- Parameters
num_classes (Optional[int]) – optionally specify the number of classes
remove_bg (bool) – flag stating whether a background class is included in the inputs. If true, the background class is removed; if false, IoU is returned over all classes. Assumes that background is the '0' class in the input tensor
reduction – a method for reducing IoU over labels (default: takes the mean). Available reduction methods:
elementwise_mean: takes the mean
none: pass array
sum: add elements
- Returns
IoU score – a Tensor containing a single value if reduction is 'elementwise_mean', or one value per class if reduction is 'none'
Example
>>> target = torch.randint(0, 1, (10, 25, 25))
>>> pred = torch.tensor(target)
>>> pred[2:5, 7:13, 9:15] = 1 - pred[2:5, 7:13, 9:15]
>>> iou(pred, target)
tensor(0.4914)
pytorch_lightning.metrics.functional.classification.multiclass_auc_decorator(reorder=True)[source]
pytorch_lightning.metrics.functional.classification.multiclass_precision_recall_curve(pred, target, sample_weight=None, num_classes=None)[source]
Computes precision-recall pairs for different thresholds given multiclass scores.
- Returns
A tuple holding one (precision, recall, thresholds) triple per class
Example
>>> pred = torch.tensor([[0.85, 0.05, 0.05, 0.05],
...                      [0.05, 0.85, 0.05, 0.05],
...                      [0.05, 0.05, 0.85, 0.05],
...                      [0.05, 0.05, 0.05, 0.85]])
>>> target = torch.tensor([0, 1, 3, 2])
>>> curves = multiclass_precision_recall_curve(pred, target)
>>> curves[0]  # (precision, recall, thresholds) for class 0
(tensor([1., 1.]), tensor([1., 0.]), tensor([0.8500]))
>>> curves[1]
(tensor([1., 1.]), tensor([1., 0.]), tensor([0.8500]))
>>> curves[2]
(tensor([0.2500, 0.0000, 1.0000]), tensor([1., 0., 0.]), tensor([0.0500, 0.8500]))
>>> curves[3]
(tensor([0.2500, 0.0000, 1.0000]), tensor([1., 0., 0.]), tensor([0.0500, 0.8500]))
pytorch_lightning.metrics.functional.classification.multiclass_roc(pred, target, sample_weight=None, num_classes=None)[source]
Computes the Receiver Operating Characteristic (ROC) for multiclass predictors.
- Returns
The ROC for each class, returned as one (false-positive rate (fpr), true-positive rate (tpr), thresholds) triple per class
Example
>>> pred = torch.tensor([[0.85, 0.05, 0.05, 0.05],
...                      [0.05, 0.85, 0.05, 0.05],
...                      [0.05, 0.05, 0.85, 0.05],
...                      [0.05, 0.05, 0.05, 0.85]])
>>> target = torch.tensor([0, 1, 3, 2])
>>> multiclass_roc(pred, target)
((tensor([0., 0., 1.]), tensor([0., 1., 1.]), tensor([1.8500, 0.8500, 0.0500])),
 (tensor([0., 0., 1.]), tensor([0., 1., 1.]), tensor([1.8500, 0.8500, 0.0500])),
 (tensor([0.0000, 0.3333, 1.0000]), tensor([0., 0., 1.]), tensor([1.8500, 0.8500, 0.0500])),
 (tensor([0.0000, 0.3333, 1.0000]), tensor([0., 0., 1.]), tensor([1.8500, 0.8500, 0.0500])))
pytorch_lightning.metrics.functional.classification.precision(pred, target, num_classes=None, reduction='elementwise_mean')[source]
Computes the precision score.
- Parameters
reduction – method for reducing precision values (default: takes the mean). Available reduction methods:
elementwise_mean: takes the mean
none: pass array
sum: add elements
- Returns
A Tensor with the precision score.
Example
>>> x = torch.tensor([0, 1, 2, 3])
>>> y = torch.tensor([0, 1, 2, 2])
>>> precision(x, y)
tensor(0.7500)
pytorch_lightning.metrics.functional.classification.precision_recall(pred, target, num_classes=None, reduction='elementwise_mean')[source]
Computes precision and recall scores.
- Parameters
reduction – method for reducing precision-recall values (default: takes the mean). Available reduction methods:
elementwise_mean: takes the mean
none: pass array
sum: add elements
- Returns
Tensors with precision and recall
Example
>>> x = torch.tensor([0, 1, 2, 3])
>>> y = torch.tensor([0, 1, 2, 2])
>>> precision_recall(x, y)
(tensor(0.7500), tensor(0.6250))
pytorch_lightning.metrics.functional.classification.precision_recall_curve(pred, target, sample_weight=None, pos_label=1.0)[source]
Computes precision-recall pairs for different thresholds.
- Returns
precision, recall, thresholds
Example
>>> pred = torch.tensor([0, 1, 2, 3])
>>> target = torch.tensor([0, 1, 2, 2])
>>> precision, recall, thresholds = precision_recall_curve(pred, target)
>>> precision
tensor([0.3333, 0.0000, 0.0000, 1.0000])
>>> recall
tensor([1., 0., 0., 0.])
>>> thresholds
tensor([1, 2, 3])
pytorch_lightning.metrics.functional.classification.recall(pred, target, num_classes=None, reduction='elementwise_mean')[source]
Computes the recall score.
- Parameters
reduction – method for reducing recall values (default: takes the mean). Available reduction methods:
elementwise_mean: takes the mean
none: pass array
sum: add elements
- Returns
A Tensor with the recall score.
Example
>>> x = torch.tensor([0, 1, 2, 3])
>>> y = torch.tensor([0, 1, 2, 2])
>>> recall(x, y)
tensor(0.6250)
pytorch_lightning.metrics.functional.classification.roc(pred, target, sample_weight=None, pos_label=1.0)[source]
Computes the Receiver Operating Characteristic (ROC). It assumes the classifier is binary.
- Returns
false-positive rate (fpr), true-positive rate (tpr), thresholds
Example
>>> x = torch.tensor([0, 1, 2, 3])
>>> y = torch.tensor([0, 1, 2, 2])
>>> fpr, tpr, thresholds = roc(x, y)
>>> fpr
tensor([0.0000, 0.3333, 0.6667, 0.6667, 1.0000])
>>> tpr
tensor([0., 0., 0., 1., 1.])
>>> thresholds
tensor([4, 3, 2, 1, 0])
pytorch_lightning.metrics.functional.classification.stat_scores(pred, target, class_index, argmax_dim=1)[source]
Calculates the number of true positives, false positives, true negatives and false negatives for a specific class.
- Returns
True Positive, False Positive, True Negative, False Negative, Support
Example
>>> x = torch.tensor([1, 2, 3])
>>> y = torch.tensor([0, 2, 3])
>>> tp, fp, tn, fn, sup = stat_scores(x, y, class_index=1)
>>> tp, fp, tn, fn, sup
(tensor(0), tensor(1), tensor(2), tensor(0), tensor(0))
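The five counts can be sketched in pure Python (the helper name stat_counts is illustrative; support counts how often the class occurs in the target):

```python
def stat_counts(pred, target, class_index):
    """Per-class tp, fp, tn, fn and support for one class of interest."""
    pairs = list(zip(pred, target))
    tp = sum(1 for p, t in pairs if p == class_index and t == class_index)
    fp = sum(1 for p, t in pairs if p == class_index and t != class_index)
    tn = sum(1 for p, t in pairs if p != class_index and t != class_index)
    fn = sum(1 for p, t in pairs if p != class_index and t == class_index)
    support = sum(1 for t in target if t == class_index)  # true occurrences
    return tp, fp, tn, fn, support

print(stat_counts([1, 2, 3], [0, 2, 3], class_index=1))  # (0, 1, 2, 0, 0)
```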
pytorch_lightning.metrics.functional.classification.stat_scores_multiple_classes(pred, target, num_classes=None, argmax_dim=1)[source]
Calls the stat_scores function iteratively for all classes, thus calculating the number of true positives, false positives, true negatives and false negatives for each class.
- Returns
True Positive, False Positive, True Negative, False Negative, Support
Example
>>> x = torch.tensor([1, 2, 3])
>>> y = torch.tensor([0, 2, 3])
>>> tps, fps, tns, fns, sups = stat_scores_multiple_classes(x, y)
>>> tps
tensor([0., 0., 1., 1.])
>>> fps
tensor([0., 1., 0., 0.])
>>> tns
tensor([2., 2., 2., 2.])
>>> fns
tensor([1., 0., 0., 0.])
>>> sups
tensor([1., 0., 1., 1.])
pytorch_lightning.metrics.functional.classification.to_categorical(tensor, argmax_dim=1)[source]
Converts a tensor of probabilities to a dense label tensor.
- Returns
A tensor with categorical labels [N, d2, …]
Example
>>> x = torch.tensor([[0.2, 0.5], [0.9, 0.1]])
>>> to_categorical(x)
tensor([1, 0])
pytorch_lightning.metrics.functional.classification.to_onehot(tensor, num_classes=None)[source]
Converts a dense label tensor to one-hot format.
- Returns
A sparse label tensor with shape [N, C, d1, d2, …]
Example
>>> x = torch.tensor([1, 2, 3])
>>> to_onehot(x)
tensor([[0, 1, 0, 0],
        [0, 0, 1, 0],
        [0, 0, 0, 1]])