Confusion matrix

Terminology and derivations from a confusion matrix
true positive (TP)
eqv. with hit
true negative (TN)
eqv. with correct rejection
false positive (FP)
eqv. with false alarm, Type I error
false negative (FN)
eqv. with miss, Type II error

sensitivity or true positive rate (TPR)
eqv. with hit rate, recall
\mathit{TPR} = \frac {\mathit{TP}} {P} = \frac {\mathit{TP}} {\mathit{TP}+\mathit{FN}}
specificity (SPC) or true negative rate (TNR)
\mathit{SPC} = \frac {\mathit{TN}} {N} = \frac {\mathit{TN}} {\mathit{FP} + \mathit{TN}}
precision or positive predictive value (PPV)
\mathit{PPV} = \frac {\mathit{TP}} {\mathit{TP} + \mathit{FP}}
negative predictive value (NPV)
\mathit{NPV} = \frac {\mathit{TN}} {\mathit{TN} + \mathit{FN}}
fall-out or false positive rate (FPR)
\mathit{FPR} = \frac {\mathit{FP}} {N} = \frac {\mathit{FP}} {\mathit{FP} + \mathit{TN}} = 1 - \mathit{SPC}
false discovery rate (FDR)
\mathit{FDR} = \frac {\mathit{FP}} {\mathit{FP} + \mathit{TP}} = 1 - \mathit{PPV}
miss rate or false negative rate (FNR)
\mathit{FNR} = \frac {\mathit{FN}} {P} = \frac {\mathit{FN}} {\mathit{FN} + \mathit{TP}}

accuracy (ACC)
\mathit{ACC} = \frac {\mathit{TP} + \mathit{TN}} {P + N}
F1 score
is the harmonic mean of precision and sensitivity
\mathit{F1} = \frac {2 \mathit{TP}} {2 \mathit{TP} + \mathit{FP} + \mathit{FN}}
Matthews correlation coefficient (MCC)
\mathit{MCC} = \frac {\mathit{TP} \times \mathit{TN} - \mathit{FP} \times \mathit{FN}} {\sqrt{ (\mathit{TP} + \mathit{FP}) (\mathit{TP} + \mathit{FN}) (\mathit{TN} + \mathit{FP}) (\mathit{TN} + \mathit{FN}) }}

Informedness = Sensitivity + Specificity - 1
Markedness = Precision + NPV - 1

Sources: Fawcett (2006) and Powers (2011).[1][2]
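
As an illustration, these derivations can be computed directly from the four raw counts. The following Python sketch is illustrative only; the function name is an assumption, and non-zero denominators are assumed for simplicity:

    import math

    def confusion_metrics(tp, fp, fn, tn):
        """Derive the rates listed above from the four raw counts."""
        p = tp + fn                          # condition positive
        n = fp + tn                          # condition negative
        tpr = tp / p                         # sensitivity, recall, hit rate
        tnr = tn / n                         # specificity
        ppv = tp / (tp + fp)                 # precision
        npv = tn / (tn + fn)                 # negative predictive value
        fpr = fp / n                         # fall-out, = 1 - TNR
        fdr = fp / (fp + tp)                 # false discovery rate, = 1 - PPV
        fnr = fn / p                         # miss rate
        acc = (tp + tn) / (p + n)            # accuracy
        f1 = 2 * tp / (2 * tp + fp + fn)     # harmonic mean of PPV and TPR
        mcc = (tp * tn - fp * fn) / math.sqrt(
            (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
        return {
            "TPR": tpr, "TNR": tnr, "PPV": ppv, "NPV": npv,
            "FPR": fpr, "FDR": fdr, "FNR": fnr, "ACC": acc,
            "F1": f1, "MCC": mcc,
            "informedness": tpr + tnr - 1, "markedness": ppv + npv - 1,
        }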

In the field of machine learning, a confusion matrix, also known as a contingency table or an error matrix,[3] is a specific table layout that allows visualization of the performance of an algorithm, typically a supervised learning one (in unsupervised learning it is usually called a matching matrix). Each column of the matrix represents the instances in a predicted class while each row represents the instances in an actual class (or vice versa).[2] The name stems from the fact that it makes it easy to see whether the system is confusing two classes (i.e. commonly mislabeling one as another).

Example

If a classification system has been trained to distinguish between cats, dogs and rabbits, a confusion matrix will summarize the results of testing the algorithm for further inspection. Assuming a sample of 27 animals (8 cats, 6 dogs, and 13 rabbits), the resulting confusion matrix could look like the table below:

                      Predicted class
                   Cat    Dog    Rabbit
 Actual   Cat        5      3       0
 class    Dog        2      3       1
          Rabbit     0      2      11
In this confusion matrix, of the 8 actual cats, the system predicted that 3 were dogs, and of the 6 dogs, it predicted that 1 was a rabbit and 2 were cats. We can see from the matrix that the system in question has trouble distinguishing between cats and dogs, but can distinguish rabbits from the other types of animals quite well. All correct guesses are located on the diagonal of the table, so it is easy to visually inspect the table for errors, as they are represented by values outside the diagonal.
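
A minimal Python sketch of how such a matrix can be built by hand; the label lists below are hypothetical, chosen only to reproduce the counts in the table above:

    labels = ["cat", "dog", "rabbit"]

    # Hypothetical ground truth and predictions matching the example table.
    actual    = ["cat"] * 8 + ["dog"] * 6 + ["rabbit"] * 13
    predicted = (["cat"] * 5 + ["dog"] * 3                     # the 8 actual cats
                 + ["cat"] * 2 + ["dog"] * 3 + ["rabbit"] * 1  # the 6 actual dogs
                 + ["dog"] * 2 + ["rabbit"] * 11)              # the 13 actual rabbits

    # matrix[i][j] counts instances of actual class i predicted as class j.
    matrix = [[0] * len(labels) for _ in labels]
    for a, p in zip(actual, predicted):
        matrix[labels.index(a)][labels.index(p)] += 1

    for name, row in zip(labels, matrix):
        print(name, row)    # cat [5, 3, 0]; dog [2, 3, 1]; rabbit [0, 2, 11]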

Table of confusion

In predictive analytics, a table of confusion (sometimes also called a confusion matrix) is a table with two rows and two columns that reports the number of false positives, false negatives, true positives, and true negatives. This allows more detailed analysis than the mere proportion of correct guesses (accuracy). Accuracy is not a reliable metric for the real performance of a classifier, because it yields misleading results if the data set is unbalanced (that is, when the number of samples in different classes varies greatly). For example, if there were 95 cats and only 5 dogs in the data set, the classifier could easily be biased into classifying all the samples as cats. The overall accuracy would be 95%, yet in practice the classifier would have a 100% recognition rate for the cat class but a 0% recognition rate for the dog class.
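
A short worked version of that example (the variable names are illustrative), assuming a classifier that labels all 100 samples as cats:

    # 95 cats and 5 dogs; the classifier predicts "cat" for every sample.
    tp_cat, fn_cat = 95, 0       # every cat is labelled cat
    fp_cat, tn_cat = 5, 0        # every dog is also labelled cat

    accuracy   = (tp_cat + tn_cat) / 100          # 0.95 -> 95% overall
    recall_cat = tp_cat / (tp_cat + fn_cat)       # 1.0  -> 100% for the cat class
    recall_dog = tn_cat / (tn_cat + fp_cat)       # 0.0  -> 0% for the dog class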

Assuming the confusion matrix above, its corresponding table of confusion, for the cat class, would be:

                       Actual: cat          Actual: non-cat
 Predicted: cat        5 true positives     2 false positives
 Predicted: non-cat    3 false negatives    17 true negatives

 5 true positives: actual cats correctly classified as cats
 2 false positives: dogs incorrectly labeled as cats
 3 false negatives: cats incorrectly marked as dogs
 17 true negatives: all the remaining animals, correctly classified as non-cats

The final table of confusion would contain the average values for all classes combined.
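
This per-class reduction can be computed from a multiclass matrix such as the one built above; the helper below is a sketch under that assumption, not part of the source:

    def one_vs_rest(matrix, labels, cls):
        """Collapse a multiclass confusion matrix into TP/FP/FN/TN for one class."""
        i = labels.index(cls)
        tp = matrix[i][i]
        fn = sum(matrix[i]) - tp                   # actual cls, predicted as something else
        fp = sum(row[i] for row in matrix) - tp    # other classes predicted as cls
        tn = sum(map(sum, matrix)) - tp - fn - fp  # everything else
        return tp, fp, fn, tn

    print(one_vs_rest(matrix, labels, "cat"))      # (5, 2, 3, 17)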

Let us define an experiment from P positive instances and N negative instances for some condition. The four outcomes can be formulated in a 2×2 contingency table or confusion matrix, as follows:


 Total population          Predicted condition positive     Predicted condition negative
 True condition positive   True positive                    False negative (Type II error)
 True condition negative   False positive (Type I error)    True negative

Prevalence = Σ Condition positive / Σ Total population
Accuracy (ACC) = (Σ True positive + Σ True negative) / Σ Total population

True positive rate (TPR), Sensitivity, Recall = Σ True positive / Σ Condition positive
False negative rate (FNR), Miss rate = Σ False negative / Σ Condition positive
False positive rate (FPR), Fall-out = Σ False positive / Σ Condition negative
True negative rate (TNR), Specificity (SPC) = Σ True negative / Σ Condition negative

Positive predictive value (PPV), Precision = Σ True positive / Σ Test outcome positive
False discovery rate (FDR) = Σ False positive / Σ Test outcome positive
False omission rate (FOR) = Σ False negative / Σ Test outcome negative
Negative predictive value (NPV) = Σ True negative / Σ Test outcome negative

Positive likelihood ratio (LR+) = TPR / FPR
Negative likelihood ratio (LR−) = FNR / TNR
Diagnostic odds ratio (DOR) = LR+ / LR−
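
The marginal measures in this table follow from the same four raw counts; a minimal sketch, again assuming non-zero denominators:

    def marginal_measures(tp, fp, fn, tn):
        total = tp + fp + fn + tn
        prevalence = (tp + fn) / total
        false_omission_rate = fn / (fn + tn)
        tpr, fpr = tp / (tp + fn), fp / (fp + tn)
        fnr, tnr = fn / (tp + fn), tn / (fp + tn)
        lr_plus = tpr / fpr                  # positive likelihood ratio (LR+)
        lr_minus = fnr / tnr                 # negative likelihood ratio (LR−)
        dor = lr_plus / lr_minus             # diagnostic odds ratio (DOR)
        return prevalence, false_omission_rate, lr_plus, lr_minus, dor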

References

  1. Fawcett, Tom (2006). "An introduction to ROC analysis". Pattern Recognition Letters 27 (8): 861–874.
  2. Powers, David M. W. (2011). "Evaluation: From Precision, Recall and F-Measure to ROC, Informedness, Markedness & Correlation". Journal of Machine Learning Technologies 2 (1): 37–63.
  3. Stehman, Stephen V. (1997). "Selecting and interpreting measures of thematic classification accuracy". Remote Sensing of Environment 62 (1): 77–89.
