In predictive analytics, a table of confusion (sometimes also called a confusion matrix or error matrix) is a table with two rows and two columns that reports the number of true positives, false negatives, false positives, and true negatives. This allows more detailed analysis than simply observing the proportion of correct classifications (accuracy). Accuracy will yield misleading results if the data set is unbalanced, that is, when the numbers of observations in different classes vary greatly. For example, if there were 95 cancer samples and only 5 non-cancer samples in the data, a particular classifier might classify all the observations as having cancer; its accuracy would be 95% even though it never correctly identifies a non-cancer sample.

Terminology and derivations from a confusion matrix:

condition positive (P): the number of real positive cases in the data
condition negative (N): the number of real negative cases in the data
true positive (TP): a test result that correctly indicates the presence of a condition or characteristic
true negative (TN): a test result that correctly indicates the absence of a condition or characteristic
false positive (FP), Type I error: a test result that wrongly indicates that a particular condition or attribute is present
false negative (FN), Type II error: a test result that wrongly indicates that a particular condition or attribute is absent

Sensitivity, recall, hit rate, or true positive rate (TPR):

$$\mathrm{TPR} = \frac{\mathrm{TP}}{P} = \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FN}} = 1 - \mathrm{FNR}$$
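To make the definitions concrete, here is a minimal sketch in plain Python (no third-party libraries) that tallies the four cells of the table of confusion and derives accuracy and TPR from them, using the imbalanced cancer example above. The function name `confusion_counts` and the label strings are illustrative, not from any particular library.

```python
def confusion_counts(actual, predicted, positive="cancer"):
    """Tally TP, FN, FP, TN for a binary classification."""
    tp = fn = fp = tn = 0
    for a, p in zip(actual, predicted):
        if a == positive and p == positive:
            tp += 1  # condition present, correctly detected
        elif a == positive:
            fn += 1  # condition present, missed (Type II error)
        elif p == positive:
            fp += 1  # condition absent, falsely flagged (Type I error)
        else:
            tn += 1  # condition absent, correctly rejected
    return tp, fn, fp, tn

# 95 cancer samples and 5 non-cancer samples; the classifier
# labels every observation as cancer.
actual = ["cancer"] * 95 + ["non-cancer"] * 5
predicted = ["cancer"] * 100

tp, fn, fp, tn = confusion_counts(actual, predicted)

accuracy = (tp + tn) / (tp + fn + fp + tn)  # (95 + 0) / 100 = 0.95
tpr = tp / (tp + fn)                        # TP / P = 95 / 95 = 1.0

print(f"TP={tp} FN={fn} FP={fp} TN={tn}")
print(f"accuracy={accuracy:.2f}, TPR={tpr:.2f}")
```

Despite the 95% accuracy, all 5 non-cancer samples end up as false positives, which is exactly the kind of imbalance-driven distortion the confusion matrix makes visible.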