All About Classification Model’s Accuracy — Episode 2
Hello all, this is episode 2 of “All About Classification Model’s Accuracy”. Please read episode 1 before going forward.
Episode 2
7. What is Specificity?
8. What is False Positive Rate?
9. What is False Negative Rate?
10. What is Type I Error?
11. What is Type II Error?
12. What is ROC Curve?
13. What is AUC Curve?
What is Specificity?
Out of all the actual negative cases, how many did the model correctly predict as negative.
Formula
Specificity = True Negatives / (True Negatives + False Positives)
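For illustration, here is a minimal Python sketch that computes specificity from a confusion matrix with scikit-learn; the label vectors are made-up example data:

```python
from sklearn.metrics import confusion_matrix

# Hypothetical ground-truth and predicted labels (1 = positive, 0 = negative)
y_true = [0, 0, 0, 0, 1, 1, 1, 0]
y_pred = [0, 1, 0, 0, 1, 0, 1, 0]

# For binary labels {0, 1}, ravel() yields the counts in the order TN, FP, FN, TP
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

specificity = tn / (tn + fp)
print(specificity)  # 4 / (4 + 1) = 0.8
```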
What is False Positive Rate?
Out of all the actual negative cases, how many did the model incorrectly predict as positive.
Formula
False Positive Rate = 1 - Specificity, or
False Positive Rate = False Positives / (True Negatives + False Positives)
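A tiny sketch showing that the two formulas agree, using assumed confusion-matrix counts:

```python
# Hypothetical confusion-matrix counts (assumed values for illustration)
tn, fp = 90, 10

specificity = tn / (tn + fp)
fpr = fp / (tn + fp)

print(fpr)              # 10 / 100 = 0.1
print(1 - specificity)  # same value, via FPR = 1 - Specificity
```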
What is False Negative Rate?
Out of all the actual positive cases, how many did the model incorrectly predict as negative.
Formula
False Negative Rate = False Negatives / (False Negatives + True Positives)
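Since recall (TPR) from episode 1 is True Positives / (True Positives + False Negatives), the False Negative Rate is simply 1 - recall. A small sketch with assumed counts:

```python
# Hypothetical confusion-matrix counts (assumed values for illustration)
fn, tp = 5, 45

fnr = fn / (fn + tp)
recall = tp / (tp + fn)

print(fnr)         # 5 / 50 = 0.1
print(1 - recall)  # same value, via FNR = 1 - Recall
```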
What is Type I Error?
It is also known as a false positive: the model predicts positive for a case that is actually negative. It is the count of incorrect positive predictions.
What is Type II Error?
It is also known as a false negative: the model predicts negative for a case that is actually positive. It is the count of incorrect negative predictions.
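To make the two error types concrete, here is a small sketch that counts them directly from made-up label vectors:

```python
# Hypothetical ground-truth and predicted labels (1 = positive, 0 = negative)
y_true = [0, 0, 1, 1, 0, 1]
y_pred = [1, 0, 1, 0, 0, 1]

# Type I error: predicted positive, but actually negative (false positive)
type_1 = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
# Type II error: predicted negative, but actually positive (false negative)
type_2 = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

print(type_1, type_2)  # 1 1
```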
What is ROC Curve?
A common tool used with binary classifiers, the ROC curve plots the true positive rate (recall) against the false positive rate. It shows the trade-off between the True Positive Rate (TPR) and the False Positive Rate (FPR) at different classification thresholds.
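Here is a minimal sketch of plotting a ROC curve with scikit-learn and matplotlib; the labels and scores are made-up example data:

```python
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve

# Hypothetical true labels and predicted positive-class probabilities
y_true = [0, 0, 1, 1, 0, 1, 0, 1]
y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.6, 0.9]

# roc_curve sweeps the classification threshold and returns FPR/TPR pairs
fpr, tpr, thresholds = roc_curve(y_true, y_score)

plt.plot(fpr, tpr, label="classifier")
plt.plot([0, 1], [0, 1], linestyle="--", label="random guess")
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.legend()
plt.show()
```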

What is AUC Curve?
Area Under the Curve (AUC), or AUC-ROC, is simply the area under the ROC curve in ROC space.
AUC = 0 means the model’s predictions are 100% wrong, and AUC = 1 means its predictions are 100% correct. A model that guesses at random scores an AUC of 0.5.
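Computing AUC for the same made-up labels and scores used in the ROC sketch above:

```python
from sklearn.metrics import roc_auc_score

# Hypothetical true labels and predicted positive-class probabilities
y_true = [0, 0, 1, 1, 0, 1, 0, 1]
y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.6, 0.9]

print(roc_auc_score(y_true, y_score))  # 0.875
```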

So that completes our series on “All About Classification Model’s Accuracy”.
Thank you
Social Links
LinkedIn : linkedin.com/in/kishan-tongrao-6b9201112
Facebook : facebook.com/profile.php?id=100009125915876
Twitter : twitter.com/kishantongs