
ROC Curve

A graphical plot that illustrates a binary classifier's performance across all classification thresholds by plotting the true positive rate against the false positive rate.

The ROC (Receiver Operating Characteristic) curve shows how a classifier's sensitivity and specificity trade off as you vary the decision threshold. At a threshold of 0 (classify everything as positive), you achieve perfect recall but also the maximum false positive rate, the point (1, 1) at the top-right corner. At a threshold above the highest score (classify everything as negative), you have zero false positives but also zero recall, the point (0, 0) at the bottom-left corner. The curve traces every operating point in between.
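The threshold sweep can be sketched directly. This is a minimal illustration with made-up labels and scores; `roc_points` is a hypothetical helper, not a standard library function:

```python
import numpy as np

def roc_points(y_true, scores, thresholds):
    """Compute one (FPR, TPR) pair per decision threshold.

    Predictions are positive when score >= threshold, so a
    threshold of 0 labels everything positive (FPR = TPR = 1,
    the top-right corner) and a threshold above the maximum
    score labels everything negative (FPR = TPR = 0, the
    bottom-left corner).
    """
    y = np.asarray(y_true)
    s = np.asarray(scores)
    pos = y == 1
    neg = ~pos
    points = []
    for t in thresholds:
        pred = s >= t
        tpr = (pred & pos).sum() / pos.sum()  # sensitivity / recall
        fpr = (pred & neg).sum() / neg.sum()  # 1 - specificity
        points.append((float(fpr), float(tpr)))
    return points

# Toy data: a higher score means "more likely positive".
print(roc_points([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8],
                 [0.0, 0.3, 0.5, 1.01]))
# → [(1.0, 1.0), (0.5, 1.0), (0.0, 0.5), (0.0, 0.0)]
```

Plotting these pairs (FPR on the x-axis, TPR on the y-axis) traces the curve from the top-right corner down to the bottom-left as the threshold rises.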

A perfect classifier produces an ROC curve that goes straight up to (0,1) and across, hugging the top-left corner. A random classifier produces a diagonal line from (0,0) to (1,1). Better models have curves that bow toward the top-left corner, indicating higher true positive rates at lower false positive rates. The area under this curve (AUC) summarizes overall discriminative performance in a single number.
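AUC can be computed without plotting anything via its rank interpretation: it equals the probability that a randomly chosen positive example scores higher than a randomly chosen negative one (ties counting half). A minimal sketch, where `auc_rank` and the toy data are illustrative assumptions:

```python
import numpy as np

def auc_rank(y_true, scores):
    """AUC as the Mann-Whitney statistic: the probability that a
    random positive outscores a random negative, ties counting 0.5."""
    y = np.asarray(y_true)
    s = np.asarray(scores)
    pos = s[y == 1]
    neg = s[y == 0]
    # Compare every positive score against every negative score.
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# 3 of the 4 positive/negative pairs are ranked correctly.
print(auc_rank([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # → 0.75
```

Under this view, a random scorer gives AUC near 0.5 (the diagonal) and a perfect ranker gives 1.0 (the top-left corner).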

For practical applications, the ROC curve is most useful for choosing an operating threshold. You can visually identify the point on the curve that best matches your business requirements: a fraud detection team that tolerates at most a 5% false positive rate can read off the corresponding true positive rate. Because the curve summarizes performance across all thresholds rather than at a single one, it is well suited to comparing models, but it is less directly interpretable than a precision-recall curve on highly imbalanced datasets, where even a small false positive rate can translate into many false alarms.
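Reading an operating point off the curve can also be automated: pick the threshold with the highest true positive rate whose false positive rate stays within your budget. A sketch under assumed names and toy data (`pick_threshold` is hypothetical):

```python
import numpy as np

def pick_threshold(y_true, scores, max_fpr=0.05):
    """Return (threshold, tpr, fpr) for the best operating point
    whose FPR does not exceed max_fpr. Candidate thresholds are
    the observed scores themselves."""
    y = np.asarray(y_true)
    s = np.asarray(scores)
    pos, neg = (y == 1), (y == 0)
    best = (None, 0.0, 0.0)  # (threshold, tpr, fpr)
    for t in np.unique(s):
        pred = s >= t
        fpr = (pred & neg).sum() / neg.sum()
        tpr = (pred & pos).sum() / pos.sum()
        if fpr <= max_fpr and tpr > best[1]:
            best = (float(t), float(tpr), float(fpr))
    return best

# With a ~34% FPR budget on this toy data, 0.6 is the best cutoff.
print(pick_threshold([0, 0, 0, 1, 1],
                     [0.2, 0.3, 0.9, 0.6, 0.8], max_fpr=0.34))
```

In practice you would run this on a held-out validation set so the chosen threshold reflects deployment conditions rather than training noise.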
