Precision and Recall

Complementary classification metrics where precision measures the fraction of positive predictions that are correct, and recall measures the fraction of actual positives that are detected.

Precision and recall capture different types of errors in classification. Precision answers "Of all the items I flagged as positive, how many are actually positive?" while recall answers "Of all the truly positive items, how many did I find?" A spam filter with 99% precision rarely marks legitimate email as spam, but if its recall is 50%, half the spam still gets through.
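As a minimal sketch, both metrics can be computed directly from prediction counts. The function name and toy labels below are illustrative, not part of the original text:

```python
def precision_recall(y_true, y_pred):
    """Compute precision and recall from binary labels (1 = positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    # Guard against division by zero when there are no positive predictions
    # or no actual positives.
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Example: 4 actual positives, 3 flagged, 2 of the flags correct.
p, r = precision_recall([1, 1, 1, 1, 0, 0], [1, 1, 0, 0, 1, 0])
```

Here `p` is 2/3 (two of the three flags were right) and `r` is 0.5 (only two of the four actual positives were found), showing how the two metrics diverge on the same predictions.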

The tension between precision and recall is fundamental. Increasing one typically decreases the other. Lowering the classification threshold catches more true positives (higher recall) but also more false positives (lower precision). The right balance depends entirely on the business context: fraud detection prioritizes recall (missing a fraud is costly), while content recommendation prioritizes precision (showing irrelevant content hurts engagement).
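The threshold effect described above can be sketched with a toy score distribution (the scores and labels are made up for illustration):

```python
def precision_recall_at(y_true, scores, threshold):
    """Binarize scores at a threshold, then compute precision and recall."""
    preds = [1 if s >= threshold else 0 for s in scores]
    tp = sum(1 for t, p in zip(y_true, preds) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, preds) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, preds) if t == 1 and p == 0)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

y_true = [1, 1, 1, 0, 1, 0, 0, 0]
scores = [0.95, 0.85, 0.7, 0.6, 0.4, 0.35, 0.2, 0.1]

# High threshold: fewer, safer positive calls.
p_hi, r_hi = precision_recall_at(y_true, scores, 0.8)   # precision 1.0, recall 0.5
# Low threshold: more positives caught, more false alarms.
p_lo, r_lo = precision_recall_at(y_true, scores, 0.3)   # precision 2/3, recall 1.0
```

Raising the threshold from 0.3 to 0.8 lifts precision from 2/3 to 1.0 while cutting recall from 1.0 to 0.5, which is the trade-off in miniature.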

For growth teams using ML models, choosing between precision and recall has direct business impact. A churn prediction model with high recall catches nearly every at-risk customer but may waste outreach resources on false alarms. A lead scoring model with high precision ensures sales teams only contact likely converters but may miss some viable leads. The optimal trade-off is determined by the relative costs of false positives versus false negatives in your specific application.
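One way to operationalize "relative costs of false positives versus false negatives" is to sweep thresholds and pick the one that minimizes expected error cost. This is a hypothetical sketch; the cost values and data are assumptions, not from the original:

```python
def best_threshold(y_true, scores, cost_fp, cost_fn):
    """Pick the score threshold minimizing cost_fp * FP + cost_fn * FN."""
    best_t, best_cost = None, float("inf")
    for t in sorted(set(scores)):
        preds = [1 if s >= t else 0 for s in scores]
        fp = sum(1 for y, p in zip(y_true, preds) if y == 0 and p == 1)
        fn = sum(1 for y, p in zip(y_true, preds) if y == 1 and p == 0)
        cost = cost_fp * fp + cost_fn * fn
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t

y_true = [1, 1, 1, 0, 1, 0, 0, 0]
scores = [0.95, 0.85, 0.7, 0.6, 0.4, 0.35, 0.2, 0.1]

# Fraud-like setting: a missed positive is 10x worse -> low threshold, high recall.
t_recall = best_threshold(y_true, scores, cost_fp=1, cost_fn=10)
# Recommendation-like setting: a false alarm is 10x worse -> high threshold, high precision.
t_precision = best_threshold(y_true, scores, cost_fp=10, cost_fn=1)
```

With these assumed costs, the recall-weighted setting selects a lower threshold than the precision-weighted one, mirroring the churn versus lead-scoring contrast above.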
