Data Normalization

The process of organizing data to reduce redundancy and improve integrity through a series of normal forms, or the statistical process of scaling numeric features to a standard range for machine learning.

Data normalization has two meanings depending on context. In database design, normalization reduces redundancy by decomposing tables into smaller, related tables. First Normal Form eliminates repeating groups. Second Normal Form removes partial dependencies. Third Normal Form removes transitive dependencies. The goal is a schema where each fact is stored once, preventing update anomalies.
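To make the decomposition concrete, here is a minimal sketch in Python of moving a table toward Third Normal Form. The orders/customers schema and the field names are illustrative, not from the article: customer attributes that repeat on every order row are split into their own table keyed by customer ID, so each fact is stored once.

```python
# A denormalized orders table: the customer's city repeats on every row,
# so changing it would require updating many rows (an update anomaly).
orders_denormalized = [
    {"order_id": 1, "customer_id": 10, "customer_city": "Lyon", "item": "lamp"},
    {"order_id": 2, "customer_id": 10, "customer_city": "Lyon", "item": "desk"},
    {"order_id": 3, "customer_id": 11, "customer_city": "Oslo", "item": "chair"},
]

# Decompose: customer facts go into a table keyed by customer_id,
# and orders keep only a reference to that key.
customers = {}
orders = []
for row in orders_denormalized:
    customers[row["customer_id"]] = {"city": row["customer_city"]}
    orders.append({
        "order_id": row["order_id"],
        "customer_id": row["customer_id"],
        "item": row["item"],
    })

# Updating a customer's city now touches exactly one row.
customers[10]["city"] = "Paris"
```

The same fix in a relational database would be a `CUSTOMERS` table plus a foreign key on `ORDERS`; the dictionary stands in for the keyed table.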

In machine learning, normalization scales numeric features to comparable ranges. Min-max normalization scales values to [0, 1]. Z-score normalization (standardization) transforms features to have mean 0 and standard deviation 1. This prevents features with large numeric ranges (like salary in thousands) from dominating features with small ranges (like age in tens) in distance-based algorithms.
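Both techniques are short enough to sketch directly. The sample salary and age values below are illustrative, not from the article; the functions implement the two formulas as described, min-max to [0, 1] and z-score to mean 0 and standard deviation 1.

```python
def min_max_normalize(values):
    """Scale values linearly so the minimum maps to 0 and the maximum to 1."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def z_score_normalize(values):
    """Shift and scale values to mean 0 and (population) standard deviation 1."""
    n = len(values)
    mean = sum(values) / n
    std = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [(v - mean) / std for v in values]

# Illustrative features with very different ranges.
salaries = [40_000, 55_000, 70_000, 120_000]
ages = [22, 35, 48, 61]

scaled_salaries = min_max_normalize(salaries)   # all values in [0, 1]
standardized_ages = z_score_normalize(ages)     # mean 0, std 1
```

After scaling, a Euclidean distance between two rows weighs salary and age comparably instead of being dominated by the raw salary magnitudes.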

Both meanings are relevant for AI teams. Database normalization in operational systems keeps data clean and consistent at the source. Feature normalization in ML pipelines ensures no feature dominates simply because of its units. Choosing the right technique depends on the algorithm (gradient-based models such as neural networks train better on scaled inputs, while tree-based models are largely scale-invariant) and the data distribution (min-max is highly sensitive to outliers because it depends on the extreme values, whereas z-score dampens, but does not eliminate, their influence).
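The outlier difference can be seen directly. In this sketch (the values are illustrative, not from the article), one extreme value sets the min-max range, compressing the typical values into a sliver near 0, while z-scores leave them with measurably more spread.

```python
# Four typical values plus one extreme outlier.
values = [10, 12, 11, 13, 500]

# Min-max: the outlier defines the range, so typical values end up near 0.
lo, hi = min(values), max(values)
min_max = [(v - lo) / (hi - lo) for v in values]

# Z-score: the outlier inflates mean and std, but typical values
# are not crushed as severely as under min-max.
mean = sum(values) / len(values)
std = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
z = [(v - mean) / std for v in values]
```

Here the four typical values occupy under 1% of the min-max [0, 1] range, which is why robust alternatives (e.g. scaling by quantiles) are sometimes used when heavy outliers are expected.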
