Dimensionality Reduction

Techniques that reduce the number of dimensions in high-dimensional data while preserving meaningful structure, used for visualization, compression, and noise removal.

Embeddings live in high-dimensional spaces — 768, 1536, or even 3072 dimensions. Dimensionality reduction compresses these into lower-dimensional representations (2D for visualization, 256D for efficiency) while preserving the relative distances between points as much as possible.
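As a concrete check of that distance-preservation goal, here is a minimal sketch (synthetic low-rank data and a NumPy-only PCA; all names and sizes are illustrative, not from any particular model) that reduces 768-dimensional vectors and compares pairwise distances before and after:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "embeddings": 500 points in 768D with intrinsic rank ~32.
X = rng.normal(size=(500, 32)) @ rng.normal(size=(32, 768))
X += 0.01 * rng.normal(size=X.shape)  # small noise

# PCA via SVD: center the data, then project onto the top-k components.
k = 32
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
X_low = Xc @ Vt[:k].T  # shape (500, 32)

def pairwise_dists(A):
    # Euclidean distances between all pairs of rows.
    sq = np.sum(A**2, axis=1)
    d2 = np.clip(sq[:, None] + sq[None, :] - 2 * A @ A.T, 0, None)
    return np.sqrt(d2)

upper = np.triu_indices(len(X), k=1)
d_full = pairwise_dists(X)[upper]
d_low = pairwise_dists(X_low)[upper]

# On low-rank data the relative distances survive almost perfectly;
# real embeddings reduced to 2D lose much more than this.
corr = np.corrcoef(d_full, d_low)[0, 1]
```

Because this toy data has low intrinsic dimension, the correlation between the two distance sets comes out near 1; how much structure survives on real embeddings depends on how much variance the kept components capture.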

Common techniques include PCA (Principal Component Analysis) for linear reduction, t-SNE for 2D/3D visualization of clusters, and UMAP for preserving both local and global structure. Matryoshka embeddings (supported by models like OpenAI's text-embedding-3) offer a different approach: the model is trained so that the first N dimensions are a valid lower-dimensional embedding, letting you truncate without a separate reduction step.
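The Matryoshka-style truncation step can be sketched in a few lines of NumPy. Note this shows only the mechanics (slice, then re-normalize); it yields a useful embedding only when the model was trained for it, and the 1536/256 sizes here are illustrative, not tied to any specific API:

```python
import numpy as np

def truncate_embedding(v, dims):
    """Keep the first `dims` dimensions and re-normalize to unit length,
    so cosine similarity still behaves as expected after truncation."""
    t = np.asarray(v, dtype=np.float32)[:dims]
    return t / np.linalg.norm(t)

rng = np.random.default_rng(1)
full = rng.normal(size=1536).astype(np.float32)  # stand-in for a model embedding

short = truncate_embedding(full, 256)  # shape (256,), unit norm
```

Re-normalizing matters: after slicing, the vector's norm shrinks, and downstream cosine or dot-product similarity assumes unit-length vectors.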

For growth teams, dimensionality reduction has practical applications: visualizing user segments to understand behavioral clusters, compressing embeddings to cut vector database storage costs and speed up search, and removing noisy dimensions that hurt downstream model performance. The trade-off is always information loss; the question is whether the speed and cost savings justify the accuracy you give up.
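The storage side of that trade-off is easy to quantify. A back-of-envelope sketch, assuming a hypothetical one-million-vector corpus stored as float32:

```python
# Storage for 1M float32 vectors at full vs. reduced dimensionality.
n_vectors = 1_000_000          # hypothetical corpus size
bytes_per_float = 4            # float32

full_dims, reduced_dims = 1536, 256
full_gb = n_vectors * full_dims * bytes_per_float / 1e9
reduced_gb = n_vectors * reduced_dims * bytes_per_float / 1e9

print(f"{full_gb:.2f} GB -> {reduced_gb:.2f} GB "
      f"({full_dims // reduced_dims}x smaller)")
# 6.14 GB -> 1.02 GB (6x smaller)
```

The same 6x factor applies to index memory and, roughly, to distance-computation cost during search, which is why truncation is often worth a small drop in retrieval quality.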
