
Bayesian Inference

A statistical framework that updates probability estimates as new evidence becomes available, combining prior beliefs with observed data to produce posterior probability distributions over hypotheses.

Bayesian inference starts with a prior distribution representing your belief before seeing data, then updates that belief through Bayes' theorem — the posterior is proportional to the likelihood of the observed data times the prior. Unlike frequentist methods, which provide point estimates and p-values, Bayesian methods produce full probability distributions that answer questions like "what is the probability that variant B is better than variant A?"
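For a conversion rate, this update has a simple closed form: a Beta prior combined with binomial data yields a Beta posterior. The sketch below illustrates that conjugate update with hypothetical counts (the function name and numbers are not from the text):

```python
# Conjugate Beta-Binomial update: a Beta(alpha, beta) prior over a
# conversion rate, combined with binomial data (successes, failures),
# yields a Beta(alpha + successes, beta + failures) posterior.
def update_beta(alpha_prior, beta_prior, successes, failures):
    return alpha_prior + successes, beta_prior + failures

# Start from a uniform Beta(1, 1) prior and observe 12 conversions
# in 100 trials (illustrative numbers).
alpha_post, beta_post = update_beta(1, 1, 12, 88)

# Posterior mean of the conversion rate: alpha / (alpha + beta).
posterior_mean = alpha_post / (alpha_post + beta_post)
print(alpha_post, beta_post, round(posterior_mean, 3))  # 13 89 0.127
```

The posterior is itself a distribution, so the same two parameters also give credible intervals and tail probabilities, not just a point estimate.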

The Bayesian approach is particularly natural for A/B testing. Instead of waiting for a fixed sample size and then declaring significance, Bayesian methods continuously update the probability that each variant is best. You can check results at any time without inflating error rates, and the output is intuitive: "there is a 94% probability that variant B improves conversion by 2-5%."

For growth teams, Bayesian methods offer practical advantages. They handle small sample sizes better than frequentist methods, provide probabilistic answers that align with business decision-making, and support more flexible experimental designs. Platforms like Optimizely and VWO now offer Bayesian analysis alongside traditional frequentist statistics.
