
Few-Shot Learning

A prompting technique where a small number of input-output examples are included in the prompt to guide the model's behavior on a specific task, improving consistency without fine-tuning.

Few-shot learning sits between zero-shot prompting and full fine-tuning on the effort-quality spectrum. By including 2-10 examples of desired input-output pairs directly in your prompt, you show the model exactly what you want. This often yields noticeably more consistent outputs than zero-shot prompting, with no training required.
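As a sketch of what this looks like in practice, the snippet below assembles a few-shot prompt for a sentiment-labeling task. The examples, labels, and formatting are illustrative assumptions, not a fixed convention; the resulting string would be sent to whichever model API you use.

```python
# Illustrative input-output pairs; in practice these come from your own task data.
EXAMPLES = [
    ("The battery died after two hours.", "negative"),
    ("Setup took thirty seconds. Flawless.", "positive"),
    ("It works, I guess.", "neutral"),
]

def build_few_shot_prompt(examples, query):
    """Format the demonstration pairs, then the new input for the model to complete."""
    lines = ["Classify the sentiment of each review.", ""]
    for text, label in examples:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Review: {query}")
    lines.append("Sentiment:")  # the model continues from here
    return "\n".join(lines)

prompt = build_few_shot_prompt(EXAMPLES, "Arrived broken, but support replaced it fast.")
print(prompt)
```

Because the task description, examples, and query are all plain text in one prompt, updating the behavior is just a matter of editing the `EXAMPLES` list.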

The technique leverages in-context learning, where the model treats the examples as implicit training data for the current inference pass. Choosing good examples matters enormously: they should cover the diversity of your input space, demonstrate edge cases, and illustrate the exact output format you expect. The order of examples can also affect performance, with models sometimes being biased toward the format of the most recent example.
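One simple way to act on the "cover the diversity of your input space" advice is to select examples so that every output label is represented before any label is repeated. The helper below is a minimal sketch of that idea (the greedy round-robin strategy and the `pool` structure are assumptions for illustration, not a standard algorithm):

```python
from collections import OrderedDict

def select_diverse_examples(pool, k):
    """Greedily pick up to k (text, label) pairs, cycling through labels
    so each distinct label is covered before any label repeats."""
    by_label = OrderedDict()
    for text, label in pool:
        by_label.setdefault(label, []).append((text, label))
    selected = []
    while len(selected) < k and any(by_label.values()):
        for items in by_label.values():
            if items and len(selected) < k:
                selected.append(items.pop(0))
    return selected

pool = [
    ("Great value.", "positive"),
    ("Loved it.", "positive"),
    ("Broke on day one.", "negative"),
    ("It's fine.", "neutral"),
]
picked = select_diverse_examples(pool, 3)  # one example per label
```

Since example order can bias the model toward the most recent format, it also helps to fix the selected order deterministically rather than shuffling per request.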

Few-shot learning is the sweet spot for many production applications. It requires no model training, works immediately, and can be updated by simply changing the examples in your prompt. The main limitation is context window space: each example consumes tokens, reducing the room available for the actual input. For tasks with long inputs, this creates tension between example quality and input length that may push you toward fine-tuning.
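The context-window tension above can be managed by budgeting tokens explicitly: keep adding examples, most important first, until a fixed budget is spent. The sketch below uses a crude characters-per-token heuristic as a stand-in; a real deployment should count tokens with the model's actual tokenizer.

```python
def rough_token_count(text):
    # Crude assumption: roughly 4 characters per token for English text.
    # Replace with the model's tokenizer for accurate accounting.
    return max(1, len(text) // 4)

def fit_examples(examples, budget):
    """Keep formatted examples (most important first) until the token budget is spent."""
    kept, used = [], 0
    for example in examples:
        cost = rough_token_count(example)
        if used + cost > budget:
            break
        kept.append(example)
        used += cost
    return kept, used

formatted = ["Review: " + "x" * 32, "Review: " + "y" * 32, "Review: " + "z" * 32]
kept, used = fit_examples(formatted, budget=25)
```

Whatever budget remains after the examples is what is left for the actual input, which is the tradeoff that can eventually push long-input tasks toward fine-tuning.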

Related Terms