Few-shot prompting
- Definition
- Providing a model with a small number of examples in the prompt to guide its behavior, without any fine-tuning. Few-shot is a fast, low-cost way to adapt a general model to a specific task.
- Why it matters
- Few-shot prompting is the fastest path from idea to working prototype. Instead of spending weeks on fine-tuning, you drop 3-5 examples into your prompt and the model adapts instantly. This makes AI accessible to teams without ML engineering expertise. The strategic advantage is speed: you can test ten different AI features in a day with few-shot prompting, while fine-tuning each would take weeks. The limitations are context-window cost (examples consume tokens on every call) and reliability (few-shot is less consistent than fine-tuning for production workloads). Smart teams use few-shot for rapid prototyping and graduate to fine-tuning only once they have validated the use case.
- In practice
- GPT-3's original paper (Brown et al., 2020) demonstrated that few-shot prompting could match fine-tuned models on many benchmarks. In practice, companies use few-shot for everything from sentiment classification to data extraction. A common pattern: include 5 examples of the desired input-output format in the system prompt, then process new inputs. Anthropic's Claude documentation recommends 3-5 examples for most tasks. The technique is often combined with chain-of-thought: showing the model examples of step-by-step reasoning produces better results than showing only input-output pairs.
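The pattern above can be sketched in a few lines. This is a minimal illustration of few-shot prompt assembly for sentiment classification; the example reviews, labels, and helper name are hypothetical, and the resulting string would be passed as the prompt (or system prompt) to whatever model API you use.

```python
# Hypothetical labeled examples to include in the prompt.
EXAMPLES = [
    ("The battery lasts all day and the screen is gorgeous.", "positive"),
    ("Stopped working after two weeks. Avoid.", "negative"),
    ("It does the job, nothing more, nothing less.", "neutral"),
]

def build_few_shot_prompt(new_input: str) -> str:
    """Assemble a prompt: instruction, labeled examples, then the new input."""
    lines = [
        "Classify the sentiment of each review as positive, negative, or neutral.",
        "",
    ]
    for text, label in EXAMPLES:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    # Ending with a bare "Sentiment:" cues the model to complete with a label.
    lines.append(f"Review: {new_input}")
    lines.append("Sentiment:")
    return "\n".join(lines)

prompt = build_few_shot_prompt("Shipping was fast but the case cracked on arrival.")
print(prompt)
```

To fold in chain-of-thought, each example's `Sentiment:` line would instead show a short reasoning trace before the final label, so the model imitates the step-by-step format as well as the output.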
Related terms
Zero-shot prompting
Asking a model to perform a task with no examples, relying entirely on its pre-trained knowledge and instruction-following ability. Zero-shot capability is a key measure of model generality and usability.
In-context learning
A model's ability to learn new tasks from examples provided in the prompt, without any weight updates. In-context learning is what makes few-shot and zero-shot prompting work and is a defining feature of large language models.
Prompt engineering
The practice of crafting inputs to AI models to elicit desired outputs. Prompt engineering has become a critical skill and even a job title, though its importance may decrease as models improve at understanding intent.
Fine-tuning
The process of continuing to train a pre-trained model on a smaller, task-specific dataset. Fine-tuning customizes model behavior for specific domains or formats and is a key part of most enterprise AI deployments.