Prompt Chaining
A pattern where the output of one language model call becomes the input for the next, creating a pipeline of specialized prompts that together accomplish a complex task. Prompt chaining offers more control over quality, cost, and debuggability than single-prompt approaches.
Prompt chaining breaks complex AI tasks into a sequence of focused steps, each handled by a specialized prompt. Instead of asking one massive prompt to research, analyze, write, and format, you chain smaller prompts: one researches, its output feeds an analysis prompt, which feeds a writing prompt, which feeds a formatting prompt. Each link in the chain can use different models, temperatures, or system prompts optimized for its specific task.
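The research-analyze-write-format pipeline above can be sketched in a few lines of Python. This is a minimal illustration of the structure, not a specific library's API: `call_model` is a stand-in for a real LLM client call (OpenAI, Anthropic, etc.), stubbed here so the chain is runnable on its own.

```python
# Minimal sketch of a four-step prompt chain. `call_model` is a placeholder
# for a real LLM API call; each step could use a different model,
# temperature, or system prompt optimized for its task.

def call_model(system_prompt: str, user_input: str) -> str:
    """Placeholder for a real LLM API call."""
    return f"[{system_prompt}] {user_input}"

def run_chain(topic: str) -> str:
    # Each link's output becomes the next link's input.
    research = call_model("Research the topic and list key facts.", topic)
    analysis = call_model("Analyze these facts and extract insights.", research)
    draft = call_model("Write a short article from this analysis.", analysis)
    formatted = call_model("Format the article as clean Markdown.", draft)
    return formatted
```

Because the structure is fixed, each intermediate result can be logged and tested independently, which is what makes chains easier to debug than open-ended agent loops.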
For product teams, prompt chaining is often more reliable and cost-effective than complex single prompts or full agent loops. Chains are deterministic in structure (the steps are predefined), making them easier to debug and test than open-ended agent reasoning. They also let you use cheaper, faster models for simple steps and reserve expensive models for steps requiring deep reasoning. The limitation is that chains cannot dynamically adapt their structure based on intermediate results. When you need that flexibility, upgrade to a full agent loop. Many production AI features are best implemented as chains with optional agent-powered escape hatches for edge cases.
Related Terms
Model Context Protocol (MCP)
An open standard that defines how AI models connect to external tools, data sources, and services through a unified interface. MCP enables agents to dynamically discover and invoke capabilities without hardcoded integrations.
Tool Use
The ability of an AI model to invoke external functions, APIs, or services during a conversation to perform actions beyond text generation. Tool use transforms language models from passive responders into active problem solvers.
Function Calling
A model capability where the AI generates structured JSON arguments for predefined functions rather than free-form text. Function calling provides a reliable bridge between natural language understanding and programmatic execution.
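The shape of function calling can be shown with a small runnable sketch. The schema below follows the common JSON Schema convention used by major providers, but the specific names (`get_weather`, the simulated model output) are illustrative assumptions, and the model response is stubbed rather than fetched from a real API.

```python
import json

# A predefined function schema the model is told about. Providers typically
# accept JSON Schema for the parameters, as sketched here.
get_weather_schema = {
    "name": "get_weather",
    "description": "Get the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

# Instead of free-form text, the model emits structured JSON arguments
# (simulated here):
model_output = '{"name": "get_weather", "arguments": {"city": "Berlin"}}'

def get_weather(city: str) -> str:
    """Stubbed local implementation the model's call is routed to."""
    return f"Sunny in {city}"

def dispatch(raw: str) -> str:
    """Parse the model's JSON and invoke the matching local function."""
    call = json.loads(raw)
    if call["name"] == "get_weather":
        return get_weather(**call["arguments"])
    raise ValueError(f"Unknown function: {call['name']}")

print(dispatch(model_output))  # → Sunny in Berlin
```

The structured arguments are what make the bridge reliable: the caller validates and executes JSON, never free-form prose.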
Agentic Workflow
A multi-step process where an AI agent autonomously plans, executes, and iterates on tasks using tools, reasoning, and feedback loops. Agentic workflows go beyond single-turn interactions to accomplish complex goals.
ReAct Pattern
An agent architecture that interleaves Reasoning and Acting steps, where the model thinks about what to do next, takes an action, observes the result, and repeats. ReAct combines chain-of-thought reasoning with tool use in a unified loop.
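The think-act-observe-repeat loop can be sketched as below. The model is stubbed with a scripted two-turn response and the `search[...]` action syntax is an illustrative convention, not a particular framework's API; the point is the loop structure.

```python
# Minimal sketch of a ReAct-style loop: the model emits a Thought and an
# Action, the runtime executes the action and feeds the Observation back,
# repeating until the model produces a Final Answer.

def tool_search(query: str) -> str:
    return f"result for '{query}'"  # stand-in for a real search tool

def stub_model(history: list[str]) -> str:
    # A real ReAct agent prompts the LLM with the full history each turn;
    # here two turns are scripted so the loop runs on its own.
    if not any(line.startswith("Observation:") for line in history):
        return "Thought: I need data.\nAction: search[capital of France]"
    return "Thought: I have enough.\nFinal Answer: Paris"

def react_loop(question: str, max_steps: int = 5) -> str:
    history = [f"Question: {question}"]
    for _ in range(max_steps):
        step = stub_model(history)
        history.append(step)
        if "Final Answer:" in step:
            return step.split("Final Answer:")[1].strip()
        # Parse the action, run the tool, append the observation.
        query = step.split("Action: search[")[1].rstrip("]")
        history.append(f"Observation: {tool_search(query)}")
    return "No answer within step budget"
```

Unlike a prompt chain, the number and order of steps here is decided by the model at runtime, which is exactly the dynamic structure chains give up.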
Chain of Thought
A prompting technique that instructs the model to break down complex problems into sequential reasoning steps before producing a final answer. Chain of thought significantly improves accuracy on math, logic, and multi-step tasks.
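A chain-of-thought prompt is ordinary text with an instruction to reason before answering. The exact wording below is an illustrative assumption; any phrasing that elicits intermediate steps serves the same purpose.

```python
# Sketch of a chain-of-thought prompt builder. The instruction asks the
# model to show sequential reasoning before its final answer.

def build_cot_prompt(question: str) -> str:
    return (
        "Solve the problem. Show your reasoning step by step, "
        "then give the final answer on its own line.\n\n"
        f"Question: {question}"
    )

prompt = build_cot_prompt(
    "If a train travels 60 km in 40 minutes, what is its speed in km/h?"
)
```

Chain of thought operates within a single call; prompt chaining spreads reasoning across calls, and the two are often combined, with individual chain links using chain-of-thought internally.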