AI Agents
05·AI Agents·updated 2026-04-19

Agentic Design Patterns (5 core + 7 multi-agent)

Watch or read first

TL;DR

Daily Dose DS lists 5 core agentic patterns (Reflection, Tool Use, ReAct, Planning, Multi-Agent) and 7 multi-agent orchestration patterns (Parallel, Sequential, Loop, Router, Aggregator, Network, Hierarchical). Real systems mix them. Picking the right pattern determines cost, latency, and reliability.

The historical problem

As agents got more capable in 2023-2024, teams repeated the same mistakes:

  • One mega-agent trying to do everything poorly
  • Multi-agent systems where agents duplicated work
  • No escape from infinite loops
  • Reflection added without evaluating whether it helped

The field crystallized a handful of patterns, each solving a specific failure mode. Knowing them saves re-inventing bad versions.

The 5 core agentic patterns (Daily Dose DS)

1. Reflection pattern

The agent reviews its own work, spots mistakes, iterates until the output is acceptable.

draft --> critique --> revise --> critique --> ... --> final

Implementation: two prompts, a generator and a reviewer; or one agent with two phases.

Example: a writing agent drafts, then critiques tone and structure, then revises.

Best for: writing, code review, and any task where acceptable quality is hard to reach in one shot.
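A minimal sketch of the draft-critique-revise loop. `call_llm` is a stand-in stub (a real agent would call a model API); the string protocol and names are illustrative only.

```python
def call_llm(prompt: str) -> str:
    """Stub model call; a real agent would hit an LLM API here."""
    if prompt.startswith("CRITIQUE:"):
        # Reviewer approves once the draft has been revised at least once.
        return "OK" if "[revised]" in prompt else "Too vague; add detail."
    if prompt.startswith("REVISE:"):
        return prompt.removeprefix("REVISE:") + " [revised]"
    return "draft on " + prompt.removeprefix("DRAFT:")

def reflection_loop(task: str, max_iters: int = 3) -> str:
    draft = call_llm(f"DRAFT:{task}")
    for _ in range(max_iters):                # hard cap avoids infinite refinement
        critique = call_llm(f"CRITIQUE:{draft}")
        if critique == "OK":                  # reviewer accepts the draft
            break
        draft = call_llm(f"REVISE:{draft}")   # revise (a real prompt would include the critique text)
    return draft
```

The max-iteration cap matters as much as the critique itself; without it, reflection loops can run forever.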

2. Tool use pattern

The LLM calls external tools (APIs, databases, code execution) to gather information or take action.

query --> LLM decides tool --> tool runs --> LLM uses result --> answer

See function calling for the primitive, and the agent building blocks section for tools.

Best for: tasks needing fresh data, computation, world interaction.
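The query -> decide -> run -> answer loop, sketched with a tool registry. `decide_tool` stubs the model's selection step (real systems use the provider's function-calling API); `get_weather` and all names here are hypothetical.

```python
def get_weather(city: str) -> str:
    """Stand-in for a real weather API."""
    return f"sunny in {city}"

TOOLS = {"get_weather": get_weather}     # registry the model can choose from

def decide_tool(query: str) -> dict:
    """Stub for the model's tool-selection step; real systems return
    structured JSON from a function-calling API."""
    return {"tool": "get_weather", "args": {"city": query.split()[-1]}}

def answer(query: str) -> str:
    call = decide_tool(query)
    result = TOOLS[call["tool"]](**call["args"])   # run the chosen tool
    return f"Based on the tool result: {result}"   # LLM would synthesize here
```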

3. ReAct pattern (Reason + Act)

Combines reasoning and tool use in a loop. Thought -> Action -> Observation -> Thought -> ... until Answer.

See react pattern for the deep dive.

Best for: general-purpose agents. Default starting point.
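The loop shape, reduced to a stub. `model` stands in for an LLM that emits either an `ACT:` or `ANSWER:` line given the scratchpad; the text protocol is an assumption for illustration, not any framework's format.

```python
def model(scratchpad: str) -> str:
    """Stub policy: acts once, then answers. A real ReAct agent prompts
    an LLM with the full scratchpad each turn."""
    if "Observation:" not in scratchpad:
        return "ACT:search:agent patterns"
    return "ANSWER:summary of the observation"

def run_tool(name: str, arg: str) -> str:
    return f"top hit for {arg}"              # stub search tool

def react(question: str, max_steps: int = 5) -> str:
    scratchpad = f"Question: {question}"
    for _ in range(max_steps):               # cap the Thought/Action loop
        step = model(scratchpad)
        if step.startswith("ANSWER:"):
            return step.removeprefix("ANSWER:")
        _, tool, arg = step.split(":", 2)
        scratchpad += f"\nObservation: {run_tool(tool, arg)}"  # feed result back
    return "no answer within budget"
```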

4. Planning pattern

Instead of solving step-by-step reactively, the agent creates a full plan upfront:

  • Subdivide the task
  • List objectives
  • Sequence steps
  • Execute each step

goal --> plan (steps 1..N) --> execute 1 --> execute 2 --> ... --> answer

In CrewAI, set planning=True.

Best for: tasks with known structure, predictable workflows, where flexibility is not critical.
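Plan-then-execute as a sketch: the full plan is produced upfront, then each step runs in order. Both functions are stubs standing in for model calls (in practice the planner can be a stronger model than the executor).

```python
def make_plan(goal: str) -> list[str]:
    """Stub planner; a real agent would ask the model for a numbered plan."""
    return [f"outline {goal}", f"draft {goal}", f"polish {goal}"]

def execute(step: str) -> str:
    return f"done: {step}"                    # stub executor

def plan_and_execute(goal: str) -> list[str]:
    plan = make_plan(goal)                    # full plan upfront...
    return [execute(step) for step in plan]   # ...then execute steps in order
```

Contrast with ReAct: the plan here is fixed; a robust agent should re-plan when a step fails rather than execute a stale plan.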

5. Multi-agent pattern

Multiple specialized agents, each with a role and tools, collaborate to achieve the goal. Tasks get delegated.

         +---------+
         | Manager |
         +---------+
         /    |    \
   +-----+ +------+ +------+
   | A1  | |  A2  | |  A3  |
   +-----+ +------+ +------+

Best for: tasks benefiting from specialization or parallelism.

See the 7 multi-agent patterns below for the topologies.

The 7 multi-agent orchestration patterns (Daily Dose DS)

Once you have multiple agents, how do they collaborate? Seven recurring topologies.

1. Parallel

All agents tackle different subtasks at once. Outputs merge.

query --> [ Agent A | Agent B | Agent C ] --> merge --> result

Best for: high-throughput pipelines (document parsing, feature extraction).
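Fan-out/merge in a few lines, using a thread pool; the three agents are stubs. Real agents doing I/O-bound LLM calls parallelize well with threads or async.

```python
from concurrent.futures import ThreadPoolExecutor

def parse_agent(q: str) -> str: return f"parsed:{q}"      # stub specialists
def extract_agent(q: str) -> str: return f"extracted:{q}"
def tag_agent(q: str) -> str: return f"tagged:{q}"

def fan_out(query: str) -> str:
    agents = (parse_agent, extract_agent, tag_agent)
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(agent, query) for agent in agents]  # run at once
        return " | ".join(f.result() for f in futures)             # merge step
```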

2. Sequential

Agents work in a pipeline, each adding value.

input --> Agent A --> Agent B --> Agent C --> output

Best for: ETL chains, code generate -> review -> deploy, multi-step reasoning.
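The pipeline is just function composition over a running artifact; stage names are illustrative stubs.

```python
def research(text: str) -> str: return text + " +facts"    # stub stages
def review(text: str) -> str: return text + " +review"
def layout(text: str) -> str: return text + " +layout"

def pipeline(text: str, stages=(research, review, layout)) -> str:
    for stage in stages:      # each agent adds value to the running artifact
        text = stage(text)
    return text
```

Note the latency implication from the pitfalls below does not need code to see: total latency is the sum of the stages.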

3. Loop

Agents continuously refine their own output until a quality threshold is met.

output -> critique -> refine -> critique -> ... -> done

Best for: creative iteration, proofreading, quality-sensitive text generation.
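The loop with both stopping rules it needs: a quality threshold and a hard iteration cap. `refine` and `quality` are stubs (in practice a model call and a judge LLM or rubric scorer).

```python
def refine(text: str) -> str:
    return text + "!"                    # stub refinement step

def quality(text: str) -> int:
    return len(text)                     # stub scorer (judge LLM in practice)

def refine_loop(text: str, threshold: int = 8, max_iters: int = 10) -> str:
    for _ in range(max_iters):           # stopping rule 1: hard iteration cap
        if quality(text) >= threshold:   # stopping rule 2: quality threshold
            break
        text = refine(text)
    return text
```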

4. Router

A controller agent routes each task to the best specialist.

query --> Router --> { FinAgent, LegalAgent, OpsAgent }

Best for: heterogeneous query types, MCP/A2A-style systems.
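A router reduced to a dispatch table. Keyword matching stands in for the classifier; a production router would classify intent with an LLM call. All agent names are hypothetical.

```python
def fin_agent(q: str) -> str: return f"finance handled: {q}"
def legal_agent(q: str) -> str: return f"legal handled: {q}"
def ops_agent(q: str) -> str: return f"ops handled: {q}"

SPECIALISTS = {"invoice": fin_agent, "contract": legal_agent}

def route(query: str) -> str:
    """Keyword routing for the sketch; real routers classify with an LLM."""
    for keyword, agent in SPECIALISTS.items():
        if keyword in query.lower():
            return agent(query)
    return ops_agent(query)              # default route for unmatched intents
```

The default route matters: without it, unclassified queries have nowhere to go.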

5. Aggregator

Many agents produce partial results; a main agent aggregates them.

query --> { A, B, C, D } --> Aggregator --> consensus result

Best for: RAG retrieval fusion, voting systems, ensembles.
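Aggregation sketched as majority voting; a real aggregator might instead use an LLM judge or score-weighted fusion. The three agents are stubs returning canned answers.

```python
from collections import Counter

def agent_a(q: str) -> str: return "blue"    # stub answerers
def agent_b(q: str) -> str: return "blue"
def agent_c(q: str) -> str: return "green"

def aggregate(query: str) -> str:
    votes = [agent(query) for agent in (agent_a, agent_b, agent_c)]
    return Counter(votes).most_common(1)[0][0]   # majority vote as consensus
```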

6. Network

No hierarchy. Agents talk peer-to-peer, sharing context dynamically.

A <--> B
 \    /
  \  /
   C

Best for: simulations, multi-agent games, collective reasoning research.

7. Hierarchical

A top-level planner delegates to workers, tracks progress, makes final calls.

         Planner
        /   |   \
    Worker Worker Worker

Best for: large complex tasks where you need coordination and a single decision-maker.
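Planner-plus-workers in miniature: the planner (a stub here, a reasoning model in practice) assigns subtasks to named workers, then combines their results. Roles and names are illustrative.

```python
def planner(goal: str) -> list[tuple[str, str]]:
    """Stub top-level planner: assigns subtasks to named workers."""
    return [("research", goal), ("summarize", goal), ("format", goal)]

WORKERS = {
    "research": lambda g: f"facts on {g}",       # stub worker agents
    "summarize": lambda g: f"summary of {g}",
    "format": lambda g: f"formatted report on {g}",
}

def hierarchical(goal: str) -> str:
    results = [WORKERS[role](task) for role, task in planner(goal)]  # delegate
    return " | ".join(results)   # planner assembles and makes the final call
```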

The pattern picker question

Daily Dose DS says: don't pick the pattern that "looks coolest". Pick the one that minimizes friction between agents. Friction = duplicated work, waiting, unclear handoffs.

Three questions:

  1. Is work never duplicated between agents?
  2. Does every agent know when to act and when to wait?
  3. Does the system collectively feel smarter than any individual part?

If any answer is no, simplify.

Relevance today (2026)

The 5 core patterns are stable

The reflection/tool-use/ReAct/planning/multi-agent split is widely accepted. Every framework exposes them in some form.

Multi-agent topologies are practice, not theory

In 2024, multi-agent was hype. In 2026, serious teams use specific topologies:

  • Router pattern for customer support (route by intent)
  • Sequential for code pipelines (write -> review -> test -> deploy)
  • Hierarchical for research tasks (planner + workers)
  • Aggregator for consensus (RAG fusion, LLM-as-judge)
  • Parallel for doc processing

Network and Loop are mostly research. Parallel, Sequential, Router, Hierarchical dominate production.

Anthropic's "agents vs workflows" distinction

Anthropic's influential 2024 post "Building Effective Agents" draws the split:

  • Workflows: predefined orchestration, LLM calls at fixed points.
  • Agents: LLM decides the flow.

Many of the 7 patterns are workflows, not agents. Sequential and Parallel often work best as workflows. Router, Aggregator, Hierarchical benefit from agent decision-making. Pick by predictability.

Reflection got less popular

Once reasoning models (o1, R1, Opus 4.5 thinking) internalized reflection, explicit reflection-pattern loops became less useful for simple tasks. Still relevant for complex outputs where you want structured critique (essays, code review).

Planning pattern resurges with reasoning models

Reasoning models are great planners. The Planning pattern + reasoning model is a powerful combination: the model lays out a multi-step plan, then a cheaper model executes each step. Cost-efficient.

ARQ as a meta-pattern

Attentive Reasoning Queries (see reasoning prompting techniques) adds structure to reflection and reasoning. Parlant embeds ARQ inside multiple agent modules.

Critical questions

  • When does multi-agent actually help vs a single smarter agent? (When tasks genuinely parallelize or need distinct expertise. Often one competent agent with good tools beats 5 mediocre agents.)
  • Why is the Loop pattern dangerous? (Infinite refinement. Every loop needs a stopping rule: max iterations, quality threshold, or user approval.)
  • When is Network better than Hierarchical? (Almost never in production. Network is flexible but chaotic. Hierarchical scales better.)
  • Can you mix patterns in one system? (Yes, almost always. A hierarchical system might have parallel workers and a router at the top.)
  • How do you debug multi-agent systems? (Tracing per agent, plus tracing per handoff. Langfuse, LangSmith, Arize.)
  • Should each agent have its own memory? (Often yes for privacy and specialization, with a shared blackboard for coordination.)

Production pitfalls

  • Over-engineering. Five agents where one would do. Start with one. Add agents when clearly needed.
  • No clear handoff protocol. Multi-agent systems without defined "who acts when" degrade into chaos.
  • Reflection without eval. You add reflection and claim "quality went up". Prove it with an A/B test.
  • Sequential latency. Chain of 4 agents = sum of latencies. Think twice before making users wait.
  • Parallel race conditions. Two parallel agents writing to the same resource = corrupted state. Use locks or stage aggregation.
  • Planning brittle on surprise. Plans assume the world is predictable. Agents should re-plan on failure, not execute stale plans.
  • Hierarchical single-point-of-failure. If the manager agent crashes or hallucinates, the whole system fails. Add retries and a human fallback.
  • Duplicated retrieval across agents. Ten agents each retrieve the same docs. Cache retrievals.
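The parallel race-condition pitfall above, made concrete: unsynchronized writers can corrupt shared state, so writes to the shared list are serialized with a lock. All names are illustrative; real systems often stage aggregation instead of sharing mutable state at all.

```python
import threading

results: list[str] = []          # shared resource written by parallel agents
lock = threading.Lock()

def worker(name: str) -> None:
    partial = f"{name}: partial"    # each agent computes independently
    with lock:                      # serialize writes to the shared resource
        results.append(partial)

threads = [threading.Thread(target=worker, args=(n,)) for n in ("A", "B", "C")]
for t in threads:
    t.start()
for t in threads:
    t.join()                        # wait for all writers before reading
```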

Alternatives / Comparisons

Daily Dose DS's 5 + 7 patterns overlap with other taxonomies:

  • Andrew Ng lists 4 patterns: Reflection, Tool Use, Planning, Multi-Agent (omits ReAct as a standalone, folding it in).
  • Anthropic distinguishes Workflows (Prompt chaining, Routing, Parallelization, Orchestrator-workers, Evaluator-optimizer) from Agents. Maps closely onto Daily Dose DS's 7 multi-agent patterns.
  • Academic papers talk about blackboard architectures, mixture of experts.

All taxonomies say roughly the same thing: start simple, add specialization when needed, orchestrate deliberately.

Mental parallels (non-AI)

  • Reflection: peer review in science, or self-edit pass in writing.
  • Tool Use: a detective calling a lab for fingerprint analysis.
  • ReAct: think, act, observe, think again. Any troubleshooter does this.
  • Planning: architects drawing plans before building.
  • Parallel: assembly line stations each doing one thing concurrently.
  • Sequential: a publication pipeline (write, edit, typeset, print).
  • Loop: an artist iterating on a canvas until satisfied.
  • Router: triage nurse in an ER.
  • Aggregator: jury deliberation leading to a verdict.
  • Network: open-source contributors on a project, peer collaboration.
  • Hierarchical: military command structure.

These mappings are useful when explaining architecture to non-technical stakeholders.

Mini-lab

labs/agentic-patterns/ (to create):

  1. Pick a task: "Generate a structured research report on a topic."
  2. Implement 3 variants:
    • Single-agent ReAct (baseline)
    • Hierarchical: 1 planner + 3 workers (researcher, summarizer, formatter)
    • Sequential: researcher -> summarizer -> formatter -> reviewer
  3. Benchmark on 10 topics:
    • Quality (judge LLM on a rubric)
    • Latency
    • Token cost
  4. Add reflection to the worst performer. See if it helps.

Stack: uv, crewai or langgraph, anthropic, langfuse.

Further reading

Canonical

Related in this KB

Frameworks implementing these patterns

agents · design-patterns · multi-agent · reflection · planning · tool-use · react