Prepare for scenario-based certification questions with a study path built around decision rules, exam traps, and fast review loops. This page is optimized for students who need to move between first-pass study, targeted drilling, and last-hour review without losing context.
Pass 1: First read
Read the Rapid Review section and the Domain Map first. Your goal is to learn the cue words that point to the right answer, not to memorize every paragraph.
Pass 2: Domain drilling
Open one domain at a time, focus on the "Know Cold" and "Common Traps" panels, then expand the full notes only after you can explain the decision rules without looking.
Pass 3: Exam rehearsal
Use the Final Drill section, the last-hour checklist, and the code pattern snippet to test recall under time pressure.
Question Decoder
What the exam is usually testing
"Guaranteed", "must always", "enforce", "cannot be bypassed": deterministic enforcement such as hooks or forced tool output.
"Independent subtasks": parallel orchestration.
"Each step needs the previous output": sequential orchestration.
"Premature termination": inspect stop_reason, not iteration limits.
"Runaway agent": use an iteration cap only as a safety net.
"Structured output every time": forced tool_choice with a specific tool.
"Details lost over long tasks": persistent facts or scratchpad files, not repeated summarization.
"Real-time" or "user-facing": not Batch API.
Rapid Review
Highest-frequency decision rules
These are the short answers you should be able to recall instantly before you start deeper study.
| If the question says | Reach for |
| --- | --- |
| "guaranteed" | Deterministic enforcement: hooks for behavior, forced tools for structured output. |
| "independent subtasks" | Parallel orchestration. |
| "needs previous output" | Sequential orchestration. |
| "premature termination" | Check stop_reason. Do not rely on text parsing or the first content block type. |
| "runaway agent" | An iteration cap as a safety net, not as the primary loop control. |
| "share context between agents" | Pass context explicitly. Subagents do not share memory. |
| "wrong tool keeps getting selected" | Improve tool descriptions first. Do not start by reducing tool count. |
| "guaranteed structured output" | Forced tool_choice with a specific tool name. |
| "details disappear over time" | Persistent fact blocks or scratchpad files, not progressive summarization. |
| "search returned nothing" | A valid empty result, not an error, if the tool executed successfully. |
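Several of these rules come down to a single request field. A minimal sketch of forced tool_choice for guaranteed structured output, where the tool definition and model id are illustrative assumptions:

```typescript
// Sketch of a request that guarantees structured output via forced tool use.
// The tool shape and model id below are illustrative assumptions.
const extractTool = {
  name: "record_invoice",
  description: "Record the extracted invoice fields.",
  input_schema: {
    type: "object",
    properties: {
      vendor: { type: "string", description: "Vendor name as written" },
      total: { type: ["number", "null"], description: "Invoice total, or null if absent" },
    },
    required: ["vendor", "total"],
  },
};

const request = {
  model: "claude-sonnet-4-5", // assumed model id
  max_tokens: 1024,
  // type: "tool" plus a specific name forces THIS tool every time;
  // "any" would only force some tool call, not which one.
  tool_choice: { type: "tool", name: "record_invoice" },
  tools: [extractTool],
  messages: [{ role: "user", content: "Extract the invoice fields: ..." }],
};
```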
Study Tracker
Domain map
Mark a domain ready when you can answer the cue and the trap from memory without opening the full notes.
Study this first. It frames how you reason about loop control, deterministic enforcement, multi-agent design, and human escalation.
Know cold
The loop is: send request -> inspect stop_reason -> execute tools or terminate.
tool_use means continue the loop.
end_turn means stop the loop.
Tool results must be appended to conversation history before the next iteration.
Iteration limits are secondary safety bounds only.
Common traps
Parsing natural language like "I'm done."
Using iteration caps as the main stopping mechanism.
Checking only content[0].type == "text".
Assuming subagents share the coordinator's memory.
Open full reference notes
The agentic loop
stop_reason is deterministic and unambiguous. It is the reliable termination signal.
Text can appear alongside tool calls, so text alone does not mean the agent is finished.
Without appending tool results back into history, Claude cannot reason about what the tool returned.
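The append step can be sketched as a pure function. The block ids and content are illustrative; the shapes follow the Messages API convention that tool results arrive as a user turn:

```typescript
// Minimal sketch of the append step: the assistant turn (including its
// tool_use blocks) and a user turn carrying tool_result blocks must both
// land in history before the next request.
type Message = { role: "assistant" | "user"; content: unknown };

function appendToolTurn(
  history: Message[],
  assistantContent: unknown,
  toolResults: { type: "tool_result"; tool_use_id: string; content: string }[],
): Message[] {
  return [
    ...history,
    { role: "assistant", content: assistantContent },
    { role: "user", content: toolResults }, // tool results are sent back as a user turn
  ];
}

const next = appendToolTurn(
  [{ role: "user", content: "What's the weather in Paris?" }],
  [{ type: "tool_use", id: "toolu_01", name: "get_weather", input: { city: "Paris" } }],
  [{ type: "tool_result", tool_use_id: "toolu_01", content: "18°C, clear" }],
);
```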
Orchestration patterns
| Pattern | When to use | Key characteristic |
| --- | --- | --- |
| Sequential | Each step depends on the previous output | A -> B -> C, with state flowing forward |
| Parallel | Subtasks are independent and latency matters | Fan-out, fan-in with no shared state |
| Pipeline | Different stages have different specializations | Assembly-line handoff between stages |
| Dynamic adaptive | Task structure is unknown upfront | The model decides decomposition at runtime |
| Hub-and-spoke | A coordinator delegates to specialists | Central agent plus focused subagents |
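The parallel pattern can be sketched with Promise.all; the subtask function here is a stand-in for a real subagent call:

```typescript
// Fan-out/fan-in sketch for independent subtasks. Each call would be a
// model request in practice; here it is a placeholder async function.
async function runSubtask(topic: string): Promise<string> {
  return `summary of ${topic}`; // stand-in for a real subagent call
}

async function fanOutFanIn(topics: string[]): Promise<string[]> {
  // Independent subtasks share no state, so they can run concurrently.
  return Promise.all(topics.map(runSubtask));
}
```

The fan-in step receives results in input order, which keeps downstream synthesis simple.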
Guardrails hierarchy
| Mechanism | Type | Enforcement | Use for |
| --- | --- | --- | --- |
| System prompt rules | Probabilistic | Model may fail to comply | Style guidance and soft preferences |
| PreToolUse hooks | Deterministic | Code-level, before execution | Blocking dangerous calls and validating parameters |
| PostToolUse hooks | Deterministic | Code-level, after execution | Validating outputs, sanitizing results, and audit logging |
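The deterministic idea behind a PreToolUse hook can be sketched as a gate function that runs before every call. The rule set here is an illustrative assumption, not the SDK's hook API:

```typescript
// Sketch of a pre-tool-use gate: code that inspects every tool call before
// execution and can block it outright. Rules here are illustrative.
type ToolCall = { name: string; input: Record<string, unknown> };

function preToolUseGate(call: ToolCall): { allow: boolean; reason?: string } {
  if (call.name === "bash" && String(call.input.command ?? "").includes("rm -rf")) {
    return { allow: false, reason: "destructive command blocked" };
  }
  if (call.name === "delete_record" && call.input.confirmed !== true) {
    return { allow: false, reason: "deletion requires explicit confirmation" };
  }
  // Unlike a system-prompt rule, this check cannot be "forgotten":
  // it executes as code on every single call.
  return { allow: true };
}
```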
Claude Agent SDK and multi-agent systems
AgentDefinition sets identity, system prompt, and tools.
allowedTools should usually keep an agent to about 4-5 tools.
The task tool delegates work to a subagent in its own context.
Handoffs transfer control but do not carry the sending agent's conversation history.
Subagents do not share memory. All context must be passed explicitly.
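The points above can be captured in the shape of a specialist definition. The field names mirror the notes but are an assumption, not the SDK's exact type:

```typescript
// Illustrative shape of a subagent definition: identity, a focused system
// prompt, and a small allowed-tool list. Field names are an assumption.
const codeReviewer = {
  description: "Reviews diffs for correctness and style",
  prompt: "You are a code reviewer. Report issues with file and line references.",
  // Keep the list to roughly 4-5 tools so selection stays reliable.
  allowedTools: ["read_file", "grep", "list_files", "run_tests"],
};
```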
Human-in-the-loop format
Customer ID: 12847
Summary: Tool calls completed, but the policy exception requires human approval.
Root cause: Confidence fell below threshold after conflicting eligibility checks.
Recommended action: Review the exception and approve or deny the operation.
Error recovery and resilience
| Strategy | When to use |
| --- | --- |
| fork_session | Divergent exploration without polluting the main context |
| Fresh start + summary injection | Context is stale, contradictory, or polluted |
| Retry with error feedback | Transient failures where the model can correct with specifics |
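The "fresh start + summary injection" row can be sketched as follows; the summary text is illustrative:

```typescript
// Sketch of fresh start + summary injection: when context is polluted,
// begin a new conversation seeded only with a compact, verified summary.
type Msg = { role: "user" | "assistant"; content: string };

function freshStartWithSummary(summary: string, task: string): Msg[] {
  // Old history is deliberately dropped; only the distilled facts survive.
  return [
    {
      role: "user",
      content: `Context from a previous attempt (verified facts only):\n${summary}\n\nTask: ${task}`,
    },
  ];
}

const restarted = freshStartWithSummary(
  "Customer ID: 12847. Eligibility checks conflicted; exception pending.",
  "Re-evaluate the policy exception from scratch.",
);
```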
This domain tests whether you understand precedence, path-based rules, reusable automation, and the difference between prompt guidance and code-enforced behavior.
Know cold
More specific configuration scope overrides broader scope.
.claude/rules/*.md uses YAML frontmatter with paths.
Skills are reusable capability modules with restricted tools.
Commands are prompt templates invoked with /.
Hooks are deterministic and run as code.
Common traps
Thinking project rules all belong in project CLAUDE.md instead of path-scoped rules.
Treating hooks like prompt instructions.
Reviewing output in the same session that produced it.
Assuming -p turns on plan mode.
Open full reference notes
Configuration hierarchy
| Priority | Location | Scope | Committed to Git? |
| --- | --- | --- | --- |
| 1 | ~/.claude/CLAUDE.md | User-global | No |
| 2 | .claude/CLAUDE.md | Project-wide | Yes |
| 3 | CLAUDE.md in any directory | Directory and below | Yes |
| 4 | .claude/rules/*.md | Conditional and path-matched | Yes |
Conditional rules example
```markdown
---
paths:
  - "src/api/**"
  - "src/middleware/**"
---

Always validate authentication tokens before processing API requests.
Use structured error responses with proper HTTP status codes.
```
Skills and commands
| Capability | Location | Purpose |
| --- | --- | --- |
| Project skills | .claude/skills/ | Shared reusable workflows with tool restrictions |
| Personal skills | ~/.claude/skills/ | Personal reusable workflows |
| Project commands | .claude/commands/ | Shared prompt templates |
| Personal commands | ~/.claude/commands/ | Personal prompt templates |
Skills can restrict access with allowed-tools and isolate work with context: fork. Commands are just prompt templates.
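A skill's frontmatter might look like the sketch below. The skill name, body, and tool list are illustrative assumptions; the `allowed-tools` and `context: fork` keys mirror the notes above:

```markdown
---
name: changelog-entry
description: Draft a changelog entry from a merged diff
allowed-tools: ["read_file", "grep"]
context: fork
---

Read the merged diff and draft a single changelog entry in past tense.
```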
This domain tests how you turn vague instructions into reliable outputs through system prompts, schemas, examples, chaining, and retries.
Know cold
System prompts work best when they include role, rules, output format, and calibration examples.
Forced tool use is better than prompt-based JSON when the format must be guaranteed.
Nullable fields prevent fabrication.
2-4 few-shot examples are usually the best tradeoff.
Retry with error feedback should include the original prompt, failed output, and the specific validation error.
Common traps
Using prompt-based JSON for production-grade schema guarantees.
Saying only "try again" without specific validation feedback.
Making uncertain fields required and forcing the model to invent data.
Using Batch API for real-time work.
Open full reference notes
System prompts
Concrete examples beat abstract prose because they show the exact form you want.
Severity calibration matters or the model flattens critical and minor issues together.
System prompts are cost-effective because prompt caching reuses them across requests.
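A sketch of a system prompt with the four pieces the notes call out: role, rules, output format, and calibration examples. The wording and domain are illustrative:

```typescript
// Illustrative system prompt covering role, rules, output format, and
// severity calibration. The code-review domain is an assumption.
const systemPrompt = `
You are a senior code reviewer.

Rules:
- Flag only issues you can point to a specific line for.
- Never rewrite code; describe the fix instead.

Output format:
severity | file:line | issue | suggested fix

Severity calibration:
- critical: SQL built by string concatenation from user input
- minor: variable name shadows an outer scope
`.trim();
```

Because this prompt is identical across requests, prompt caching can reuse it, which is why a long, well-calibrated system prompt stays cost-effective.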
Structured output: tool use vs text
| Method | Guarantee | Use when |
| --- | --- | --- |
| Forced tool_choice | Schema enforced by the API | You need guaranteed structure every time |
| tool_choice: auto | Tool use is optional | Agentic loops where text is also valid |
| Prompt-based JSON | No schema enforcement | Prototype-only scenarios |
Schema design rules
Make uncertain fields optional or nullable.
Use enums for constrained values.
Keep schemas flat where possible.
Add descriptions for each property.
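The rules above can be applied in one small schema. The support-ticket domain is an illustrative assumption:

```typescript
// Schema sketch applying the design rules: nullable uncertain fields,
// an enum for constrained values, flat structure, per-property descriptions.
const ticketSchema = {
  type: "object",
  properties: {
    category: {
      type: "string",
      enum: ["billing", "bug", "feature_request", "other"],
      description: "Ticket category",
    },
    customer_id: {
      // Nullable so the model can answer "unknown" instead of inventing an id.
      type: ["string", "null"],
      description: "Customer ID if stated in the ticket, otherwise null",
    },
    summary: { type: "string", description: "One-sentence summary of the issue" },
  },
  required: ["category", "summary"],
};
```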
Few-shot examples and chaining
Use few-shot when instructions alone are inconsistent.
2-4 examples are the sweet spot.
Include edge cases and reasoning, not just I/O pairs.
Chain prompts when the task has distinct phases like extract, validate, format, and synthesize.
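A prompt chain can be sketched with each phase as its own call. The stage functions here are placeholders for narrow, single-purpose model requests:

```typescript
// Two-stage chain sketch: extract, then format. Each stage stands in for
// a separate model call with its own focused prompt.
async function extractStage(_contract: string): Promise<string[]> {
  return ["2024-01-15", "2024-03-01"]; // placeholder for a model extraction call
}

async function formatStage(dates: string[]): Promise<string> {
  return dates.map((d) => `- ${d}`).join("\n"); // placeholder for a model formatting call
}

async function runChain(contract: string): Promise<string> {
  const dates = await extractStage(contract); // stage 1 output...
  return formatStage(dates); // ...becomes stage 2 input
}
```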
Retry with error feedback
Original prompt: "Extract all dates from this contract"
Failed output: { "dates": ["2024-01-15", "next Tuesday"] }
Validation error: "dates[1] is relative. All dates must be ISO 8601."
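Assembling that feedback into the retry prompt can be sketched as a pure function; the wording is illustrative:

```typescript
// Retry-with-error-feedback sketch: the follow-up prompt carries the
// original request, the failed output, and the specific validation error,
// never a bare "try again".
function buildRetryPrompt(original: string, failed: string, error: string): string {
  return [
    `Original request: ${original}`,
    `Your previous output: ${failed}`,
    `Validation error: ${error}`,
    `Return a corrected output that fixes exactly this error.`,
  ].join("\n");
}

const retry = buildRetryPrompt(
  "Extract all dates from this contract",
  '{ "dates": ["2024-01-15", "next Tuesday"] }',
  "dates[1] is relative. All dates must be ISO 8601.",
);
```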
Batch API and review limits
| Concept | Correct interpretation |
| --- | --- |
| Batch API | Cheaper for latency-tolerant bulk workloads, not faster |
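A batch payload can be sketched as many independent requests, each tagged with a custom id for matching results later. The document ids and model id are illustrative assumptions:

```typescript
// Sketch of a Message Batches payload: many latency-tolerant requests.
// Results come back unordered, so each request carries a custom_id.
const docs = ["doc-1", "doc-2", "doc-3"];

const batchRequests = docs.map((id) => ({
  custom_id: id, // re-links each unordered result to its source document
  params: {
    model: "claude-sonnet-4-5", // assumed model id
    max_tokens: 512,
    messages: [{ role: "user", content: `Summarize document ${id}` }],
  },
}));
```

The tradeoff to remember for the exam: the batch saves cost, not time, so it never fits real-time or user-facing paths.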
Last-hour checklist
Iteration cap is a safety net, not primary control.
Hooks are deterministic. Prompts are probabilistic.
Subagents do not share memory.
Improve tool descriptions before reducing tools.
tool_choice: auto is the default for agentic loops.
Forced tool use guarantees structured output.
Nullable fields reduce fabrication.
Use 2-4 few-shot examples.
Batch API saves cost, not time.
Persistent facts beat progressive summaries.
Per-type accuracy beats aggregate-only reporting.
Code pattern
Loop snippet worth memorizing
```typescript
while (true) {
  // Each request must carry the full, updated history.
  const response = await client.messages.create({ ...request, messages: history });
  if (response.stop_reason === "tool_use") {
    const toolResults = await runRequestedTools(response);
    // Append the assistant turn, then the tool results, before looping.
    history.push({ role: "assistant", content: response.content });
    history.push({ role: "user", content: toolResults });
    continue;
  }
  if (response.stop_reason === "end_turn") {
    history.push({ role: "assistant", content: response.content });
    break;
  }
  throw new Error("Unexpected stop_reason: " + response.stop_reason);
}
```
Final drill
Quick self-check
Question 1
Which signal should end the agent loop?
end_turn is the deterministic termination signal. Text alone is not reliable, and iteration limits are only safety bounds.
Question 2
You need guaranteed structured output that matches a schema every time. What should you use?
Forced tool use is the preferred production answer because the API enforces the schema. `tool_choice: any` forces a tool call but does not guarantee which tool.
Question 3
Your long-running workflow keeps losing names, timestamps, and IDs as the conversation grows. What is the right fix?
Repeated summarization loses detail. Preserve critical facts verbatim or store them externally in a scratchpad file that survives context resets.
Question 4
A search tool executed successfully and returned zero matches. How should the agent treat that result?
If the tool ran successfully, no matches is still a valid answer. Reserve retries and escalation for access failures like auth errors, timeouts, or rate limits.