Certification Prep Track

Claude Architect Certification

Prepare for scenario-based certification questions with a study path built around decision rules, exam traps, and fast review loops. This page is optimized for students who need to move between first-pass study, targeted drilling, and last-hour review without losing context.

5 domains · Advanced level · 8 guided hours · Scenario-heavy exam prep

Use this page in three passes

Pass 1: Orientation

Read the Rapid Review section and the Domain Map first. Your goal is to learn the cue words that point to the right answer, not memorize every paragraph.

Pass 2: Domain drilling

Open one domain at a time, focus on the "Know Cold" and "Common Traps" panels, then expand the full notes only after you can explain the decision rules without looking.

Pass 3: Exam rehearsal

Use the Final Drill section, the last-hour checklist, and the code pattern snippet to test recall under time pressure.

What the exam is usually testing

Rapid Review

Highest-frequency decision rules

These are the short answers you should be able to recall instantly before you start deeper study.

| If the question says | Then |
| --- | --- |
| "guaranteed" | Use deterministic enforcement: hooks for behavior, forced tools for structured output. |
| "independent subtasks" | Use parallel orchestration. |
| "needs previous output" | Use sequential orchestration. |
| "premature termination" | Check stop_reason; do not rely on text parsing or the first content block type. |
| "runaway agent" | Treat the iteration cap as a safety net, not as the primary loop control. |
| "share context between agents" | Pass context explicitly; subagents do not share memory. |
| "wrong tool keeps getting selected" | Improve tool descriptions first; do not start by reducing tool count. |
| "guaranteed structured output" | Force tool_choice to a specific tool name. |
| "details disappear over time" | Use persistent fact blocks or scratchpad files, not progressive summarization. |
| "search returned nothing" | Treat it as a valid empty result, not an error, if the tool executed successfully. |

Study Tracker

Domain map

Mark a domain ready when you can answer the cue and the trap from memory without opening the full notes.

Domain 1

Agentic Architecture & Orchestration

Master the agent loop, orchestration patterns, deterministic guardrails, and human handoff rules.

  • Cue: premature termination.
  • Answer: inspect stop_reason.
  • Trap: prompts do not guarantee compliance.

Domain 2

Tool Design & MCP Integration

Improve tool selection, choose the right tool_choice, and handle empty results versus failures correctly.

  • Cue: wrong tool gets picked.
  • Answer: improve descriptions first.
  • Trap: tool names are not the primary signal.

Domain 3

Claude Code Configuration & Workflows

Understand configuration precedence, rules, skills, commands, hooks, and when to use plan mode or non-interactive execution.

  • Cue: path-specific rule.
  • Answer: .claude/rules/ with paths.
  • Trap: -p is not plan mode.

Domain 4

Prompt Engineering & Structured Output

Use system prompts, few-shot examples, chaining, and forced tools to produce reliable structured output.

  • Cue: guaranteed schema compliance.
  • Answer: forced tool use.
  • Trap: required fields can cause fabrication.

Domain 5

Context Management & Reliability

Preserve critical facts, recover from stale context, and monitor quality at the category level instead of only aggregate metrics.

  • Cue: details are getting lost.
  • Answer: persistent facts or scratchpads.
  • Trap: overall accuracy can hide category failures.

Agentic Architecture & Orchestration

Study this first. It frames how you reason about loop control, deterministic enforcement, multi-agent design, and human escalation.

Know cold

  • The loop is: send request -> inspect stop_reason -> execute tools or terminate.
  • tool_use means continue the loop.
  • end_turn means stop the loop.
  • Tool results must be appended to conversation history before the next iteration.
  • Iteration limits are secondary safety bounds only.

Common traps

  • Parsing natural language like "I'm done."
  • Using iteration caps as the main stopping mechanism.
  • Checking only content[0].type == "text".
  • Assuming subagents share the coordinator's memory.

The agentic loop

Orchestration patterns

| Pattern | When to use | Key characteristic |
| --- | --- | --- |
| Sequential | Each step depends on the previous output | A -> B -> C, with state flowing forward |
| Parallel | Subtasks are independent and latency matters | Fan-out, fan-in with no shared state |
| Pipeline | Different stages have different specializations | Assembly-line handoff between stages |
| Dynamic adaptive | Task structure is unknown upfront | The model decides decomposition at runtime |
| Hub-and-spoke | A coordinator delegates to specialists | Central agent plus focused subagents |
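The parallel pattern in the table can be sketched as a minimal fan-out/fan-in helper. This is an illustrative shape, not SDK code; in a real system each subtask would wrap its own model call:

```typescript
// Fan-out/fan-in for independent subtasks: run all of them concurrently,
// then merge. Promise.all preserves input order, so the coordinator can
// map results back to subtasks; no state is shared between subtasks.
async function fanOutFanIn(
  subtasks: Array<() => Promise<string>>,
): Promise<string[]> {
  return Promise.all(subtasks.map((run) => run()));
}
```

Sequential orchestration is the opposite shape: await each step and feed its output into the next call.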

Guardrails hierarchy

| Mechanism | Type | Enforcement | Use for |
| --- | --- | --- | --- |
| System prompt rules | Probabilistic | Model may fail to comply | Style guidance and soft preferences |
| PreToolUse hooks | Deterministic | Code-level, before execution | Blocking dangerous calls and validating parameters |
| PostToolUse hooks | Deterministic | Code-level, after execution | Validating outputs, sanitizing results, and audit logging |
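For orientation, here is a sketch of how a PreToolUse hook is wired in `.claude/settings.json`. The matcher and script path are illustrative; the hook script receives the pending tool call as JSON on stdin, and a blocking exit code stops the call before execution. Verify the exact schema against current Claude Code documentation.

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          {
            "type": "command",
            "command": "python3 .claude/hooks/check_command.py"
          }
        ]
      }
    ]
  }
}
```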

Claude Agent SDK and multi-agent systems

Human-in-the-loop format

Customer ID: 12847
Summary: Tool calls completed, but the policy exception requires human approval.
Root cause: Confidence fell below threshold after conflicting eligibility checks.
Recommended action: Review the exception and approve or deny the operation.

Error recovery and resilience

| Strategy | When to use |
| --- | --- |
| fork_session | Divergent exploration without polluting the main context |
| Fresh start + summary injection | Context is stale, contradictory, or polluted |
| Retry with error feedback | Transient failures where the model can correct with specifics |
| Graceful degradation | Partial results are better than a hard failure |

Tool Design & MCP Integration

This domain is about getting the model to choose the right tool, pass the right schema, and recover correctly when integrations fail.

Know cold

  • Tool descriptions matter more than tool names.
  • Stay near 4-5 tools per agent to reduce misselection.
  • auto is the default tool_choice for agentic loops.
  • tool forces a specific tool and gives guaranteed schema compliance.
  • Use community MCP servers before building custom ones.

Common traps

  • Trying to fix misselection by reducing tool count before improving descriptions.
  • Assuming tool_choice: any guarantees a specific tool.
  • Treating empty results as failures.
  • Returning generic error strings instead of structured metadata.

Tool description design

Schema design rules

tool_choice modes

| Mode | Behavior | Use when |
| --- | --- | --- |
| auto | Model decides whether to call a tool | Default for most agentic loops |
| any | Model must call at least one tool | A tool is required, but the model can choose which one |
| tool | Model must call a specific named tool | You need guaranteed structured output |
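Forcing a specific tool looks like this in a Messages API request body. The tool name, schema, and model id here are illustrative, not from the exam material:

```typescript
// Illustrative tool definition. Fields are nullable so the model can
// decline to invent values it cannot find in the input.
const recordInvoice = {
  name: "record_invoice",
  description:
    "Record structured fields extracted from an invoice. " +
    "Use null for any field that is not present in the text.",
  input_schema: {
    type: "object",
    properties: {
      total: { type: ["number", "null"] },
      currency: { type: ["string", "null"] },
    },
    required: ["total", "currency"],
  },
};

// tool_choice { type: "tool", name } forces this exact tool, so the
// response is a tool_use block matching the schema above. With
// { type: "any" } the model must call *some* tool, but not necessarily
// this one.
const requestBody = {
  model: "claude-sonnet-4-5", // assumed model id
  max_tokens: 1024,
  tools: [recordInvoice],
  tool_choice: { type: "tool", name: "record_invoice" },
  messages: [{ role: "user", content: "Invoice: total $42.50 USD" }],
};
```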

MCP architecture

| Layer | Role | Example |
| --- | --- | --- |
| Host | The application that embeds clients and manages their lifecycle | Claude Desktop or an IDE extension |
| Client | Maintains the connection to one server and routes tool calls | The connector running inside the host |
| Server | Exposes tools, resources, and prompts | Filesystem, database, or API connector |

Project-level configuration lives in .mcp.json. Personal global configuration lives in ~/.claude.json.

Structured tool errors

```json
{
  "errorCategory": "authentication",
  "isRetryable": false,
  "retryAfterMs": null,
  "partialResult": null,
  "suggestion": "Check API key permissions"
}
```
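On the agent side, metadata like this turns recovery into a decision instead of string parsing. A sketch, using the field names from the example above (the decision rules are illustrative):

```typescript
interface ToolError {
  errorCategory: string;
  isRetryable: boolean;
  retryAfterMs: number | null;
  partialResult: unknown;
  suggestion: string;
}

// Pick the next step from structured metadata rather than parsing an
// error string: retry when flagged retryable, degrade when partial
// results exist, otherwise escalate with the tool's suggestion.
function nextStep(err: ToolError): string {
  if (err.isRetryable) {
    return `retry after ${err.retryAfterMs ?? 0}ms`;
  }
  if (err.partialResult !== null) {
    return `degrade gracefully: ${err.suggestion}`;
  }
  return `escalate: ${err.suggestion}`;
}
```

Applied to the authentication example, this yields an escalation step rather than a blind retry.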

Tool selection in Claude Code

| Tool | Purpose | Use when |
| --- | --- | --- |
| Grep | Search file contents by pattern | You are looking for code patterns or strings |
| Glob | Find files by path pattern | You need discovery by name or extension |
| Read | Read a specific file | You know the exact path |
| Edit | Modify file contents | You are making targeted changes |
| Bash | Run shell commands | The task is build, test, or outside the specialized tools |

Claude Code Configuration & Workflows

This domain tests whether you understand precedence, path-based rules, reusable automation, and the difference between prompt guidance and code-enforced behavior.

Know cold

  • More specific configuration scope overrides broader scope.
  • .claude/rules/*.md uses YAML frontmatter with paths.
  • Skills are reusable capability modules with restricted tools.
  • Commands are prompt templates invoked with /.
  • Hooks are deterministic and run as code.

Common traps

  • Thinking project rules all belong in project CLAUDE.md instead of path-scoped rules.
  • Treating hooks like prompt instructions.
  • Reviewing output in the same session that produced it.
  • Assuming -p turns on plan mode.

Configuration hierarchy

| Priority | Location | Scope | Committed to Git? |
| --- | --- | --- | --- |
| 1 | ~/.claude/CLAUDE.md | User-global | No |
| 2 | .claude/CLAUDE.md | Project-wide | Yes |
| 3 | CLAUDE.md in any directory | Directory and below | Yes |
| 4 | .claude/rules/*.md | Conditional and path-matched | Yes |

Conditional rules example

```markdown
---
paths:
  - "src/api/**"
  - "src/middleware/**"
---
Always validate authentication tokens before processing API requests.
Use structured error responses with proper HTTP status codes.
```

Skills and commands

| Capability | Location | Purpose |
| --- | --- | --- |
| Project skills | .claude/skills/ | Shared reusable workflows with tool restrictions |
| Personal skills | ~/.claude/skills/ | Personal reusable workflows |
| Project commands | .claude/commands/ | Shared prompt templates |
| Personal commands | ~/.claude/commands/ | Personal prompt templates |

Skills can restrict access with allowed-tools and isolate work with context: fork. Commands are just prompt templates.
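A skill is a markdown file with frontmatter. The sketch below is hypothetical: the skill name, description, and body are invented, and while the allowed-tools and context keys follow the notes above, the exact syntax should be verified against current Claude Code documentation.

```yaml
---
name: changelog-writer
description: Draft a changelog entry from recent commits
allowed-tools: Read, Grep
context: fork
---
Summarize the changes since the last tag as a changelog entry...
```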

Hooks and working modes

| Concept | Correct use |
| --- | --- |
| PreToolUse | Block dangerous calls, validate parameters, require confirmation |
| PostToolUse | Validate outputs, sanitize results, audit logging |
| Plan mode | Use for complex tasks with multiple viable approaches |
| Direct execution | Use for clear, well-defined work |
| -p | Use for non-interactive automation and CI/CD |

Feedback and review

Prompt Engineering & Structured Output

This domain tests how you turn vague instructions into reliable outputs through system prompts, schemas, examples, chaining, and retries.

Know cold

  • System prompts work best when they include role, rules, output format, and calibration examples.
  • Forced tool use is better than prompt-based JSON when the format must be guaranteed.
  • Nullable fields prevent fabrication.
  • 2-4 few-shot examples are usually the best tradeoff.
  • Retry with error feedback should include the original prompt, failed output, and the specific validation error.

Common traps

  • Using prompt-based JSON for production-grade schema guarantees.
  • Saying only "try again" without specific validation feedback.
  • Making uncertain fields required and forcing the model to invent data.
  • Using Batch API for real-time work.

System prompts

Structured output: tool use vs text

| Method | Guarantee | Use when |
| --- | --- | --- |
| Forced tool_choice | Schema enforced by the API | You need guaranteed structure every time |
| tool_choice: auto | Tool use is optional | Agentic loops where text is also valid |
| Prompt-based JSON | No schema enforcement | Prototype-only scenarios |

Schema design rules

Few-shot examples and chaining

Retry with error feedback

Original prompt: "Extract all dates from this contract"
Failed output: { "dates": ["2024-01-15", "next Tuesday"] }
Validation error: "dates[1] is relative. All dates must be ISO 8601."
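The retry pattern above can be sketched as a generic loop. The generate function stands in for a model call; the helper and its signature are illustrative, not SDK API:

```typescript
// Retry a generation step, feeding the failed output and the concrete
// validation error back into the next attempt. "Try again" alone is not
// enough: the retry prompt includes the original prompt, the failed
// output, and the specific validation error.
async function retryWithFeedback(
  generate: (prompt: string) => Promise<string>,
  validate: (output: string) => string | null, // null = valid, else error text
  prompt: string,
  maxAttempts = 3,
): Promise<string> {
  let currentPrompt = prompt;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const output = await generate(currentPrompt);
    const error = validate(output);
    if (error === null) return output;
    currentPrompt =
      `${prompt}\n\nYour previous output:\n${output}\n\n` +
      `It failed validation: ${error}\nFix exactly this issue and retry.`;
  }
  throw new Error(`Validation still failing after ${maxAttempts} attempts`);
}
```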

Batch API and review limits

| Concept | Correct interpretation |
| --- | --- |
| Batch API | Cheaper for latency-tolerant bulk workloads, not faster |
| Real-time user-facing work | Use synchronous APIs instead |
| Self-review | Use a separate instance or fresh conversation |
| Large input review | Use per-file passes plus an integration pass |

Context Management & Reliability

This domain focuses on preserving facts, handling conflicting evidence, and measuring quality in a way that catches real failure modes.

Know cold

  • Do not rely on progressive summarization for durable facts.
  • Scratchpad files preserve findings across context resets.
  • Structured error propagation must include failure type, partial results, alternatives tried, and suggested action.
  • Valid empty results and access failures are not the same thing.
  • Per-type monitoring is more useful than aggregate-only metrics.

Common traps

  • Summarizing the conversation again and again instead of carrying structured facts verbatim.
  • Treating 0 results as an error when the tool actually succeeded.
  • Relying on overall accuracy and missing one weak category.
  • Silently choosing one conflicting source without preserving provenance.

Persistent facts vs progressive summaries

Keep facts in a structured block and copy them forward verbatim when they must survive the task.

```markdown
## Persistent Facts
- Customer: Jane Smith (ID: 12847)
- Order: #ORD-2024-8821, placed 2024-01-15
- Issue: Delivery address incorrect
- Status: Refund approved, reshipment pending
```

Scratchpad files and context resets

Structured error propagation

| Field | Purpose | Example |
| --- | --- | --- |
| Failure type | Describe the category of failure | network_timeout |
| Partial results | Show what succeeded | { "orders": [...], "payments": null } |
| Alternatives tried | Record recovery attempts | Retried 2x, tried fallback endpoint |
| Suggested action | Guide the next step | Escalate to support with order IDs |

Valid empty results vs access failures

| Situation | Type | Correct action |
| --- | --- | --- |
| Search returns 0 results | Valid empty | Accept the absence of data as the answer |
| Database query returns an empty set | Valid empty | Accept that the record does not exist |
| API returns 401 | Access failure | Retry with correct credentials or escalate |
| Network timeout | Access failure | Retry or use a fallback path |
| Rate limit 429 | Access failure | Respect retry timing and try again |
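The distinction can be encoded directly in the tool result handler. A sketch, assuming an HTTP-backed tool (the status codes mirror the table; a network timeout would surface as an exception rather than a status):

```typescript
type ResultKind =
  | "valid_result"
  | "valid_empty"
  | "retryable_failure"
  | "fatal_failure";

// Classify a tool outcome. An empty payload from a successful call is an
// answer ("no matching records"), not an error; only non-2xx statuses
// are failures, split by whether retrying can help.
function classify(httpStatus: number, itemCount: number): ResultKind {
  if (httpStatus >= 200 && httpStatus < 300) {
    return itemCount === 0 ? "valid_empty" : "valid_result";
  }
  if (httpStatus === 429 || httpStatus === 408) return "retryable_failure";
  return "fatal_failure"; // e.g. 401: fix credentials or escalate
}
```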

Context window strategies

| Strategy | When to use |
| --- | --- |
| Persistent fact blocks | Critical details must survive long conversations |
| Scratchpad files | Multi-step work crosses context boundaries |
| Per-file passes + integration | Large codebases risk attention dilution |
| Fresh start + summary injection | Context has become stale or contradictory |
| Prompt caching | Repeated system prompts need lower cost and latency |

What to recall before the exam starts

  • stop_reason controls loop termination.
  • Iteration cap is a safety net, not primary control.
  • Hooks are deterministic. Prompts are probabilistic.
  • Subagents do not share memory.
  • Improve tool descriptions before reducing tools.
  • tool_choice: auto is the default for agentic loops.
  • Forced tool use guarantees structured output.
  • Nullable fields reduce fabrication.
  • Use 2-4 few-shot examples.
  • Batch API saves cost, not time.
  • Persistent facts beat progressive summaries.
  • Per-type accuracy beats aggregate-only reporting.

Loop snippet worth memorizing

```typescript
// Core agent loop: stop_reason, not text parsing, controls termination.
while (true) {
  const response = await client.messages.create(request);

  if (response.stop_reason === "tool_use") {
    // Assumed helper: runs each requested tool_use block and returns
    // the matching tool_result blocks.
    const toolResults = await runRequestedTools(response);
    // Append BOTH the assistant turn and the tool results to history
    // before the next iteration.
    history.push({ role: "assistant", content: response.content });
    history.push({ role: "user", content: toolResults });
    continue;
  }

  if (response.stop_reason === "end_turn") {
    history.push({ role: "assistant", content: response.content });
    break;
  }

  // Other stop reasons (e.g. max_tokens) need explicit handling.
  throw new Error("Unexpected stop_reason: " + response.stop_reason);
}
```

Final drill

Quick self-check

Question 1

Which signal should end the agent loop?

Question 2

You need guaranteed structured output that matches a schema every time. What should you use?

Question 3

Your long-running workflow keeps losing names, timestamps, and IDs as the conversation grows. What is the right fix?

Question 4

A search tool executed successfully and returned zero matches. How should the agent treat that result?