Agents in Workflows
An agent handles the thinking inside a task. Give a `<Task>` the `agent` prop and three things happen: the children become the prompt, the agent reasons and responds, and Smithers validates the response against the output schema. No ceremony required.
Using an Agent
The simplest case first: a task that hands its prompt to an agent and validates the response against an analysis schema. If the response is valid, the task completes. If not — well, we will get to that.
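A minimal sketch of that shape (the `claude` agent, the `analysisSchema`, and the `output` prop are illustrative assumptions, not confirmed API):

```tsx
// Hypothetical sketch; names and props are illustrative.
const analysisSchema = z.object({
  summary: z.string(),
  risk: z.enum(["low", "medium", "high"]),
});

<Task agent={claude} output={analysisSchema}>
  Analyze this module and summarize its risks.
</Task>
```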
Agent Types
Where does the AI actually run? Smithers gives you two options, and they are interchangeable. SDK Agents talk directly to a provider API: you pay per token, and you get fine-grained control. CLI Agents wrap a command-line tool such as Claude Code or Codex. Either way, `<Task>` does not care which kind you hand it.
Swap `claude` in for `codex` and the task works the same way. The interface is the seam; what sits behind it is your choice.
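Sketched with hypothetical factory names (`cliAgent` and `sdkAgent` are assumptions; the real constructors may differ):

```tsx
// Hypothetical sketch; constructor names are assumptions.
const codex = cliAgent({ command: "codex" });        // shells out to a CLI tool
const claude = sdkAgent({ model: "claude-sonnet" }); // calls a provider API

// The task is written once and works with either:
<Task agent={codex} output={schema}>Review this diff for regressions.</Task>
```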
Structured Output
Agents do not return free-form text. They return data, validated against a Zod schema. This is the contract that makes agents composable — downstream tasks can depend on the shape of what comes back. Suppose the schema declares `risk` as an enum (say, `"low" | "medium" | "high"`) and the agent returns `{ summary: "...", risk: "critical" }`. Validation fails — `"critical"` is not in the enum. Smithers feeds the Zod error back to the agent and retries. The agent sees its own mistake, corrects it, and tries again. Think of it as a compiler error for AI output.
Agent Fallback Chains
Agents fail. Models go down, rate limits hit, responses come back garbled. You do not want your workflow to stop because one provider had a bad minute. Pass an array of agents to create a fallback chain: the first agent in the array, `codex` in this example, runs first; if it fails, `claude` takes over on retry. This is a practical pattern: start with the fast, cheap option; fall back to the more capable one.
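The chain semantics can be sketched in a few lines (illustrative, not the library's code): try each agent in order, and move on when one throws.

```typescript
// Illustrative fallback-chain runner, not Smithers internals.
type Agent<T> = (prompt: string) => T;

function runWithFallback<T>(agents: Agent<T>[], prompt: string): T {
  let lastError: unknown;
  for (const agent of agents) {
    try {
      return agent(prompt); // first success wins
    } catch (err) {
      lastError = err; // outage, rate limit, garbled response...
    }
  }
  throw new Error(`all ${agents.length} agents failed: ${String(lastError)}`);
}
```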
For the common case of a single fallback, there is a dedicated prop:
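Sketched with illustrative agent and schema names:

```tsx
// Hypothetical sketch; agent names are illustrative.
<Task agent={codex} fallbackAgent={claude} output={schema}>
  Summarize the failing test output.
</Task>
```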
Multi-Agent Patterns
One agent per task is the simple case. But some problems benefit from multiple perspectives or a division of labor.

Parallel Review
Ask two agents the same question and compare answers. This is the “get a second opinion” pattern. The `continueOnFail` prop is important here: if one reviewer times out or crashes, the other still completes. You get at least one review instead of zero.
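The continue-on-fail behavior maps onto plain promises. A sketch (illustrative, not the library's code) that keeps whichever reviews succeed:

```typescript
// Illustrative: gather successful reviews even if some reviewers fail.
async function parallelReview(
  reviewers: Array<(prompt: string) => Promise<string>>,
  prompt: string
): Promise<string[]> {
  const settled = await Promise.allSettled(reviewers.map((r) => r(prompt)));
  return settled
    .filter((s): s is PromiseFulfilledResult<string> => s.status === "fulfilled")
    .map((s) => s.value);
}
```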
Pipeline Handoff
Different agents are good at different things. Let each one do what it does best.
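A sketch of a handoff (the agent names and the `outputs.draft` reference are hypothetical; how Smithers actually wires outputs between tasks may differ):

```tsx
// Hypothetical sketch; task wiring is illustrative.
<Task id="draft" agent={codex} output={planSchema}>
  Draft an implementation plan for the migration.
</Task>
<Task id="review" agent={claude} output={reviewSchema}>
  Critique this plan for gaps and edge cases: {outputs.draft}
</Task>
```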
Tools

An agent without tools is a brain in a jar. It can reason about what you tell it, but it cannot look at your files, run your tests, or check what is on disk. Tools fix that. Smithers provides five built-in tools, each doing one thing well:

| Tool | Purpose | Input |
|---|---|---|
| `read` | Read a file | `{ path }` |
| `write` | Write a file | `{ path, content }` |
| `edit` | Apply a unified diff patch | `{ path, patch }` |
| `grep` | Search files with regex | `{ pattern, path? }` |
| `bash` | Execute a shell command | `{ cmd, args?, opts? }` |
Assigning Tools to Agents
Pass them in when you create the agent.
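A sketch, assuming a hypothetical `createAgent` factory (the real constructor and option names may differ):

```tsx
// Hypothetical sketch; createAgent and its options are assumptions.
const agent = createAgent({
  model: "claude-sonnet",
  tools: [read, write, edit, grep, bash], // the five built-ins
});
```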
Sandboxing

You might be wondering: “I am giving an AI shell access. How do I not lose sleep over this?” All tools are sandboxed to `rootDir` (defaults to the workflow directory). The constraints are straightforward:

- File paths are resolved relative to the root
- Symlinks that escape the sandbox are rejected
- Output is truncated to `maxOutputBytes` (default 200KB)
- Shell commands have a 60-second timeout
- Network access is blocked by default in `bash`
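The first two constraints can be sketched in plain TypeScript (illustrative, not the library's code): resolve every path against the root, then refuse anything that lands outside it.

```typescript
import path from "node:path";

// Illustrative sandbox check: resolve relative to rootDir, reject escapes.
function resolveSandboxed(rootDir: string, requested: string): string {
  const root = path.resolve(rootDir);
  const resolved = path.resolve(root, requested);
  if (resolved !== root && !resolved.startsWith(root + path.sep)) {
    throw new Error(`path escapes sandbox: ${requested}`);
  }
  return resolved;
}
```

A real implementation would also resolve symlinks (for example with `fs.realpathSync`) before running the same check, which is how escaping symlinks get rejected.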
Read-Only vs Full-Access Agents
Here is a question worth asking for every agent you create: does it actually need write access? A reviewer does not need to modify files. A code generator does. Match the tools to the job.
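Sketched, assuming a hypothetical `createAgent` factory:

```tsx
// Hypothetical sketch; the factory name is an assumption.
const reviewer = createAgent({ tools: [read, grep] });               // look, don't touch
const generator = createAgent({ tools: [read, write, edit, bash] }); // full access
```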
Task Modes Without Agents

Not everything requires AI. Some tasks are deterministic. Some are just data. Smithers handles both without reaching for an agent.

Compute Mode
When `children` is a function and there is no `agent`, the function runs directly at execution time.
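A sketch of compute mode (the task shape is illustrative):

```tsx
// Hypothetical sketch; no agent prop, so the function runs as-is.
<Task id="stamp">
  {async () => ({ startedAt: new Date().toISOString() })}
</Task>
```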
Static Mode

When `children` is a plain value, Smithers writes it directly as output. No computation, no agent — just data.
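A sketch of static mode:

```tsx
// Hypothetical sketch; the literal value becomes the task's output.
<Task id="deploy-config">{{ target: "production", retries: 3 }}</Task>
```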
Choosing the Right Approach

When you are staring at a new task, ask: does this require judgment?

| Scenario | Approach |
|---|---|
| Need AI reasoning or generation | Agent mode with the `agent` prop |
| Need to run shell commands or tests | Compute mode with an async callback |
| Need to pass data between steps | Static mode with a literal value |
| Need AI + file access | Agent mode with tools |
| Need resilient AI calls | Agent with retries and/or `fallbackAgent` |
| Need diverse AI perspectives | Parallel tasks with different agents |
Next Steps
- Built-in Tools — Full API reference for all five tools.
- CLI Agents — Using Claude Code, Codex, Gemini, and other CLI agents.
- SDK Agents — Using API-billed provider agents.
- Implement-Review Loop — A production pattern using multi-agent review.