LESSON
Day 325: Agent Fundamentals - When LLMs Take Action
The core idea: an agent is not just an LLM that knows things. It is a model-driven control loop that can inspect state, choose a bounded action, call a tool, observe the result, and decide what to do next.
Today's "Aha!" Moment
The insight: 21/04.md treated RAG as a pipeline whose failures must be measured stage by stage. Agent systems use the same discipline, but the stakes rise because the model no longer stops at producing text. It can choose tools, change control flow, and trigger side effects in external systems.
Why this matters: The engineering problem changes the moment model output can cause something to happen. A weak answer is annoying. A wrong refund, duplicated ticket, or unauthorized database write is an operational incident.
Concrete anchor: Imagine an IT support assistant that can:
- search troubleshooting docs
- check device status in an MDM system
- reset MFA for a verified employee
- open a ticket when the problem cannot be resolved automatically
This is more than chat. The model must decide which action is appropriate, in what order, with what arguments, and when to stop.
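To make that concrete, here is how the riskiest of those capabilities might be declared as a typed tool schema. This is a sketch; the name and fields are illustrative, not a specific vendor API:
# Hypothetical schema for the MFA-reset tool. The runtime, not the
# prompt, enforces these types before the action can execute.
RESET_MFA_TOOL = {
    "name": "reset_mfa",
    "description": "Reset MFA for an employee whose identity has already been verified.",
    "parameters": {
        "type": "object",
        "properties": {
            "employee_id": {"type": "string"},
            "verification_ticket_id": {"type": "string"},
        },
        "required": ["employee_id", "verification_ticket_id"],
        "additionalProperties": False,
    },
}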
Keep this mental hook in view: The moment an LLM can choose actions that touch external systems, it becomes a probabilistic controller, not just a text generator.
Why This Matters
RAG answered the question "How does the model get the right information?" Agent design adds a harder question: "What should the system do next, and how much authority should it have while doing it?"
That shift matters in production because most useful business tasks are not single-turn Q&A:
- support assistants need to gather facts before taking action
- internal copilots need to decide whether to search, calculate, summarize, or escalate
- back-office automations need to combine reads, writes, and policy checks across multiple systems
Before agent thinking:
- teams bolt tools onto chat systems and assume the model will use them correctly
- tool calls are treated like prompt tricks instead of execution steps with failure modes
- read-only and write-capable actions are mixed together without clear authority boundaries
After agent thinking:
- the system is modeled as state plus actions plus stop conditions
- tool execution becomes typed, observable, and policy-constrained
- autonomy is chosen deliberately instead of emerging accidentally from prompts
Real-world impact: Better tool design, safer automation, clearer debugging, and fewer cases where a model "looked smart" while silently doing the wrong thing.
This lesson sets up 21/06.md. First you need a reliable single-agent loop. Only then do planning, memory, and multi-agent coordination become useful instead of chaotic.
Learning Objectives
By the end of this session, you should be able to:
- Define an agent as a control loop over state and tools rather than as a vague "LLM with plugins" idea.
- Trace the mechanics of one agent step from model decision to validated tool call to updated state.
- Choose safe production boundaries for action-taking systems including permissions, step limits, and idempotent writes.
Core Concepts Explained
Concept 1: An Agent Is a Policy Over State, Actions, and Stop Conditions
For example, a customer-support assistant can answer "What is our return policy?" from documents alone. But when the user says "My order arrived damaged, please refund it," the system has to inspect order history, verify eligibility, maybe ask a clarification question, and then decide whether to issue a refund or escalate. That branching behavior is what makes the system agentic.
At a high level, not every tool-using LLM is an agent. If the control flow is fixed, you have a workflow with an LLM inside it. An agent appears when the model is allowed to choose among multiple actions based on changing state.
Mechanically: An agent usually has four parts (sketched in code after this list):
- State
  - user request
  - conversation history
  - retrieved context
  - previous tool outputs
- Action space
  - answer directly
  - ask a clarification question
  - call a read tool
  - call a write tool
  - hand off to a human or another system
- Policy
  - the model prompt, tool descriptions, and runtime rules that shape action selection
- Termination rule
  - a final answer, a successful completion signal, a handoff, or a hard step/time budget
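A minimal sketch of those four parts as types, assuming nothing beyond the standard library; all names here are illustrative, not a fixed framework API:
from dataclasses import dataclass, field
from enum import Enum

class ActionType(Enum):
    # The bounded action space the policy chooses from.
    ANSWER = "answer"
    CLARIFY = "clarify"
    READ_TOOL = "read_tool"
    WRITE_TOOL = "write_tool"
    HANDOFF = "handoff"

@dataclass
class AgentState:
    user_request: str
    history: list = field(default_factory=list)       # conversation turns
    observations: list = field(default_factory=list)  # retrieved context, tool outputs
    steps_taken: int = 0

def is_done(state: AgentState, action: ActionType, max_steps: int = 6) -> bool:
    # Termination rule: final answer, handoff, or an exhausted step budget.
    return action in (ActionType.ANSWER, ActionType.HANDOFF) or state.steps_taken >= max_steps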
In practice, this definition prevents a common design mistake: calling every multi-step LLM feature an "agent" and then giving it more authority than its architecture deserves.
The trade-off is clear: You gain flexibility on messy, semi-structured tasks, but you lose the predictability of a fully scripted workflow.
A useful mental model is: Treat the agent like a junior operator with a runbook and a narrow badge. It can make local decisions, but only inside clear boundaries.
Use this lens when:
- Use it when the next best action depends on interpreting changing context, not just executing a fixed sequence.
- Avoid it when the business process is already deterministic and the main need is validation or templated generation.
Concept 2: The Basic Agent Loop Is Observe -> Decide -> Act -> Update State
For example, an IT helpdesk agent receives "VPN still fails after my password reset." It first searches internal docs, then checks the employee's device posture, then decides whether to reset a certificate, open a ticket, or ask the user to reconnect on a managed network. Each step changes what the agent knows, so the next step is not predetermined.
At a high level, the important mechanism is not the prompt alone. It is the closed loop between model output and environment feedback. Tool results become new state, and that updated state drives the next decision.
Mechanically: In production, a minimal agent loop often looks like this:
def run_agent(user_task, tool_registry, max_steps=6):
    # model and validate_against_schema are assumed runtime components:
    # the LLM client and the server-side schema check described below.
    state = [{"role": "user", "content": user_task}]
    for step in range(max_steps):
        # The model sees the full state plus the typed tool contract.
        decision = model.decide(
            messages=state,
            tool_schemas=tool_registry.schemas(),
        )
        if decision.type == "final":
            return decision.output
        # Validate the proposed call before anything touches a real system.
        call = validate_against_schema(decision.tool_call)
        result = tool_registry.execute(
            name=call.name,
            args=call.args,
            timeout_s=5,
        )
        # Feed the observation back so the next decision can use it.
        state.append({"role": "tool", "name": call.name, "content": result})
    raise RuntimeError("agent exceeded step budget")
Mechanically, each step has to do four jobs well:
- produce a structured decision
- validate tool name and arguments (see the validation sketch after this list)
- execute against a real system with timeouts and error handling
- append the result back into state in a form the model can interpret
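As one way to implement the validation job, here is a sketch using the jsonschema package; the tool name and schema are illustrative:
from jsonschema import validate, ValidationError

# Illustrative schema for one read tool; a real registry holds one per tool.
TOOL_SCHEMAS = {
    "lookup_order": {
        "type": "object",
        "properties": {"order_id": {"type": "string"}},
        "required": ["order_id"],
        "additionalProperties": False,
    },
}

def validate_tool_call(name: str, args: dict) -> dict:
    # Reject unknown tools before anything touches a real system.
    if name not in TOOL_SCHEMAS:
        raise ValueError(f"unknown tool: {name}")
    try:
        validate(instance=args, schema=TOOL_SCHEMAS[name])
    except ValidationError as err:
        # Structured errors give the model something it can act on.
        raise ValueError(f"invalid arguments for {name}: {err.message}")
    return args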
In practice:
- tool schemas are part of the runtime contract, not prompt decoration
- every step should emit traces so you can see why the agent chose a tool
- max-step and timeout budgets matter because loops compound cost and latency quickly
The trade-off is clear: Iterative tool use helps the model recover from uncertainty, but every extra step adds latency, token cost, and another opportunity for the model to drift or repeat work.
A useful mental model is: An agent is a control loop, not a monologue.
Use this lens when:
- Use it when tasks require incremental information gathering or multi-step execution.
- Avoid it when one validated tool call or one grounded answer is enough to finish the job.
Concept 3: Production Agents Need Bounded Authority, Not Unlimited Tool Access
For example, a billing agent can read invoice history, draft a refund recommendation, and trigger a refund API. In production, the write path cannot be treated like the read path. A repeated call could double-refund a customer, and an incorrectly scoped tool could let the agent act outside policy.
At a high level, the hardest part of agent design is not getting the model to call tools. It is deciding which tools it may call automatically, under what constraints, and how the system recovers when a step partially fails.
Mechanically: Safe action-taking systems usually separate controls across these layers (a write-path sketch follows the list):
- Capability design
  - expose narrow tools with explicit verbs such as lookup_order or draft_refund
  - avoid giant "do anything" tools that hide multiple side effects
- Argument validation
  - require typed parameters, enums, ranges, and server-side validation
  - reject malformed or out-of-policy requests before tool execution
- Execution safety
  - attach timeouts, retries, and idempotency keys
  - make write tools return durable status IDs, not just free-form text
- Policy controls
  - separate read, write, and high-risk tools
  - require human approval for actions above a threshold
  - log who approved what and why
- Recovery paths
  - define what happens when a tool times out after the side effect may already have occurred
  - prefer compensating actions and reconciliation jobs over naive retries
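A sketch of the execution-safety layer for one write tool, assuming a hypothetical internal refund endpoint that honors an Idempotency-Key header:
import requests

def issue_refund(order_id: str, amount_cents: int, idempotency_key: str) -> dict:
    # The caller generates the key once per logical operation and reuses
    # it on retry, so the server can deduplicate repeated attempts.
    resp = requests.post(
        "https://billing.internal/refunds",  # hypothetical endpoint
        json={"order_id": order_id, "amount_cents": amount_cents},
        headers={"Idempotency-Key": idempotency_key},
        timeout=5,
    )
    resp.raise_for_status()
    # Return a durable operation ID, not free-form text, so later steps
    # and reconciliation jobs can reference this exact write.
    return {"refund_id": resp.json()["refund_id"], "idempotency_key": idempotency_key}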
In practice, agent-friendly tools often look more like well-designed backend APIs than like generic helper functions. They need stable schemas, explicit preconditions, auditability, and safe failure semantics.
The trade-off is clear: More autonomy can improve task completion and reduce human effort, but every increase in authority raises the cost of mistakes and the burden of controls.
A useful mental model is: Give the agent a badge, not the master key.
Use this lens when:
- Use it whenever an agent can mutate state, spend money, send messages, or touch production systems.
- Avoid designs where natural-language-only actions trigger hidden side effects with no validation or audit trail.
Troubleshooting
Issue: The agent keeps looping through search or tool calls without finishing.
Why it happens / is confusing: Stop conditions are vague, the same failed observation is fed back repeatedly, or the agent has no explicit success criteria.
Clarification / Fix: Add a hard step budget, define completion states in the system prompt, suppress duplicate tool calls, and return structured tool results that clearly indicate success, failure, or "need more information."
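One lightweight way to suppress duplicate tool calls, as a sketch the loop from Concept 2 could call before executing each step:
import json

def is_duplicate_call(name: str, args: dict, seen: set) -> bool:
    # Fingerprint each call; an identical repeat usually means the agent
    # is stuck in a loop rather than gathering new information.
    fingerprint = (name, json.dumps(args, sort_keys=True))
    if fingerprint in seen:
        return True
    seen.add(fingerprint)
    return False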
Issue: The model picks the wrong tool or sends malformed arguments.
Why it happens / is confusing: Tool descriptions overlap, schemas are underspecified, or too many tools are exposed at once.
Clarification / Fix: Narrow tool scopes, tighten schemas, provide one or two representative examples per tool, and validate everything server-side before execution.
Issue: A retried request causes the same side effect twice.
Why it happens / is confusing: Agent loops often run on unreliable networks. If a timeout occurs after a write succeeded, the model may retry because it never saw a confirmation.
Clarification / Fix: Use idempotency keys for write tools, return durable operation IDs, and reconcile uncertain writes against the source system before retrying.
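A sketch of reconciliation before retry, again assuming a hypothetical operations-lookup endpoint keyed by the idempotency key:
import requests

def reconcile_before_retry(idempotency_key: str) -> str:
    # A timeout after a write is ambiguous: the side effect may or may not
    # have happened. Check the durable record before sending anything again.
    resp = requests.get(
        f"https://billing.internal/operations/{idempotency_key}",  # hypothetical endpoint
        timeout=5,
    )
    if resp.status_code == 404:
        return "retry"  # the original write never landed
    state = resp.json()["state"]
    return "done" if state == "succeeded" else "escalate"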
Advanced Connections
Connection 1: Agent Fundamentals <-> RAG Evaluation & Monitoring
21/04.md argued that grounded systems need layered evaluation. Agents extend that same idea:
- retrieval and context quality still matter when an agent uses search as a read tool
- step traces matter because a task can fail even when the final answer looks plausible
- monitoring must include unsafe actions, duplicate writes, dead-end loops, and escalation rates
RAG asks, "Did the model use the right evidence?" Agent systems add, "Did the controller choose the right action with that evidence?"
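A minimal per-step tracing sketch with OpenTelemetry, assuming the opentelemetry-api and opentelemetry-sdk packages; the span and attribute names are illustrative:
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# Console exporter for local inspection; production would ship spans to a backend.
trace.set_tracer_provider(TracerProvider())
trace.get_tracer_provider().add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
tracer = trace.get_tracer("agent")

def traced_step(step_index: int, tool_name: str, execute):
    # One span per agent step makes tool choice and latency visible.
    with tracer.start_as_current_span("agent.step") as span:
        span.set_attribute("agent.step_index", step_index)
        span.set_attribute("agent.tool_name", tool_name)
        return execute()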
Connection 2: Agent Fundamentals <-> Advanced Agent Patterns
21/06.md will add planning, memory, and multi-agent coordination. Those patterns only help after the single-agent loop is stable:
- planning decomposes harder tasks into deliberate substeps
- memory preserves information across longer horizons
- multi-agent designs split responsibilities across specialists
If the base action loop is untyped, unbounded, or unaudited, advanced patterns multiply the failure modes instead of solving them.
Resources
Optional Deepening Resources
- [PAPER] ReAct: Synergizing Reasoning and Acting in Language Models
  - Focus: The core reasoning-plus-action loop that made modern LLM agents concrete and easy to analyze.
- [PAPER] Toolformer: Language Models Can Teach Themselves to Use Tools
  - Focus: Why tool use can be framed as a learnable decision problem instead of a one-off prompt trick.
-
  - Focus: Architectural patterns for routing model behavior through external modules and symbolic tools.
- [DOC] OpenTelemetry Documentation
  - Focus: Instrumenting per-step traces, tool latency, and failure paths so agent behavior stays observable in production.
Key Insights
- An agent is a control loop - what matters is not "LLM plus tools" in the abstract, but how state, actions, and stop conditions interact over time.
- Structured tool execution is the real runtime contract - schemas, validation, timeouts, and traces are what turn model output into an operable system.
- Authority must be designed, not assumed - safe production agents separate read access from writes, use idempotent actions, and keep human approval where the blast radius is high.