Beyond Keywords: Engineering a Production-Ready Agentic Search Framework in Go

Written by amitsurana | Published 2025/12/25
Tech Story Tags: agentic-ai | agentic-systems | agentic-workflows | agentic-workflow | what-is-an-agentic-workflow | agentic-search-framework | llm-calls | semantic-caching

TL;DR: How search systems are moving from traditional retrieval to agentic workflows.

Search systems have historically been optimized for retrieval: given a query, return the most relevant documents. That model breaks down the moment user intent shifts from finding information to solving problems.

Consider a query like:

“How will tomorrow’s weather in Seattle affect flight prices to JFK?”

This isn’t a search problem. It’s a reasoning problem — one that requires decomposition, orchestration across multiple systems, and synthesis into a coherent answer.

This is where agentic search comes in.

In this article, I’ll walk through how we designed and productionized an agentic search framework in Go — not as a demo, but as a real system operating under production constraints like latency, cost, concurrency, and failure modes.

Keyword and vector search systems excel at matching queries to documents. What they don’t handle well is:

  • Multi-step reasoning
  • Tool coordination
  • Query decomposition
  • Answer synthesis

Agentic search treats the LLM not as a text generator, but as a planner: a component that decides what actions to take to answer a question.

At a high level, an agentic system must be able to:

  1. Understand user intent
  2. Decide which tools to call
  3. Execute those tools safely
  4. Iterate when necessary
  5. Synthesize a final response

The hard part isn’t wiring an LLM to tools. The hard part is doing this predictably and economically in production.

High-Level Architecture

We structured the system around three core concerns:

  • Planning – deciding what to do
  • Execution – running tools efficiently
  • Synthesis – producing the final answer

Here’s the end-to-end flow: the request is planned, the planned tools are executed, and the results are synthesized into the final answer.

Each stage is deliberately isolated. Reasoning does not leak into execution, and execution does not influence planning decisions directly.

Flow Orchestrator: The Control Plane

The Flow Orchestrator manages the full lifecycle of a request. Its responsibilities include:

  • Coordinating planner invocations
  • Executing tools concurrently
  • Handling retries, timeouts, and cancellations
  • Streaming partial responses

Instead of a linear pipeline, the orchestrator supports parallel execution using Go’s goroutines. This becomes essential once multiple independent tools are involved.
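
As a rough illustration of that separation, here is a minimal sketch of the orchestrator's control loop. The Orchestrator type and its function fields are simplified stand-ins, not the production API; a single context deadline bounds the whole request, and retries and streaming are omitted for brevity.

package agentic

import (
  "context"
  "fmt"
  "time"
)

// ToolCall is a single action the planner wants executed.
type ToolCall struct {
  Name  string
  Input string
}

// Orchestrator wires the three stages together. The function fields are
// illustrative stand-ins for the real planner, executor, and synthesizer.
type Orchestrator struct {
  Plan       func(ctx context.Context, query string) ([]ToolCall, error)
  Execute    func(ctx context.Context, calls []ToolCall) ([]string, error)
  Synthesize func(ctx context.Context, query string, results []string) (string, error)
}

// Handle runs one request end to end under a single deadline.
func (o *Orchestrator) Handle(query string) (string, error) {
  ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
  defer cancel()

  calls, err := o.Plan(ctx, query) // planning: decide what to do
  if err != nil {
    return "", fmt.Errorf("plan: %w", err)
  }
  results, err := o.Execute(ctx, calls) // execution: run tools
  if err != nil {
    return "", fmt.Errorf("execute: %w", err)
  }
  return o.Synthesize(ctx, query, results) // synthesis: produce the answer
}

Because each stage is just a function of its inputs, cancelling the context at any point stops the whole request cleanly.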

Query Planner: Mandatory First Pass, Conditional Iteration

The Query Planner is always invoked at least once.

First Planner Call (Always)

On the first invocation, the planner:

  • Analyzes the user query
  • Produces an initial set of tool calls
  • Establishes a consistent reasoning baseline

Even trivial queries go through this step to maintain uniform behavior and observability.

Lightweight Classifier Gate

Before invoking the planner a second time, we run a lightweight classifier model to determine whether the query is:

  • Single-step
  • Multi-step

This classifier is intentionally cheap and fast.

Second Planner Call (Only for Multi-Step Queries)

If the query is classified as multi-step:

  • The planner is invoked again
  • It receives:
      • The original user query
      • Tool responses from the first execution
  • It determines:
      • Whether more tools are required
      • Which tools to call next
      • How to sequence them

This prevents uncontrolled planner loops — one of the most common failure modes in agentic systems.
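
A sketch of that gating logic is below. The plannerCall, classifyMultiStep, and runTools functions are illustrative stubs, not the real clients; the point is that the planner is invoked at most twice and the classifier decides whether the second (expensive) call happens at all.

package agentic

import "context"

// Plan is the planner's output: the tool calls to execute next.
type Plan struct {
  ToolCalls []string
  Done      bool
}

// planAndExecute invokes the planner at most twice: always once, and a
// second time only when a cheap classifier marks the query as multi-step.
func planAndExecute(
  ctx context.Context,
  query string,
  plannerCall func(ctx context.Context, query string, toolResults []string) (Plan, error),
  classifyMultiStep func(ctx context.Context, query string) (bool, error),
  runTools func(ctx context.Context, calls []string) ([]string, error),
) ([]string, error) {
  // First planner call is unconditional: it establishes the baseline plan.
  first, err := plannerCall(ctx, query, nil)
  if err != nil {
    return nil, err
  }
  results, err := runTools(ctx, first.ToolCalls)
  if err != nil {
    return nil, err
  }

  // Classifier gate: only multi-step queries earn a second planner call.
  multiStep, err := classifyMultiStep(ctx, query)
  if err != nil || !multiStep {
    // On classifier failure, degrade to the single-pass result.
    return results, nil
  }

  // Second and final planner call sees the first round of tool output.
  second, err := plannerCall(ctx, query, results)
  if err != nil || second.Done || len(second.ToolCalls) == 0 {
    return results, nil
  }
  more, err := runTools(ctx, second.ToolCalls)
  if err != nil {
    return nil, err
  }
  return append(results, more...), nil
}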

Tool Registry: Where Reasoning Meets Reality

Every tool implements a strict Go interface:

// ToolInterface is the interface developers implement to add a new tool.
// It uses generics so tool inputs and outputs are strongly typed.
type ToolInterface[Input any, Output any] interface {
  // Execute runs the tool.
  //
  // Parameters:
  // - ctx: request-scoped context used for cancellation and timeouts.
  // - requestContext: per-request metadata shared across tools.
  // - input: strongly typed tool input.
  //
  // Returns:
  // - output: strongly typed tool output.
  // - toolContext: additional output data that is not used by the agent model.
  // - err: structured error from the tool; in some cases the error is passed
  //   to the LLM (e.g. no_response from the tool).
  Execute(ctx context.Context, requestContext *RequestContext, input Input) (output Output, toolContext ToolResponseContext, err error)

  // GetDefinition returns the tool definition sent to the Large Language Model.
  GetDefinition() ToolDefinition
}

This design gives us:

  • Natural-language outputs for planner feedback
  • Structured metadata for downstream use
  • Compile-time safety
  • Safe parallel execution

The Tool Registry acts as a trust boundary. Planner outputs are treated as intent — not instructions.
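
To make the interface concrete, here is a hypothetical weather tool. The RequestContext, ToolResponseContext, and ToolDefinition types are minimal stand-ins defined only so the sketch compiles on its own; the real framework types carry more fields, and a real tool would call an upstream weather service rather than returning a hard-coded string.

package agentic

import "context"

// Minimal stand-ins for the framework types referenced by the interface.
type RequestContext struct{ UserID string }
type ToolResponseContext struct{ Raw any }
type ToolDefinition struct{ Name, Description string }

type ToolInterface[Input any, Output any] interface {
  Execute(ctx context.Context, requestContext *RequestContext, input Input) (Output, ToolResponseContext, error)
  GetDefinition() ToolDefinition
}

// WeatherInput and WeatherOutput are the strongly typed request and response.
type WeatherInput struct {
  City string `json:"city"`
  Date string `json:"date"`
}
type WeatherOutput struct {
  Summary string `json:"summary"` // natural-language text fed back to the planner
}

// WeatherTool is a hypothetical tool implementation.
type WeatherTool struct{}

func (WeatherTool) Execute(ctx context.Context, _ *RequestContext, in WeatherInput) (WeatherOutput, ToolResponseContext, error) {
  // A real tool would call the weather service here; hard-coded for the sketch.
  out := WeatherOutput{Summary: "Light rain expected in " + in.City + " on " + in.Date + "."}
  return out, ToolResponseContext{Raw: out}, nil
}

func (WeatherTool) GetDefinition() ToolDefinition {
  return ToolDefinition{
    Name:        "get_weather",
    Description: "Returns the weather forecast for a city on a given date.",
  }
}

// Compile-time check that WeatherTool satisfies the interface.
var _ ToolInterface[WeatherInput, WeatherOutput] = WeatherTool{}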

Parallel Tool Execution

Planner-generated tool calls are executed concurrently whenever possible.

Go’s concurrency model makes this practical:

  • Lightweight goroutines
  • Context-based cancellation
  • Efficient I/O-bound execution

This is one of the reasons Go scales better than Python when agentic systems move beyond prototypes.
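
A stripped-down version of the fan-out looks like this. The run function is a placeholder for the registry lookup plus the typed Execute call; per-tool timeouts and retries are omitted to keep the sketch short.

package agentic

import (
  "context"
  "sync"
)

// toolResult pairs a tool call index with its output or error,
// so results can be reassembled in plan order after the fan-out.
type toolResult struct {
  index  int
  output string
  err    error
}

// executeParallel runs every planned tool call in its own goroutine.
func executeParallel(
  ctx context.Context,
  calls []string,
  run func(ctx context.Context, call string) (string, error),
) []toolResult {
  results := make([]toolResult, len(calls))
  var wg sync.WaitGroup

  for i, call := range calls {
    wg.Add(1)
    go func(i int, call string) {
      defer wg.Done()
      // Each tool sees the request context, so a cancelled or
      // timed-out request also stops in-flight tool work.
      out, err := run(ctx, call)
      results[i] = toolResult{index: i, output: out, err: err}
    }(i, call)
  }

  wg.Wait()
  return results
}

Each goroutine writes to its own slot in the pre-allocated slice, so no extra locking is needed to collect results.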

Response Generation and Streaming

Once tools complete, responses flow into the Response Generator.

  • Knowledge-based queries are summarized and synthesized using an LLM
  • Direct-answer queries (weather, sports, stocks) bypass synthesis and return raw tool output

Responses are streamed via Server-Sent Events (SSE) so users see partial results early, improving perceived latency.
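
SSE needs nothing beyond the Go standard library. A minimal handler, with placeholder chunks standing in for the response generator's output, looks roughly like this:

package agentic

import (
  "fmt"
  "net/http"
)

// streamHandler writes partial answer chunks as SSE events.
func streamHandler(w http.ResponseWriter, r *http.Request) {
  w.Header().Set("Content-Type", "text/event-stream")
  w.Header().Set("Cache-Control", "no-cache")
  w.Header().Set("Connection", "keep-alive")

  flusher, ok := w.(http.Flusher)
  if !ok {
    http.Error(w, "streaming unsupported", http.StatusInternalServerError)
    return
  }

  // Placeholder chunks; the real generator streams tokens and sections.
  chunks := []string{"Checking tomorrow's Seattle forecast...", "Looking up SEA to JFK fares...", "Final answer..."}
  for _, chunk := range chunks {
    select {
    case <-r.Context().Done():
      return // client disconnected; stop producing output
    default:
    }
    fmt.Fprintf(w, "data: %s\n\n", chunk)
    flusher.Flush() // push the partial result to the client immediately
  }
}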

Caching Strategy: Making Agentic Search Economical

One production reality became clear almost immediately:

LLM calls have real cost — in both latency and dollars.

Once we began serving beta traffic, caching became mandatory.

Our guiding principle was simple:

Avoid LLM calls whenever possible.

Layer 1: Semantic Cache (Full Response)

We first check a semantic cache keyed on the user query.

  • Cache hit → return response immediately
  • Entire agentic flow is bypassed

This delivers the biggest latency and cost win.
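
Conceptually, the semantic cache is a nearest-neighbor lookup over query embeddings: if a previously answered query is similar enough, its response is reused. The toy in-memory version below is only a sketch; the embedding function is assumed to exist elsewhere, the similarity threshold is an arbitrary example, and production would use a vector store rather than a linear scan.

package agentic

import "math"

// cacheEntry stores the embedding of a past query alongside its final answer.
type cacheEntry struct {
  embedding []float64
  response  string
}

// SemanticCache is a toy in-memory cache keyed on query embeddings.
type SemanticCache struct {
  entries   []cacheEntry
  threshold float64 // minimum cosine similarity to count as a hit, e.g. 0.92
}

// Lookup returns a cached response when some stored query is close enough.
func (c *SemanticCache) Lookup(queryEmbedding []float64) (string, bool) {
  for _, e := range c.entries {
    if cosine(queryEmbedding, e.embedding) >= c.threshold {
      return e.response, true
    }
  }
  return "", false
}

// cosine computes cosine similarity between two equal-length vectors.
func cosine(a, b []float64) float64 {
  var dot, na, nb float64
  for i := range a {
    dot += a[i] * b[i]
    na += a[i] * a[i]
    nb += b[i] * b[i]
  }
  if na == 0 || nb == 0 {
    return 0
  }
  return dot / (math.Sqrt(na) * math.Sqrt(nb))
}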

Layer 2: Planner Response Cache

If the semantic cache misses, we check whether the planner output (tool plan) is cached.

  • Skips the planner LLM call
  • Executes tools directly

Planner calls are among the most expensive and variable operations — caching them stabilizes both latency and cost.

Layer 3: Summarizer Cache

Finally, we cache summarizer outputs.

  • Tool results often repeat
  • Final synthesis can be reused
  • Reduces LLM load during traffic spikes

Each cache layer short-circuits a different part of the pipeline.
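
Putting the three layers together, the request path can be sketched as a chain of early returns. The cache lookups and stage functions below are placeholders for the real components, and keying the summarizer cache on the tool output is one possible choice, shown here only to illustrate how each layer skips a different slice of the pipeline.

package agentic

import "context"

// Caches groups the three lookup layers; each returns (value, hit).
type Caches struct {
  FullResponse func(ctx context.Context, query string) (string, bool)   // layer 1: semantic cache
  PlannerPlan  func(ctx context.Context, query string) ([]string, bool) // layer 2: tool plan
  Summary      func(ctx context.Context, key string) (string, bool)     // layer 3: synthesis
}

// answer shows how each layer short-circuits a different part of the pipeline.
func answer(
  ctx context.Context,
  query string,
  c Caches,
  plan func(context.Context, string) ([]string, error),
  runTools func(context.Context, []string) (string, error),
  summarize func(context.Context, string) (string, error),
) (string, error) {
  // Layer 1: an identical or semantically similar query was already answered.
  if resp, ok := c.FullResponse(ctx, query); ok {
    return resp, nil // skips planner, tools, and summarizer entirely
  }

  // Layer 2: reuse a cached tool plan; skips only the planner LLM call.
  toolCalls, ok := c.PlannerPlan(ctx, query)
  if !ok {
    var err error
    if toolCalls, err = plan(ctx, query); err != nil {
      return "", err
    }
  }

  toolOutput, err := runTools(ctx, toolCalls)
  if err != nil {
    return "", err
  }

  // Layer 3: reuse the synthesized summary when tool output repeats.
  if summary, ok := c.Summary(ctx, toolOutput); ok {
    return summary, nil
  }
  return summarize(ctx, toolOutput)
}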

Lessons from Production

A few hard-earned lessons:

  • LLM calls are expensive — caching isn’t optional at scale
  • Semantic caching pays off immediately
  • Planner loops must be gated
  • Most queries are simpler than they look
  • Tools fail — retries and fallbacks matter
  • Observability is non-negotiable
  • Agents aren’t autonomous — orchestration beats autonomy



Written by amitsurana | Amit Surana works on scalable distributed systems and production-grade agentic frameworks
Published by HackerNoon on 2025/12/25