Building a Better Debugging Experience: A Deep Dive into Capturing and Replaying gRPC Traffic

Written by amitsurana | Published 2026/01/01
Tech Story Tags: debugging | golang | distributed-systems | debugging-microservices | programming | request-capture-engine | opentelemetry | query-service-and-cli

TL;DR: Debugging microservices is hard because it's difficult to see the data flowing between them. We built a "Request Capture Engine" that acts like a flight recorder for our gRPC traffic. It uses gRPC interceptors to automatically record request/response payloads, which are stored in object storage and correlated with OpenTelemetry trace IDs. This lets us replay production traffic for debugging, find specific payloads with a simple command-line tool (`rcap`), and even power interactive UIs for A/B testing and analysis. This post explains how you can build a similar system.

Debugging in a complex microservices architecture can feel like navigating a maze in the dark. When a request fails or returns unexpected data, the trail often goes cold at the service boundary. You know what you sent, and you know what you got back, but the crucial "why" is locked away inside a black box. What if you could install a flight recorder on every service, transparently recording every interaction for perfect recall and inspection?

This post is a deep dive into how we built such a system—a "Request Capture Engine"—from the ground up using standard, open-source components. We'll walk through the architecture, the code, and the powerful new debugging workflows it unlocks.

The Core Mechanism: How to Intercept gRPC Calls

The foundation of our capture system is a feature built directly into gRPC: interceptors. An interceptor is a middleware function that can "intercept" an incoming or outgoing RPC, allowing you to inspect and modify the request, the response, and the call's context.

For a client-side unary (request-response) call, the concept is simple. Instead of the client calling the server directly, it calls the interceptor, which then invokes the actual RPC. This gives us the perfect hook to record the data.

Here’s a simplified example of what a client interceptor looks like in Go:

func UnaryPayloadCaptureInterceptor() grpc.UnaryClientInterceptor {
    return func(
        ctx context.Context,
        method string,
        req, reply interface{},
        cc *grpc.ClientConn,
        invoker grpc.UnaryInvoker,
        opts ...grpc.CallOption,
    ) error {
        // 1. Record the request payload before the call
        recordRequest(ctx, req)

        // 2. Invoke the actual RPC
        err := invoker(ctx, method, req, reply, cc, opts...)

        // 3. Record the response payload after the call
        recordResponse(ctx, reply, err)

        return err
    }
}

This same principle can be extended to server-side interceptors (to capture what a service receives) and, crucially, to streaming RPCs.

Building the Capture Engine: A Step-by-Step Guide

Capturing Payloads and the Role of OpenTelemetry

To build a complete picture of an interaction, we need to capture the request and response, and we need a way to link them together. This is where distributed tracing comes in. Our system relies on OpenTelemetry, the industry standard for observability.

When a request enters our system, OpenTelemetry assigns it a unique trace_id. As that request travels from one service to another, this trace_id is propagated in the gRPC metadata. Each hop in the journey is a span, with its own span_id.

Our interceptor leverages this context to create a complete record:

import (
    "context"

    "go.opentelemetry.io/otel/trace"
    "google.golang.org/grpc"
)

func (interceptor *Interceptor) clientInterceptor() grpc.UnaryClientInterceptor {
    return func(ctx context.Context, method string, req, reply interface{}, cc *grpc.ClientConn, invoker grpc.UnaryInvoker, opts ...grpc.CallOption) error {
        // Extract the span from the context
        span := trace.SpanFromContext(ctx)
        if !span.SpanContext().IsSampled() {
            // If tracing is not enabled for this request, do nothing
            return invoker(ctx, method, req, reply, cc, opts...)
        }

        // Create an interaction record, using IDs from the span context
        interaction := &rcappb.Interaction{
            Metadata: generateMetadata(span.SpanContext(), ...),
        }
        interaction.ReqPayload = capturePayload(ctx, req)

        err := invoker(ctx, method, req, reply, cc, opts...)

        interaction.RespPayload = capturePayload(ctx, reply)
        
        // Asynchronously push the interaction to storage
        interceptor.pushOrDiscard(ctx, interaction)

        return err
    }
}

// capturePayload converts a message into the proto used to store all events
func capturePayload(ctx context.Context, msg interface{}) *capturepb.Payload {
    // Get proto reflection information from msg
    var prMessage protoreflect.Message
    ...
    ...
    return &capturepb.Payload{
        Message:    eventInBytes,
        SchemaName: string(prMessage.Descriptor().FullName()),
    }
}

Of course, capturing every single payload in a high-traffic production environment could be overwhelming. It's critical to include a rate-limiter in the interceptor to sample a configurable percentage of traffic, preventing performance degradation.

Handling Streaming RPCs

Capturing streaming RPCs, like those used for LLM responses or large data transfers, is more complex. A stream can consist of hundreds of individual messages, and we need to capture all of them.

We solve this by creating a wrapper around the standard grpc.ClientStream and grpc.ServerStream. This wrapper intercepts every SendMsg and RecvMsg call, records the message, and then passes it to the original stream.

The process looks like this:
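Here is a minimal, dependency-free sketch of the wrapping idea. The `msgStream` interface and `fakeStream` type are hypothetical stand-ins so the example runs on its own; in the real system the wrapper would embed `grpc.ClientStream` or `grpc.ServerStream` directly.

```go
package main

import "fmt"

// msgStream mirrors the SendMsg/RecvMsg surface shared by
// grpc.ClientStream and grpc.ServerStream (a stand-in so this
// sketch stays dependency-free).
type msgStream interface {
	SendMsg(m any) error
	RecvMsg(m any) error
}

// WrappedStream records every message that crosses the stream and
// forwards it to the underlying implementation unchanged.
type WrappedStream struct {
	inner    msgStream
	sent     []any // request messages, in order
	received []any // response messages, in order
}

func (w *WrappedStream) SendMsg(m any) error {
	w.sent = append(w.sent, m)
	return w.inner.SendMsg(m)
}

func (w *WrappedStream) RecvMsg(m any) error {
	if err := w.inner.RecvMsg(m); err != nil {
		return err
	}
	w.received = append(w.received, m)
	return nil
}

// flush is called when the stream closes: all recorded messages
// are persisted together as a single interaction.
func (w *WrappedStream) flush() {
	fmt.Printf("persisting interaction: %d sent, %d received\n",
		len(w.sent), len(w.received))
}

// fakeStream is a trivial no-op stream used to exercise the wrapper.
type fakeStream struct{}

func (fakeStream) SendMsg(m any) error { return nil }
func (fakeStream) RecvMsg(m any) error { return nil }

func main() {
	w := &WrappedStream{inner: fakeStream{}}
	_ = w.SendMsg("prompt")
	_ = w.RecvMsg("token-1")
	_ = w.RecvMsg("token-2")
	w.flush() // prints "persisting interaction: 1 sent, 2 received"
}
```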

When the stream is closed (either by the client or the server), the WrappedStream takes all the recorded request and response messages and persists them as a single interaction.

Storing the Payloads for Posterity

With payloads captured, we need a place to store them. A generic object storage solution like Amazon S3 is a perfect fit—it's durable, scalable, and cost-effective.

The key is a smart file naming strategy. By embedding metadata directly into the object path, we can query for data efficiently without needing a separate database index. Our structure looks like this:

/request-capture-data/{trace_id}/{start_timestamp}_{source_service}_{target_service}_{span_id}.pb

This allows us to instantly find all spans for a given trace_id just by listing files in a directory.

A Note on Security and Privacy

Storing raw request and response data is a superpower, but it comes with great responsibility. If your payloads contain Personally Identifiable Information (PII) or other sensitive data, you must handle it with care.

Before persisting any data, a redaction step is essential. This process should identify sensitive fields—perhaps using proto annotations or field name conventions—and either remove them entirely or mask their values. This ensures that your debugging data remains useful for engineers without exposing sensitive user information.
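The field-name-convention approach can be sketched like this. Everything here is illustrative (the `sensitiveFields` set and `redact` helper are ours); a real implementation would drive this from proto annotations rather than a hard-coded list:

```go
package main

import (
	"fmt"
	"strings"
)

// sensitiveFields lists field-name conventions treated as PII.
// In a real system this set would come from proto annotations.
var sensitiveFields = map[string]bool{
	"email": true, "ssn": true, "password": true, "phone": true,
}

// redact walks a decoded payload (as a generic map) and masks any
// field whose name matches a sensitive convention, recursing into
// nested objects.
func redact(payload map[string]any) {
	for key, val := range payload {
		lower := strings.ToLower(key)
		if sensitiveFields[lower] || strings.HasSuffix(lower, "_token") {
			payload[key] = "[REDACTED]"
			continue
		}
		if nested, ok := val.(map[string]any); ok {
			redact(nested)
		}
	}
}

func main() {
	p := map[string]any{
		"user_id": 42,
		"email":   "alice@example.com",
		"profile": map[string]any{"phone": "555-0100"},
	}
	redact(p)
	// Both sensitive values are now "[REDACTED]"; user_id is untouched.
	fmt.Println(p["email"], p["profile"].(map[string]any)["phone"])
}
```

Running redaction in the interceptor, before the payload ever leaves the process, is safer than redacting at query time: raw PII never lands in object storage at all.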

Making Sense of the Data: Querying and Concatenation

Now that we have a mountain of valuable data, we need a way to access it.

The Query Service and CLI

We built a simple gRPC service that exposes an API for querying the stored payloads from object storage.

// request-capture-service.proto
service RequestCapture {
    // Get all spans for a given trace
    rpc Traces(TracesRequest) returns (TracesResponse);

    // Get the full payloads for a trace or a specific span
    rpc Payloads(PayloadsRequest) returns (PayloadsResponse);
}

While the service is useful, the primary interface for developers is a command-line tool. The CLI provides a fast, intuitive way to fetch exactly what you need.

# Get all spans for a given trace ID
$ rcap traces --trace-id abc-123

# Get the full request/response payloads for that trace, formatted as JSON
$ rcap payloads --trace-id abc-123 --encoding json

The Magic of Stream Concatenation

Analyzing a stream of hundreds of LLM tokens is tedious. To solve this, we introduced the concept of a "concatenator." This is a pluggable component that knows how to merge a stream of messages into a single, coherent message.

For example, our ALLMChatConcatenator takes a stream of chat deltas and merges them by role, producing a final, clean transcript of the conversation. This is configurable, and you can write custom concatenators for any streaming proto in your system. The CLI can then return both the raw stream and the concatenated result, giving you the best of both worlds.
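The merge-by-role idea can be illustrated in a few lines. This is a simplified sketch, not the real ALLMChatConcatenator; the `chatDelta` type stands in for whatever streamed proto your system uses:

```go
package main

import (
	"fmt"
	"strings"
)

// chatDelta is a hypothetical stand-in for one streamed message:
// a role plus a fragment of content.
type chatDelta struct {
	Role    string
	Content string
}

// concatenate merges a stream of deltas into one message per
// contiguous run of the same role, producing a clean transcript.
func concatenate(deltas []chatDelta) []chatDelta {
	var out []chatDelta
	for _, d := range deltas {
		if n := len(out); n > 0 && out[n-1].Role == d.Role {
			out[n-1].Content += d.Content
			continue
		}
		out = append(out, d)
	}
	return out
}

func main() {
	stream := []chatDelta{
		{"user", "What is 2+2?"},
		{"assistant", "2+2 "},
		{"assistant", "is "},
		{"assistant", "4."},
	}
	for _, m := range concatenate(stream) {
		fmt.Printf("%s: %s\n", m.Role, strings.TrimSpace(m.Content))
	}
	// prints two lines: the user turn and the merged assistant turn
}
```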

Putting It All Together: From Debugging to Replay

This system isn't just for looking at data; it enables entirely new workflows.

Use Case 1: Advanced Debugging

Having the exact, unadulterated payload is a game-changer. You can pipe the CLI's JSON output directly into tools like jq to instantly find a needle in a haystack.

# Find any response where the 'error_code' field was not 0
$ rcap payloads --trace-id abc-123 --encoding json | jq '.payloads[] | select(.resp.error_code != 0)'

Use Case 2: Building a High-Fidelity Traffic Replay System

This is perhaps the most powerful use case. The captured payloads are a perfect, high-fidelity record of production traffic. You can build a system to "replay" these payloads against a staging or local instance of a service. This is the most reliable way to reproduce complex bugs that only appear with specific, hard-to-guess data patterns. You can even pipe the output of the rcap tool directly into a gRPC CLI to replay a request with a single command.

Use Case 3: Powering Interactive Debugging UIs

While the command line is powerful, the Request Capture Engine's API can also be the backbone for rich, interactive web UIs. Imagine an experimentation UI for your service that directly integrates with the payload store. This unlocks several advanced workflows:

  • Search and Load: A developer can paste a trace_id from a production error into the UI, which then calls the Request Capture Engine service to fetch the exact request payload and load it into an editor.
  • Modify and Replay: The developer can then tweak the request parameters in the UI and replay the request against a development or staging environment to test a fix.
  • A/B Comparison: The UI could load a single production request and replay it against two different versions of a service simultaneously, providing a side-by-side comparison of the responses to validate a change.
  • Shareable Debug Sessions: By integrating with the payload store, a developer can share a URL that links directly to a specific trace, making it easy to collaborate on debugging complex issues.

This turns the payload capture system from a simple debugging tool into a foundational platform for developer tooling.

Conclusion

By combining gRPC interceptors, OpenTelemetry, and a simple object storage layout, we've built a "Request Capture Engine" for our microservices. It has transformed debugging from a frustrating exercise in guesswork into a deterministic process of inspection. The ability to see the exact data that caused a problem, and to replay that data on demand, has dramatically accelerated our ability to build and maintain reliable systems. Building your own payload capture system is an achievable and high-impact investment that will pay dividends in developer productivity and service reliability.


Written by amitsurana | Amit Surana works on scalable distributed systems and production-grade agentic frameworks
Published by HackerNoon on 2026/01/01