You've mastered streaming text. The AI types, the user reads, and the magic of perceived performance is achieved. But what if the AI could do more than just tell? What if it could show and let you interact?
Welcome to the paradigm shift. We are moving Beyond Text.
In this guide, we’ll explore how to stream actual React components directly from the server to the client. Using the Vercel AI SDK and React Server Components (RSCs), we'll transform a static chat interface into a dynamic, interactive dashboard that renders in real-time.
The Evolution: From News Ticker to Live Broadcast
To understand the power of streaming UI components, we must first look at the limitations of the standard approach.
The Old Way (Text-Only): Imagine a live news ticker. Information flows continuously, but the format is fixed and the client is a passive recipient. If a user asks, "Show me a chart of Q3 sales," the AI streams back text describing the chart, or perhaps a JSON blob, and the user must wait for the stream to finish before the UI can render.
The New Way (Streaming UI): Now, imagine a live broadcast where the correspondent can dynamically insert interactive dashboards, charts, and forms into the feed. This is the streamable-ui pattern.
Instead of sending {"chart": "data..."}, the server sends a serialized React component: <SalesChart data={...} />. The client receives this, hydrates it immediately, and the user can hover, zoom, and click while the AI is still generating the rest of the response.
The Architecture: RSCs and Server Actions
How does this work under the hood? It relies on two pillars:
- React Server Components (RSCs): The server renders the component tree and serializes it into a special payload (React's Flight format), which is streamed to the browser over a standard HTTP response.
- Server Actions: The streamed components aren't just static HTML. They can contain interactive elements (buttons, forms) that trigger secure functions on the server via Server Actions.
This creates a bi-directional flow (see the sketch after this list):
- Server → Client: Streams a UI component.
- Client → Server: User clicks a button inside that component.
- Server → Client: The Server Action executes, potentially triggering more AI generation and streaming new components.
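To make the first leg concrete, the ai/rsc package exposes a lower-level primitive, createStreamableUI, for pushing a component to the client and updating it after the response has already been sent. Here's a minimal sketch inside a hypothetical Server Action; SalesChart and fetchQ3Sales are illustrative placeholders, not part of the example we build below:

// app/stream-demo.tsx (illustrative sketch only)
'use server';

import { createStreamableUI } from 'ai/rsc';
import { SalesChart } from './sales-chart'; // hypothetical Client Component
import { fetchQ3Sales } from './data'; // hypothetical data source

export async function getSalesChart() {
  // Open the stream with a placeholder the client can paint immediately.
  const ui = createStreamableUI(<div>Loading chart...</div>);

  // Do the slow work in the background; the response isn't blocked on it.
  (async () => {
    const data = await fetchQ3Sales();
    // Close the stream by swapping in the final, interactive component.
    ui.done(<SalesChart data={data} />);
  })();

  // The client receives this node right away and watches it update in place.
  return ui.value;
}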
The "Live Shopping Cart" Analogy
Think of a live streamer selling a product.
- Text-Only: The streamer describes the item. You type in chat to ask questions.
- Streaming UI: The streamer overlays an "Add to Cart" button and a size selector directly onto the video feed. You click it right now without stopping the video.
That is the experience we are building.
Code Example: Streaming an AI Dashboard
Let's build a SaaS feature where an AI generates a summary report and streams it as an interactive React component.
1. The Server-Side Implementation
Files: app/report-component.tsx and app/actions.tsx
We use streamUI from the Vercel AI SDK's ai/rsc package. It runs inside a Server Action, takes a model instance from a provider package (here @ai-sdk/openai, which reads OPENAI_API_KEY from the environment), and lets us map the AI's text output directly to a React component. Because our report card contains an onClick handler, the component itself lives in a small Client Component file that the action imports.
// app/report-component.tsx
'use client';

// Interactive elements (like onClick below) must live in a Client Component.
export const ReportComponent = ({ data }: { data: string }) => {
  return (
    <div className="p-4 bg-blue-50 border border-blue-200 rounded-lg">
      <h3 className="font-bold text-blue-800">AI Generated Report</h3>
      <p className="text-sm text-blue-600 mt-2">{data}</p>
      <button
        onClick={() => alert('Report acknowledged!')}
        className="mt-3 px-3 py-1 text-xs bg-blue-600 text-white rounded hover:bg-blue-700"
      >
        Acknowledge
      </button>
    </div>
  );
};

// app/actions.tsx
'use server';

import { streamUI } from 'ai/rsc';
import { openai } from '@ai-sdk/openai';
import { ReportComponent } from './report-component';

export async function generateReport(prompt: string) {
  const result = await streamUI({
    model: openai('gpt-4-turbo-preview'), // a model instance, not a bare string
    system: 'You are a helpful assistant that generates concise reports.',
    prompt: `Generate a summary report for: ${prompt}`,
    // Shown while the first tokens are still in flight
    initial: <div className="text-gray-500">Generating report...</div>,
    // The Magic Mapping:
    // As the AI generates text, we wrap each update in our React component
    text: ({ content }) => <ReportComponent data={content} />,
  });

  // result.value is a streamable React node, not a Response object
  return result.value;
}
2. The Client-Side Implementation
File: app/page.tsx
The client calls the Server Action and drops the returned value into state. What comes back isn't a string of text; it's a streamed React tree, so we render it directly and the SDK keeps updating it in place as tokens arrive.
// app/page.tsx
'use client';

import { useState, type FormEvent, type ReactNode } from 'react';
import { generateReport } from './actions';

export default function DashboardPage() {
  const [input, setInput] = useState('');
  const [report, setReport] = useState<ReactNode>(null);
  const [isLoading, setIsLoading] = useState(false);

  async function handleSubmit(e: FormEvent) {
    e.preventDefault();
    setIsLoading(true);
    // The action resolves with a node that continues to stream in place
    setReport(await generateReport(input));
    setIsLoading(false);
  }

  return (
    <div className="max-w-2xl mx-auto p-8 space-y-6">
      <h1 className="text-2xl font-bold">SaaS Dashboard</h1>
      <form onSubmit={handleSubmit} className="flex gap-2">
        <input
          type="text"
          value={input}
          onChange={(e) => setInput(e.target.value)}
          placeholder="Ask for a report (e.g., 'Q3 Sales')..."
          className="flex-1 p-2 border rounded text-black"
          disabled={isLoading}
        />
        <button type="submit" className="px-4 py-2 bg-blue-600 text-white rounded">
          Generate
        </button>
      </form>
      <div className="space-y-4 border-t pt-4">
        <h2 className="font-semibold text-lg">Output:</h2>
        {/* No dangerouslySetInnerHTML needed: the RSC payload deserializes into real components */}
        <div className="rendered-content">
          {report ?? <p className="text-gray-400 italic">No report generated yet.</p>}
          {isLoading && (
            <div className="flex items-center gap-2 text-blue-500">
              <span className="animate-pulse">●</span>
              <span>Streaming component...</span>
            </div>
          )}
        </div>
      </div>
    </div>
  );
}
Advanced Patterns: LangGraph and Max Iteration Policies
When you combine streaming UI with AI Agents, you enter a cyclical workflow. The AI generates a UI, the user interacts with it, and that interaction feeds back into the AI to generate the next step.
This is powerful, but dangerous. Without guardrails, an AI can get stuck in an infinite loop of generating components.
The Solution: Max Iteration Policies.
Using LangGraph, we can structure our AI logic as a stateful graph. We add a conditional edge (a "Policy") that checks the iteration count. If the count exceeds a limit (e.g., 5 steps), the graph forces a transition to the END node, terminating the process gracefully.
This ensures your application remains stable even if the AI logic gets confused.
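As a concrete illustration, here's a minimal sketch of such a policy in TypeScript using @langchain/langgraph. The generate node is a hypothetical stand-in for the step that calls your model and streams UI; only the iteration counting and the conditional edge are the point:

import { StateGraph, Annotation, START, END } from '@langchain/langgraph';

// Track how many generate -> interact cycles we've run.
const AgentState = Annotation.Root({
  iterations: Annotation<number>({
    reducer: (_prev, next) => next,
    default: () => 0,
  }),
});

const MAX_ITERATIONS = 5;

const graph = new StateGraph(AgentState)
  // Hypothetical node: in a real app this would call the model / streamUI.
  .addNode('generate', async (state) => ({ iterations: state.iterations + 1 }))
  .addEdge(START, 'generate')
  // The Max Iteration Policy: loop back until we hit the cap, then force END.
  .addConditionalEdges('generate', (state) =>
    state.iterations >= MAX_ITERATIONS ? END : 'generate',
  );

const app = graph.compile();
// await app.invoke({ iterations: 0 }); // runs 5 generate steps, then terminates

LangGraph also enforces its own recursionLimit on every run as a backstop, but an explicit policy like this lets you end the loop gracefully instead of throwing an error.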
Common Pitfalls to Avoid
- Hallucinated JSON: Don't ask the LLM to generate the React component structure itself (raw JSON/JSX); the output will be brittle and unsafe to render. Instead, ask it to generate content, and map that content to a pre-defined component on the server (as shown in the code example).
- Vercel Timeouts: Serverless functions have execution limits (often 10-15 seconds by default, depending on your plan). If your AI generation is slow, the stream might cut off. Always stream with streamUI (which keeps data flowing over the open connection), optimize your prompts, and raise the limit where needed (see the config sketch below).
- Hydration Errors: Server Components cannot access browser APIs (window, document). If you need client-side interactivity (like the onClick in our example), keep that logic in a Client Component, as we did with ReportComponent.
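On the timeout pitfall: Next.js route segment config lets you raise a function's execution ceiling on Vercel (the actual maximum depends on your plan). A one-liner added to the action or route file, assuming our app/actions.tsx from above:

// app/actions.tsx (route segment config)
// Allow up to 60 seconds for slow generations; the hard ceiling depends on your Vercel plan.
export const maxDuration = 60;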
Conclusion
Streaming React components moves us from "Generative Text" to "Generative UI."
It changes the user experience from a passive read-and-wait cycle to an active, iterative collaboration. By leveraging the Vercel AI SDK and React Server Components, you can build applications that feel instantaneous and deeply interactive. The AI isn't just telling you what to do; it's building the tools for you to do it, right in front of your eyes.
The concepts and code demonstrated here are drawn directly from the comprehensive roadmap laid out in the book The Modern Stack: Building Generative UI with Next.js, Vercel AI SDK, and React Server Components.
