How to Render React Apps Inside ChatGPT and Claude Using MCP

Written by faraaz-m | Published 2026/04/10

TL;DR: The current interaction model for AI assistants relies heavily on text-based conversational flows. This paradigm fails when users must interact with complex, multi-step data or visual workflows. To remain competitive, SaaS platforms must eliminate this friction by bringing the application execution layer directly to the user.

As Large Language Models (LLMs) increasingly serve as the primary computing interface, the reliance on static text and JSON payloads restricts the user experience and limits SaaS engagement. Forcing users to leave a chat environment to execute complex workflows introduces cognitive friction and task abandonment. This paper outlines a novel architecture utilizing a NestJS Model Context Protocol (MCP) server to dynamically build and serve React user interfaces directly within LLM clients. By delivering an index.html payload as a ui:// resource, applications can render interactive, agent-driven micro-frontends in a sandboxed iframe. We evaluate implementation strategies, propose a rigorous security methodology to handle stateful authentication, and detail performance benchmarks. Finally, we examine the inherent technical limitations of iframe sandboxing and the Developer Experience (DX) tradeoffs, discussing the paradigm shift toward "Agentic UIs" as the next major evolution in software consumption.

I. Introduction: The Cost of Context Switching

The current interaction model for AI assistants relies heavily on text-based conversational flows. While effective for information retrieval, this paradigm fails when users must interact with complex, multi-step data or visual workflows. Currently, when an AI agent determines a user needs to interact with a SaaS platform, it typically provides an external URL, forcing a redirect.

UX research demonstrates that this structural interruption degrades Prospective Memory, the cognitive ability to retain and execute a planned intention (Chiossi et al., 2023). This context switching introduces extraneous cognitive load, leading to a significant drop in user engagement. To remain competitive, SaaS platforms must eliminate this friction by bringing the application execution layer directly to the user inside the AI chatbot environment.

II. Prior Approaches and the Toggle Tax

Prior to the adoption of UI-capable MCP servers, developers attempted to bridge chatbots and SaaS platforms using Markdown tables, standard OAuth redirects, or simple API text summaries. These legacy methods fundamentally failed to resolve user intent without human intervention or application switching.

Recent industry analysis quantifies this friction as the "Toggle Tax." A 2022 Harvard Business Review study found that the average digital worker toggles between applications nearly 1,200 times per day, costing up to 9% of their annual productivity. Furthermore, cross-application fatigue directly suppresses software adoption; when users are constantly forced to switch contexts, they abandon complex features and revert to manual workarounds (VisualSP, 2026).

III. Architectural Prerequisites

To hydrate UI components on the fly, traditional API structures must be augmented. According to the official MCP specification (Anthropic, 2024), servers can offer "Resources" to clients. The primary prerequisite for our architecture is an MCP server (e.g., NestJS) where the authentication layers explicitly support these new UI resource types.

Rather than returning a standard JSON data payload, the NestJS server acts as a bridge to a React application. When an LLM invokes a specific tool, the MCP server responds with metadata containing a ui:// resource URI.

Listing 1: NestJS MCP Tool Registration

import { Injectable, OnModuleInit } from '@nestjs/common';
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { z } from 'zod';

@Injectable()
export class McpUiService implements OnModuleInit {
  private server: McpServer;

  async onModuleInit() {
    // Initialize the MCP Server with metadata
    this.server = new McpServer({ 
      name: 'SaaS-Dynamic-UI', 
      version: '1.0.0' 
    });
    
    this.registerUiTools();
  }

  private registerUiTools() {
    /**
     * Tool: generate_dashboard
     * Purpose: Provides the LLM with a way to "render" a UI component
     * by returning a URI that the host application (Claude/ChatGPT) 
     * will load into an iframe.
     */
    this.server.tool(
      'generate_dashboard',
      'Generates a dynamic React dashboard for data visualization',
      { datasetId: z.string().describe("The unique ID of the dataset to visualize") },
      async (params) => {
        // Create a scoped, short-lived token for the specific dataset
        const sessionToken = await this.generateSecureToken(params.datasetId);
        
        // This URI is what the React app's URLSearchParams will parse
        const uiResourceUri = `ui://saas-provider.com/components/dashboard?token=${sessionToken}`;

        return {
          content: [
            { 
              type: 'text', 
              text: 'The interactive dashboard has been generated and is rendering below.' 
            }
          ],
          metadata: {
            ui: { 
              uri: uiResourceUri, 
              height: 600 
            }
          }
        };
      }
    );
  }

  /**
   * Helper to generate a secure, verifiable session token
   * (Integration point with your Auth/Session providers)
   */
  private async generateSecureToken(datasetId: string): Promise<string> {
    // Logic to generate/sign a JWT or store a session in Redis
    return Buffer.from(datasetId).toString('base64'); // Placeholder
  }
}

The host LLM client reads this resource, fetches the compiled React index.html, and renders it safely within a sandboxed iframe. The client-side application then connects back to the LLM's context over the MCP bridge.
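
On the server side, the ui:// URI must ultimately resolve to that compiled index.html. The resource-registration API varies across MCP SDK versions, so the sketch below isolates the payload shaping in a plain helper — `buildHtmlResourceResult` is a hypothetical name, not part of the SDK — and leaves the SDK wiring as a comment.

```typescript
// Sketch: shaping a ui:// resource result that serves the pre-built React
// bundle as text/html. `buildHtmlResourceResult` is a hypothetical helper;
// the SDK registration call below is an assumption about your SDK version.
interface ResourceContents {
  uri: string;
  mimeType: string;
  text: string;
}

function buildHtmlResourceResult(
  uri: string,
  html: string,
): { contents: ResourceContents[] } {
  return {
    contents: [{ uri, mimeType: 'text/html', text: html }],
  };
}

// Assumed wiring (shape depends on the MCP SDK release you target):
// this.server.resource(
//   'dashboard-ui',
//   'ui://saas-provider.com/components/dashboard',
//   async (uri) =>
//     buildHtmlResourceResult(uri.href, await readFile('dist/index.html', 'utf8')),
// );
```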

Listing 2: React Client Hydration & Bridge

import React, { useEffect, useState } from 'react';
import { useMcpContext } from '@modelcontextprotocol/ext-apps/react';
// Note: hydrateUiState, LoadingSpinner, and DashboardComponent are
// application-specific helpers/components imported from elsewhere in the app.

/**
 * DynamicMcpApp
 * Handles UI hydration and tool interaction within 
 * an external MCP application context.
 */
const DynamicMcpApp = () => {
  const [appState, setAppState] = useState(null);
  const { isConnected, context, callTool } = useMcpContext();

  useEffect(() => {
    // Only attempt to hydrate once the MCP handshake is complete
    if (isConnected && context) {
      const params = new URLSearchParams(window.location.search);
      const token = params.get('token');

      if (token) {
        hydrateUiState(token)
          .then((data) => setAppState(data))
          .catch((err) => console.error("Hydration failed:", err));
      }
    }
  }, [isConnected, context]);

  /**
   * Executes an MCP tool call to process user interactions
   */
  const handleAction = async (actionData) => {
    try {
      await callTool('process_action', { 
        action: actionData, 
        contextId: context.threadId 
      });
    } catch (error) {
      console.error("Tool execution failed:", error);
    }
  };

  if (!appState) {
    return <LoadingSpinner />;
  }

  return (
    <DashboardComponent 
      data={appState} 
      onAction={handleAction} 
    />
  );
};

export default DynamicMcpApp;

IV. Implementation Strategies: Monorepo vs. Shell

When designing the React payload, engineering teams must evaluate their product ecosystem to choose between two primary architectural patterns (Geers, 2020):

Approach A: The Discrete Micro-Frontend (Monorepo Strategy)

For enterprise companies providing a suite of disparate software products, a monorepo approach is optimal. UI building blocks are imported as npm packages, and at build time a highly specific index.html bundle is compiled for each MCP tool the server exposes. This prevents codebase bloat and keeps environments securely isolated.
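
A minimal sketch of how those per-tool bundles might be enumerated at build time, assuming a `src/tools/<name>/index.html` layout and a bundler that accepts a multi-entry input map (such as Vite's Rollup options) — both the paths and the config shape are illustrative, not prescribed by MCP:

```typescript
// Sketch: derive one HTML entry point per MCP tool for a multi-entry
// bundler config. Tool names and directory layout are assumptions.
function toolEntries(tools: string[]): Record<string, string> {
  return Object.fromEntries(
    tools.map((name) => [name, `src/tools/${name}/index.html`]),
  );
}

// Assumed usage inside vite.config.ts:
// export default defineConfig({
//   build: { rollupOptions: { input: toolEntries(['dashboard', 'report']) } },
// });
```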

Approach B: The Dynamic Shell Application

For a single-themed SaaS product, maintaining a single "Shell" application is the better fit. The server passes dynamic metadata and state instructions to a unified React shell, which evaluates this data at runtime to conditionally render components. This avoids maintaining two separate flavors of the codebase and maximizes reusability.
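
The shell's runtime dispatch can be sketched as a component registry keyed by the metadata the server returns. The `component` field, the registry names, and the string-returning renderers (which stand in for real React components so the lookup logic stays self-contained) are all assumptions:

```typescript
// Sketch: shell-side registry mapping server metadata to a renderer.
// String-returning renderers stand in for React components here.
type Renderer = (props: Record<string, unknown>) => string;

const registry: Record<string, Renderer> = {
  dashboard: (props) => `<Dashboard datasetId=${props.datasetId}>`,
  table: (props) => `<DataTable rows=${props.rowCount}>`,
};

function resolveComponent(meta: { component: string }): Renderer {
  // Fall back to a safe placeholder when the server names an unknown component
  return registry[meta.component] ?? (() => '<EmptyState>');
}
```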

V. Security and Authentication Methodology

Rendering external UIs inside a third-party LLM environment introduces a unique matrix of vulnerabilities. The architecture must defend against Cross-Site Scripting (XSS), unauthorized data access, and session hijacking.

A. Secure Credential Passing and Session State Verification

Authentication is maintained via a secure, HttpOnly cookie set on the SaaS provider's domain. Relying solely on stateless JWTs is insufficient. The server must decode the JWT and query a central Session Provider to guarantee the token corresponds to an active session.

Listing 3: Session Verification Provider

import { Injectable, UnauthorizedException } from '@nestjs/common';
import { RedisService } from './redis.service';

/**
 * Interface to provide structure for the decoded JWT payload
 */
interface JwtPayload {
  userId: string;
  sessionId: string;
}

@Injectable()
export class SessionProvider {
  constructor(private readonly redisStore: RedisService) {}

  /**
   * Validates the session against a distributed Redis store
   * to handle revocation or expiration in real-time.
   */
  async validateSession(decodedJwt: JwtPayload): Promise<boolean> {
    const { userId, sessionId } = decodedJwt;
    
    // Key pattern: session:userId:sessionId
    const activeSession = await this.redisStore.get(
      `session:${userId}:${sessionId}`
    );

    if (!activeSession) {
      throw new UnauthorizedException('Session expired or revoked.');
    }

    return true;
  }
}

B. Origin Validation and The Interceptor Pipeline

Before the server processes the requested tool, a guard evaluates the request origin against an explicit allowlist of trusted AI clients, and a strict Content-Security-Policy (frame-ancestors) ensures only those clients can embed the UI. To protect against Cross-Site Request Forgery (CSRF), the server also attaches a unique CSRF token to the outbound response headers.


Listing 4: MCP Authentication Guard

import {
  Injectable,
  CanActivate,
  ExecutionContext,
  UnauthorizedException,
} from '@nestjs/common';
import { JwtService } from '@nestjs/jwt';
import { randomBytes } from 'crypto';
import { SessionProvider } from './session.service';

@Injectable()
export class McpAuthGuard implements CanActivate {
  private readonly allowedOrigins = ['https://chatgpt.com', 'https://claude.ai'];

  constructor(
    private readonly jwtService: JwtService,
    private readonly sessionProvider: SessionProvider,
  ) {}

  async canActivate(context: ExecutionContext): Promise<boolean> {
    const request = context.switchToHttp().getRequest();
    const response = context.switchToHttp().getResponse();

    // 1. Origin Validation — parse the header so a hostile origin such as
    // "https://chatgpt.com.evil.com" cannot pass a naive substring check
    const rawOrigin = request.headers.origin || request.headers.referer;
    let isAllowed = false;
    try {
      isAllowed = this.allowedOrigins.includes(new URL(rawOrigin).origin);
    } catch {
      isAllowed = false; // Missing or malformed origin header
    }

    if (!isAllowed) {
      throw new UnauthorizedException('Invalid origin request.');
    }

    // 2. Token Extraction
    const authToken = request.cookies['saas_auth_token'];
    if (!authToken) {
      throw new UnauthorizedException('Authentication missing.');
    }

    try {
      // 3. JWT Verification & Session Check
      const decoded = this.jwtService.verify(authToken, {
        secret: process.env.JWT_SECRET,
      });

      await this.sessionProvider.validateSession(decoded);

      // 4. Security Headers & CSRF
      const csrfToken = randomBytes(32).toString('hex');

      response.setHeader('X-CSRF-Token', csrfToken);
      response.setHeader(
        'Content-Security-Policy',
        `frame-ancestors ${this.allowedOrigins.join(' ')}`,
      );

      // 5. Attach User to Request
      request.user = decoded;
      return true;
    } catch (error) {
      throw new UnauthorizedException('Token validation failed.');
    }
  }
}

VI. Performance Benchmarks and Latency Profiling

A primary concern with this architecture is added latency. In practice, however, the rendering overhead is small relative to the baseline LLM inference loop. Because the React applications are pre-built, the time-to-first-byte (TTFB) for the HTML payload is low: once the LLM completes its generation loop and returns the ui:// resource, the host client consistently loads the payload in under 5 seconds — typically less time than the generation loop that precedes it.

VII. Technical Limitations of the Sandboxed Architecture

While powerful, embedding a React application within an LLM's chat interface inherits the strict limitations of iframe sandboxing. Developers must architect their UIs with the understanding that they do not have root-level control over the browser environment.

  • Media Controls: The parent LLM client dictates the sandbox attributes. Requests for audio/video hardware controls may be silently blocked.
  • History API Constraints: Standard single-page application (SPA) routing (pushState) does not reflect in the parent chat window, complicating deep-linking.
  • Clipboard Interoperability: Security restrictions often prevent the iframe from accessing the navigator.clipboard API without explicit permission bridging.
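
For clipboard access in particular, the UI should feature-detect rather than assume the capability. A minimal sketch, with the navigator-like object injected so the logic can be exercised outside a browser (the function names are illustrative):

```typescript
// Sketch: feature-detect the Clipboard API inside the sandboxed iframe and
// degrade gracefully when the host's sandbox attributes block it.
interface ClipboardLike {
  clipboard?: { writeText?: (text: string) => Promise<void> };
}

function canUseClipboard(nav: ClipboardLike): boolean {
  return typeof nav.clipboard?.writeText === 'function';
}

async function copyOrFallback(
  nav: ClipboardLike,
  text: string,
): Promise<'copied' | 'manual'> {
  if (canUseClipboard(nav)) {
    await nav.clipboard!.writeText!(text);
    return 'copied';
  }
  // Sandbox blocked clipboard access: surface the text for manual copy instead
  return 'manual';
}
```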

VIII. Edge Case Handling and Sandboxed Degradation

LLMs are prone to hallucinations, including invoking tools out of sequence or passing malformed arguments. A production-ready MCP architecture must anticipate and defensively handle these edge cases.


A. Strict Parameter Validation

Before any UI hydration occurs, the MCP tool handler must rigorously validate the incoming parameters against a strict schema. By defining a Zod schema during tool registration, the server automatically intercepts hallucinated data before executing the UI build logic.


Listing 5: Defensive Zod Schema Validation

import { z } from 'zod';

/**
 * Define the expected parameter schema for a 
 * hypothetical travel routing tool
 */
const OutdoorRecommendationSchema = z.object({
  location: z.string().min(2, "Location must be specified"),
  activityType: z.enum(['hiking', 'scenic_drive', 'camping']).default('hiking'),
  maxDistance: z.number().max(100, "Distance cannot exceed 100 miles").optional(),
});

// Inside the MCP tool registration:
this.server.tool(
  'mcp-outdoor-recommendations',
  'Returns interactive map UI for outdoor activities',
  OutdoorRecommendationSchema.shape, // The SDK strictly enforces this schema
  async (params) => {
    /**
     * If the LLM hallucinates a maxDistance of 500, the tool safely 
     * rejects the call and returns a native error to the orchestrator 
     * instead of crashing the React application.
     */
    const uiResourceUri = await this.uiBuilder.hydrateMapUI(params);

    return {
      content: [
        { 
          type: 'text', 
          text: 'Rendering map interface.' 
        }
      ],
      metadata: { 
        ui: { 
          uri: uiResourceUri, 
          height: 800 
        } 
      }
    };
  }
);

B. Sandboxed Error Boundaries

If a runtime error occurs post-hydration, the common pattern of triggering window.location.reload() is blocked by the host client's iframe security policies. Instead, the application employs strict Error Boundaries. If an exception is caught, the UI must programmatically close the iframe and emit a telemetry signal back to the orchestrator to trigger an automatic retry or prompt a Human-in-the-Loop (HITL) intervention.
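
A sketch of the telemetry signal such a boundary might emit, assuming a hypothetical `report_ui_error` tool and an `emit` wrapper around the MCP bridge's `callTool`; the recoverability heuristic is illustrative only:

```typescript
// Sketch: payload an Error Boundary's componentDidCatch might send back to
// the orchestrator instead of calling window.location.reload(), which the
// host's iframe policy blocks. Names and heuristic are assumptions.
interface BoundaryTelemetry {
  tool: string;
  message: string;
  recoverable: boolean;
}

function buildBoundarySignal(toolName: string, error: Error): BoundaryTelemetry {
  return {
    tool: toolName,
    message: error.message,
    // Transient network-ish failures are worth an automatic retry;
    // everything else escalates to Human-in-the-Loop intervention.
    recoverable: /timeout|network|fetch/i.test(error.message),
  };
}

// Assumed wiring inside the boundary:
// componentDidCatch(error: Error) {
//   emit('report_ui_error', buildBoundarySignal('generate_dashboard', error));
// }
```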

IX. The Developer Experience (DX): Tradeoffs and Testing Bottlenecks

Adopting an Agentic UI architecture presents a mixed Developer Experience. Utilizing a monorepo approach offers significant advantages: developers can seamlessly share TypeScript interfaces, utility functions, unified analytics, and feature flags between the core SaaS application and the MCP server.

However, this integration increases debugging complexity. The primary drawback is the friction of End-to-End (E2E) testing. To test an MCP connection locally within a live LLM client, developers must utilize secure tunneling tools (such as ngrok or Cloudflare Tunnel) to expose their local environment to the sandboxed iframe, adding latency to the development cycle.

X. Discussion and Conclusion

The transition to returning renderable UI components represents a fundamental industry shift. Software interfaces have historically moved in distinct waves: Desktop GUIs, the Web, Mobile Apps, and now, Agentic UIs. By building custom MCP connectors, SaaS providers can meet users exactly where they are working.

Delivering React UI components via MCP server responses effectively bridges the gap between conversational AI and functional SaaS workflows. By intelligently choosing a monorepo or shell architecture, implementing robust stateful authentication, and navigating the inherent sandbox testing constraints, engineers can eliminate the friction of context switching. This unlocks new avenues for user retention and workflow efficiency in the era of Agentic UI.

References

  • Anthropic. (2024). Model Context Protocol Specification. Retrieved from https://modelcontextprotocol.io/specification/2024-11-25
  • Chiossi, F., et al. (2023). Short-Form Videos Degrade Our Capacity to Retain Intentions. CHI '23.
  • Geers, M. (2020). Micro Frontends in Action. Manning Publications.
  • Harvard Business Review. (2022). The Toggle Tax: The Cost of Context Switching.
  • VisualSP. (2026). The Switch Tax That Quietly Wrecks Software Adoption.

Written by faraaz-m | Principal Engineer at ZoomInfo. I design scalable software architectures and specialize in React micro-frontends.