Building Your First AG-UI App: A Smart Echo Agent Implementation

Written by mayankc | Published 2025/08/27
Tech Story Tags: ai-agent | agentic-ai | ag-ui-app | smart-echo-agent | model-context-protocol | build-an-ai-app | build-your-own-ai-app | ai-agent-tutorial

TL;DR: The emergence of autonomous AI agents has created a complex ecosystem requiring standardized communication protocols. Four protocols have emerged to address different layers of agent interaction. This article presents a practical implementation of an echo agent application using the AG-UI protocol.

1. Introduction

1.1 Overview

The emergence of autonomous AI agents has created a complex ecosystem requiring standardized communication protocols to enable interoperability and collaboration. Four primary protocols have emerged to address different layers of agent interaction: Agent-to-Agent Protocol (A2A), Agent Communication Protocol (ACP), Model Context Protocol (MCP), and Agent-User Interaction Protocol (AG-UI). Each protocol occupies a distinct position in the agent architecture stack and serves specific communication requirements.

The agent communication landscape can be understood through a layered architecture model where each protocol addresses different interaction patterns:

Model Context Protocol (MCP) operates at the foundational layer, providing a standardized way to connect AI models to external resources and tools, similar to how USB-C provides standardized device connections. MCP focuses on the interaction between an AI model and external resources, enabling LLMs to access databases, APIs, and external services through a consistent interface.

Agent Communication Protocol (ACP) functions at the inter-agent coordination layer. As the next step following MCP, ACP defines how agents operate and communicate, with particular focus on coordination between AI agents operating in the same local or edge environment. ACP provides a unified interface through which agents can collaborate regardless of their frameworks.

Agent-to-Agent Protocol (A2A) addresses horizontal agent collaboration across heterogeneous systems. A2A is an open standard that enables AI agents to communicate and collaborate across different platforms and frameworks, regardless of their underlying technologies. The protocol preserves agent opacity while enabling standardized communication through JSON-RPC 2.0 over HTTP(S).

Agent-User Interaction Protocol (AG-UI) operates at the human-agent interface layer, standardizing how frontend applications connect to AI agents through event-driven streaming protocols. AG-UI focuses explicitly on the agent-user interactivity layer and does not compete with other protocols but rather complements them in the agent ecosystem.

The current protocol landscape demonstrates significant variation in adoption rates and industry support:

A2A Protocol has achieved substantial industry backing, with Google introducing the protocol for cross-platform agent communication and Microsoft announcing support for the open A2A protocol to enable agent-to-agent interoperability across platforms. This enterprise-level support has accelerated A2A adoption in production systems.

Model Context Protocol (MCP) enjoys widespread adoption due to its fundamental role in LLM-external system integration. MCP provides a JSON-RPC client-server interface for secure tool invocation and typed data exchange, making it essential for most modern AI applications requiring external data access.

Agent Communication Protocol (ACP) has gained recognition through IBM Research's advocacy and educational initiatives. DeepLearning.AI offers dedicated courses on ACP implementation, indicating growing academic and professional interest in the protocol.

AG-UI Protocol remains comparatively unknown despite its technical merit and practical utility. Several factors contribute to this limited adoption:

  1. Recency: AG-UI emerged more recently than established protocols like MCP and A2A
  2. Niche Focus: The protocol specifically addresses agent-user interaction rather than broader system integration
  3. Limited Corporate Backing: Unlike A2A (Google/Microsoft) or ACP (IBM), AG-UI lacks major enterprise sponsorship
  4. Developer Awareness: The frontend development community has not yet widely recognized AG-UI's potential

The Agent-User Interaction Protocol (AG-UI) addresses the standardization of communication interfaces between artificial intelligence agents and frontend applications. AG-UI is a lightweight, event-based protocol that standardizes how AI agents connect to user-facing applications and establishes a structured communication layer between backend AI agents and frontend applications, enabling real-time interaction through a stream of structured JSON events.

This beginner-friendly article presents a practical implementation of an echo agent application using the AG-UI protocol. The echo agent serves as a fundamental demonstration of the protocol's core capabilities while providing a foundation for understanding event-driven agent-user interactions.

All examples in this article use Node.js.

1.2 Protocol Architecture

AG-UI follows a client-server architecture that supports various transport mechanisms including Server-Sent Events (SSE), webhooks, and WebSockets. The protocol defines 16 standardized event types including:

  • RUN_STARTED
  • RUN_FINISHED
  • RUN_ERROR
  • STEP_STARTED
  • STEP_FINISHED
  • TEXT_MESSAGE_START
  • TEXT_MESSAGE_CONTENT
  • TEXT_MESSAGE_END
  • TOOL_CALL_START
  • TOOL_CALL_ARGS
  • TOOL_CALL_END
  • STATE_SNAPSHOT
  • STATE_DELTA
  • MESSAGES_SNAPSHOT
  • RAW
  • CUSTOM

1.3 Learning Objectives

This tutorial demonstrates AG-UI implementation through an echo agent application that provides:

  1. Event-driven communication between agent and user interface
  2. Real-time message streaming and state synchronization
  3. Bidirectional data flow using standardized event types
  4. Foundation for extending to more complex agent interactions

1.4 Prerequisites

The implementation requires:

  • Node.js version 16 or higher
  • Basic understanding of JavaScript/TypeScript
  • Familiarity with event-driven programming concepts
  • Understanding of HTTP request/response protocols

1.5 Echo Agent as Learning Tool

The echo agent application provides an optimal learning environment for AG-UI concepts because it:

  • Minimizes business logic complexity while demonstrating protocol mechanics
  • Exhibits clear input-output relationships for event flow analysis
  • Supports incremental feature addition for progressive learning
  • Maintains focus on AG-UI-specific implementation details rather than domain-specific requirements

This article is aimed at beginners who have heard about AG-UI and would like to get started with a fairly simple application (slightly more complex than the standard hello world).

2. Understanding AG-UI: Technical Overview

2.1 Protocol Definition

The Agent User Interaction (AG-UI) Protocol is an open standard that standardizes how frontend applications communicate with AI agents, with support for streaming, frontend tools, shared state, and custom events. The AG-UI SDK uses a streaming event-based architecture where events are the fundamental units of communication between agents and the frontend.

2.2 Event Architecture

All events inherit from the BaseEvent type, which provides common properties shared across all event types including type (EventType discriminator field), timestamp (optional), and rawEvent (optional original event data). The protocol defines five primary event categories:

  1. Lifecycle Events: Control agent execution flow
  2. Text Message Events: Handle streaming text communication
  3. Tool Call Events: Manage agent tool invocations
  4. State Management Events: Synchronize application state
  5. Special Events: Handle custom and raw event types

2.3 Core Event Types

2.3.1 Lifecycle Events

RunStartedEvent signals the start of an agent run with threadId and runId properties. RunFinishedEvent signals successful completion with matching identifiers. RunErrorEvent indicates execution errors with message and optional code properties.

2.3.2 Message Events

TextMessageStartEvent initiates message streaming with messageId and role properties. TextMessageContentEvent delivers content chunks through delta property. TextMessageEndEvent terminates the message stream.

2.3.3 State Management

StateSnapshotEvent provides complete state representation through snapshot property. StateDeltaEvent delivers incremental changes using JSON Patch operations. MessagesSnapshotEvent maintains conversation history through messages array.

2.4 Transport Layer

The protocol supports multiple transport mechanisms without mandating specific implementations. Compatible transports include Server-Sent Events (SSE), WebSockets, and HTTP webhooks. This transport-agnostic design enables integration with existing infrastructure while maintaining protocol compliance.

2.5 Agent Ecosystem Position

AG-UI operates as part of a broader agent protocol ecosystem. Events including messages, tool calls, state patches, and lifecycle signals flow seamlessly between agent backend and front-end interface, maintaining real-time synchronization. The protocol complements rather than competes with other standards such as Model Context Protocol (MCP) for tool calls and Agent-to-Agent (A2A) protocol for inter-agent communication.

Again, this is an important point to note: AG-UI complements other agent protocols such as MCP and A2A. They’re not in competition with each other.

3. Setting Up Your Development Environment

3.1 System Requirements

The AG-UI development environment requires:

  • Node.js version 16.0 or higher
  • npm package manager version 7.0 or higher
  • Python 3.12.7 for backend agent implementations
  • Git version control system

3.2 Project Initialization

AG-UI provides a command-line interface for rapid project creation through the create-ag-ui-app utility. The initialization process creates a structured application template with both frontend and backend components.

Execute the following command to create a new AG-UI application:

npx create-ag-ui-app echo-server-demo

The output is roughly like this at the time of writing:

~/Work/source: npx create-ag-ui-app echo-server-demo
Need to install the following packages:
[email protected]
Ok to proceed? (y) y


   █████╗  ██████╗       ██╗   ██╗ ██╗
  ██╔══██╗██╔════╝       ██║   ██║ ██║
  ███████║██║  ███╗█████╗██║   ██║ ██║
  ██╔══██║██║   ██║╚════╝██║   ██║ ██║
  ██║  ██║╚██████╔╝      ╚██████╔╝ ██║
  ╚═╝  ╚═╝ ╚═════╝        ╚═════╝  ╚═╝

  Agent User Interactivity Protocol


~ Let's get started building an AG-UI powered user interactive agent ~
  Read more about AG-UI at https://ag-ui.com


To build an AG-UI app, you need to select a client.

✔ What client do you want to use? CLI client
🔧 Setting up CLI client...

🔍 Reading current package versions...
  ✓ @ag-ui/client: 0.0.36
  ✓ @ag-ui/core: 0.0.36
  ✓ @ag-ui/mastra: 0.0.8
📋 Found versions: 3 packages
  - @ag-ui/client: 0.0.36
  - @ag-ui/core: 0.0.36
  - @ag-ui/mastra: 0.0.8

✔ What would you like to name your CLI project? my-ag-ui-cli-app
📥 Downloading CLI client template: my-ag-ui-cli-app

✅ CLI client template downloaded successfully!

🔄 Updating workspace dependencies...
  📦 Updated @ag-ui/client: workspace:* → ^0.0.36
  📦 Updated @ag-ui/core: workspace:* → ^0.0.36
  📦 Updated @ag-ui/mastra: workspace:* → ^0.0.8
✅ Package.json updated with actual package versions!

📁 Project created in: my-ag-ui-cli-app

🚀 Next steps:
   export OPENAI_API_KEY='your-openai-api-key'
   cd my-ag-ui-cli-app
   npm install
   npm run dev

💡 Check the README.md for more information on how to use your CLI client!

The CLI tool presents framework selection options including support for various agent implementations such as LangGraph, CrewAI, Mastra, AG2, Agno, LlamaIndex, and Pydantic AI.

The next step is to run ‘npm i’ and get the project ready:

~/Work/source/my-ag-ui-cli-app: npm i
npm warn deprecated [email protected]: This package is deprecated. Use the optional chaining (?.) operator instead.
npm warn deprecated [email protected]: Package is no longer maintained
npm warn deprecated [email protected]: Use your platform's native DOMException instead

added 736 packages, and audited 737 packages in 1m

74 packages are looking for funding
  run `npm fund` for details

found 0 vulnerabilities

3.3 Project Structure Analysis

The generated project comes as a structured boilerplate containing both client and agent code.

Note that this boilerplate is not yet the echo application we're building in this article. It comes with an agent (agent.ts) powered by two tools (browser.tool.ts & weather.tool.ts), and needs to be simplified to focus on echoing rather than tool handling.

3.4 Starting the application

The application CLI can be started by running:

npm run dev

This presents a CLI-style interface that collects user inputs, sends them to the agent, and processes the events coming back from the agent.

4. Building the Basic Echo Agent

Within the scope of this article, we don't want to get into tools. This article is not about MCP; the focus is on agent-to-frontend communication (the AG-UI protocol). For a simple echo agent, tools are not required.

4.1 Agent Backend Implementation

In the boilerplate application, the agent.ts module plays the role of the agent (as the name suggests), while the index.ts module plays the role of the client.

To turn the boilerplate into an echo application, agent.ts is updated as follows:

agent.ts

import { openai } from "@ai-sdk/openai";
import { Agent } from "@mastra/core/agent";
import { MastraAgent } from "@ag-ui/mastra";
import { Memory } from "@mastra/memory";
import { LibSQLStore } from "@mastra/libsql";

export const agent = new MastraAgent({
  // @ts-ignore
  agent: new Agent({
    name: "AG-UI Agent",
    instructions: `
    You are an echo server agent. 
    Your primary function is to echo back the last user message given to you.
    Use a suitable suffix to show that this is an echo.
  `,
    model: openai("gpt-4o-mini"),
    memory: new Memory({
      storage: new LibSQLStore({
        url: "file:./mastra.db",
      }),
    }),
  }),
  threadId: "1",
});

All the agent does is call the LLM and generate a suitable echo output.

While agent.ts takes up the role of the agent, the index.ts module takes up the role of the client and contains the agent-client communication.

index.ts (unchanged)

import * as readline from "readline";
import { agent } from "./agent";
import { randomUUID } from "node:crypto";

const rl = readline.createInterface({
  input: process.stdin,
  output: process.stdout,
});

async function chatLoop() {
  console.log(
    "🤖 AG-UI chat started! Type your messages and press Enter. Press Ctrl+D to quit.\n",
  );

  return new Promise<void>((resolve) => {
    const promptUser = () => {
      rl.question("> ", async (input) => {
        if (input.trim() === "") {
          promptUser();
          return;
        }
        console.log("");

        rl.pause();

        agent.messages.push({
          id: randomUUID(),
          role: "user",
          content: input.trim(),
        });

        try {
          const r = await agent.runAgent(
            {},
            {
              onTextMessageStartEvent() {
                process.stdout.write("🤖 AG-UI assistant: ");
              },
              onTextMessageContentEvent({ event }) {
                process.stdout.write(event.delta);
              },
              onTextMessageEndEvent() {
                console.log("\n");
              },
              onToolCallStartEvent({ event }) {
                console.log("🔧 Tool call:", event.toolCallName);
              },
              onToolCallArgsEvent({ event }) {
                process.stdout.write(event.delta);
              },
              onToolCallEndEvent() {
                console.log("");
              },
              onToolCallResultEvent({ event }) {
                if (event.content) {
                  console.log("🔍 Tool call result:", event.content);
                }
              },
            },
          );
        } catch (error) {
          console.error("❌ Error running agent:", error);
        }

        rl.resume();
        promptUser();
      });
    };

    rl.on("close", () => {
      console.log("\n👋 Goodbye!");
      resolve();
    });

    promptUser();
  });
}

async function main() {
  await chatLoop();
}

main().catch(console.error);

4.2 AG-UI Event Flow Integration

The simple echo agent application uses (and needs) only a subset of the event types to demonstrate AG-UI. The flow is roughly as follows:

The app receives messages in the form of a RunAgentInput, which describes the details of a request being passed to the agent, including messages and state. Events from the agent, including tool calls (if there are any), are converted to AG-UI events and streamed back to the caller as callback events (Server-Sent Events (SSE) could be used as well).

The echo agent implements the following event sequence:

  1. RUN_STARTED event initiates agent execution
  2. TEXT_MESSAGE_START begins response streaming
  3. TEXT_MESSAGE_CONTENT delivers echo content in chunks
  4. TEXT_MESSAGE_END completes message transmission
  5. RUN_FINISHED signals completion

The above sequence represents a single run for one user input, without any tool calls. The number of events may seem excessive, but they are what make applications standard, interoperable, and state-aware.

4.3 Running the echo application

The simple echo agent application can be started & tested using npm run dev as follows:

~/Work/source/my-ag-ui-cli-app: npm run dev

> [email protected] dev
> tsx --watch src/index.ts

🤖 AG-UI chat started! Type your messages and press Enter. Press Ctrl+D to quit.

> hello!

🤖 AG-UI assistant: hello! - echo!

> echo this one for me

🤖 AG-UI assistant: echo this one for me - echo!

> 

Each user input goes through a minimum of five events. As mentioned earlier, the number of events would be much larger in a real application with tool calls and state synchronization.

5. Understanding the Event Flow

5.1 Event Stream Architecture

The AG-UI protocol implements a unified event stream where the client makes a single POST request to the agent endpoint, then listens to a continuous stream of events. Each event contains a type identifier and minimal payload data. The protocol defines focused event types designed to support real-time agent interactions through Server-Sent Events (SSE) streaming.

5.2 Message Lifecycle Events

5.2.1 TEXT_MESSAGE_START Event

The TEXT_MESSAGE_START event signals that a message has begun streaming, indicating the assistant has started generating a response. This event includes:

  • messageId: Unique identifier for the message
  • role: Always set to "assistant" for agent responses
  • timestamp: Optional event creation time

5.2.2 TEXT_MESSAGE_CONTENT Event

TEXT_MESSAGE_CONTENT events deliver message content in streaming chunks through the delta property. Each event maintains the same messageId from the corresponding TEXT_MESSAGE_START event, enabling proper message reconstruction on the client side.

5.2.3 TEXT_MESSAGE_END Event

TEXT_MESSAGE_END signals message completion and provides opportunity for output finalization or UI animation triggers. The event carries the matching messageId to close the message stream.

5.3 Echo Server Event Sequence

The echo server implements the following standardized event flow:

  1. Request Initiation: Client sends RunAgentInput with message content
  2. RUN_STARTED: Agent execution begins with threadId and runId
  3. TEXT_MESSAGE_START: Response generation initiates
  4. TEXT_MESSAGE_CONTENT: Echo content streams in delta chunks
  5. TEXT_MESSAGE_END: Message transmission completes
  6. RUN_FINISHED: Agent execution terminates successfully

The frontend event handler processes streaming events through callbacks:

const r = await agent.runAgent(
  {},
  {
    onTextMessageStartEvent() {
      process.stdout.write("🤖 AG-UI assistant: ");
    },
    onTextMessageContentEvent({ event }) {
      process.stdout.write(event.delta);
    },
    onTextMessageEndEvent() {
      console.log("\n");
    },
    onToolCallStartEvent({ event }) {
      console.log("🔧 Tool call:", event.toolCallName);
    },
    onToolCallArgsEvent({ event }) {
      process.stdout.write(event.delta);
    },
    onToolCallEndEvent() {
      console.log("");
    },
    onToolCallResultEvent({ event }) {
      if (event.content) {
        console.log("🔍 Tool call result:", event.content);
      }
    },
  },
);

5.4 Debugging Event Flow

Event flow debugging requires monitoring the sequence and timing of events. The integration receives messages in the form of a RunAgentInput object that describes the details of the requested agent run including message history, state, and available tools. Events from the agent are converted to AG-UI events and streamed back to the caller as Server-Sent Events.

Common debugging approaches include:

  • Console logging of event types and payloads
  • Verification of event sequence ordering
  • Monitoring for missing or duplicate events
  • Validation of messageId consistency across message events

5.5 Error Handling

The RUN_ERROR event type handles execution failures with message and optional code properties. Error events terminate the current run and require client-side error state management for user notification and recovery procedures.

6. Adding Smart Features

6.1 Enhanced Message Processing with State Management

The basic echo server can be extended with sophisticated state management capabilities using the TypeScript AG-UI client. The @ag-ui/client provides agent implementations that handle the full lifecycle of AG-UI communication: connecting to servers, processing streaming events, managing state mutations, and providing reactive subscriber hooks.

Enhanced agent implementation with message statistics tracking:

import { openai } from "@ai-sdk/openai"
import { Agent } from "@mastra/core/agent"
import { MastraAgent } from "@ag-ui/mastra"
import { Memory } from "@mastra/memory"
import { LibSQLStore } from "@mastra/libsql"
import { createTool } from "@mastra/core/tools"
import { z } from "zod"

interface MessageStats {
  totalMessages: number
  totalCharacters: number
  averageWordsPerMessage: number
  sentimentScore: number
}

// Create message analysis tool
const messageAnalysisTool = createTool({
  id: "analyze-message",
  description: "Analyze message characteristics and maintain conversation statistics",
  inputSchema: z.object({
    message: z.string().describe("The message to analyze"),
  }),
  outputSchema: z.object({
    characterCount: z.number(),
    wordCount: z.number(),
    sentimentScore: z.number().min(-1).max(1),
    messageStats: z.object({
      totalMessages: z.number(),
      totalCharacters: z.number(),
      averageWordsPerMessage: z.number(),
      sentimentScore: z.number(),
    }),
  }),
  execute: async ({ context }) => {
    const characterCount = context.message.length
    const wordCount = context.message.split(/\s+/).filter(word => word.length > 0).length
    
    // Simple sentiment analysis based on keywords
    const positiveWords = ['good', 'great', 'excellent', 'amazing', 'wonderful', 'happy', 'love']
    const negativeWords = ['bad', 'terrible', 'awful', 'hate', 'sad', 'angry', 'disappointed']
    
    const words = context.message.toLowerCase().split(/\s+/)
    const positiveCount = words.filter(word => positiveWords.includes(word)).length
    const negativeCount = words.filter(word => negativeWords.includes(word)).length
    const sentimentScore = (positiveCount - negativeCount) / Math.max(words.length, 1)
    
    // Update conversation statistics (simplified - would use actual persistence)
    const messageStats: MessageStats = {
      totalMessages: 1, // Would increment from stored value
      totalCharacters: characterCount,
      averageWordsPerMessage: wordCount,
      sentimentScore: sentimentScore,
    }
    
    return {
      characterCount,
      wordCount,
      sentimentScore,
      messageStats,
    }
  },
})

export const enhancedAgent = new MastraAgent({
  agent: new Agent({
    name: "Enhanced Echo Assistant",
    instructions: `
    You are an intelligent echo server with message analysis capabilities.
    For each user message:
    1. Use the analyze-message tool to get detailed statistics
    2. Echo the original message with analytical insights
    3. Provide helpful feedback about communication patterns
    Be conversational and insightful in your responses.
    `,
    model: openai("gpt-4o"),
    tools: { messageAnalysisTool },
    memory: new Memory({
      storage: new LibSQLStore({
        url: "file:./enhanced_echo.db",
      }),
    }),
  }),
  threadId: "enhanced-conversation",
})

6.2 Real-time Typing Indicators and Progress Feedback

The CLI interface can be enhanced to provide visual feedback during agent processing phases. Event handling includes onTextMessageStartEvent, onTextMessageContentEvent, and onTextMessageEndEvent for streaming display management.

Enhanced CLI interface with typing indicators:

import * as readline from "readline"
import { enhancedAgent } from "./enhanced-agent"
import { randomUUID } from "node:crypto"

interface ProcessingState {
  isThinking: boolean
  isAnalyzing: boolean
  isResponding: boolean
  currentTool?: string
}

class EnhancedCLI {
  private rl: readline.Interface
  private processingState: ProcessingState = {
    isThinking: false,
    isAnalyzing: false,
    isResponding: false,
  }
  private spinnerInterval?: NodeJS.Timeout

  constructor() {
    this.rl = readline.createInterface({
      input: process.stdin,
      output: process.stdout,
    })
  }

  private showSpinner(message: string) {
    const spinner = ['⠋', '⠙', '⠹', '⠸', '⠼', '⠴', '⠦', '⠧', '⠇', '⠏']
    let i = 0
    
    this.spinnerInterval = setInterval(() => {
      process.stdout.write(`\r${spinner[i]} ${message}`)
      i = (i + 1) % spinner.length
    }, 100)
  }

  private hideSpinner() {
    if (this.spinnerInterval) {
      clearInterval(this.spinnerInterval)
      process.stdout.write('\r\x1b[K') // Clear current line
    }
  }

  async startChat() {
    console.log("🚀 Enhanced AG-UI Echo Assistant")
    console.log("Features: Message Analysis, Statistics, Smart Responses")
    console.log("Type your messages and press Enter. Press Ctrl+D to quit.\n")

    return new Promise<void>((resolve) => {
      const promptUser = () => {
        this.rl.question("> ", async (input) => {
          if (input.trim() === "") {
            promptUser()
            return
          }

          console.log("")
          this.rl.pause()

          // Add user message to conversation
          enhancedAgent.messages.push({
            id: randomUUID(),
            role: "user",
            content: input.trim(),
          })

          try {
            await enhancedAgent.runAgent(
              {},
              {
                onRunStartedEvent: () => {
                  this.processingState.isThinking = true
                  this.showSpinner("Agent is thinking...")
                },

                onToolCallStartEvent: ({ event }) => {
                  this.hideSpinner()
                  this.processingState.isAnalyzing = true
                  this.processingState.currentTool = event.toolCallName
                  
                  console.log(`🔧 Analyzing with: ${event.toolCallName}`)
                  this.showSpinner(`Running ${event.toolCallName}...`)
                },

                onToolCallArgsEvent: ({ event }) => {
                  // Show tool arguments being processed
                  process.stdout.write(event.delta)
                },

                onToolCallEndEvent: () => {
                  this.hideSpinner()
                  console.log("✅ Analysis complete\n")
                  this.processingState.isAnalyzing = false
                },

                onToolCallResultEvent: ({ event }) => {
                  if (event.content) {
                    try {
                      const result = JSON.parse(event.content)
                      console.log("📊 Message Statistics:")
                      console.log(`   Characters: ${result.characterCount}`)
                      console.log(`   Words: ${result.wordCount}`)
                      console.log(`   Sentiment: ${result.sentimentScore > 0 ? 'Positive' : result.sentimentScore < 0 ? 'Negative' : 'Neutral'}`)
                      console.log("")
                    } catch (e) {
                      console.log("🔍 Tool result:", event.content)
                    }
                  }
                },

                onTextMessageStartEvent: () => {
                  this.hideSpinner()
                  this.processingState.isResponding = true
                  process.stdout.write("🤖 Enhanced Echo: ")
                },

                onTextMessageContentEvent: ({ event }) => {
                  process.stdout.write(event.delta)
                },

                onTextMessageEndEvent: () => {
                  console.log("\n")
                  this.processingState.isResponding = false
                },

                onRunFinishedEvent: () => {
                  this.processingState.isThinking = false
                  console.log("💫 Response complete\n")
                },

                onRunErrorEvent: ({ event }) => {
                  this.hideSpinner()
                  console.error("❌ Error:", event.message)
                  if (event.code) {
                    console.error("   Code:", event.code)
                  }
                },
              }
            )
          } catch (error) {
            this.hideSpinner()
            console.error("❌ Unexpected error:", error)
          }

          this.rl.resume()
          promptUser()
        })
      }

      this.rl.on("close", () => {
        this.hideSpinner()
        console.log("\n👋 Enhanced Echo Assistant session ended!")
        resolve()
      })

      promptUser()
    })
  }
}

async function main() {
  const cli = new EnhancedCLI()
  await cli.startChat()
}

main().catch(console.error)

6.3 Custom Event Processing for Advanced Features

Custom events enable application-specific functionality beyond the standard AG-UI event types. The protocol supports tool integration for real-world functionality and provides interactive chat interface capabilities.

Implementation of custom event handling for conversation insights:

import { createTool } from "@mastra/core/tools"
import { z } from "zod"

// Custom event types for enhanced features
interface ConversationInsight {
  type: 'mood_trend' | 'topic_shift' | 'engagement_level'
  data: any
  timestamp: number
}

const conversationInsightsTool = createTool({
  id: "generate-insights",
  description: "Generate conversation insights and trends",
  inputSchema: z.object({
    conversationHistory: z.array(z.string()),
    currentMessage: z.string(),
  }),
  outputSchema: z.object({
    insights: z.array(z.object({
      type: z.enum(['mood_trend', 'topic_shift', 'engagement_level']),
      data: z.any(),
      timestamp: z.number(),
    })),
    recommendations: z.array(z.string()),
  }),
  execute: async ({ context }) => {
    const insights: ConversationInsight[] = []
    
    // Analyze mood progression
    const recentMessages = context.conversationHistory.slice(-5)
    const moodTrend = analyzeMoodTrend(recentMessages)
    insights.push({
      type: 'mood_trend',
      data: { trend: moodTrend, confidence: 0.8 },
      timestamp: Date.now(),
    })
    
    // Detect topic shifts
    const topicShift = detectTopicShift(context.conversationHistory, context.currentMessage)
    if (topicShift) {
      insights.push({
        type: 'topic_shift',
        data: { previousTopic: topicShift.from, newTopic: topicShift.to },
        timestamp: Date.now(),
      })
    }
    
    // Generate recommendations
    const recommendations = generateRecommendations(insights)
    
    return { insights, recommendations }
  },
})

function analyzeMoodTrend(messages: string[]): 'improving' | 'declining' | 'stable' {
  // Simplified mood analysis implementation
  const sentimentScores = messages.map(msg => calculateSentiment(msg))
  if (sentimentScores.length < 2) return 'stable'
  
  const recent = sentimentScores.slice(-2)
  const diff = recent[1] - recent[0]
  
  if (diff > 0.1) return 'improving'
  if (diff < -0.1) return 'declining'
  return 'stable'
}

function detectTopicShift(history: string[], current: string): { from: string, to: string } | null {
  // Simplified topic detection - would use more sophisticated NLP in practice
  const keywords = extractKeywords(current)
  const previousKeywords = history.length > 0 ? extractKeywords(history[history.length - 1]) : []
  
  const overlap = keywords.filter(k => previousKeywords.includes(k)).length
  const threshold = Math.min(keywords.length, previousKeywords.length) * 0.3
  
  if (overlap < threshold && history.length > 0) {
    return {
      from: previousKeywords.join(', '),
      to: keywords.join(', ')
    }
  }
  
  return null
}

function extractKeywords(text: string): string[] {
  // Simple keyword extraction - remove common words
  const stopWords = new Set(['the', 'a', 'an', 'and', 'or', 'but', 'in', 'on', 'at', 'to', 'for', 'of', 'with', 'by'])
  return text.toLowerCase()
    .split(/\W+/)
    .filter(word => word.length > 3 && !stopWords.has(word))
    .slice(0, 5)
}

function calculateSentiment(text: string): number {
  // Simple sentiment calculation
  const positiveWords = ['good', 'great', 'excellent', 'amazing', 'wonderful', 'happy', 'love', 'perfect', 'awesome']
  const negativeWords = ['bad', 'terrible', 'awful', 'hate', 'sad', 'angry', 'disappointed', 'horrible', 'worst']
  
  const words = text.toLowerCase().split(/\W+/)
  const positive = words.filter(w => positiveWords.includes(w)).length
  const negative = words.filter(w => negativeWords.includes(w)).length
  
  return (positive - negative) / Math.max(words.length, 1)
}

function generateRecommendations(insights: ConversationInsight[]): string[] {
  const recommendations: string[] = []
  
  insights.forEach(insight => {
    switch (insight.type) {
      case 'mood_trend':
        if (insight.data.trend === 'declining') {
          recommendations.push("Consider asking about the user's concerns or offering support")
        } else if (insight.data.trend === 'improving') {
          recommendations.push("The user seems more positive - good time to explore their interests")
        }
        break
      case 'topic_shift':
        recommendations.push(`Topic changed from "${insight.data.previousTopic}" to "${insight.data.newTopic}" - acknowledge the transition`)
        break
      case 'engagement_level':
        if (insight.data.level < 0.5) {
          recommendations.push("User engagement seems low - try asking open-ended questions")
        }
        break
    }
  })
  
  return recommendations
}

6.4 Enhanced CLI with Visual Feedback

The enhanced CLI implementation provides comprehensive visual feedback and state management:

// Enhanced event handler with custom insight processing
const eventHandlers = {
  // ... previous handlers ...

  onCustomEvent: ({ event }: { event: any }) => {
    if (event.name === 'conversation_insights') {
      console.log("\n🧠 Conversation Insights:")
      
      event.value.insights.forEach((insight: ConversationInsight) => {
        switch (insight.type) {
          case 'mood_trend':
            const trendEmoji = insight.data.trend === 'improving' ? '📈' : 
                             insight.data.trend === 'declining' ? '📉' : '📊'
            console.log(`   ${trendEmoji} Mood trend: ${insight.data.trend}`)
            break
          case 'topic_shift':
            console.log(`   🔄 Topic shift detected: ${insight.data.previousTopic} → ${insight.data.newTopic}`)
            break
          case 'engagement_level':
            const engagementEmoji = insight.data.level > 0.7 ? '🔥' : 
                                  insight.data.level > 0.4 ? '👍' : '😴'
            console.log(`   ${engagementEmoji} Engagement level: ${Math.round(insight.data.level * 100)}%`)
            break
        }
      })
      
      if (event.value.recommendations.length > 0) {
        console.log("\n💡 Recommendations:")
        event.value.recommendations.forEach((rec: string) => {
          console.log(`   • ${rec}`)
        })
      }
      console.log("")
    }
  },

  // Assumption: state snapshots surface via onStateSnapshotEvent, following
  // the on<EventName>Event naming used by the other callbacks above.
  onStateSnapshotEvent: ({ event }: { event: any }) => {
    console.log("📸 State updated:", JSON.stringify(event.snapshot, null, 2))
  },
}

7. Conclusion

The agent protocol landscape continues evolving toward standardization and interoperability. MCP functions as a universal translator, enabling seamless dialogue between AI systems and external resources, while A2A enables cross-platform agent communication. AG-UI contributes to this ecosystem by addressing the agent-user interaction domain.

As agent applications become more sophisticated, the need for standardized user interaction patterns will likely drive increased AG-UI adoption. The protocol's technical foundation positions it well for broader market acceptance as developers recognize the complexity of implementing custom agent-user communication systems.


Written by mayankc | Mayank loves to write tech articles & books.
Published by HackerNoon on 2025/08/27