Every developer who’s built an AI feature knows how quickly a simple chat setup grows complex. You start with an API call and streaming text, then run into tool calls, multi-step workflows, and state management, ending up with brittle handlers and custom WebSocket logic.
Most AI apps stop at a chat box with a model behind it. The Agent User Interaction (AG UI) Protocol takes a different path by defining a structured, event-driven contract between agents, tools, and UIs. Instead of wiring everything manually, you build on a protocol that supports streaming, tools, messages, and UI state from the start.
This guide is for developers who want to move beyond basic chat UIs and build event-driven AI experiences. We’ll walk through the core ideas, then build a CLI client using the AG-UI TypeScript SDK with streaming output, tool calls, and multi-step runs. By the end, you’ll understand how AG-UI works in practice and whether it’s a better choice than stitching everything together yourself.
Before we dive in, make sure you have:
- Node.js installed
- pnpm installed (npm install -g pnpm)
- An OpenAI API key

Set up your OpenAI API key as an environment variable:
export OPENAI_API_KEY=your-api-key-here
If you’re new to TypeScript, don’t worry; we’ll keep things straightforward and explain concepts as we go.
Think about how you’ve built AI features before. You call the OpenAI API, stream the response, handle tool calls with custom logic, manage conversation state yourself, and write separate handlers for every event type.
Now imagine doing that for a web app, a CLI, and a Slack bot. You end up rewriting the same logic three times, with small differences that make everything harder to maintain. That’s the problem AG-UI is designed to solve.
When you use the OpenAI API directly, you mostly get text or tokens back. Everything beyond that is up to you: managing state, handling tool calls, coordinating UI updates, and keeping multi-step runs in sync. AG-UI adds a protocol layer between the agent and the UI that standardizes these interactions.
Instead of one-off responses, AG-UI streams structured events that the UI can react to in real time. It’s a protocol, not a framework. It defines how agents and clients communicate without dictating how you build either side, so you can write agent logic once and reuse it across multiple clients.
At its core, AG-UI is built around a few simple ideas. Agents are long-lived assistants with instructions, tools, and memory. Messages are structured exchanges between the user and the agent. Events capture real-time updates like streamed text, tool calls, and state changes. Tools are typed functions that the agent can call to perform real work. State management ties it all together by tracking runs and context so you don’t have to manage it yourself.
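To make that concrete, here’s a rough sketch of the kinds of structures involved. The field names are illustrative rather than the full protocol definition, but the message shape matches what we’ll push onto the conversation later in this guide:

// Illustrative shapes only, not the full protocol spec.
// A message: a structured exchange between the user and the agent.
const userMessage = {
  id: "msg-1",
  role: "user",
  content: "What's the weather in London?",
};

// An event: an incremental update streamed while the agent works,
// here a chunk of assistant text arriving mid-response.
const textDeltaEvent = {
  type: "TEXT_MESSAGE_CONTENT",
  delta: "Partly ",
};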
AG-UI isn’t something you install and call. It’s a protocol that defines a shared contract for how agents and clients communicate. A useful comparison is HTTP: it specifies how browsers and servers talk, but leaves you free to use Express, Flask, or any other framework to implement it.
The protocol defines:
- A standard set of events for streamed text, tool calls, state updates, and the run lifecycle
- The structure of messages exchanged between the user and the agent
- How tools are described and how their calls and results are represented
- How runs, context, and shared state are tracked across multi-step interactions
This matters because AI agents don’t behave like traditional request-response APIs.
With a typical API, you send a request and wait for a response. AI agents are different. Their output streams over time, they pause to call tools, and their state changes mid-run as they reason, act, and respond. Some runs even span multiple turns. AG-UI embraces this reality with an event-driven model.
Instead of waiting for a complete response:
const response = await agent.complete(prompt);
You handle events as they arrive:
agent.runAgent({}, {
onTextMessageContentEvent({ event }) {
process.stdout.write(event.delta);
},
onToolCallStartEvent({ event }) {
console.log(`Calling tool: ${event.toolCallName}`);
},
});
This gives you fine-grained control and responsive UIs without complex state management.
Because AG-UI is a protocol, any client that understands it can talk to any agent that follows it. You write your agent logic once, then reuse it across a web app, a CLI, a Slack bot, a mobile app, or even a VS Code extension.
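As a rough sketch of what that reuse can look like (the helper below is hypothetical, not part of the AG-UI SDK), the event handlers can be factored out once, and each client only decides how to render the text it receives:

// Hypothetical helper: one set of event handlers, reused by different clients.
type RenderText = (chunk: string) => void;

function makeHandlers(render: RenderText) {
  return {
    onTextMessageContentEvent({ event }: { event: { delta: string } }) {
      render(event.delta); // each client decides how to display the delta
    },
  };
}

// CLI client: write straight to the terminal
// await agent.runAgent({}, makeHandlers((chunk) => process.stdout.write(chunk)));

// Web or Slack client: push chunks into a buffer and flush them to the UI
// await agent.runAgent({}, makeHandlers((chunk) => responseBuffer.push(chunk)));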
Let’s build something real. We’ll create a weather-assistant CLI client that streams responses, handles tool calls, and maintains conversation memory, all using the AG-UI protocol.
Create a new directory and initialize your project:
mkdir weather-assistant
cd weather-assistant
pnpm init
Install TypeScript and development dependencies:
pnpm add -D typescript @types/node tsx
Create a tsconfig.json file with proper configuration:
{
"compilerOptions": {
"target": "ES2022",
"module": "commonjs",
"lib": ["ES2022"],
"outDir": "./dist",
"rootDir": "./src",
"strict": true,
"esModuleInterop": true,
"skipLibCheck": true,
"forceConsistentCasingInFileNames": true,
"resolveJsonModule": true
},
"include": ["src/**/*"],
"exclude": ["node_modules", "dist"]
}
Update your package.json to include useful scripts:
{
"name": "weather-assistant",
"version": "1.0.0",
"scripts": {
"start": "tsx src/index.ts",
"dev": "tsx --watch src/index.ts",
"build": "tsc",
"clean": "rm -rf dist"
}
}
This gives you:
- pnpm start – Run your client once
- pnpm dev – Run with auto-restart on file changes
- pnpm build – Compile TypeScript to JavaScript
- pnpm clean – Remove compiled files

Now install the core AG-UI packages:
# Core AG-UI packages
pnpm add @ag-ui/client @ag-ui/core @ag-ui/mastra
Here’s what each package does:
- @ag-ui/client – Provides helpers for building AG-UI clients
- @ag-ui/core – Defines the protocol types and core functionality
- @ag-ui/mastra – Adapter that makes Mastra agents communicate using AG-UI

Next, install the Mastra ecosystem packages:
pnpm add @mastra/core @mastra/memory @mastra/libsql
- @mastra/core – Agent framework with tools and workflows
- @mastra/memory – Conversation memory and persistence
- @mastra/libsql – SQLite-based storage for memory

Finally, install the AI SDK and utilities:
pnpm add @ai-sdk/openai zod@^3.25
Your project structure should now look like this:
weather-assistant/
├── node_modules/
├── src/
├── package.json
├── pnpm-lock.yaml
└── tsconfig.json
In the next section, we’ll create our first agent that speaks AG-UI.
AG-UI agents aren’t just functions that take a prompt and return text. They’re long-lived assistants with instructions, memory, and the ability to emit structured events as they work.
To create one, you wrap a Mastra agent with the MastraAgent AG-UI adapter. This adapter is what lets the agent speak the AG-UI protocol.
Create src/agent.ts:
import { openai } from "@ai-sdk/openai";
import { Agent } from "@mastra/core/agent";
import { MastraAgent } from "@ag-ui/mastra";
import { Memory } from "@mastra/memory";
import { LibSQLStore } from "@mastra/libsql";
export const agent = new MastraAgent({
agent: new Agent({
name: "Weather Assistant",
instructions: `
You are a helpful AI assistant with weather capabilities.
Be friendly, conversational, and provide clear information.
When users ask about weather, always specify the location clearly.
`,
model: openai("gpt-4o"),
memory: new Memory({
storage: new LibSQLStore({
url: "file:./assistant.db",
}),
}),
}),
threadId: "main-conversation",
});
At the top level, MastraAgent acts as the bridge between your agent and AG-UI. It translates the agent’s behavior into protocol events, so you don’t have to manage that yourself.
Inside it, you configure the underlying Mastra Agent: its name, its instructions, the model it uses (gpt-4o here), and its memory.
For memory, we’re using LibSQL (SQLite). This means the agent remembers previous messages even if the app restarts, and conversations persist across runs instead of resetting every time.
The threadId groups messages into a single conversation. You can think of it as a conversation ID that keeps related interactions connected.
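If you later wanted each user or session to have its own history, one option is to create the adapter with a per-session threadId. This is just a sketch built from the same configuration shown above; the userId parameter is hypothetical:

import { openai } from "@ai-sdk/openai";
import { Agent } from "@mastra/core/agent";
import { MastraAgent } from "@ag-ui/mastra";
import { Memory } from "@mastra/memory";
import { LibSQLStore } from "@mastra/libsql";

// Sketch: give each user their own conversation thread so histories don't mix.
// Same agent configuration as src/agent.ts, just with a per-user threadId.
export function agentForUser(userId: string) {
  return new MastraAgent({
    agent: new Agent({
      name: "Weather Assistant",
      instructions: "You are a helpful AI assistant with weather capabilities.",
      model: openai("gpt-4o"),
      memory: new Memory({
        storage: new LibSQLStore({ url: "file:./assistant.db" }),
      }),
    }),
    threadId: `conversation-${userId}`,
  });
}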
When you call methods on this agent, it doesn’t just return strings. It emits AG-UI events: run lifecycle events, streamed text deltas, tool call events, and state updates.
The key advantage of AG-UI is that you don’t have to emit these events yourself. The MastraAgent adapter handles all the protocol details:
// Behind the scenes, MastraAgent is doing this:
// 1. Receive your message
// 2. Emit "run started" event
// 3. Stream tokens → emit "text delta" events for each token
// 4. If tool needed → emit "tool call" events
// 5. Emit "run complete" event
You just handle the events in your client. Let’s build that client now.
Time to build the interface. We’ll create a chat loop that demonstrates how AG-UI’s event system works in practice.
Create src/index.ts and add the following:
import * as readline from "readline";
import { agent } from "./agent";
import { randomUUID } from "@ag-ui/client";
const rl = readline.createInterface({
input: process.stdin,
output: process.stdout,
});
This sets up a command-line chat interface that allows you to communicate with your AG-UI agent in real-time. It uses Node.js’s built-in readline module to read user input from the terminal and print the agent’s responses as they stream back.
Now, define the chatLoop function as follows:
// src/index.ts
async function chatLoop() {
console.log("🤖 Weather Assistant started!");
console.log("Type your messages and press Enter. Press Ctrl+D to quit.\n");
return new Promise<void>((resolve) => {
const promptUser = () => {
rl.question("> ", async (input) => {
if (input.trim() === "") {
promptUser();
return;
}
console.log("");
rl.pause();
// Add user message to conversation
agent.messages.push({
id: randomUUID(),
role: "user",
content: input.trim(),
});
try {
// Run the agent with event handlers
await agent.runAgent(
{},
{
onTextMessageStartEvent() {
process.stdout.write("🤖 Assistant: ");
},
onTextMessageContentEvent({ event }) {
process.stdout.write(event.delta);
},
onTextMessageEndEvent() {
console.log("\n");
},
}
);
} catch (error) {
console.error("Error:", error);
}
// Resume input
rl.resume();
promptUser();
});
};
// Handle Ctrl+D to quit
rl.on("close", () => {
console.log("\n👋 Thanks for using Weather Assistant!");
resolve();
});
promptUser();
});
}
async function main() {
await chatLoop();
}
main().catch(console.error);
Inside chatLoop, promptUser continuously reads input and skips empty messages. When the user submits text, input is paused so the agent can respond without overlapping terminal output.
The message is added to agent.messages to preserve conversation context, then the agent is run with agent.runAgent. Its response streams back as events, printing text in real time as it arrives.
Errors are caught to avoid crashing the process. Once the response completes, input resumes and the loop continues. Pressing Ctrl+D exits the app with a goodbye message.
Notice how we’re not managing the response text ourselves. We just handle events as they arrive:
onTextMessageContentEvent({ event }) {
process.stdout.write(event.delta);
}
This prints each token and is the core of AG-UI’s event-driven model. Instead of waiting for a complete response and then displaying it, you react to each event as it streams in.
Let’s see it in action. Run your client:
pnpm dev
You should see:
🤖 Weather Assistant started!
Type your messages and press Enter. Press Ctrl+D to quit.

>
Try a few prompts, such as asking the assistant to introduce itself or to explain what it can do. Notice how the response streams in token by token instead of appearing all at once after a long pause, and how the prompt returns as soon as the message ends.
This is AG-UI’s event system at work. Each token triggers an event, and your client reacts immediately.
Let’s trace through what actually happens when you send a message. Understanding this flow is crucial for debugging and extending your client.
Add some debug logging to see events in action. Update your onTextMessageContentEvent:
onTextMessageContentEvent({ event }) {
console.log(`[DEBUG] Text delta received: "${event.delta}"`);
process.stdout.write(event.delta);
}
Run the client again and send “Hi there.” You’ll see output like this:
[DEBUG] Text delta received: "Hello"
[DEBUG] Text delta received: "!"
[DEBUG] Text delta received: " How"
[DEBUG] Text delta received: " can"
[DEBUG] Text delta received: " I"
[DEBUG] Text delta received: " help"
[DEBUG] Text delta received: " you"
[DEBUG] Text delta received: " today"
[DEBUG] Text delta received: "?"
Each delta is a chunk of text, sometimes a word, sometimes a character. Your client displays them as fast as they arrive.
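If you also want the complete response text, for logging or saving, nothing stops you from accumulating the deltas yourself while still streaming them. Here’s a small sketch of the runAgent call from src/index.ts with a buffer added (the fullResponse variable is ours, not part of the protocol):

// Sketch: build up the complete response while still streaming it to the terminal.
let fullResponse = "";

await agent.runAgent(
  {},
  {
    onTextMessageStartEvent() {
      fullResponse = "";
      process.stdout.write("🤖 Assistant: ");
    },
    onTextMessageContentEvent({ event }) {
      fullResponse += event.delta;       // accumulate for later use
      process.stdout.write(event.delta); // still stream to the terminal
    },
    onTextMessageEndEvent() {
      console.log(`\n[DEBUG] Full response was ${fullResponse.length} characters`);
    },
  }
);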
Here’s how AG-UI events map to what users see:
| Event type | What it means | UI impact |
|---|---|---|
| onRunStartEvent | Agent begins processing | Show “thinking” indicator |
| onTextMessageStartEvent | Agent starts responding | Show assistant label |
| onTextMessageContentEvent | Token arrives | Append text to output |
| onTextMessageEndEvent | Response complete | Finalize display |
| onToolCallStartEvent | Agent calls a tool | Show tool activity |
| onToolCallResultEvent | Tool returns data | Show result indicator |
| onRunCompleteEvent | Entire run finished | Re-enable input |
The protocol gives you fine-grained control over UX without complex state management.
Right now, we’re only handling text events. But AG-UI supports much more:
Run lifecycle events:
onRunStartEvent({ event }) {
console.log(`[RUN] Started: ${event.runId}`);
}
onRunCompleteEvent({ event }) {
console.log(`[RUN] Complete: ${event.runId}`);
}
Tool call events (we’ll implement these next):
onToolCallStartEvent({ event }) {
console.log(`Calling tool: ${event.toolCallName}`);
}
onToolCallResultEvent({ event }) {
console.log(`Tool result: ${JSON.stringify(event.content)}`);
}
Error events:
onErrorEvent({ event }) {
console.error(`Error: ${event.error.message}`);
}
Let’s add comprehensive logging to understand the full event flow. Update your agent.runAgent() call:
await agent.runAgent(
{},
{
onRunStartEvent({ event }) {
console.log(`\n[EVENT] Run started: ${event.runId}`);
},
onTextMessageStartEvent({ event }) {
console.log(`[EVENT] Text message started`);
process.stdout.write("🤖 Assistant: ");
},
onTextMessageContentEvent({ event }) {
process.stdout.write(event.delta);
},
onTextMessageEndEvent({ event }) {
console.log(`\n[EVENT] Text message ended`);
},
onRunCompleteEvent({ event }) {
console.log(`[EVENT] Run complete: ${event.runId}\n`);
},
onErrorEvent({ event }) {
console.error(`[EVENT] Error: ${event.error.message}`);
},
}
);
Now when you send a message, you’ll see the complete lifecycle:
[EVENT] Run started: abc-123-def
[EVENT] Text message started
🤖 Assistant: Hello! I'm here to help. What would you like to know?
[EVENT] Text message ended
[EVENT] Run complete: abc-123-def
This trace shows you exactly when each phase happens. In a real app, you’d use these events to show a loading indicator when onRunStartEvent fires, stream text into the UI as content events arrive, and re-enable input once the run completes. The event-driven model gives you precise control over UX timing.
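As one example, a simple “thinking” indicator for this CLI could hang off the run lifecycle events. The dot-printing interval below is just one way to do it, a sketch rather than part of the tutorial code:

// Sketch: print dots while the agent is "thinking", stop when text arrives.
let thinking: NodeJS.Timeout | undefined;

await agent.runAgent(
  {},
  {
    onRunStartEvent() {
      thinking = setInterval(() => process.stdout.write("."), 250);
    },
    onTextMessageStartEvent() {
      if (thinking) clearInterval(thinking); // stop the dots once text arrives
      process.stdout.write("\n🤖 Assistant: ");
    },
    onTextMessageContentEvent({ event }) {
      process.stdout.write(event.delta);
    },
    onRunCompleteEvent() {
      if (thinking) clearInterval(thinking);
      console.log("\n");
    },
  }
);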
Chat is nice, but real AI apps need to do things. Let’s add a weather tool that fetches live data.
We’ll start by defining a typed tool with Mastra’s createTool.
Create a directory for tools:
mkdir -p src/tools
Create src/tools/weather.tool.ts with the following:
import { createTool } from "@mastra/core/tools";
import { z } from "zod";
interface GeocodingResponse {
results: {
latitude: number;
longitude: number;
name: string;
}[];
}
interface WeatherResponse {
current: {
time: string;
temperature_2m: number;
apparent_temperature: number;
relative_humidity_2m: number;
wind_speed_10m: number;
wind_gusts_10m: number;
weather_code: number;
};
}
These interfaces describe the shape of the data returned by the Open-Meteo APIs. They don’t affect runtime behavior, but they give TypeScript strong typing so you know exactly which fields are available when working with the responses.
//weather.tool.ts
export const weatherTool = createTool({
id: "get-weather",
description: "Get current weather for a location",
inputSchema: z.object({
location: z.string().describe("City name or location"),
}),
outputSchema: z.object({
temperature: z.number(),
feelsLike: z.number(),
humidity: z.number(),
windSpeed: z.number(),
windGust: z.number(),
conditions: z.string(),
location: z.string(),
}),
execute: async ({ context }) => {
return await getWeather(context.location);
},
});
This creates a tool the agent can call. The id is how the agent refers to it internally, and the description helps the model understand when and why to use it. The execute method runs when the agent calls the tool. It extracts the location from the validated input and delegates the actual work to the getWeather function.
//weather.tool.ts
const getWeather = async (location: string) => {
const geocodingUrl = `https://geocoding-api.open-meteo.com/v1/search?name=${encodeURIComponent(
location
)}&count=1`;
const geocodingResponse = await fetch(geocodingUrl);
const geocodingData = (await geocodingResponse.json()) as GeocodingResponse;
if (!geocodingData.results?.[0]) {
throw new Error(`Location '${location}' not found`);
}
const { latitude, longitude, name } = geocodingData.results[0];
const weatherUrl = `https://api.open-meteo.com/v1/forecast?latitude=${latitude}&longitude=${longitude}&current=temperature_2m,apparent_temperature,relative_humidity_2m,wind_speed_10m,wind_gusts_10m,weather_code`;
const response = await fetch(weatherUrl);
const data = (await response.json()) as WeatherResponse;
return {
temperature: data.current.temperature_2m,
feelsLike: data.current.apparent_temperature,
humidity: data.current.relative_humidity_2m,
windSpeed: data.current.wind_speed_10m,
windGust: data.current.wind_gusts_10m,
conditions: getWeatherCondition(data.current.weather_code),
location: name,
};
};
The getWeather function fetches live weather data from Open-Meteo. This API is free and requires no authentication.
function getWeatherCondition(code: number): string {
const conditions: Record<number, string> = {
0: "Clear sky",
1: "Mainly clear",
2: "Partly cloudy",
3: "Overcast",
45: "Foggy",
48: "Depositing rime fog",
51: "Light drizzle",
53: "Moderate drizzle",
55: "Dense drizzle",
61: "Slight rain",
63: "Moderate rain",
65: "Heavy rain",
71: "Slight snow fall",
73: "Moderate snow fall",
75: "Heavy snow fall",
80: "Slight rain showers",
81: "Moderate rain showers",
82: "Violent rain showers",
85: "Slight snow showers",
86: "Heavy snow showers",
95: "Thunderstorm",
96: "Thunderstorm with slight hail",
99: "Thunderstorm with heavy hail",
};
return conditions[code] || "Unknown";
}
Open-Meteo returns numeric weather codes. The getWeatherCondition helper maps those codes to readable descriptions such as “Clear sky” or “Moderate rain,” making the output easier to understand.
Now update src/agent.ts to use the tool:
import { openai } from "@ai-sdk/openai";
import { Agent } from "@mastra/core/agent";
import { MastraAgent } from "@ag-ui/mastra";
import { Memory } from "@mastra/memory";
import { LibSQLStore } from "@mastra/libsql";
import { weatherTool } from "./tools/weather.tool"; // Add this import
export const agent = new MastraAgent({
agent: new Agent({
name: "Weather Assistant",
instructions: `
You are a helpful AI assistant with weather capabilities.
Be friendly, conversational, and provide clear information.
When users ask about weather:
- Always use the weatherTool to fetch current data
- Present the information in a friendly, conversational way
- Include temperature, conditions, and relevant details
- If a location isn't specified, ask for it
`,
model: openai("gpt-4o"),
tools: { weatherTool }, // Add the tool here
memory: new Memory({
storage: new LibSQLStore({
url: "file:./assistant.db",
}),
}),
}),
threadId: "main-conversation",
});
From this point on, weather questions trigger a structured tool call, and the agent responds with accurate, up-to-date information in a conversational tone.
Now update src/index.ts to handle tool events:
await agent.runAgent(
{},
{
// ...existing text and run event handlers from earlier
// New tool event handlers
onToolCallStartEvent({ event }) {
console.log(`\n Tool call: ${event.toolCallName}`);
},
onToolCallArgsEvent({ event }) {
process.stdout.write(event.delta);
},
onToolCallEndEvent() {
console.log("");
},
onToolCallResultEvent({ event }) {
if (event.content) {
console.log(`Tool result: ${JSON.stringify(event.content, null, 2)}`);
}
}
}
);
When the agent decides it needs external data, onToolCallStartEvent fires and logs which tool is being called, making that decision visible. As the tool arguments are prepared and streamed, onToolCallArgsEvent prints them in real time so you can see exactly what input the agent is sending. When argument streaming finishes, onToolCallEndEvent fires. Finally, onToolCallResultEvent logs the structured result returned by the tool before the agent continues responding.
Let’s see this in action. Restart your client:
pnpm dev
Try these queries:
> What's the weather in London?
You should see:
[EVENT] Run started
Tool call: get-weather
{"location":"London"}
Tool result: {
"temperature": 12,
"feelsLike": 10,
"humidity": 78,
"windSpeed": 15,
"windGust": 25,
"conditions": "Partly cloudy",
"location": "London"
}
🤖 Assistant: The weather in London is currently partly cloudy with a temperature of 12°C (feels like 10°C). The humidity is at 78%, and there's a moderate wind at 15 km/h with gusts up to 25 km/h.
[EVENT] Run complete
Notice what happened: the agent recognized it needed live data, streamed a structured call to get-weather, weatherTool.execute() fetched real data, and the result came back as a friendly, conversational answer. This is the power of AG-UI’s tool protocol. The agent automatically decides when to call the tool, streams the call and its arguments as events, and folds the result into its response.

Now try a query that leaves out the location:
> How's the weather?
The agent should prompt for a location since none was provided.
> What's the weather in Atlantis?
You’ll see an error because the location doesn’t exist:
Error: Location 'Atlantis' not found
The protocol handles errors gracefully, and the agent can explain what went wrong.
Let’s test a few more scenarios to see AG-UI’s robustness:
> Compare the weather in Tokyo and Seattle
The agent will call the tool twice and present both results.
> What's the weather in Paris? > How about tomorrow?
The agent should remember you’re asking about Paris (thanks to conversation memory).
> Is it raining in Seattle?
> Do I need an umbrella in London?
> What’s the temperature in NYC?
GPT-4o is smart enough to extract locations from natural queries.
The beauty here is that you didn’t have to parse the model’s output for tool calls, write routing logic to decide which function to run, validate arguments by hand, or feed results back into the conversation yourself. AG-UI and the underlying LLM handle it. You just defined the tool interface and handled events.
Let’s push further with adding a browser tool. Real agents need multiple capabilities.
Install the open package:
pnpm add open
Create src/tools/browser.tool.ts:
import { createTool } from "@mastra/core/tools";
import { z } from "zod";
import open from "open";
export const browserTool = createTool({
id: "open-browser",
description: "Open a URL in the default web browser",
inputSchema: z.object({
url: z.string().url().describe("The URL to open"),
}),
outputSchema: z.object({
success: z.boolean(),
message: z.string(),
}),
execute: async ({ context }) => {
try {
await open(context.url);
return {
success: true,
message: `Opened ${context.url} in your default browser`,
};
} catch (error) {
return {
success: false,
message: `Failed to open browser: ${error}`,
};
}
},
});
Now, update your agent with multiple tools.
Update src/agent.ts:
import { weatherTool } from "./tools/weather.tool";
import { browserTool } from "./tools/browser.tool";
export const agent = new MastraAgent({
agent: new Agent({
// ...name, instructions, model, and memory as before
tools: { weatherTool, browserTool },
}),
threadId: "main-conversation",
});
Now try:
> Show me the weather website for London
The agent will:
- Use the weatherTool to get London’s weather
- Use the browserTool to open a weather site for London in your browser

Or more directly:
> Open Google for me
The agent will open https://www.google.com in your browser.
This demonstrates tool composition; the agent can use multiple tools in sequence to accomplish complex tasks. AG-UI handles the orchestration automatically.
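Adding further capabilities follows the same pattern. As a quick illustration, here’s a hypothetical third tool, not part of this tutorial’s code, that reports the current server time using the same createTool API:

// Hypothetical example: a third tool built with the same createTool pattern.
import { createTool } from "@mastra/core/tools";
import { z } from "zod";

export const timeTool = createTool({
  id: "get-time",
  description: "Get the current date and time on the server",
  inputSchema: z.object({}),
  outputSchema: z.object({
    iso: z.string(),
  }),
  execute: async () => {
    // No external calls needed; just report the current time.
    return { iso: new Date().toISOString() };
  },
});

// Then register it next to the existing tools in src/agent.ts:
// tools: { weatherTool, browserTool, timeTool },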
Use AG-UI when:
- You need streaming output, tool calls, or multi-step runs rather than a single request and response
- You want to reuse the same agent logic across multiple clients, such as a web app, a CLI, or a Slack bot
- You’re building a production AI feature where events, tools, and state need real structure

Skip AG-UI if:
- You’re hacking together a quick prototype or a one-off chat
- A single call to a model API with a streamed text response covers everything you need
By building this CLI client, you’ve seen AG-UI in action through real-time streaming, tool calls, shared state, and an event-driven flow that matches how AI agents actually behave. More importantly, you now understand why AG-UI exists and when it makes sense to use it.
AG-UI shines in production AI features that go beyond simple prompts. For quick prototypes or one-off chats, it can feel like overkill. But for real applications, it adds structure where things tend to break down.
From here, the path is straightforward. Swap the CLI for a web UI, add more tools, deploy it as a service, or integrate it into existing systems, all without changing your agent logic.
