Every AI feature you ship has a hidden tax.
Often, you write the same logic twice. Need a weather tool? You write the OpenWeatherMap fetch on your server, then write the client-side schema and validation again. Database query tool? Implement it server-side, duplicate the interface client-side. User geolocation? Same story.
This isn’t just an annoying problem; it’s an expensive one. Codebases bloat, types drift, and months later it’s unclear which implementation is the source of truth. What starts as a simple AI assistant with 10 tools can easily turn into hundreds of duplicated lines across server and frontend code, written solely to keep everything in sync.
TanStack AI addresses this problem with isomorphic tools. You define a tool once using toolDefinition(), then explicitly choose where it executes with .server() or .client(). The definition and types remain the same across environments, eliminating duplication. In the rest of this article, we’ll walk through how isomorphic tools work by building a sample AI assistant and contrast this model with the approach used by Vercel AI SDK. We’ll also examine how TanStack AI reduces vendor lock-in and enforces stronger type safety.
Before we dive into the technical details, here’s a quick reference showing how TanStack AI and Vercel AI SDK compare across the dimensions that matter for production apps:
| Feature | TanStack AI | Vercel AI SDK |
|---|---|---|
| Isomorphic Tools | Define once, run anywhere (.server()/.client()) | Separate server/client implementations required |
| Framework Support | React, Solid, Vanilla JS, any framework | React (Next.js optimized) |
| Provider Support | OpenAI, Anthropic, Gemini, Ollama | OpenAI, Anthropic, Google, 20+ providers |
| Type Safety | Per-model providerOptions with 3 generics | Flexible typing with optional strictness |
| Vendor Lock-in | Adapter-based, swap providers with 1 line | Provider-agnostic but Next.js optimized |
| Bundle Size | Tree-shakeable adapters (import only what you need) | Full SDK bundle |
| Modalities | Text, image, video, audio, TTS, transcription, structured outputs | Text, structured outputs, embeddings |
| Protocol | Open, documented protocol for custom transports | Closed implementation |
| Maturity | Alpha (Dec 2025) | Stable (v6, Dec 2025); introduced Agents and Tool Execution Approval |
| MCP Support | Roadmap | Full support |
| DevTools | Isomorphic devtools for server + client | Built-in, production-ready |
Now, let’s examine the architectural difference that defines everything else: isomorphic tools.
The difference between TanStack AI and the Vercel AI SDK largely comes down to where tools are defined and how many times they must be implemented.
In the Vercel AI SDK, tools are effectively split across environments. You define a tool on the server, where the LLM executes it, and then reimplement the same tool on the client so the UI can interpret and display its behavior. This duplication introduces extra surface area for drift between server logic and client state.
// Server-side (app/api/chat/route.ts)
import { openai } from '@ai-sdk/openai';
import { streamText, tool } from 'ai';
import { z } from 'zod';
export async function POST(req: Request) {
const { messages } = await req.json();
const result = await streamText({
model: openai('gpt-4'),
messages,
tools: {
getWeather: tool({
description: 'Get the current weather for a location',
parameters: z.object({
location: z.string().describe('City name'),
}),
execute: async ({ location }) => {
const response = await fetch(
`https://api.openweathermap.org/data/2.5/weather?q=${location}&appid=${process.env.WEATHER_API_KEY}`
);
const data = await response.json();
return {
temperature: data.main.temp,
description: data.weather[0].description,
};
},
}),
},
});
return result.toDataStreamResponse();
}
Here’s the client-side source code:
// Client-side (components/chat.tsx)
'use client';
import { useChat } from 'ai/react';
export default function Chat() {
const { messages, input, handleInputChange, handleSubmit } = useChat({
api: '/api/chat',
});
return (
<div>
{messages.map((m) => (
<div key={m.id}>
{m.role === 'user' ? 'User: ' : 'AI: '} {m.content}
{m.toolInvocations?.map((tool, i) => (
<div key={i}>
Calling {tool.toolName} with {JSON.stringify(tool.args)}
</div>
))}
</div>
))}
<form onSubmit={handleSubmit}>
<input
type="text"
value={input}
onChange={handleInputChange}
/>
</form>
</div>
);
}
The server knows how to execute the tool. The client knows how to display it. But they’re separate implementations. If you want type safety across both, you’re building that bridge yourself.
Now add nine more tools. Your server file grows to 400 lines. Your client needs to handle each tool’s arguments and results. Change a parameter on the server? Update the client. Add validation? Write it twice.
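If you want that bridge with the Vercel AI SDK, you typically hand-roll a shared module like the one below. This is a sketch, not an SDK feature; the file name and types are purely illustrative:
// shared/weather-contract.ts (hand-maintained bridge, illustrative)
import { z } from 'zod';

// Imported by the server route as the tool's parameters schema
export const weatherParams = z.object({
  location: z.string().describe('City name'),
});
export type WeatherParams = z.infer<typeof weatherParams>;

// The server's execute() should return this shape, and the client casts
// tool results to it; nothing enforces either side automatically
export type WeatherResult = {
  temperature: number;
  description: string;
};
The route handler imports weatherParams, the client imports the types to render tool invocations, and keeping execute()'s actual return value aligned with WeatherResult stays a manual chore.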
TanStack AI takes a different approach. You define the tool once, then tell it where to run. Here’s that same weather tool in TanStack AI:
// shared/tools.ts
import { toolDefinition } from '@tanstack/ai';
import { z } from 'zod';
export const weatherTool = toolDefinition({
id: 'getWeather',
description: 'Get the current weather for a location',
parameters: z.object({
location: z.string().describe('City name'),
}),
}).server(async ({ location }) => {
const response = await fetch(
`https://api.openweathermap.org/data/2.5/weather?q=${location}&appid=${process.env.WEATHER_API_KEY}`
);
const data = await response.json();
return {
temperature: data.main.temp,
description: data.weather[0].description,
};
});
Here’s the server-side source code:
// app/api/chat/route.ts
import { openaiText } from '@tanstack/ai-openai';
import { chat } from '@tanstack/ai';
import { weatherTool } from '@/shared/tools';
export async function POST(req: Request) {
const { messages } = await req.json();
const result = await chat({
adapter: openaiText('gpt-4'),
messages,
tools: [weatherTool],
temperature: 0.7,
});
return Response.json(result);
}
Here’s the client-side source code:
// components/chat.tsx
import { useChat, fetchServerSentEvents } from '@tanstack/react-ai';
import { weatherTool } from '@/shared/tools';
export default function Chat() {
const { messages, input, setInput, sendMessage } = useChat({
connection: fetchServerSentEvents('/api/chat'),
tools: [weatherTool], // Same tool, client knows the types
});
return (
<div>
{messages.map((m) => (
<div key={m.id}>
{m.role === 'user' ? 'User: ' : 'AI: '} {m.content}
{m.toolCalls?.map((call, i) => (
<div key={i}>
Calling {call.name} with {JSON.stringify(call.arguments)}
</div>
))}
</div>
))}
<input
type="text"
value={input}
onChange={(e) => setInput(e.target.value)}
onKeyDown={(e) => e.key === 'Enter' && sendMessage()}
/>
</div>
);
}
One definition. Two environments. The tool knows its own shape, so both server and client get full type safety automatically. Change the parameters? TypeScript catches it everywhere. The tool is the source of truth.
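To make that concrete, here's a hypothetical edit (the units field is invented for illustration): add a parameter to the shared definition and every stale call site stops compiling.
// shared/tools.ts: add a parameter in exactly one place
export const weatherTool = toolDefinition({
  id: 'getWeather',
  description: 'Get the current weather for a location',
  parameters: z.object({
    location: z.string().describe('City name'),
    units: z.enum(['metric', 'imperial']).describe('Unit system'), // new field
  }),
}).server(async ({ location, units }) => {
  // same fetch as before, now passing `units` through to OpenWeatherMap
  const response = await fetch(
    `https://api.openweathermap.org/data/2.5/weather?q=${location}&units=${units}&appid=${process.env.WEATHER_API_KEY}`
  );
  const data = await response.json();
  return {
    temperature: data.main.temp,
    description: data.weather[0].description,
  };
});
Because the route handler and the useChat hook both import this object, any code that still assumes the old parameter shape fails to compile rather than drifting silently.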
For a single tool, the savings are modest. But watch what happens when you add nine more. With the Vercel AI SDK you're looking at roughly 600 lines across server and client; with TanStack AI, closer to 300, and that's a conservative comparison. The real savings, though, come from eliminating the mental overhead of keeping two implementations in sync.
Here’s where the architecture pays off even more. Some tools should run on the client, for example user geolocation, local file access, and browser APIs. In Vercel AI SDK, you still define these server-side for the LLM, then implement them client-side.
In TanStack AI, you just swap .server() for .client():
// shared/tools.ts
export const geolocationTool = toolDefinition({
id: 'getUserLocation',
  description: 'Get the current GPS coordinates of the user',
parameters: z.object({}),
}).client(async () => {
return new Promise((resolve, reject) => {
navigator.geolocation.getCurrentPosition(
(position) => resolve({
latitude: position.coords.latitude,
longitude: position.coords.longitude,
}),
reject
);
});
});
The LLM knows the tool exists. The client knows how to execute it. No server roundtrip. No API endpoint. No duplication.
Mix server and client tools in the same conversation:
const result = await chat({
adapter: openaiText('gpt-4'),
messages,
tools: [
weatherTool, // runs on server
geolocationTool, // runs on client
databaseTool, // runs on server
],
});
TanStack AI handles the coordination automatically. The LLM can invoke tools that run on the server or the client, and their results flow back into a single conversation, all while the tool definitions themselves are written only once.
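The databaseTool in that list isn't defined anywhere in this article, so here's a sketch of what it could look like; the db helper and the orders schema are hypothetical:
// shared/tools.ts: hypothetical server tool; `db` stands in for your data layer
import { toolDefinition } from '@tanstack/ai';
import { z } from 'zod';
import { db } from '@/server/db'; // hypothetical query helper

export const databaseTool = toolDefinition({
  id: 'searchOrders',
  description: 'Search recent orders for a customer by email',
  parameters: z.object({
    email: z.string().email().describe('Customer email address'),
  }),
}).server(async ({ email }) => {
  // Runs only on the server, so database credentials never reach the browser
  const orders = await db.query(
    'SELECT id, total, created_at FROM orders WHERE customer_email = $1 ORDER BY created_at DESC LIMIT 10',
    [email]
  );
  return { orders };
});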
TanStack AI and Vercel AI SDK both claim to be provider-agnostic, but they approach the problem differently. One prioritizes true portability. The other optimizes for a specific ecosystem.
TanStack AI uses a split adapter system. Instead of importing a monolithic adapter that handles everything, you import only what you need:
import { openaiText } from '@tanstack/ai-openai';
import { anthropicText } from '@tanstack/ai-anthropic';
import { geminiText } from '@tanstack/ai-gemini';
Each adapter has its own focus. openaiText handles text generation for OpenAI models. openaiImage handles image generation. anthropicText handles Claude. No shared abstractions trying to be everything to everyone. No bloated imports.
This architecture provides a direct benefit: swapping providers becomes a one-line change.
Here’s a chat implementation using OpenAI:
import { openaiText } from '@tanstack/ai-openai';
import { chat } from '@tanstack/ai';
const result = await chat({
adapter: openaiText('gpt-4'),
messages: [{ role: 'user', content: 'Explain quantum computing' }],
temperature: 0.7,
});
Switch to Anthropic’s Claude:
import { anthropicText } from '@tanstack/ai-anthropic';
import { chat } from '@tanstack/ai';
const result = await chat({
adapter: anthropicText('claude-sonnet-4-20250514'),
messages: [{ role: 'user', content: 'Explain quantum computing' }],
temperature: 0.7,
});
Switch to Google’s Gemini:
import { geminiText } from '@tanstack/ai-gemini';
import { chat } from '@tanstack/ai';
const result = await chat({
adapter: geminiText('gemini-2.0-flash-exp'),
messages: [{ role: 'user', content: 'Explain quantum computing' }],
temperature: 0.7,
});
The API stays the same. The options stay the same. The tool definitions stay the same. Only the import and adapter line change. Your tools, UI, and types remain untouched.
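One practical consequence, sketched here rather than taken from the docs, is that the provider can become a configuration value while the rest of the call site stays untouched:
// hypothetical environment-driven provider switch
import { chat } from '@tanstack/ai';
import { openaiText } from '@tanstack/ai-openai';
import { anthropicText } from '@tanstack/ai-anthropic';

const adapter =
  process.env.AI_PROVIDER === 'anthropic'
    ? anthropicText('claude-sonnet-4-20250514')
    : openaiText('gpt-4');

const result = await chat({
  adapter,
  messages: [{ role: 'user', content: 'Explain quantum computing' }],
  temperature: 0.7,
});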
Compare this to Vercel AI SDK:
// OpenAI
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';
const result = await streamText({
model: openai('gpt-4'),
messages,
});
// Anthropic
import { anthropic } from '@ai-sdk/anthropic';
import { streamText } from 'ai';
const result = await streamText({
model: anthropic('claude-sonnet-4-20250514'),
messages,
});
// Google
import { google } from '@ai-sdk/google';
import { streamText } from 'ai';
const result = await streamText({
model: google('gemini-2.0-flash-exp'),
messages,
});
Vercel AI SDK also makes this easy. The difference shows up when you use provider-specific features.
Every provider has different options. OpenAI’s reasoning models support reasoning_effort. Anthropic’s Claude models support thinking budgets. Gemini has safetySettings. These aren’t universal. They are specific to each provider and sometimes specific to each model.
TanStack AI gives you compile-time safety for these options through modelOptions:
import { openaiText } from '@tanstack/ai-openai';
const result = await chat({
adapter: openaiText('o1-preview'),
messages,
modelOptions: {
reasoning_effort: 'high', // TypeScript knows this exists for o1 models
},
});
Try to use reasoning_effort with GPT-4? TypeScript error. Try to use a Claude-specific option with OpenAI? TypeScript error. The adapter knows which options work with which models, and your IDE shows you exactly what’s available.
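As a hypothetical illustration of that boundary (the exact diagnostics depend on the adapter's typings):
// the same option against a non-reasoning model
const invalid = await chat({
  adapter: openaiText('gpt-4'),
  messages,
  modelOptions: {
    // @ts-expect-error: reasoning_effort is not part of gpt-4's option type
    reasoning_effort: 'high',
  },
});
The Anthropic adapter gets the same treatment for Claude-specific options: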
import { anthropicText } from '@tanstack/ai-anthropic';
const result = await chat({
adapter: anthropicText('claude-sonnet-4-20250514'),
messages,
modelOptions: {
thinking: {
type: 'enabled',
budget_tokens: 5000, // Claude-specific extended thinking
},
},
});
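Gemini's safetySettings, mentioned earlier, would follow the same pattern. Note that exposing them through modelOptions is an assumption here; the setting values themselves come from Google's Gemini API:
import { geminiText } from '@tanstack/ai-gemini';

const result = await chat({
  adapter: geminiText('gemini-2.0-flash-exp'),
  messages,
  modelOptions: {
    // shape follows Google's Gemini API safety settings;
    // exposure via modelOptions is assumed for illustration
    safetySettings: [
      {
        category: 'HARM_CATEGORY_DANGEROUS_CONTENT',
        threshold: 'BLOCK_ONLY_HIGH',
      },
    ],
  },
});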
Vercel AI SDK handles this differently. Provider-specific options go into an experimental field:
import { openai } from '@ai-sdk/openai';
const result = await streamText({
model: openai('o1-preview'),
experimental_providerMetadata: {
openai: {
reasoning_effort: 'high',
},
},
});
It works, but TypeScript can’t guarantee the options are valid. You’re back to reading documentation and hoping you got it right.
TanStack AI ships with three client libraries out of the box:
// React
import { useChat } from '@tanstack/react-ai';
// Solid
import { useChat } from '@tanstack/solid-ai';
// Vanilla JavaScript also supported (see docs)
All three share the same server protocol, so the backend is agnostic to the framework you choose. You can pair a vanilla JavaScript client with a Python backend without friction. Because the protocol is open and documented, you are free to use any transport layer you prefer, whether that is HTTP, WebSockets, or even a custom RPC protocol.
Vercel AI SDK is framework-agnostic too, but it’s built for Next.js first:
// Next.js App Router (optimized)
import { streamText } from 'ai';
import { createStreamableUI } from 'ai/rsc';
// React (works but fewer features)
import { useChat } from 'ai/react';
// Other frameworks (community adapters)
import { useChat } from 'ai/vue';
import { useChat } from 'ai/svelte';
The Next.js integration is strong, with support for React Server Components, streaming UI, and built-in caching. However, those advantages do not carry over to other frameworks. When building with Remix, Express, or FastAPI, you are effectively limited to the basic useChat hook, without the additional Next.js-specific polish.
TanStack AI takes a different approach by avoiding optimization for any single framework. React, Solid, and vanilla JavaScript are treated as first-class options. If you are firmly committed to the Next.js ecosystem, Vercel AI SDK offers more out-of-the-box conveniences. If you anticipate switching frameworks or want to preserve that option, TanStack AI allows you to do so without forcing an early commitment.
Choose TanStack AI when you need isomorphic tools to avoid duplicated logic, want the ability to change model providers without refactoring, or expect to work across frameworks beyond Next.js. Its architecture is designed around portability and long-term flexibility, favoring durable abstractions over framework-specific convenience.
On the other hand, choose Vercel AI SDK when you are building a Next.js application on Vercel and want a mature, production-ready solution immediately. It is feature-complete, tightly integrated with the Next.js and Vercel ecosystem, and optimized for teams that value stability and out-of-the-box capabilities over cross-framework flexibility.
TanStack AI addresses a structural problem that most production AI applications encounter sooner or later: tool duplication across server and client boundaries. By allowing tools to be defined once and executed in the appropriate environment through .server() and .client(), it removes an entire class of synchronization issues.
When combined with per-model type safety and vendor-agnostic adapters, the result is an architecture that favors explicit control, portability, and long-term maintainability over framework-specific optimizations. This makes TanStack AI particularly well suited for teams that expect their stack to evolve, whether that means switching model providers, introducing new frameworks, or supporting heterogeneous backends.
Vercel AI SDK, by contrast, is optimized for teams operating squarely within the Next.js and Vercel ecosystem. It offers a polished, production-ready experience with tight integration into React Server Components, streaming UI, and platform-level optimizations.
Those benefits come with trade-offs: tools must be implemented in multiple places, and the architecture assumes a long-term commitment to Next.js. For teams prioritizing speed to production and deep alignment with Vercel’s platform, this is often an acceptable and even desirable constraint.
The decision ultimately depends on your constraints and priorities. If minimizing duplication, preserving architectural flexibility, and reducing long-term maintenance risk are central concerns, TanStack AI is the stronger choice. If immediate productivity, ecosystem integration, and proven stability within Next.js matter more than portability, Vercel AI SDK is likely the better fit. Both approaches are viable, but they optimize for different definitions of success.
