Being a responsible developer, you’ve put a standard login flow in front of an impressive AI agent that can chat with users, understand their goals, and maybe even start to take action on their behalf. Your user is authenticated. All secure, right?
Well, not quite. This is where we hit a new and massive security blind spot that most of us are only just beginning to confront. The user might be authenticated, but is the agent authorized?
Once an AI agent can act autonomously on a user’s behalf, accessing their data, calling other APIs, or performing critical tasks, a simple login screen is no longer enough.
Agent authorization is the next-level challenge in application security, and it’s a problem the traditional auth model was never designed to solve. In this article, we’ll use Auth0’s Auth for GenAI platform as our guide to solve three key authorization problems developers commonly face:

- Calling third-party APIs on the user’s behalf without storing long-lived credentials
- Pausing for explicit human approval before the agent takes sensitive actions
- Preventing cross-user data leakage in multi-tenant RAG pipelines
Before we can start authorizing our agent, we need a pre-configured Next.js starter project with a complete authentication flow powered by Auth0, plus the basic UI and AI SDK setup.
Clone the sample application from Auth0:
git clone https://github.com/auth0-samples/auth0-ai-samples.git
cd auth0-ai-samples/authenticate-users/vercel-ai-next-js
For a detailed walkthrough, see the Auth0 guide on authenticating a Next.js application with Auth0.
Once configured, install dependencies and start the dev server:
npm install && npm run dev
Navigate to http://localhost:3000 to sign up and log in.
With our login working, let’s set up the ability for the AI agent to search for public repositories on GitHub. This immediately surfaces our first major challenge: how do we let our agent call the GitHub API on the user’s behalf without creating a massive security hole?
The less secure way is to ask for a GitHub Personal Access Token and store it in your database — making your app a high-value target for attackers.
A more secure approach uses Auth0’s Token Vault, ensuring your app never sees or stores long-term credentials. Instead, your agent requests a short-lived, single-use token only when needed.
Once the GitHub connection is configured in your Auth0 tenant, create a helper function in lib/auth0-ai.ts:
// lib/auth0-ai.ts
import { Auth0AI, getAccessTokenForConnection } from '@auth0/ai-vercel';
import { getRefreshToken } from './auth0';

const auth0AI = new Auth0AI();

export const getAccessToken = async () => getAccessTokenForConnection();

export const withGitHubConnection = auth0AI.withTokenForConnection({
  connection: 'github',
  scopes: ['public_repo'],
  refreshToken: getRefreshToken,
});
Then define the session helper in lib/auth0.ts:
// lib/auth0.ts
import { Auth0Client } from '@auth0/nextjs-auth0/server';

export const auth0 = new Auth0Client();

export const getRefreshToken = async () => {
  const session = await auth0.getSession();
  return session?.tokenSet?.refreshToken;
};
Create the tool at lib/tools/github.ts and wrap it with the helper:
// lib/tools/github.ts
import { tool } from 'ai';
import { z } from 'zod';
import { getAccessToken, withGitHubConnection } from '../auth0-ai';

export const searchGitHubRepositories = withGitHubConnection(
  tool({
    description: 'Searches for public repositories on GitHub based on a query.',
    parameters: z.object({
      query: z.string().describe('The search query, e.g., "Next.js AI projects"'),
    }),
    execute: async ({ query }) => {
      const accessToken = await getAccessToken();
      try {
        const response = await fetch(
          `https://api.github.com/search/repositories?q=${encodeURIComponent(query)}&per_page=5`,
          {
            headers: {
              Authorization: `Bearer ${accessToken}`,
              'User-Agent': 'Auth0-GenAI-Research-Assistant',
            },
          }
        );
        if (!response.ok) {
          return `Failed to search GitHub. Status: ${response.statusText}`;
        }
        const data = await response.json();
        const repositories = data.items.map(
          (repo: any) => `- ${repo.full_name}: ${repo.description || 'No description'}`
        );
        return repositories.length
          ? `Found ${repositories.length} repositories:\n${repositories.join('\n')}`
          : `I couldn't find any repositories for the query "${query}".`;
      } catch (error: any) {
        return `An error occurred while calling the GitHub API: ${error.message}`;
      }
    },
  })
);
Finally, add it to the agent in /app/api/chat/route.ts:
// /app/api/chat/route.ts
import { type CoreMessage, streamText } from 'ai';
import { openai } from '@ai-sdk/openai';
import { searchGitHubRepositories } from '@/lib/tools/github';

export const maxDuration = 30;

export async function POST(req: Request) {
  const { messages }: { messages: CoreMessage[] } = await req.json();

  const result = await streamText({
    model: openai('gpt-4o'),
    messages,
    tools: { searchGitHubRepositories },
  });

  return result.toAIStreamResponse();
}
Restart your app and ask the agent:
“Find me some popular Next.js repositories about AI.”
It should securely call GitHub and return the results.
Next, what if we want the agent to do something — like creating a public GitHub Gist?
We’ll introduce asynchronous authorization, meaning the agent pauses and waits for explicit human approval using Auth0’s Client-Initiated Backchannel Authentication (CIBA) Flow.
Once CIBA is enabled for your Auth0 application, update lib/auth0-ai.ts to add a helper for user confirmation:
// lib/auth0-ai.ts
import { Auth0AI, getAccessTokenForConnection } from '@auth0/ai-vercel';
import { AccessDeniedInterrupt } from '@auth0/ai/interrupts';
import { getRefreshToken, getUser } from './auth0';

// The auth0AI instance and withGitHubConnection helper from earlier stay as-is.

export const withHumanApproval = auth0AI.withAsyncUserConfirmation({
  userID: async () => {
    const user = await getUser();
    return user?.sub;
  },
  bindingMessage: async ({ description }) =>
    `Do you approve creating a public Gist with the title: "${description}"?`,
  audience: process.env.AUTH0_AUDIENCE!,
  scopes: ['create:gist'],
  onAuthorizationRequest: 'block',
  onUnauthorized: async (e) => {
    if (e instanceof AccessDeniedInterrupt) {
      return 'The user has denied the request to create a Gist.';
    }
    return `An error occurred during authorization: ${e.message}`;
  },
});
Add a getUser function:
export const getUser = async () => {
  const session = await auth0.getSession();
  return session?.user;
};
Create lib/tools/gist.ts:
// lib/tools/gist.ts
import { tool } from 'ai';
import { z } from 'zod';
import { getCIBACredentials } from '@auth0/ai-vercel';
import { withHumanApproval } from '../auth0-ai';

const createGistLogic = tool({
  description: 'Creates a public GitHub Gist with the provided content.',
  parameters: z.object({
    description: z.string().describe('A short description for the Gist.'),
    content: z.string().describe('The text content to put in the Gist file.'),
  }),
  execute: async ({ description }) => {
    // This only runs after the user approves the CIBA request.
    console.log('User approved! Creating public Gist...');
    const credentials = getCIBACredentials();
    const accessToken = credentials?.accessToken;
    if (!accessToken) return 'Could not get authorization to create the Gist.';
    // For brevity, this demo stops short of calling the GitHub API.
    return `Successfully created a public Gist titled "${description}".`;
  },
});
export const createPublicGist = withHumanApproval(createGistLogic);
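The execute stub above returns a success message without actually calling GitHub. A real implementation would POST to GitHub's Gists endpoint with the CIBA-issued access token. Here's a sketch — `buildGistPayload` and `createGist` are hypothetical helpers, while the endpoint and request shape come from GitHub's REST API:

```typescript
// Hypothetical helper: builds the request body for GitHub's POST /gists endpoint.
// GitHub expects a `files` map keyed by filename; the filename here is arbitrary.
export function buildGistPayload(description: string, content: string) {
  return {
    description,
    public: true,
    files: { 'note.md': { content } },
  };
}

// Sketch of the real call you would make inside execute(), passing the
// accessToken obtained from getCIBACredentials().
export async function createGist(
  accessToken: string,
  description: string,
  content: string
) {
  const response = await fetch('https://api.github.com/gists', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${accessToken}`,
      Accept: 'application/vnd.github+json',
      'User-Agent': 'Auth0-GenAI-Research-Assistant',
    },
    body: JSON.stringify(buildGistPayload(description, content)),
  });
  if (!response.ok) {
    throw new Error(`Gist creation failed: ${response.statusText}`);
  }
  // The response includes html_url, which you could return to the user.
  return response.json();
}
```

Because the token only carries the `create:gist` scope and was minted for this one approved action, a compromised agent can't reuse it for anything broader.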
Add it to /app/api/chat/route.ts:
import { createPublicGist } from '@/lib/tools/gist';
export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = await streamText({
    model: openai('gpt-4o'),
    messages,
    tools: {
      searchGitHubRepositories,
      createPublicGist,
    },
  });

  return result.toAIStreamResponse();
}
Now, the agent pauses until you approve the request — then proceeds.
Our agent can now call APIs securely and get human approval for sensitive actions. The next threat: cross-user data leakage in multi-tenant RAG setups.
Without fine-grained authorization, an agent could accidentally use one user’s private document to answer another user’s question.
We’ll fix this with Auth0 FGA (Fine-Grained Authorization), built on Google Zanzibar’s relationship model.
model
  schema 1.1

type user

type doc
  relations
    define owner: [user]
    define viewer: [user, user:*]
    define can_view: owner or viewer

With this model, writing a tuple such as "user:[email protected] has the owner relation to doc:document_id_123" grants that user access to that document.

Now update your RAG tool:
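Writing these relationship tuples is how your application grants access in code. The sketch below assumes an FGA-style client with a write() method — for example, OpenFgaClient from @openfga/sdk configured against your Auth0 FGA store; the helper names are illustrative:

```typescript
// Pure helper: shapes an FGA relationship tuple (user -> relation -> object),
// matching the model above.
export function ownerTuple(userEmail: string, documentId: string) {
  return {
    user: `user:${userEmail}`,
    relation: 'owner',
    object: `doc:${documentId}`,
  };
}

// Structural type so the sketch works with any FGA-style client,
// e.g. OpenFgaClient from @openfga/sdk.
type FgaWriteClient = {
  write(req: {
    writes: Array<{ user: string; relation: string; object: string }>;
  }): Promise<unknown>;
};

// Sketch: grant a user ownership of a document, e.g. right after upload.
export async function grantOwnership(
  fga: FgaWriteClient,
  userEmail: string,
  documentId: string
) {
  await fga.write({ writes: [ownerTuple(userEmail, documentId)] });
}
```

You would typically call this at document-creation time, so permissions exist before the document is ever retrievable.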
// lib/tools/rag.ts
import { tool } from 'ai';
import { z } from 'zod';
import { FGAFilter } from '@auth0/ai';
import { findRelevantContent } from '@/lib/rag/embedding';
import { auth0 } from '../auth0';

export const searchInternalDocuments = tool({
  description: 'Searches internal documents to answer a question securely.',
  parameters: z.object({ question: z.string() }),
  execute: async ({ question }) => {
    const session = await auth0.getSession();
    const user = session?.user;
    if (!user?.email) return 'There is no user logged in.';

    const fgaFilter = FGAFilter.create({
      buildQuery: (doc) => ({
        user: `user:${user.email}`,
        object: `doc:${doc.documentId}`,
        relation: 'can_view',
      }),
    });

    // Retrieve a broad candidate set, then keep only authorized documents.
    const candidates = await findRelevantContent(question, 25);
    if (!candidates.length) return "I couldn't find relevant documents.";

    const authorized = await fgaFilter.filter(candidates);
    if (!authorized.length)
      return 'Found documents, but you lack permission to view them.';

    return authorized.map((doc) => doc.content).join('\n\n');
  },
});
Add the new tool to /app/api/chat/route.ts:
import { searchInternalDocuments } from '@/lib/tools/rag';
export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = await streamText({
    model: openai('gpt-4o'),
    messages,
    tools: {
      searchGitHubRepositories,
      createPublicGist,
      searchInternalDocuments,
    },
  });

  return result.toAIStreamResponse();
}
Now your RAG pipeline includes document-level access control, closing the biggest security hole in multi-tenant AI apps.
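Beyond batch filtering, you will sometimes want a single yes/no permission check — say, before rendering a link to a document. A sketch, again assuming an FGA-style client with a check() call (as OpenFgaClient from @openfga/sdk provides); the helper names are illustrative:

```typescript
// Pure helper: shapes the check request for the can_view relation
// defined in the authorization model.
export function viewCheck(userEmail: string, documentId: string) {
  return {
    user: `user:${userEmail}`,
    relation: 'can_view',
    object: `doc:${documentId}`,
  };
}

// Structural type so the sketch works with any FGA-style client.
type FgaCheckClient = {
  check(req: {
    user: string;
    relation: string;
    object: string;
  }): Promise<{ allowed?: boolean }>;
};

// Sketch: ask the FGA store whether this user may view this document.
export async function canView(
  fga: FgaCheckClient,
  userEmail: string,
  documentId: string
): Promise<boolean> {
  const { allowed } = await fga.check(viewCheck(userEmail, documentId));
  return allowed === true;
}
```

Because can_view is defined as "owner or viewer" in the model, a single check covers both relations without any extra application logic.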
We didn’t just build an AI agent — we built a secure one.
Starting with simple user authentication, we incrementally solved three of the biggest security challenges in AI systems:

- Delegated API access without storing long-lived credentials, using Token Vault
- Human-in-the-loop approval for sensitive actions, using CIBA
- Document-level access control for RAG, using Auth0 FGA
Together, these steps define what it really means to authorize your agent — by asking: “What is this agent allowed to do on a user’s behalf?”
