Building a modern AI chat interface in React presents a unique set of state management challenges. It’s far more complex than a simple form or a standard data display. We need to gracefully handle a continuous, asynchronous stream of messages, manage complex loading states to provide clear user feedback, and keep the entire UI perfectly synchronized, all without descending into a tangled mess of `useState` and `useEffect` hooks.
When a user sends a message, the UI can’t just wait. It needs to update instantly, show that the AI is “thinking” or “typing,” disable input to prevent race conditions, and then append a streaming response chunk by chunk as it arrives from the server. This combination of optimistic updates, asynchronous operations, and derived UI state is where traditional state management can become complicated and error-prone.
In this tutorial, we will tackle these specific problems head-on. We’ll build a real-time AI chat interface and, in the process, explore a clean, reactive pattern for managing its state. To do this, we’ll use Fluent-state, a lightweight, modern library that is particularly well-suited to solving these challenges without the boilerplate and overhead of larger state management solutions.
The foundation of any chat application is its history of messages. For a React developer, the primary challenge is managing this growing list. To ensure our components re-render predictably and to avoid a whole class of bugs, we must treat our state as immutable.
This means we can’t just `push()` a new message into an existing array. Instead, every time a message is added, we have to create a brand-new array containing all the previous messages plus the new one. The standard approach is the spread syntax, as in `setMessages(prev => [...prev, newMessage])`. While this works, wiring it up across a complex application can lead to prop drilling or require setting up React Context manually.
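To see why immutability matters for React’s change detection, here is a small standalone sketch in plain TypeScript, independent of any store or component:

```typescript
// Immutable append: each update creates a new array, so reference
// equality checks (what React uses to detect changes) work correctly.
interface Message {
  id: string;
  role: 'user' | 'assistant';
  content: string;
}

function appendMessage(messages: Message[], newMessage: Message): Message[] {
  return [...messages, newMessage]; // new array; the old one is untouched
}

const before: Message[] = [{ id: '1', role: 'user', content: 'Hi' }];
const after = appendMessage(before, { id: '2', role: 'assistant', content: 'Hello!' });

console.log(before.length); // 1 — the original array is unchanged
console.log(after.length); // 2
console.log((before as Message[]) === after); // false — a new reference signals a change
```

Because the old and new arrays are different references, a single `===` comparison is enough for React (or any reactive store) to know the list changed.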
Fluent-state simplifies this by providing a single hook that gives us a reactive store we can easily share across our application. Since `useFluentState` is a hook, we’ll start by creating a standard React Context to provide our store to any component that needs it.
First, let’s define our state shape and create a `ChatProvider` component. This is a standard pattern for sharing hook-based state in React.
```tsx
// src/chatStore.tsx
import React, { createContext, useContext } from 'react';
import { useFluentState } from 'fluent-state';

// Define the shape of our state (exported so actions and components can use it)
export interface Message {
  id: string;
  role: 'user' | 'assistant';
  content: string;
}

export interface ChatState {
  messages: Message[];
  isStreaming: boolean;
}

// Create the context
const ChatContext = createContext<ReturnType<typeof useFluentState<ChatState>> | null>(null);

// Create the provider component
export function ChatProvider({ children }: { children: React.ReactNode }) {
  const store = useFluentState<ChatState>({
    messages: [],
    isStreaming: false,
  });

  return <ChatContext.Provider value={store}>{children}</ChatContext.Provider>;
}

// Create a custom hook to easily access the store
export function useChatStore() {
  const context = useContext(ChatContext);
  if (!context) {
    throw new Error('useChatStore must be used within a ChatProvider');
  }
  return context;
}
```
Now, let’s create a function to add a new message. With Fluent-state, the state object you receive from the hook has properties that act as both getters and setters.
- `state.messages()` gets the current value.
- `state.messages(newValue)` sets a new value.

The library ensures that whenever you call the setter, a re-render is triggered for any component that uses that piece of state.
```ts
// src/chatActions.ts
import { state, type Message } from './chatStore'; // We'll export both from the store file

export function addUserMessage(content: string) {
  const newMessage: Message = {
    id: crypto.randomUUID(),
    role: 'user',
    content,
  };

  // Get the current array, create a new one, and set it
  const currentMessages = state.messages();
  state.messages([...currentMessages, newMessage]);
}
```
Here, we’re still using the spread operator to ensure immutability, but the logic is neatly contained. The primary benefit is that we now have a centralized, reactive store that any component can interact with, without needing to pass props down the tree. This simple, clean foundation is what we’ll build on to solve the more complex problems.
Communicating with an LLM is an asynchronous process. After a user sends their message, the application has to wait for the AI to process the request and generate a response. During this waiting period, the UI must provide clear feedback.
A simple `isLoading` boolean often falls short. An AI’s response can stream in word by word, so we need a state that accurately represents this entire “AI is typing” phase. This state is critical for controlling the UI, for example, by disabling the input form to prevent the user from sending more messages while a response is being generated.
We can solve this by adding a simple boolean flag to our store; let’s call it `isStreaming`. This flag will represent the entire period from when the user’s message is sent until the AI’s response is fully received.
First, update the `ChatState` interface in your `src/chatStore.tsx` file to include the new property, and give it an initial value in the `useFluentState` call.
```tsx
// src/chatStore.tsx
// ...
export interface ChatState {
  messages: Message[];
  isStreaming: boolean; // Add the new state property
}
// ...
export function ChatProvider({ children }: { children: React.ReactNode }) {
  const store = useFluentState<ChatState>({
    messages: [],
    isStreaming: false, // Set the initial value
  });
  // ...
}
// ...
```
Next, let’s extend `src/chatActions.ts` to manage our asynchronous logic. The new function will be responsible for setting `isStreaming` to `true` before the request and resetting it to `false` once the operation is complete. Using a `try...finally` block is a robust way to ensure our state is always reset, even if the API call fails.
```ts
// src/chatActions.ts
import { chatStore } from './chatStore';
import { addAssistantMessage } from './messageHelpers';

export async function getAiResponse() {
  // 1. Set the streaming state to true immediately
  chatStore.state.isStreaming(true);

  try {
    const response = await fetch('/llm-api/chat', {
      method: 'POST',
      body: JSON.stringify({ messages: chatStore.state.messages() }),
    });

    if (!response.ok) {
      throw new Error('API call failed');
    }

    const data = await response.json();

    // 2. Add the assistant's response to the message history
    addAssistantMessage(data.message.content);
  } catch (error) {
    console.error('Failed to get AI response:', error);
    // Optionally, we can surface errors in the chat history
    addAssistantMessage('Sorry, I encountered an error.');
  } finally {
    // 3. Always reset the streaming state to false when done
    chatStore.state.isStreaming(false);
  }
}
```
With this pattern, we have a reliable way to track the asynchronous state of the AI’s response. The `isStreaming` flag gives us a single source of truth that we can now use to drive changes throughout our UI.
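The fetch above waits for the complete JSON response. A true token-by-token stream would instead append each chunk to the trailing assistant message as it arrives. The accumulation logic can be kept as a pure, testable function; everything here, names included, is a sketch rather than Fluent-state or server API code:

```typescript
interface Message {
  id: string;
  role: 'user' | 'assistant';
  content: string;
}

// Immutably append a streamed chunk to the trailing assistant message,
// starting a new assistant message on the first chunk.
function appendChunk(messages: Message[], chunk: string): Message[] {
  const last = messages[messages.length - 1];
  if (!last || last.role !== 'assistant') {
    return [...messages, { id: 'streaming', role: 'assistant', content: chunk }];
  }
  // Replace the last message with an extended copy; earlier messages are reused
  return [...messages.slice(0, -1), { ...last, content: last.content + chunk }];
}

// Feeding chunks one at a time, as a streaming loop would:
let history: Message[] = [{ id: '1', role: 'user', content: 'Hi' }];
for (const chunk of ['Hel', 'lo', '!']) {
  history = appendChunk(history, chunk); // with a store: state.messages(appendChunk(...))
}
console.log(history[1].content); // "Hello!"
```

Because `appendChunk` returns a new array on every call, each chunk produces a fresh reference and the message list re-renders as the response grows.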
We now have an `isStreaming` flag in our store. The next step is to use it to orchestrate the UI: while the AI is responding, we need to disable the input form and show a typing indicator.
The common approach in React is to pull the `isStreaming` value into our component and write conditional logic directly in the JSX. This works, but it tightly couples our component’s rendering logic to the raw state. If more UI elements need to react to this flag, the component’s complexity grows, making it less declarative and harder to maintain.
This is a classic case for derived state. The fact that our form is disabled is not a piece of core state; it is derived from the fact that the AI response is streaming. Fluent-state provides an elegant way to handle this with the `compute` hook.
The `compute` hook creates a memoized value that automatically updates only when the state it depends on changes. This allows us to co-locate our logic and keep our components clean.
First, let’s make sure our `ChatProvider` passes the `compute` function through its context. We’ll update our `useChatStore` hook to return the entire tuple from `useFluentState`.
```tsx
// src/chatStore.tsx
import React, { createContext, useContext } from 'react';
import { useFluentState } from 'fluent-state';

// ... (Message and ChatState interfaces)

// The context will now hold the entire tuple
const ChatContext = createContext<ReturnType<typeof useFluentState<ChatState>> | null>(null);

export function ChatProvider({ children }: { children: React.ReactNode }) {
  const store = useFluentState<ChatState>({
    messages: [],
    isStreaming: false,
  });

  return <ChatContext.Provider value={store}>{children}</ChatContext.Provider>;
}

// The hook now returns everything
export function useChatStore() {
  const context = useContext(ChatContext);
  if (!context) {
    throw new Error('useChatStore must be used within a ChatProvider');
  }
  return {
    state: context[0],
    effect: context[1],
    compute: context[2],
  };
}
```
```tsx
// src/components/ChatInputForm.tsx
import React from 'react';
import { useChatStore } from '../chatStore';
import { addUserMessage, getAiResponse } from '../chatActions';

export function ChatInputForm() {
  const { state, compute } = useChatStore();

  // Create a derived value. It will only recompute when `state.isStreaming` changes.
  const isFormDisabled = compute(() => state.isStreaming());

  const handleSubmit = (e: React.FormEvent) => {
    e.preventDefault();
    const form = e.target as HTMLFormElement;
    const prompt = new FormData(form).get('prompt') as string;

    // Add the user's message to the state, then request the AI response
    addUserMessage(prompt);
    getAiResponse();

    form.reset();
  };

  return (
    <form onSubmit={handleSubmit}>
      {/* The disabled attribute is bound to the result of our computed value */}
      <input type="text" name="prompt" disabled={isFormDisabled()} />
      <button type="submit" disabled={isFormDisabled()}>
        Send
      </button>

      {/* We can also use it to conditionally render UI */}
      {isFormDisabled() && <div>AI is typing...</div>}
    </form>
  );
}
```
The component’s code is now much cleaner. It doesn’t contain any conditional logic itself; it just uses the `isFormDisabled()` value. If we ever need to change the logic for when the form is disabled (for example, adding another condition), we only have to update it in one place: the `compute` callback.
Our chat application has another common requirement: automatically scrolling the chat window to the bottom whenever a new message is added. This ensures the user always sees the latest message without having to scroll manually.
This is a classic side effect. The action, scrolling the view, is not part of the component’s rendering logic; it’s an imperative DOM manipulation that needs to happen in response to a state change. The standard way to handle this in React is with a `useEffect` hook that depends on the `messages` array. While this works, it means our component is managing both rendering and imperative DOM logic, and we have to keep the dependency array in sync manually.
Fluent-state provides a dedicated `effect` hook for handling side effects. Its key advantage is automatic dependency tracking: you don’t need to provide a dependency array, because the effect re-runs whenever any piece of fluent state accessed inside its callback changes.
Let’s create a `MessageList` component that is responsible for rendering our chat history and handling the scroll effect.
```tsx
// src/components/MessageList.tsx
import { useRef } from 'react';
import { useChatStore } from '../chatStore';

export function MessageList() {
  const { state, effect } = useChatStore();
  const messages = state.messages();
  const containerRef = useRef<HTMLDivElement>(null);

  // This effect will automatically re-run whenever `state.messages()` changes
  effect(() => {
    // By calling `state.messages()`, we've made it a dependency
    state.messages();

    // Now, run the side effect
    if (containerRef.current) {
      containerRef.current.scrollTop = containerRef.current.scrollHeight;
    }
  });

  return (
    <div ref={containerRef} id="chat-container">
      {messages.map(msg => (
        <div key={msg.id} className={`message ${msg.role}`}>
          {msg.content}
        </div>
      ))}
    </div>
  );
}
```
In this example, the `effect` hook automatically detects that `state.messages()` was called within its callback, so it re-runs the scrolling logic only when the `messages` array changes. We don’t need to manually manage a dependency array like `[messages]`.
This creates a clean separation of concerns: the component’s JSX is responsible for declarative rendering, while the `effect` hook handles the imperative side effect, making our code easier to read and maintain.
We’ve successfully built a functional AI chat interface by breaking down its state management needs into a series of distinct, solvable problems. Tackling each challenge one by one, we addressed an immutable, reactive message history; a reliable `isStreaming` flag for asynchronous responses; derived UI state with `compute`; and automatic side effects with `effect`.
While Fluent-state was the tool we used, the real takeaway is the power of the underlying pattern. By centralizing our logic in a reactive store, we simplified the complex state interactions required by a modern, real-time application. This approach allowed our React components to remain clean and declarative, focusing only on their primary job: rendering the UI. Tools like Fluent-state enable this powerful pattern without imposing heavy boilerplate, making them an excellent choice for the next generation of dynamic web applications.