Let’s take a look at the new AI tool incorporated in the Figma suite: FigJam AI. This AI tool seems to be linked to Diagram, the AI-specialized company acquired by Figma this year.
The icon for FigJam AI is similar to the one used by Figma’s Magician plugin. This suggests that a dedicated team is working on the project and that we can expect many updates in 2024.
FigJam AI is incorporated directly into the UI of FigJam. Based on that, we can predict that Figma might get a similar update and natively include the Magician plugin to generate texts, pictures, and wireframes.
In the previous article, we talked about Jambot, which is a FigJam community widget. Its main purpose is generating ideas from a sticky note written by the user, ideal for learning about a new topic or rewriting an idea.
Meanwhile, FigJam AI has the primary purpose of generating content that you can quickly use to run team exercises.
Designers who want to test this tool will have no difficulties finding it. FigJam opens a window when you create a new file, and it’s accessible at any time in the upper left corner of the page, symbolized by a sparkle:
This assistant takes the form of a floating window with a text field for writing a prompt, along with suggestions to guide curious users, such as creating an organization chart. At the start of a project, FigJam also offers template ideas above the window. This helps us understand the main functionality of the AI: generating a personalized version of a template.
Before designing anything, you’ll need to create lots of complex objects: personas, information architecture, Gantt charts, etc. This is what FigJam AI offers to create for us.
Writing a prompt will generate a complex object made with elements like text, small illustrations, and even widgets. In some cases, FigJam AI will even put the Jambot widget in what we’ve asked it to create. Writing a prompt isn’t always necessary because the tool suggests some of the elements it can handle: brainstorming, organization charts, decision trees, and so on:
One good way to use FigJam AI is to prepare and run workshops. By asking the AI for a brainstorm, we can get a large presentation that follows the different steps of the ideation process. The instructions are written, the sticky notes for each participant are ready to go, and the AI even uses some icons to illustrate the different sections.
Just like with a PowerPoint presentation, some people prefer to make simple designs while others are comfortable personalizing every element. With a more personalized and fun presentation, participants are more willing to engage in the workshop and suggest ideas.
After getting ideas on sticky notes, we can select anything and get the option to sort it, either organizing the sticky notes or adding a text recap of every idea. This uses the AI to find differences and similarities in texts. In one click, the ideas are sorted and the participants can focus on talking about the results of their brainstorm rather than waiting for the organizer to sort it manually.
With workshops, ideation, or one-on-one meetings, FigJam AI helps users focus on their job. Thanks to this AI, any beginner with FigJam can create excellent things.
Because the tool is still in beta, creating workshops seems to be the best use for it now. Other functionalities are available, notably the Gantt calendar and the decision tree, but as we will see, not everything can be made as easily as we wish for now.
FigJam AI, like many other AI tools, is a beta version. Companies are pushing AI tools to see what’s possible to achieve with them and study their use cases. The more we use them, the better they will become. The negative points below aren’t complaints; they’re pain points I hope Figma will fix.
Despite the many suggestions the tool offers (at least eight just by clicking on the text field), most of them generate the same kind of artifact. There is a redundancy in the results that can be annoying. A brainstorm, a one-on-one meeting, and a team retrospective are not the same, so why always get the same results?
The only different results are for decision trees and calendars. For these two elements, the AI is distinct. For the rest, it will generate the same layout with three sections, some text to describe what to do in these sections, and some sticky notes.
Users could turn these similar results into an advantage, but that isn’t possible because of the second problem: results aren’t reproducible.
Although there are similarities between the results of different prompts, using the same prompt twice won’t generate exactly the same thing.
Because of that, it’s important to prepare a workshop in advance, just like one would do with a traditional template. If you do everything spontaneously, you will have to improvise with what the AI made this time or rewrite the prompt until one result is good enough. It can be frustrating when we expect something full of widgets and only get text:
Lastly, widgets aren’t used to their full potential. To its credit, there are possibilities in FigJam that I didn’t know about before using the AI. For example, you can make an interactive calendar thanks to a dedicated widget. Some widgets are here to connect FigJam to other popular tools like GitHub or Jira. I learned this because the AI implemented the widget in one of my tests.
Just like community plugins for Figma, FigJam could expand its possibilities thanks to widgets. Unfortunately, the AI tends to forget about this resource and will only use it once in a while. This could be an occasion to showcase widgets to new users and expand the results of prompts for people knowing what they need:
Even if I can make these criticisms, FigJam AI is still in beta and already usable. This new tool holds great potential and will become better and better over time.
For this use case, I created an ideation workshop for an educational application. Let’s see how FigJam AI can work.
To prepare a brainstorm, I wrote down the following prompt:
Let’s make a brainstorm for an educational app.
I ran this prompt two times. The first time I got two sections with a lot of text and sticky notes prefilled by the AI with assumptions about the users and their needs. Because it was interesting to get these insights, I decided to keep it and use it as a support for debate between the participants.
The second time, I got the traditional three-section result with a bonus icebreaker. This time, I got something better suited for a workshop with a lot of sticky notes prepared for the participants so they could write their ideas:
After some rewriting, I used the AI to sort the sticky notes and immediately saw four themes from the brainstorm, with one much sparser than the others.
To counterbalance the lone sticky note in the “efficiency in finding information” theme, I used Jambot (another AI tool) to generate ideas and got five feature ideas that could address the idea written on the sticky note:
Using FigJam to generate training support materials proved to be efficient in helping people talk about and generate ideas. Using Jambot on the FigJam idea helped to compensate for the underrepresented theme and develop the participant’s idea. I could have done the same kind of workshop to, for instance, create a persona:
From the brainstorming, we can only create a few types of designs. The best one for my brainstorm was a representation of the user journey.
I would have liked to get a traditional user journey map to fill up, but the AI doesn’t offer this kind of artifact; it’s limited today to decision trees, which are suitable for representing user activity.
I used the sticky note from the brainstorming session to write a prompt in FigJam AI and asked to get a decision tree. I tested two ways of doing that: the first one was to write the prompt myself by describing the idea from the sticky note. I got a simple decision tree that was coherent but not very deep; it felt like the AI could do better.
The second way was to ask Jambot to rewrite a sticky note as an action and use the result as a prompt. This time, I got something more complex with parallel actions:
These results are interesting, but users shouldn’t forget that FigJam AI is here to replace templates. Even if it’s personalized it should be used as a baseline, discussed with your team, and iterated on:
The last artifact I created for this workshop was a Gantt chart. It was the final kind of artifact that proved useful to me (an organization chart wasn’t necessary in my case).
To get a good Gantt chart, it’s best to keep the prompt simple. Asking for a “Gantt chart for an educational app” gave me the best result. Because I was vague, the AI started from a blank page and thought through UX, development, QA, and even post-launch.
What’s important in this kind of element is to remember all stages of the design process. It’s easy to forget the QA or the post-launch when managing a project, and it would impact the whole team by adding unplanned work.
It’s best to iterate until the Gantt chart FigJam AI delivers seems complete enough:
I used three kinds of artifacts fully created by FigJam AI:
FigJam opens itself to nondesigners with AI tools like these. The tool is very new and still in beta, and we can only hope for more elements like a user journey map or an information architecture chart. As a tool built in-house by Figma, there is no doubt it will become better over the next months, and it’s worth learning how to use it today.