Hardly a week goes by these days without some AI meme trending on social media or lighting up the blogosphere. A few weeks ago, it was the ‘Studio Ghibli’ and ‘vibe coding’ trends.
As I write this, a new one is being reported — the Barbiecore “doll” trend:
We’re also seeing a surge in design-specific AI tools, like Figma plugins or Visily, which can create art and wireframes in minutes.
So, are we at the dawn of a technological revolution that’ll put designers out to pasture the way horses were eliminated by steam and coal — or is this all just overblown hype?
In this post, I try to make sense of AI's challenge to design. I hope to cover both sides of the argument: AI's ability to transform design, what AI cannot replace in UX design, and some practices that use AI to empower designers.
Historically, design and code required privilege: formal education, access to tools, and navigating gatekeepers. Now, with AI art tools — just look at the Studio Ghibli trend — anyone can produce stunning work that used to take an entire animation studio.
I’ll illustrate the point further. This year’s Oscar for Best Animated Feature was won by an indie animator, Gints Zilbalodis, using open-source software like Blender. In my opinion, irrespective of the other issues, there is no doubt that vast, cheap processing power is becoming available to indie producers, artists, and developers on an unprecedented scale.
This, coupled with open-source, video-based learning resources, means that anyone, anywhere on the planet, can teach themselves how to be a designer.
Vibe coding is having a moment. Andrej Karpathy coined the term on February 6th, 2025. He wrote:
There’s a new kind of coding I call ‘vibe coding’, where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. It’s possible because the LLMs (e.g. Cursor Composer w Sonnet) are getting too good. Also I just talk to Composer with SuperWhisper so I barely even touch the keyboard.
This is an increasingly popular alternative to traditional software development, where steps are sequential: planning, system architecture, then manually coding each part. The vibe coding flow, by contrast, is about describing what you want, letting AI generate most of it, and tweaking as you go.
So, if AI can generate visuals, it’s not a stretch to imagine it wireframing or prototyping your next app as well.
One of the promises of technology is freedom from mind-numbingly boring tasks. AI tools like Canva, Adobe Sensei, and MidJourney can create basic design elements such as logos, social media posts, and even website layouts.
In the context of Japanese anime, which is based on recombining standardized art styles in stunningly creative ways to serve story and genre innovations, AI is being incorporated into the production pipeline to facilitate character style iteration and smooth animation. According to Hiroshi Kawakami, AI helped cut a character styling task from a week down to five hours. As he puts it:
We always make sure a human checks, adds to, or retouches the work. As creators, we don’t want to rely too heavily on generative AI. However, we believe AI can save significant time. We can use that time for more creative things.
AI won’t kill design, but it will create a new kind of designer: a super-category of designers who are more valued in the job market because of their proficiency with AI tools.
For example, the smartphone revolution has created a template where an average content creator can photograph, illustrate, edit images, write descriptive copy, present pieces to camera, and edit video. In the past, that was the work of an entire building of people, if not a studio lot. Productivity increases enabled by technology mean that this work is now often done by a much smaller team. Yet the rise of smartphone photography has not eliminated professional photographers or camera operators. Rather, it has changed how professionals in these industries work.
The same will be true here. Designers who are fluent in AI tools may become more valuable — not less — because they’ll be able to produce more, faster, and smarter.
Someone who opposes technology is generally referred to as a Luddite. Despite their modern reputation, the original Luddites were neither opposed to technology nor inept at using it. Many were highly skilled machine operators in the textile industry. Nor was the technology they attacked particularly new.
The point is that industry employment practices are complex and multifaceted. They depend on many factors, including the state of the national economy and the culture of a corporation.
Artists are well within their rights to oppose business practices that they feel are predatory or devalue the nature of their work.
Even before generative AI flooded the internet, Hayao Miyazaki condemned AI-generated animation, calling it “an insult to life itself.”
Designers might fare better than voice actors, who — like those in the French gaming industry — have had to unionize to fight digitization. Where labor protections are weak, the risks are real. Multiple lawsuits are already challenging AI models for violating copyright law.
This underscores a key truth: human-made art still holds legal and economic value.
Getting text and image to work together isn’t new — it dates back to medieval illuminated manuscripts. The shift from ink to pixel didn’t change the fundamentals. We still use fonts, rules of composition, and color theory that are centuries old.
Design rooted in human craft resonates. As Eric Gill wrote in 1931, handicrafts persist because they meet an “inherent, indestructible, permanent need in human nature.”
So far, especially in the case of vibe coding, the biggest beneficiaries of using AI to code have been high-level, advanced coders. It remains to be seen whether a complete newbie can develop and debug a sophisticated app without prior knowledge of code fundamentals.
As one blogger points out:
If LLM wrote the code for you, and you then reviewed it, tested it thoroughly and made sure you could explain how it works to someone else that’s not vibe coding, it’s software development. The usage of an LLM to support that activity is immaterial.
In other words — understanding fundamentals still matters.
Treating LLMs like chat buddies might be fun, but it’s not professional. At a very basic level, UX designers should understand some aspects of prompt engineering — understanding the technical parameters of LLMs, building with them, and mitigating biases.
Some knowledge of how to tailor prompts using technical shortcuts or other search parameters, and using guiding principles (such as avoiding straightforward plagiarism) is good professional practice.
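One way to make that practice concrete is to stop treating prompts as ad-hoc chat messages and start treating them as structured templates with an explicit role, context, and output contract. The sketch below is a minimal illustration of that idea in Python; the specific wording and field names are my own assumptions, not an industry standard.

```python
# A minimal sketch of a structured prompt template for a UX research task.
# The role, constraints, and output format are stated explicitly rather than
# left to a conversational back-and-forth. All wording here is illustrative.

def build_prompt(task: str, context: str, output_format: str) -> str:
    """Assemble a prompt with an explicit role, context, and output contract."""
    return (
        "You are a UX research assistant.\n"
        f"Task: {task}\n"
        f"Context: {context}\n"
        f"Respond only in this format: {output_format}\n"
        "Do not reproduce copyrighted material verbatim."
    )

prompt = build_prompt(
    task="Summarize the top three usability issues",
    context="Session recordings from the checkout flow, March 2025",
    output_format="a numbered list with one sentence per issue",
)
print(prompt)
```

Because the template is plain code, it can be versioned, reviewed, and reused across a team, which is much closer to professional practice than improvising in a chat window.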
We are still a long way from using AI to generate a finished design outright. While AI tools are great for generating art, they are not yet suited to UI design. That doesn’t mean you can’t use them for inspiration and iteration. AI is a co-pilot, not an autopilot.
The key to unlocking the full automation power of AI is to start thinking about design systems as modular, self-referential, and component-oriented. AI can help manage sources of truth for values, tokens, and libraries, and even generate components, freeing the designer to do what they do best: design!
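To see why a token-based system is so automatable, consider this toy sketch: components never hard-code values, they reference named tokens, so one change to the token table propagates everywhere. The token names and values below are made up for illustration; real systems typically express this in JSON or platform-specific formats.

```python
# A toy design-token table acting as a single source of truth.
# Token names and values are hypothetical.
TOKENS = {
    "color.primary": "#1a73e8",
    "color.surface": "#ffffff",
    "spacing.sm": "8px",
    "spacing.md": "16px",
    "radius.card": "12px",
}

def resolve(component_style: dict) -> dict:
    """Replace token references with concrete values; pass through anything else."""
    return {prop: TOKENS.get(value, value) for prop, value in component_style.items()}

# A component declares intent via tokens, not raw values:
card = resolve({
    "background": "color.surface",
    "padding": "spacing.md",
    "border-radius": "radius.card",
})
print(card)
```

Because the mapping is this mechanical, it is exactly the kind of bookkeeping an AI tool can maintain or generate, while the designer decides what the tokens should mean.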
There is a real danger of generating too much data to be effectively monitored. AI can help sift through this and quickly return actionable information that can be applied in user experience research.
In UX design, machine learning algorithms and LLMs can help surface impactful and actionable insights. This can help identify severe technical issues such as errors, failed network requests, and error states; usability issues leading to user struggle, such as rage clicks, dead clicks, and frustrating network requests; and issues causing users to drop out of funnels and key workflows.
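A signal like rage clicks is simpler than it sounds: at its core it is just a burst of repeated clicks on the same element within a short window. Here is a toy detector along those lines; the thresholds (three clicks within one second) are my own assumptions, not any product’s actual heuristic.

```python
# A toy rage-click detector: flag elements that receive a burst of clicks
# within a short time window. Thresholds are illustrative assumptions.

def rage_clicks(events, min_clicks=3, window=1.0):
    """events: list of (timestamp_seconds, element_id), sorted by timestamp."""
    flagged = set()
    for i, (t0, elem) in enumerate(events):
        # Count clicks on the same element within `window` seconds of this one
        burst = [e for e in events[i:] if e[1] == elem and e[0] - t0 <= window]
        if len(burst) >= min_clicks:
            flagged.add(elem)
    return flagged

events = [(0.0, "buy-btn"), (0.3, "buy-btn"), (0.6, "buy-btn"), (5.0, "nav")]
print(rage_clicks(events))  # {'buy-btn'}
```

Real analytics tools layer far more context on top (dead clicks, error states, funnel position), but the point stands: these "insights" are computable signals, and AI’s contribution is sifting them out of enormous event streams at scale.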
The obvious conclusion is that while AI is unlikely to replace human designers, the substantial investment in the technology will undoubtedly result in some transformative changes in the design industry. AI could empower designers to be more creative, efficient, and impactful by providing them with powerful new tools and insights.
To keep up, we need to boost our AI literacy. That means understanding the tools, the ethics, and the human context. If we do that, we won’t just survive the AI wave — we’ll ride it.
To quote Greta Gerwig’s Barbie:
We always have to be extraordinary, but somehow we’re always doing it wrong.
Maybe the future of AI in design lies in finally finding a way to do it right.