Some publications cite 1980 as the start of the AI boom, but the period we’re living through today began in the late 2010s. Back then, “AI” was mostly a marketing buzzword companies used to sound innovative. Fast forward to today, and it has become a core strategy for staying competitive in the fast-moving tech industry.
The launch of OpenAI’s ChatGPT in 2022 marked a new era for generative AI. The concept wasn’t new, but this was the first time it reached everyday consumers. More models quickly followed, including Google’s Gemini, Perplexity, Anthropic’s Claude, and Grok, and skepticism gave way to daily use. Today, people rely on AI for art, image editing, video creation, and even code generation.
When AI tools first went mainstream, developers relied on prompt engineering: a methodical approach to guiding LLMs toward the results you want.
But now, it feels like everyone wants to “vibe code” their way to working applications: ignoring the method and process required to work with AI. In just a few months, prompt engineering has gone from a necessity to a “lost art” in some circles.
While vibe coding can feel effortless, it often produces sloppy outputs and highlights the need for deliberate, well-crafted prompts. The broader hype cycle around new models only fuels this problem. But a thoughtful approach rooted in prompt engineering helps developers work more productively and sustainably with AI.
Let’s explore the importance of prompt engineering and some tips for better AI code generation.
Among developers, the newest buzzword is vibe coding.
Andrej Karpathy, an OpenAI cofounder, introduced it in a tweet posted on February 3, 2025. He described vibe coding as a “new kind of coding,” where you “fully give in to vibes” and “forget the code even exists.” In other words, letting the AI handle the work without a formal plan.
Its reception has been explosive. Vibe coding became influential in a remarkably short time; not every trend gets its own Wikipedia page. Some call it game-changing; others say it’s an insecure mess. Either way, mentioning it online is a guaranteed way to get attention.
Somewhere along the way, prompt engineering took a backseat. Remember when that was the buzzword?
Prompt engineering has existed since the start of the AI boom, and the release of ChatGPT turned it into a crucial skill with a tangible career path. Once named one of the hottest AI jobs of 2023 by The Wall Street Journal, it was being dismissed as obsolete just two years later.
Prompt engineering is essentially the practice of crafting inputs that guide a large language model (LLM) toward the best and most accurate outputs. Businesses often use it for chatbots: an engineer identifies prompts that produce appropriate outputs and uses that information to shape how the chatbot responds. Prompt engineering ensures business chatbots can communicate naturally and generate responses with the right tone and style for the context provided by the user.
And when you interact with LLMs directly, prompt engineering allows you to tailor responses, set the tools they should use, and control how far they explore for an answer.
Today, with LLMs becoming more powerful, it’s easy to get decent results, or results that look decent, with less precise prompts. But “decent” is dangerous.
Prompt engineering is the art of carefully planning and crafting prompts to get consistent, high-quality results. It’s about breaking down problems and iterating with intent, not just hoping the AI does the right thing. As a developer, it’s about understanding how the code works.
Sure, it’s no longer a hot job prospect or a trendy topic. But as a skill? Prompt engineering forces you to think better, not just faster. Maybe it’s time to reel in the vibes and embrace critical thinking.
What if prompt engineering made a comeback? We might see less AI slop that goes viral online for the wrong reasons.
And speaking of AI coverage online, there’s too much noise!
Social media is broken, especially X. Why? Clickbait.
Clickbaiting isn’t new, but it’s gotten worse, evolving into persistent ragebaiting from blue-tick grifters farming impressions.
My feed is a decent mix of tech, design, sports, and entertainment. People are tweeting about React, Figma, Photoshop, Canva, CSS, iPhones, football (or soccer), the MCU, TV shows, music artists, and more. AI has found its way into the mix, and every announcement sparks the same exhausting cycle.
A new model is released by any of the big players or other niche companies, and the alarmists, doomsayers, and hype merchants come out:
“Video editors are panicking right now!”
“Beware, editors, AI is coming for your jobs!”
“Hollywood is in serious trouble.”
“GPT-5 just ended the thumbnail industry.”
“It’s over. Google’s new model just killed Photoshop,” and many other variations of “AI is going to take over.”
But when big companies actually use AI, there’s backlash. Coca-Cola’s 2024 Christmas commercial and Marvel’s Secret Invasion intro are prime examples. People accused them of cutting corners and producing “slop.” One post commenting on the intro called it “unethical and dangerous” and designed to “eliminate” the career of real artists.
So “AI is going to take over” until it’s used, and then it’s “AI needs to be stopped” or “AI isn’t ready” afterwards.
This hype cycle does more than just create noise and confusion. It pushes people toward the wrong approach to AI development. When every new model is hyped as revolutionary, with shallow demos suggesting anyone can build anything, it actively encourages vibe coding. Instead of thoughtful engineering, we get this casual, vibe-driven approach that often leads to disappointment.
GPT-5 came out on August 7, and within the first few hours, the internet was flooded with hype. Fast forward a few weeks, and the sentiment has completely flipped. Looking closer, some of the examples shared were code generated with ambiguous prompts dumped into the chatbot.
Take one influencer as an example. On launch day, he praised GPT-5, saying it “does what you tell it to do” and that “no other model behaves this well.” He also added that we should all try it and “watch it cook.” But exactly seven days later, he was calling GPT-5 the “absolute worst” at code generation. His reason? A gradient in a UI created with the model leaked out of its container.
Now, at the bottom of every chatbot (ChatGPT, Claude, or Gemini), there’s a disclaimer stating they can make mistakes and encouraging the user to double-check responses. Anyone who understands CSS could’ve easily fixed the issue with the gradient. The mistake was minor, but it’s been used as proof that GPT-5 is a failure.
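We don’t know the exact markup the influencer generated, but a gradient “leaking” out of its container is usually an unclipped child overflowing the container’s rounded corners, and the fix is a single line of CSS. A hypothetical React sketch:

```tsx
import type { ReactNode } from "react";

// A rounded card whose gradient background stays inside its corners.
// Without overflow: "hidden", the child's gradient can bleed past the
// container's border radius, which is the "leak" in question.
function GradientCard({ children }: { children: ReactNode }) {
  return (
    <div style={{ borderRadius: "12px", overflow: "hidden" }}>
      <div
        style={{
          background: "linear-gradient(135deg, #7f5af0, #2cb67d)",
          padding: "2rem",
        }}
      >
        {children}
      </div>
    </div>
  );
}

export default GradientCard;
```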
So was this really a flaw in the model, our own expectations, or simply a skill issue?
Not long ago, if you ran into a bug you couldn’t fix on your own, you either searched Google, checked Stack Overflow, or found a YouTube tutorial. But today, things are different. Instead of lengthy research, you can often get an answer in seconds.
I’ve found myself trusting solutions from the AI-generated summary at the top of the search results. And I’ve seen developers recommend copying and pasting error messages into an AI chatbot. It’s quick, and in many cases, it works. AI has become a legitimate search alternative, but maybe we’re becoming more dependent on it than we realize.
You almost have to feel sorry for Sam Altman, even though he contributed to the hype around GPT-5 by claiming it would be smarter than him, and that he “doesn’t feel sad about it” because of all the incredible things it could enable.
But when GPT-5 was eventually released, one of the biggest complaints was that it wasn’t as “friendly” as previous models. This forced Altman to release a statement on X, where he addressed an interesting point: people were forming strong attachments to AI models, stronger than with other kinds of tech.
He did admit that “suddenly deprecating old models that users depended on in their workflows was a mistake.” He further explained that, for some, these tools were more than assistants; they were acting as therapists, coaches, and even companions. This is particularly risky for those who struggle to distinguish between fiction and reality.
Sam Altman’s point was that if someone is getting real, long-term value from ChatGPT, that’s great. But if it develops into a relationship that nudges them toward over-reliance and dependency, that’s a problem. OpenAI did say it was tweaking GPT-5 to feel warmer and more approachable.
There’s a widely shared Reddit post where someone describes GPT-4.5 as their “only friend,” one they lost overnight with the release of GPT-5. Yes, this is a unique case, but it highlights the real frustration behind the noise, especially from coders: why isn’t the new model just doing all the work for me?
Vibe coding is on the rise, and many modern tools almost encourage it. LLMs are smarter than ever and appear capable of doing everything. But that comes with the risk of an entire wave of vibe coders who can’t actually code. Maybe this is the moment to stop chasing the hype and make prompt engineering mainstream again.
To understand how prompt engineering works, let’s start with a bad example. In the first few days of GPT-5’s release, when the hype was at its peak and social media was flooded with “insane” demos, one post stood out. This example is from someone calling himself the “God of Prompt.”
In his post, he asked GPT-5 to build a “full AI app from scratch” with just two prompts:
“Build me a beautiful calorie tracker frontend in React,” and “Add backend, payments, database, and make it production-ready.”
Let’s look at the issues in the first prompt. I’m a front-end developer, so I can see immediately that it’s too vague about design. Beauty is subjective, even among humans. Do you mean minimalist? Dark and light mode? What about accessibility? The LLM doesn’t know what exactly constitutes “beautiful.”
Then, there’s no scope. What are the features? Does it need search, login, charts, or progress bars? What about styling? What libraries are allowed? Do you want Tailwind or plain CSS? Do you want TypeScript? State management? Finally, how would the data be structured, and what APIs should it use? Without constraints, the model is guessing and most likely hallucinating.
The second prompt has its own issues. “Backend” is broad. Is it a REST API, GraphQL, or something else? “Payments” is unspecified. There are several processors, like PayPal, Stripe, Paystack, and more. Will it have a subscription model? What about international support and PCI compliance? The “database” is unclear; is it SQL or NoSQL? Cloud-hosted?
And “production-ready” is also weak. Security, CI/CD, error handling, scaling, backup — each one is a project in itself. Bundling all this into one request makes it meaningless.
By his own admission, things got messy after the first prompt, so he used an AI app builder (which he was promoting) to fix the frontend with the second prompt. However, the entire post was misleading, suggesting that anyone could create fully functional applications with ambiguous requests and, by extension, vibes.
Before we proceed, here’s a quick example of how a well-crafted prompt can make a huge difference in the output.
I entered this prompt into Nano Banana: “Blurred photo of a man running.” This was the result:
It generates a generic, blurred image of a man running.
Now, taking inspiration from a few graphic designers online, I rewrote the prompt with more context: “You’re a cinematographer for a top advertising company, taking photos of athletes. Produce blurred photos of a sprinter in motion. It should be a blurred silhouette of a man in a white sports kit, captured with a slow shutter for dynamic motion blur. High contrast, minimalist composition with no logos or text. Neutral background, abstract, and energetic. The editorial style should be reminiscent of Adidas advertising, with an aspect ratio of 3:2.”
This was the result of the second prompt:
Much better. That’s the power of clear communication. And when it comes to code generation, the same principles apply.
With the vibe approach, you might say “Build me a login form in React.” The LLM does exactly that, except it cannot read your mind and therefore has no consideration for your actual needs. As a result, the output will be unpredictable.
When I tried this prompt with ChatGPT, Gemini, and Claude, each built a fairly complex form with Tailwind CSS. Claude and Gemini even added icons from Lucide. However, I prefer plain CSS, and Font Awesome icons if I need icons at all. But the LLM had no way of knowing that.
Now compare that with a more structured prompt: “Build a React login form with email and password fields. Add basic validation: email must be valid, password at least 8 characters. If validation fails, show inline error messages. Use plain CSS and no external libraries.”
This time, the output was more familiar, with less cluttered code I could actually work with. The first prompt is vague and would lead to multiple revisions to clean up and debug the code. Structured prompts get you closer to your desired output, saving you time and effort.
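For reference, here’s roughly what the structured prompt steers the model toward. This is a hand-written sketch of the kind of component I got back, not the literal output (the CSS file is assumed):

```tsx
import { useState, type FormEvent } from "react";
import "./LoginForm.css"; // plain CSS file, assumed to exist in your project

// Login form with email + password, basic validation, and inline errors
export default function LoginForm() {
  const [email, setEmail] = useState("");
  const [password, setPassword] = useState("");
  const [errors, setErrors] = useState<{ email?: string; password?: string }>({});

  function handleSubmit(e: FormEvent) {
    e.preventDefault();
    const next: { email?: string; password?: string } = {};
    if (!/^\S+@\S+\.\S+$/.test(email)) next.email = "Enter a valid email address.";
    if (password.length < 8) next.password = "Password must be at least 8 characters.";
    setErrors(next);
    if (Object.keys(next).length === 0) {
      // Validation passed; submit the credentials to your backend here
    }
  }

  return (
    <form onSubmit={handleSubmit} noValidate>
      <label>
        Email
        <input type="email" value={email} onChange={(e) => setEmail(e.target.value)} />
      </label>
      {errors.email && <p className="error">{errors.email}</p>}

      <label>
        Password
        <input type="password" value={password} onChange={(e) => setPassword(e.target.value)} />
      </label>
      {errors.password && <p className="error">{errors.password}</p>}

      <button type="submit">Log in</button>
    </form>
  );
}
```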
Some of us have a childhood memory of an older relative asking us to fetch something from their purse. The instruction always sounds clear: get me X from Y. But when you get there, you realize it’s not so straightforward. It’s only when you come back empty-handed and they fetch it themselves, with you watching, that you realize how many other variables you didn’t know about.
Prompt engineering works the same way. The issue isn’t the LLM’s intelligence; it just requires you to be a good communicator and developer. You need to tell the model what you want and how you want it so it generates useful code.
Different LLMs have their own prompting guides, and there are several prompt engineering courses online. But if you’re a developer who sees AI as a tool to improve productivity (and not an overworked assistant), here’s how to write better prompts:
Don’t just tell the AI what you want; be specific about what you want the code to do. The model needs to understand the “why” behind your code; don’t let it guess.
You can even give the AI a role. Try telling it what kind of developer you want it to be. Share relevant project details so it doesn’t make wrong assumptions.
So, instead of “Build me a calorie tracking app,” say “I’m building a calorie tracking app for users who want simple, accurate food logging…” Be explicit about what you want and why.
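In a chat UI, you can put the role and context at the top of your prompt. If you’re calling a model through an API instead, they usually belong in the system message. A minimal sketch using OpenAI’s Node SDK (the model name and prompt text are placeholders):

```ts
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

const response = await client.chat.completions.create({
  model: "gpt-4o", // placeholder; use whichever model you have access to
  messages: [
    {
      // The system message sets the role and project context up front
      role: "system",
      content:
        "You are a senior React developer. The project is a calorie tracking app " +
        "for users who want simple, accurate food logging. Use plain CSS and no UI libraries.",
    },
    // The user message then asks for one specific piece of work
    { role: "user", content: "Create the food logging form component." },
  ],
});

console.log(response.choices[0].message.content);
```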
Be upfront about frameworks, libraries, and limits. If you just say “React frontend,” the model will choose its own defaults, which may not align with your preferences.
Instead, be specific. For example: use “Next.js with TypeScript and Tailwind CSS.”
Constraints are important. Be very specific about where you want the LLM to stop. For example, you can instruct the AI to add a login screen to the app but not to implement user authentication or payment features. That way, the model doesn’t run off and build features you didn’t ask for.
There are several things to consider when building an app. By reducing the scope, you’ll prevent feature creep, get faster responses, and see more targeted tool calling.
LLMs can hallucinate if the expected output is too long. Keep your prompts concise.
Think one component at a time; you don’t have to generate a complete app or website at once. Also, if you paste an entire broken file and say “fix this code,” the model might rewrite everything, including working parts, forcing you to go back and forth.
The LLM doesn’t need your entire code file; it can work with a few lines at a time. Give it the part you’re working on, along with enough context to make sense of it.
LLMs will work best with what they know best. That means sticking with popular languages, frameworks, and libraries. GPT-5, for example, recommends React, HTML, Next.js, Tailwind CSS, shadcn/ui, and Redux. These tools are widely used and well-documented.
If you prompt with these tools, the model has more training data to draw from. You’ll get more reliable outputs and spend less time debugging.
This is where your developer skills and expertise matter most. AI may not address things like accessibility, error handling, or performance considerations the way you would.
Iteration is your back-and-forth with the model. The clearer you are about each step, the easier it is to steer it in the right direction, because you have thought about each step beforehand.
There are plenty of other useful techniques you can try when generating code with AI; the prompting guides mentioned earlier are a good place to find them.
Prompt engineering isn’t just about generating more accurate outputs; it can save you time and money.
LLMs process tokens to generate responses. Tokens are units or parts of words that a model can process, and there’s a maximum limit for input and output.
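If you use a model through an API, you can see this directly: each response reports how many tokens the prompt and the reply consumed, and you can cap the output length per request. A sketch with OpenAI’s Node SDK (model name and counts are illustrative):

```ts
import OpenAI from "openai";

const client = new OpenAI();

const response = await client.chat.completions.create({
  model: "gpt-4o", // placeholder model name
  max_tokens: 300, // hard cap on the number of output tokens for this request
  messages: [
    { role: "user", content: "Explain what a React hook is in two sentences." },
  ],
});

// The usage field breaks down token consumption for the request
console.log(response.usage);
// e.g. { prompt_tokens: 18, completion_tokens: 42, total_tokens: 60 } (illustrative)
```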
LLMs work with the patterns in the data they trained on through the process of token probability. When you enter a prompt, the model doesn’t “know” the answer beforehand. When generating an output, it looks at your prompt and everything written so far, then predicts what comes next. This has been described as autocomplete on steroids.
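To make “autocomplete on steroids” concrete, here’s a toy next-word predictor in TypeScript. It only counts which word follows which in a tiny corpus, whereas real models predict over subword tokens with billions of learned parameters, but the predict-the-next-unit loop is the same basic idea:

```ts
// Toy bigram "language model": count which word follows which,
// then greedily predict the most frequent follower.
const corpus = "the cat sat on the mat the cat ran".split(" ");

const followers = new Map<string, Map<string, number>>();
for (let i = 0; i < corpus.length - 1; i++) {
  const [cur, next] = [corpus[i], corpus[i + 1]];
  const counts = followers.get(cur) ?? new Map<string, number>();
  counts.set(next, (counts.get(next) ?? 0) + 1);
  followers.set(cur, counts);
}

function predictNext(word: string): string | undefined {
  const counts = followers.get(word);
  if (!counts) return undefined;
  // Greedy decoding: pick the follower seen most often
  return [...counts.entries()].sort((a, b) => b[1] - a[1])[0][0];
}

console.log(predictNext("the")); // "cat": it follows "the" twice, "mat" only once
```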
If your input is vague or ambiguous, the model has little to narrow it down and pulls from a huge range of possibilities and patterns when generating the output. And since every request uses tokens, you might quickly reach the limit and either have to wait for it to reset or pay for an upgrade.
That’s why intentional, meticulous prompts are much better than vibes.
The conversation around AI is always shifting. Right now, the buzz is vibe coding. Next, it could be AGI (artificial general intelligence), which is already gaining traction. But before we get carried away, we should pause and ask: is that really what we want? Do we need AI models to think exactly like us?
Today’s AI tools are already powerful, and when combined with the technologies we know, they can make us far more productive. But they work best as assistants that extend our abilities, not as replacements.
That’s why prompt engineering deserves a comeback. Clear, structured prompts aren’t just about better outputs; they save time, energy, and money. Instead of handing everything over to “vibes,” we should focus on guiding LLMs with intent. Your skill matters; use it to direct the AI so it builds with you, not for you.
Maybe the future of AI isn’t about creating machines that think like humans. Maybe it’s about helping humans think better with AI.