Figma is dead.
You might have heard this in design circles, and the people saying it could be right. Generative AI can prototype in code now. We can skip the design tool. Design and code are the same thing.
But is it that simple?
The old problem
Not long ago, the situation was this:
You design in Figma. Ten high-fidelity screens in five days. They look finished, they might even test well. The design feels done.
Then the engineers start building it.
Components don’t map cleanly to the front-end structure. Responsive behaviour turns out to be implied rather than defined. Constraints emerge. What looked precise in the design tool turns out to be ambiguous in the real medium.
This step — the translation from design to production code — feels clunky. But Figma offers a good balance of speed and control, and it’s the industry standard. Not using it would add its own friction, so we tolerate it.
These problems weren’t random. They came from the way the tool itself understood the web. When you choose a tool, you also inherit its way of thinking.
Tools like Figma don’t work in the medium of the web itself. They approximate it. Figma has its own model of how the web works — a simplified and opinionated one. Typography, colour, components, variables, tokens, auto-layout — these are all parts of that model.
And hidden inside that model are assumptions, shortcuts, and opinions, embedded deliberately or not by the people who built the tool.
Figma wasn’t the web, so eventually the design had to be implemented in code. The problems that appear during implementation are familiar to anyone who has worked on web projects at scale.
For years we described those problems as a medium mismatch.
The thinking was simple: if Figma isn’t the web, then the solution is to design in the web itself.
Prototype in code. Remove the mismatch. Design directly in the target medium. At the time, this explanation felt convincing. But maybe the medium mismatch wasn’t the real issue. The real issue was that the tool had opinions.
Back to now
Large Language Models have arrived. With Claude Code and similar tools you can generate a prototype in code directly. Agentic AI can work in parallel, and it’s fast. No Figma. No accidental adoption of Figma’s model of the web. You’re designing in the actual medium.
You can see why people are saying Figma is dead.
But the problems aren’t solved. They’ve moved — and become harder to see.
Medium mismatch, version two
An LLM-generated prototype looks like it’s in the right medium. It’s code. It opens in a web browser. It runs.
But it’s code shaped by training data: a statistical pattern produced by a model tuned for plausibility over correctness.
You’ve eliminated the Figma-to-code mismatch. But you’ve introduced something worse: the gap between what the model thinks a solution looks like and what your actual problem needs.
The model has opinions about architecture. About how to structure things. About what patterns work. Those opinions are baked into the code it generates. And those opinions come from training data sourced from what already exists.
If your problem fits those patterns, great. You get a fast prototype.
But if your problem needs something genuinely new — something that doesn’t fit what the model has been trained on — the model will tend to paper over that nuance with familiar patterns.
It’s a different kind of mismatch. One you can’t see.
You can’t see it because the code looks right. It’s in the right medium. It works. It has the confidence of something trained on millions of examples. It’s designed to fill gaps silently rather than stop and say ‘I don’t know’.
So you build on top of it. You refine it. By the time you realise how much has already been decided for you, it’s half-built. And now you’re not translating from one tool to another. You’re unpicking baked-in assumptions.
And this is the real danger of fast tools. They move you past the moment where their assumptions should be questioned.
Why this is worse
With Figma, the mismatch was visible. You had to move from one medium to another. That friction forced you to think. Asking ‘Why doesn’t this component translate?’ made you articulate decisions you’d made unconsciously in Figma.
With LLM code, there’s no friction. The code appears to work. The thinking feels done.
But it’s not. The model made decisions for you — decisions about structure, trade-offs, priorities. And because the code works, because it’s already in the target medium, and because it appears finished, you don’t question those decisions until they break.
The problem wasn’t just the medium. The problem was the assumptions embedded in the tool. LLMs don’t remove those assumptions — they hide them inside code that already looks finished.
What this means
Figma probably is dead. But the underlying problem hasn’t gone away.
For years, the friction between design and implementation forced us to confront the assumptions embedded in our tools. When a component didn’t translate, we had to ask why.
LLM-generated prototypes remove that friction. The code runs, it looks finished, and the structure appears to make sense. But the assumptions that shaped it arrived with the tool.
Tools don’t just help you build things. They shape what feels possible. And now the tool doing that shaping is an opaque system trained on what already exists.
A fast tool makes it easy to arrive at something that looks finished long before it’s been properly thought through. The tool shapes the outcome and influences decisions in ways we can’t see. By the time we notice those decisions, changing them isn’t fast.