It's both incredible and deeply unsettling.
But here's what nobody talks about when they discuss AI replacing designers and developers:
The code is maybe 20% of the work.
The other 80%? That's the design thinking. The strategic decisions. The taste. The judgment that comes from 20 years of watching people interact with systems and learning what actually works.
Let me show you what I mean.
The Three-Day Decision.
I spent three full days this week trying to answer one question:
When someone buys the AI Prompts for $19, should they create a password immediately after purchase, or should I email them an invitation to set it up later?
I asked AI to implement both options. It gave me perfectly functional code for both approaches in under a minute.
And then I sat there, staring at two working implementations, with absolutely no idea which one was the right choice.
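To make the two options concrete: the pair of implementations might look something like this minimal Python sketch. The function names, the token scheme, and the hash choice are all my own illustrative assumptions, not the actual code AI produced.

```python
import secrets
import hashlib

def create_account_now(email: str, password: str) -> dict:
    """Option 1: the buyer sets a password immediately after checkout."""
    # Stand-in hash for illustration; a real system would use bcrypt/argon2.
    pw_hash = hashlib.sha256(password.encode()).hexdigest()
    return {"email": email, "password_hash": pw_hash, "status": "active"}

def send_setup_invite(email: str) -> dict:
    """Option 2: email a one-time link so the buyer sets a password later."""
    token = secrets.token_urlsafe(32)  # single-use invitation token
    # A real system would store the token with an expiry and email the link.
    return {"email": email, "invite_token": token, "status": "pending_setup"}
```

Both are trivial to write. Neither answers which one the user should experience.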
Because "right" wasn't about the code working. It was about:
- What makes someone feel confident after spending money
- What reduces confusion when something goes wrong
- What works if they already downloaded the free Documentation System
- What builds trust with someone who's been burned before
- What I'll wish I'd done six months from now when edge cases emerge
AI couldn't answer any of those questions.
Not because the technology isn't good enough. But because these aren't technical questions at all. They're human questions. Design questions. Strategic questions.
And that realization led me to map out every single decision AI couldn't make while building Unrule. Turns out, it's most of them.
Category 1: Information Architecture.
The question: What should people see when they first log in?
AI could build me any dashboard I specified. But it couldn't tell me which one to build.
The options I considered:
Option A: Resource Library First
Shows all available resources immediately. Clear what they have access to. Feels abundant.
Option B: Welcome Message + Guided Entry
Personal greeting. Explains what it is. Onboards gently.
Option C: Recent Activity Feed
Shows what they were last doing. Assumes return visits. Optimizes for continuation, not discovery.
Option D: One Primary CTA
"Continue Reading Documentation System." Single clear next action. Removes decision paralysis.
Each option serves a different goal. Creates a different first impression. Assumes a different mental model.
AI could implement all four in minutes.
But which one makes someone who just created an account feel like they made the right choice?
That required understanding the emotional state of the user at that moment. What they need. What they expect. What will make them feel confident vs. overwhelmed.
After mapping the user journey from "I just had my credit stolen at work" to "I found a tool that might help," I realized: they need to see what they have immediately (Option A), but with just enough context to not feel lost.
So I built a hybrid: Library view with a subtle welcome message and recent activity sidebar. AI couldn't make that call. It required 20 years of watching people interact with systems and knowing what actually reduces friction.
Category 2: Visual Design and Taste.
Here's where it gets really interesting. AI can generate components. It can follow design systems. It can even suggest color palettes based on parameters.
But it can't exercise taste.
The typography decision:
I used a clean serif for "Your Library" in the header. Why serif, not sans-serif?
- Serifs signal permanence, credibility, trustworthiness
- This is a library, not a dashboard
- These are resources you'll reference repeatedly, not disposable content
- The women using this have been dismissed and questioned. They need something that feels genuine and trustworthy
Could I have used a sans-serif? Absolutely. It would have been perfectly functional.
But the serif does emotional work the sans-serif doesn't.
AI can't make that judgment. It doesn't understand the semiotics of typefaces. It doesn't know that serif = authority in a way that matters to someone who's been told she doesn't know what she's talking about.
Category 3: Language and Voice.
Every single piece of microcopy is a design decision.
The navigation labels:
I went with "Your Library" not "Dashboard" or "My Resources" or "Content."
Why "Library"?
- Libraries are safe spaces for learning
- Libraries are quiet and focused
- Libraries are places you return to repeatedly
- Libraries contain resources you trust
"Dashboard" feels corporate. "My Resources" feels transactional. "Content" feels disposable.
"Library" does the emotional and conceptual work I need.
It signals: This is permanent. This is trustworthy. This is yours. AI couldn't make that call because it doesn't understand the connotations and cultural context of words.
What AI Actually Does Well.
I don't want to be dismissive of AI. It's legitimately incredible at:
- Implementing specifications: Give it detailed requirements and it generates working code faster than I could type it
- Boilerplate and scaffolding: Authentication flows, database schemas, CRUD operations
- Standard patterns: Login forms, password resets, email verification
- Refactoring and optimization: It can take working code and make it cleaner
- Documentation: It writes clear documentation for the code it generates
All of that is valuable. It's made me significantly faster at building.
But building fast doesn't matter if you're building the wrong thing.
The Real Skill: Knowing What to Build.
After 20 years in UX and product design, I can tell you:
Implementation was never the bottleneck.
The bottleneck was always:
- Understanding what users actually need (vs. what they say they need)
- Anticipating problems before they happen
- Making a thousand tiny decisions that shape the experience
- Exercising taste and judgment in service of a specific goal
- Balancing business goals, user needs, and technical constraints
- Knowing when to follow patterns and when to break them
Those skills don't come from prompting AI well. They come from watching users struggle and learning from their pain. Shipping products that failed and understanding why. Testing assumptions and being wrong repeatedly. Developing taste through exposure to thousands of examples. Understanding context, culture, psychology, emotion.
AI accelerates execution. It doesn't replace judgment.
What This Means for Designers.
I've seen a lot of panic about AI replacing designers and developers. Here's what I think is actually happening:
The gap between people who can think strategically and people who just execute is getting wider.
Because now anyone can execute quickly. AI democratized implementation.
But strategic thinking? That's more valuable than ever.
The designers who will thrive are the ones who can:
- Make the hard decisions AI can't make
- Exercise taste and judgment at every level
- Understand user psychology deeply
- Think in systems, not just screens
- See patterns across thousands of experiences
- Balance multiple constraints simultaneously
The work is becoming more purely intellectual. Less "can you make this button work" and more "should this button exist, where should it go, what should it say, and why?"
And honestly? That's the work I always wanted to do.
The implementation was never the interesting part.
The thinking was always the interesting part. AI just made that incredibly, unavoidably obvious.
The Multi-Month Reality.
So yes, the code for that checkout flow took 30 seconds.
But the decision about which code to use took three days.
And the hundreds of other decisions like it (typography, color, language, spacing, interaction patterns, error states, empty states, edge cases, brand voice, information architecture, emotional design) took months.
That's the real timeline.
30 seconds of implementation. Months and ultimately years of thinking.
AI made the 30 seconds instant. It didn't touch the months.
And if you're a designer who can do those months of thinking well, if you can make a thousand good decisions that compound into an experience that actually serves users, you're going to be fine.
Better than fine.
Because now that everyone can build fast, the only differentiation is thinking well.
And thinking well? That's still very, very human.
Originally published on Medium