ProductiveTechTalk - AI, Development Tools, and Productivity Blog

If You Don’t Design With AI Like This, You’re Already Behind

Kim Jongwook · 2026-04-27

Meta description: Learn the exact AI-native workflow that turned an engineer into a “good enough” designer in 1 hour using Pencil, Claude, and Gemini.

Related: AI Software Development in 2026 | Complete Guide

Related: Claude Design Is Quietly Killing Old Design Workflows

Related: AI Productivity Paradox Exposes Your Dev Metrics Lie

Related: AI Emotional Intelligence: Blake Lemoine’s Radical View

Related: AI Development Workflow: 12 Lessons for 2026 | Guide

TL;DR

  • An engineer used only AI tools to design a full clipboard app UI and landing page in 1 hour.
  • Pencil’s multi-agent workflow makes parallel design iteration feel like working with multiple junior designers.
  • AI defaults look generic unless you feed strong visual references and very specific prompts.
  • Designing in AI tools first is far cheaper and faster than jumping straight into code.
  • A repeatable AI-native workflow emerges: deep research → mood board → PRD → design system → screen iteration.

The idea is simple: take a non-designer engineer, give them Claude Code, Pencil AI, Gemini, some good references, and exactly one hour. Then see how far an AI-native workflow can actually go.

In this first “Build It With AI” episode, the creator tackled a clipboard manager app called Clippy: mobile UI, basic design system, and a full marketing landing page. The result wasn’t design-award material. But it was shockingly usable for something built entirely by AI agents.

I’ve run similar experiments, and the pattern is consistent: AI gets you 70–80% of the way there — fast — but only if you drive it with sharp prompts, real visual references, and a structured process.

Quick overview

  • Start with deep research on the product, market, and competitors, compiled as Markdown.
  • Build a mood board from Pinterest, Dribbble, and 21st.dev to define style and tone.
  • Ask an LLM like Gemini to draft a PRD, then refine it with requirements and SwiftUI constraints.
  • Feed the PRD and references into Pencil to auto-generate a design system and core screens.
  • Use Pencil’s multi-agent setup to iterate separate screens and components in parallel.
  • Design the marketing page in parallel, pointing AI to 21st.dev for higher-quality layouts.
  • Continuously fight “generic AI design” by iterating, critiquing, and pushing past defaults.

At-a-glance summary

Question | Quick answer
What is an AI-native design workflow? | A start-to-finish design process where AI tools do the primary work.
Can non-designers ship UI with AI in 1 hour? | Yes, if they use structured prompts and strong references.
Why use Pencil instead of coding UI directly? | Design iteration in Pencil is far cheaper and faster than code.
How do you avoid generic AI-looking designs? | Provide concrete visual references and specific style constraints.
What role do multi-agents play? | They let multiple screens evolve in parallel like a small design team.
Does this replace professional designers? | No, it mainly lowers the bar for solo founders and engineers.

Key comparisons at a glance

Option/Concept | Best for | Biggest benefit | Main drawback
Claude Code | Engineers prototyping apps | Strong coding + reasoning in one place | Needs clear instructions and context
Pencil AI | UI/UX design with AI | Fast UI iteration and layout generation | Occasional visual bugs, clunky agent management
Gemini | Text + image generation | Drafting PRDs and mock images | Initial outputs often messy, need iteration
21st.dev | Web UI references | High-end component and layout inspiration | Not an end-to-end tool, just reference
Manual design (Figma alone) | Professional designers | Full creative control, pixel-perfect detail | Much slower and higher skill requirement

What is the “Build It With AI” challenge and why does it matter?

The “Build It With AI” challenge is a time-boxed experiment that tests whether AI tools can fully perform a specific job function within one hour. In this first episode, the role under test is “designer” for a real consumer app and its marketing page.

The rules are brutally simple: set a 60-minute timer, use only AI tools, attempt the actual work end-to-end. Perfection isn’t the goal — exposing practical capabilities and limitations is.

“The goal isn’t a perfect result; it’s to see how far AI tools can realistically get you as a non-designer.”

The series plans to extend this to roles like data scientist, product manager, and finance analyst, swapping in different tool stacks along the way. What makes this format useful — and why polished case studies rarely are — is that the creator openly criticizes AI’s weak points as they appear. You see the failures in real time.

Tip: If you want to run this yourself, keep the hard one-hour constraint. It forces prioritization over pixel perfection, which is exactly the point.


How did deep research shape the Clippy app concept?

Deep research is a structured AI-assisted process for gathering background context before any design work begins. For Clippy, this stage covered clipboard management apps, user pain points, and typical feature sets.

The inspiration was Paste, a paid clipboard manager, but the goal was something more accessible. Core concept: store copied items longer, sync across devices, and make re-using clipped content feel pleasant rather than merely functional.

Step | Purpose | Output
Market scan | Understand competitors and gaps | Feature list, pricing, UX notes
User problems | Identify pain points | Common frustrations and desires
Feature definition | Decide app scope | List of must-have screens and flows
Context export | Feed AI tools | Markdown dossier used as PRD input

The research landed in Markdown and got pasted directly into AI design tools as context. That move mattered more than it sounds. It let tools like Pencil and Claude understand what they were designing — not just how it should look.

In practice, treating this research like a proto-PRD noticeably improves downstream AI decisions. It mirrors real product teams, where clear requirements anchor design and engineering from day one.
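As a concrete illustration, a proto-PRD dossier of this kind might be structured like the following Markdown skeleton. The section names and bullet contents here are hypothetical, reconstructed from the stages described above rather than taken from the creator's actual file:

```markdown
# Clippy — Research Dossier (proto-PRD)

## Market scan
- Competitors: Paste (paid clipboard manager) and similar tools
- Gap: a more accessible option with longer clipboard history

## User problems
- Copied items vanish before they can be reused
- No sync of clipboard content across devices

## Feature scope
- Clipboard list, pinned items, search
- Rich rendering for links, images, and code snippets

## Constraints
- Mobile-first, components must be buildable in SwiftUI
```

Pasting a file like this into the design tool's chat gives every subsequent prompt shared product context without re-explaining the app each time.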


How does a mood board and reference strategy supercharge AI design?

A mood board is a curated set of visual references that define the style, colors, and overall feel for a design project. For Clippy, the mood board leaned toward a “sketch aesthetic” — hand-drawn, slightly imperfect, reminiscent of Notion-style illustrations.

References came from Pinterest, Dribbble, and 21st.dev, chosen to capture that vibe while keeping implementation realistic in SwiftUI.

Source | Who it’s for | Key benefit | Main drawback
Pinterest | Broad visual concepts | Huge variety of styles | Hard to filter for app-appropriate UI
Dribbble | UI/UX inspiration | High-polish app shots | Can be unrealistically complex
21st.dev | Web & product UI | Production-grade component patterns | Focused on web more than mobile

One constraint surfaced fast: some sketchy elements looked great in references but would be genuinely painful to implement responsively — irregular rounded borders that require custom SVGs, for instance. Catching that early kept the design from drifting into unbuildable territory.

Color came together organically. Bold yellow, black, and gray — office stationery energy, highlighter vibes. The creator even polled viewers on what color they associate with “copy,” and landed on gray + yellow as the combo that fit the brand metaphor.

“Good taste still matters. AI will happily generate unbuildable, over-stylized UI unless you keep one eye on implementation reality.”

Warning: Skip the mood board and AI tools default to bland. They need visual anchors, or they’ll give you something you’ve seen a hundred times.


How was Pencil AI used as a Figma replacement for rapid UI?

Pencil AI is an AI-native design tool that auto-generates and edits UI layouts, built to replace tools like Figma in early-stage design work. It connects to Claude Code via Model Context Protocol (MCP), enabling chat-driven creation and rearrangement of UI components.
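For context, Claude Code discovers MCP servers through a JSON configuration file. A Pencil entry would look roughly like the sketch below — the server name, command, and arguments are placeholders, since Pencil's actual server invocation isn't shown in the video:

```json
{
  "mcpServers": {
    "pencil": {
      "command": "npx",
      "args": ["-y", "pencil-mcp-server"]
    }
  }
}
```

Once registered this way, the design tool's operations become callable from the Claude Code chat, which is what enables the chat-driven component editing described here.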

The standout feature here was Pencil’s multi-agent parallel workflow — which made it feel less like a single tool and more like a small team.

Tool | Best for | Main benefit | Main drawback | Ideal user
Pencil AI | Fast UI design | Multi-agent parallel editing | Visual bugs, agent misalignment | Non-designers prototyping apps
Figma | Detailed design systems | Precision and collaboration | Slower, higher skill bar | Professional designers
Code-only UI (SwiftUI) | Final implementation | Full control, performance | Slowest to iterate | Senior engineers

Rather than one long chat, the creator opened multiple Pencil sessions simultaneously:

  • One agent refining the Search view
  • Another fixing alignment in the Pinned view
  • A third building the marketing page mood board

That parallelism let a one-hour window cover far more surface area than any single-agent approach could.
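The parallelism can be pictured as dispatching independent tasks concurrently. The toy Python sketch below makes the idea concrete — `run_agent` is a stand-in, since real Pencil agents are interactive chat sessions, not function calls:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for one Pencil agent session working a task.
def run_agent(task: str) -> str:
    return f"done: {task}"

# Independent design tasks, one per agent, mirroring the sessions above.
tasks = [
    "Refine the Search view layout",
    "Fix alignment in the Pinned view",
    "Assemble the marketing-page mood board",
]

# Dispatch all tasks concurrently; results come back in task order.
with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
    results = list(pool.map(run_agent, tasks))

for r in results:
    print(r)
```

The key property is that the tasks don't share state, so no agent blocks another — which is also why the approach breaks down when two agents touch the same layout, as noted below.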

The experience wasn’t seamless, though. Several problems showed up:

  • Visual bugs when multiple agents touched the same layout
  • Agents ignoring initial instructions and needing full restarts
  • Clunky UI for managing several active sessions at once

Even so, the verdict was clear: iterating in Pencil was “much faster and cheaper than coding directly,” especially for prototyping. That trade-off matches what I’ve seen in my own work — design in AI-first tools, translate to code once the direction is stable.

“The reason you want to do all of this in Pencil instead of code is because it’s more expensive and slower to do it in code.”


How did PRD writing and design system setup guide the AI?

A PRD (Product Requirements Document) describes app screens, behaviors, and constraints — and in this workflow, it became the primary context fed to AI tools. Gemini drafted the initial version using deep research results and generated images as inputs.

The Clippy PRD defined:

  • Clipboard list views and pinned items
  • Search view and navigation bar
  • Code snippet rendering
  • Rich rendering for images and links

What problems showed up in the first design system?

The initial AI-generated design system was inconsistent. Tabs differed between screens — circular in some, rectangular in others — and the left-hand highlight bar for pinned items had mismatched corner radii across views.

The fix was a targeted prompt asking the AI to:

  • Research color theory and current iOS design conventions
  • Rebuild the design system with those guidelines in mind

That single tweak — “go research X, then redesign” — improved consistency noticeably and is a pattern worth keeping when tools have web access.
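That pattern is easy to package as a reusable prompt template. A minimal Python sketch follows — the wording is illustrative, not the creator's exact prompt:

```python
# "Research first, then redesign" prompt template, following the
# pattern described above (topic and target are filled in per task).
def research_then_redesign(topic: str, target: str) -> str:
    return (
        f"First, research {topic}. "
        f"Then rebuild the {target} applying what you found, "
        "and list which guidelines you applied."
    )

prompt = research_then_redesign(
    "color theory and current iOS design conventions",
    "design system",
)
print(prompt)
```

Asking the model to list which guidelines it applied is a cheap way to verify the research step actually influenced the output.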

Tip: When targeting Apple platforms, always ask the AI to follow “modern iOS/HIG” or “modern SwiftUI best practices.” Apple’s Human Interface Guidelines live here: https://developer.apple.com/design/human-interface-guidelines/

SwiftUI compatibility shaped decisions throughout. The creator repeatedly steered Pencil toward modern SwiftUI best practices, balancing the sketch aesthetic with components that could be built without workarounds.

One practical detail worth stealing: they asked the AI to show color tokens as visual swatches, not just hex codes. It makes systems faster to read and critique — something that sounds minor until you’re doing visual review at speed.
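The swatch idea generalizes to any token set. Here is a minimal sketch that renders design tokens as HTML swatches for quick visual review — the token names and hex values are made up, not Clippy's actual palette:

```python
# Hypothetical color tokens; real ones would come from the design system.
tokens = {
    "brand-yellow": "#F5C518",
    "ink-black": "#111111",
    "paper-gray": "#E5E5E0",
}

# Emit a colored square next to each token name so reviewers see the
# color itself, not just its hex code.
def swatch(name: str, hex_code: str) -> str:
    return (
        f'<div style="background:{hex_code};width:48px;height:48px"></div>'
        f"<code>{name} {hex_code}</code>"
    )

html = "\n".join(swatch(n, h) for n, h in tokens.items())
print(html)
```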


How was 21st.dev used to level up the marketing page design?

21st.dev is a curated gallery of high-quality web UI components that serves as reference material for designing polished interfaces. In this project, it acted like a creative upgrade for every section the AI touched.

The creator explicitly told AI tools to reference 21st.dev when generating sections for the landing page — hero areas, feature grids, and decorative touches.

Option | Best for | Biggest benefit | Main drawback
No reference | Fast but generic sites | Quick draft structure | Extremely generic layouts
Basic templates | Simple marketing pages | Easy to launch | Little differentiation
21st.dev-inspired | Polished landing pages | More creative, unique visuals | Requires extra prompt work

The difference was visible immediately. Pages built with 21st.dev as a reference had better composition and details — blur effects, considered spacing — that felt closer to something you’d actually ship.

“Using 21st.dev really lets the AI be more creative… it makes the websites look more polished and more unique.”

Marketing page work ran in parallel with app UI design. While one chat handled research and copy via ChatGPT, another handled mood board and layout in Pencil. That’s where the AI-native approach really earns its keep — context-switching that fast would be brutal for a human working alone.

For anyone building similar workflows, pairing design tools with component libraries like 21st.dev or shadcn/ui (https://ui.shadcn.com/) consistently raises the output floor.


How can you fight the “generic AI design” problem?

The “generic AI design” problem is the tendency for AI tools to produce safe, cookie-cutter layouts that look like every other app. The creator named this the biggest limitation of AI design right now — and it’s hard to disagree.

Left to their defaults, tools like Claude, Pencil, and ChatGPT gravitate toward safe typography, familiar spacing, and standard components. The result is “things you will see in almost every app.”

Three strategies pushed past it:

  • Provide specific visual references from Pinterest and Dribbble directly in the prompt
  • Lock in a distinct style language (like the sketch aesthetic) and repeat it consistently
  • Force the AI to reference high-quality component libraries like 21st.dev

“If you just ask it to make something, it will use the most basic default-looking apps.”

The creator also spotted recognizable AI fingerprints in generated code — tiny separator bars scattered everywhere. They joked these are the new em dashes: you can tell AI wrote it just by looking.

Tip: When you spot these patterns, name them in your next prompt:
“Remove unnecessary separators and generic patterns that look like AI defaults. Make the layout more opinionated.”

In my own testing, that kind of critique prompt — naming the pattern explicitly and rejecting it — produces noticeably better next iterations.


What did the 1-hour AI-native design challenge actually produce?

The 1-hour challenge result is a near-complete design package for the Clippy app and its marketing page, including:

  • Core app screens: clipboard list, pinned view, search, copy interaction, navigation
  • An initial design system for typography, colors, and components
  • A mood board capturing the sketch aesthetic and brand feel
  • A marketing landing page with details like blur effects that felt close to launch-ready

Deliverable | Status | Strength | Needs work
App core screens | Mostly done | Clear flows, usable layouts | Visual polish, consistency
Design system | Draft | Tokens and components defined | Color harmony, corner radius alignment
Mood board | Solid | Clear visual direction | None, just more examples
Marketing page | Launchable starter | Polished sections, good hero | Brand voice refinement

The honest verdict: “almost there, but not perfect.” Color consistency, pinned content accent bars, and code snippet rendering all needed another pass.

Toward the end, the creator started seriously considering a dark mode direction after finding a compelling reference image — and floated gold as an accent color. That’s something worth noticing: fast AI prototyping surfaces new directions you wouldn’t have found if you’d started in code.

The conclusion was direct: an engineer using tools like Pencil can go very far without a professional designer, especially for early prototypes.

“I think you could get really far using something like Pencil or even Figma to design your own apps. As long as you have an idea and you can articulate it, you could get really far.”

AI doesn’t replace designers. But for solo founders and engineers, it’s a surprisingly powerful head start.


What is the most effective AI design workflow learned from this experiment?

An AI design workflow is a structured sequence of steps where AI tools handle the bulk of design work under human direction. The most efficient pattern from this challenge:

  1. Deep research → build a knowledge base about the product and market.
  2. Mood board → define the style using curated references.
  3. PRD drafting → have an LLM formalize requirements.
  4. Design system creation → generate colors, typography, components.
  5. Screen iteration → refine each view with tight feedback loops.

Stage | Primary AI tool | Human role | Key risk
Deep research | Claude / ChatGPT / Gemini | Curate and skim | Info overload
Mood board | Browser + AI | Select style | Unbuildable visuals
PRD | Gemini / Claude | Add constraints | Missing edge cases
Design system | Pencil | Critique consistency | Generic look
Screens | Pencil multi-agents | Orchestrate agents | Fragmented designs
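The five stages can be sketched as an explicit pipeline. Each function below is a placeholder for a human-directed AI session, not real automation — the point is only that each stage enriches a shared context that the next stage consumes:

```python
# Placeholder stage functions; each one stands in for an AI session
# whose output is added to the accumulated project context.
def deep_research(idea): return {"idea": idea, "research": "market + competitor notes"}
def mood_board(ctx): return {**ctx, "style": "sketch aesthetic"}
def draft_prd(ctx): return {**ctx, "prd": "screens, behaviors, SwiftUI constraints"}
def design_system(ctx): return {**ctx, "tokens": ["color", "type", "components"]}
def iterate_screens(ctx): return {**ctx, "screens": ["list", "pinned", "search"]}

stages = [deep_research, mood_board, draft_prd, design_system, iterate_screens]

# Run the pipeline: the context grows at every stage.
state = "clipboard manager app"
for stage in stages:
    state = stage(state)

print(sorted(state.keys()))
```

Modeling it this way makes the dependency explicit: skipping an early stage (research, mood board) leaves later stages without the context that keeps AI output specific rather than generic.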

Multi-agent parallel work was the biggest accelerator. Multiple Pencil sessions running simultaneously acted like junior designers working in parallel — while the human played orchestrator, checking consistency and merging the pieces.

“Multi-agent workflows are like having several junior designers working at once, but you still need to manage them.”

Code came last — and deliberately so. The Xcode project stayed at boilerplate throughout, with finished designs destined for Claude Code after the design pass stabilized. Moving into code too early locks in decisions before you’ve explored enough, and makes experimentation expensive.

The quality ceiling was set by the quality of reference material. Better inputs — Pinterest, Dribbble, 21st.dev — consistently raised the floor of AI output, especially when paired with explicit instructions to consult things like SwiftUI best practices and color theory before generating.

For deeper design-system thinking, Google’s Material Design docs are a useful companion: https://m3.material.io/


Frequently Asked Questions

Q: Can a non-designer really design a usable app UI in 1 hour with AI tools?

A: Yes, with caveats. This challenge produced a near-complete app UI and landing page using only AI tools. Good enough for a prototype — though still below professional designer quality in areas like color balance and layout nuance.

Q: Why is designing in Pencil or Figma before coding so important?

A: Changing layouts, colors, and flows visually is far cheaper than changing code. The creator made this point repeatedly: every adjustment you make in a design tool is one you don’t have to refactor in Xcode. Standard product teams do this for a reason.

Q: How do you avoid generic, AI-looking designs?

A: Give AI strong visual references and precise instructions. Screenshots from Pinterest and Dribbble, 21st.dev as a component reference, and a named style like “sketch aesthetic” all helped push past default, generic-looking output.

Q: What are the main limitations of current AI design tools?

A: Visual bugs, inconsistent components, and a strong pull toward generic defaults. Multi-agent systems like Pencil can also be hard to manage — the human has to actively coordinate and unify outputs. Fine-grained polish is still largely missing.

Q: Do AI design tools replace professional designers?

A: Not yet, and probably not soon for high-stakes products. The creator was candid: they couldn’t match a skilled designer’s eye. What AI tools do is act as force multipliers for non-designers and solo founders — lowering the barrier to solid prototypes, not eliminating the need for design judgment.


Conclusion

The Clippy experiment shows what an AI-native design workflow can actually produce in an hour: a full set of app screens, a draft design system, a mood board, and a marketing page. Not perfect. More than enough to validate ideas, pitch to users, or hand off to a developer.

Three things stand out. AI defaults are generic by design — beating that requires real references and sharp prompts. Multi-agent tools like Pencil turn one person into a mini design team, but only if they accept the role of orchestrator. And staying in design tools before touching code saves real time and money, every time.

As Claude, Gemini, and Pencil keep improving, the gap between “designer” and “non-designer” will keep narrowing. The leverage goes to people who can articulate ideas clearly, curate strong references, and push AI past its comfortingly bland defaults.

Key Takeaways

  • An AI-native workflow let a non-designer build Clippy’s UI and marketing page in 1 hour.
  • Deep research and a solid PRD give AI tools the context they need to design intelligently.
  • Mood boards from Pinterest, Dribbble, and 21st.dev are essential to escape generic AI design.
  • Pencil’s multi-agent feature enables parallel screen iteration but demands careful human orchestration.
  • Designing in AI tools before coding dramatically reduces cost and rework.
  • AI design tools augment rather than replace professional designers — especially useful for early-stage founders.
  • The winning workflow: deep research → mood board → PRD → design system → iterative screens.




One response to “AI UI Design Is Beating You if You Still Start in Code”

  1. ProductiveTechTalk

    The point about AI defaults looking painfully generic unless you feed in real visual references really resonated with me. I’ve noticed the same thing: if you just say “modern, clean UI,” you get the same bland look every time. Treating AI more like a junior designer who needs mood boards, examples, and constraints feels like the real unlock here—not the tools themselves.

    Source: https://www.youtube.com/watch?v=S9OV-yE2GSM

