From Vibe Coding to AI-Native

A Designer's Evolution

Overview

Role: Product Designer turning builder
Timeline: June 2025 – present
Tools: Cursor, Claude Code, VS Code Copilot, ChatGPT, Gemini, Groq, OpenClaw
Output: 3 Figma plugins, 1 newsletter website, 1 macOS native app, 1 AI companion system

This isn't a single project. It's the story of how a product designer at Microsoft went from "I should learn what AI can do" to building tools, products, and systems with AI as a native part of the process — in about eight months.


Chapter 1 — The First Plugin: Solving Your Own Pain

June 2025. I was designing search result pages for Bing Image Search. The work involved creating dozens of mocks, each needing real images placed into frames. The workflow: find an image online, copy it, paste it into Figma, resize, repeat. For a single mock with 20+ image slots, this took about an hour.

I had just started exploring Cursor. The question was simple: can I search and insert images directly inside Figma?

Two hours later, I had an MVP — Image Buddy. A Figma plugin that calls the Google Search API (later switched to the Bing API), lets you search for images by keyword, and inserts them directly into selected frames. One search, one click, done.
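The core of it is small. This isn't the actual Image Buddy source, but a minimal sketch of the insertion step in the Figma plugin API, assuming the plugin UI has already run the search and fetched the chosen image as bytes:

```typescript
// code.ts (plugin main thread). Sketch only, not the Image Buddy source.
// Assumes the plugin UI has already called the image-search API, fetched the
// chosen image as bytes, and posted them to this thread.

figma.showUI(__html__, { width: 320, height: 480 });

figma.ui.onmessage = (msg: { type: string; bytes?: Uint8Array }) => {
  if (msg.type !== 'insert-image' || !msg.bytes) return;

  // Register the bytes as a Figma image asset.
  const image = figma.createImage(msg.bytes);

  // Apply it as a fill to every selected frame, scaled to cover the frame.
  for (const node of figma.currentPage.selection) {
    if (node.type === 'FRAME' || node.type === 'RECTANGLE') {
      node.fills = [{ type: 'IMAGE', scaleMode: 'FILL', imageHash: image.hash }];
    }
  }

  figma.notify('Image inserted');
};
```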

What used to take an hour now took about a minute.

Image Buddy plugin in action — keyword search and one-click image insertion into Figma frames

I shared it with my team. I was the first designer in the group to build a plugin this way. The immediate impact wasn't just efficiency — it was inspiration. Xiaohan, a designer on the video team, saw what I'd done and built a similar plugin that pulls YouTube thumbnails. The idea spread not because I promoted it, but because the solution was obvious once someone demonstrated it was possible.

What this taught me: The most powerful tools come from scratching your own itch. And the barrier to building them is lower than most designers think.


Chapter 2 — The Chain Reaction: Plugin → Plugin → Website

The second plugin came from another recurring task. Every month, our team presents project highlights at a team meeting. Everyone creates a showcase image — roughly 600×480, Behance-style project cards. The process was manual: pick a layout, place screenshots, adjust shadows and corners, export.

I built a plugin with preset templates and adjustable parameters — corner radius, shadow, background, layer ordering. Drag your screenshots in, pick a template, fine-tune, export. I shared it with Saber (my +2 manager) and the team. About 15–20 people started using it.
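Those parameters are ordinary Figma node properties, so most of the plugin is just writing a preset onto the selected card frame. A rough sketch of applying one preset, with made-up values rather than the plugin's real templates:

```typescript
// Sketch of applying a showcase-card preset to the selected frame.
// The preset values are illustrative, not the plugin's actual templates.
interface CardPreset {
  cornerRadius: number;
  shadow: { offsetY: number; blur: number; opacity: number };
  background: RGB;
}

const preset: CardPreset = {
  cornerRadius: 24,
  shadow: { offsetY: 8, blur: 32, opacity: 0.25 },
  background: { r: 0.98, g: 0.98, b: 0.98 },
};

for (const node of figma.currentPage.selection) {
  if (node.type !== 'FRAME') continue;
  node.cornerRadius = preset.cornerRadius;
  node.fills = [{ type: 'SOLID', color: preset.background }];
  node.effects = [{
    type: 'DROP_SHADOW',
    color: { r: 0, g: 0, b: 0, a: preset.shadow.opacity },
    offset: { x: 0, y: preset.shadow.offsetY },
    radius: preset.shadow.blur,
    visible: true,
    blendMode: 'NORMAL',
  }];
}
```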

Then Saber asked me to help with the team newsletter. The existing workflow: curate content, build an email from a fixed template in Outlook, send. The hard part wasn't content — it was formatting. I spent nearly a full day testing, sending 20+ test emails to myself, fixing broken spacing and missing images, before getting one perfect version.

So I built a third plugin: design the newsletter layout in Figma, then export clean HTML that pastes directly into Outlook. No more format wrestling.
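The plugin's job, roughly, is to flatten a designed frame into HTML that email clients will tolerate: a single table, inline styles, rasterized visuals. This is a simplified sketch of that idea, not the plugin's actual traversal:

```typescript
// Sketch of the export idea: walk a newsletter frame top to bottom and emit
// email-safe HTML (one table, inline styles only).

async function exportFrameToEmailHtml(frame: FrameNode): Promise<string> {
  const rows: string[] = [];

  for (const child of frame.children) {
    if (child.type === 'TEXT') {
      const size = typeof child.fontSize === 'number' ? child.fontSize : 14;
      rows.push(
        `<tr><td style="font-family:Segoe UI,Arial,sans-serif;` +
        `font-size:${size}px;padding:8px 0;">${child.characters}</td></tr>`
      );
    } else {
      // Rasterize anything non-text so Outlook can't reflow or break it.
      // A data URI keeps this sketch self-contained; a real export would more
      // likely reference hosted images, since data-URI support varies by client.
      const png = await child.exportAsync({ format: 'PNG' });
      rows.push(
        `<tr><td style="padding:8px 0;"><img width="${Math.round(child.width)}" ` +
        `style="display:block;border:0;" alt="" ` +
        `src="data:image/png;base64,${figma.base64Encode(png)}"/></td></tr>`
      );
    }
  }

  return `<table role="presentation" width="600" cellpadding="0" cellspacing="0" border="0">${rows.join('')}</table>`;
}
```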

Saber saw the result and said we should find a better way to present the newsletter altogether. Not just fix the email formatting — rethink the medium.

Three days later, I'd built the MAI-China Design & Research newsletter website.

This wasn't a blog template with some images. It was a cinematic, scroll-driven web experience built with Next.js, GSAP, and WebGL:

  • Full-screen project cards with cover images, descriptions, and direct links to Figma/Loop files
  • Per-volume visual identity — Vol.1 uses a custom WebGL shader background that shifts hue as you scroll between projects. Vol.2 uses a looping cinematic video backdrop
  • 3D tilt hover effects on project cards with dynamic shadow and scale
  • Variable font weight that responds to cursor proximity — the closer your mouse, the bolder the text (sketched after this list)
  • Scroll-snap navigation with a sidebar grouped by product area (Copilot, Edge, Bing, MSN, Monetization, CJK Growth, AI Transformation)
  • Multi-volume architecture — a hamburger menu lets you browse past issues, each with its own visual system
  • Static HTML export — the whole thing packages into a zip that works offline, hostable on SharePoint or any static server
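The cursor-proximity type effect is less exotic than it sounds. A minimal sketch, assuming the titles are split into per-character spans (the class name and constants here are illustrative) and the font exposes a wght axis:

```typescript
// Sketch of the proximity-weight effect: map each character's distance from
// the cursor to a variable-font weight. Assumes per-character <span class="char">
// elements and a variable font with a wght axis.

const MIN_WEIGHT = 300;
const MAX_WEIGHT = 800;
const RADIUS = 240; // px: at or beyond this distance the text stays at MIN_WEIGHT

const chars = Array.from(document.querySelectorAll<HTMLElement>('.char'));

window.addEventListener('mousemove', (e) => {
  for (const el of chars) {
    const rect = el.getBoundingClientRect();
    const dx = e.clientX - (rect.left + rect.width / 2);
    const dy = e.clientY - (rect.top + rect.height / 2);
    const distance = Math.hypot(dx, dy);

    // 1 right under the cursor, 0 at or beyond RADIUS.
    const proximity = Math.max(0, 1 - distance / RADIUS);
    const weight = MIN_WEIGHT + (MAX_WEIGHT - MIN_WEIGHT) * proximity;

    el.style.setProperty('font-variation-settings', `'wght' ${Math.round(weight)}`);
  }
});
```

In production you would throttle this to requestAnimationFrame, but the distance-to-weight mapping is the whole trick.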

The design language: dark museum aesthetic (background #1A1918), Avantt variable font for titles, generous whitespace, cinematic pacing. Every project presented like an exhibit, not a list item.

The team's work spans Copilot Mac, Edge, Bing Search, MSN, monetization, and internal AI transformation initiatives. The website gives each project the visual gravity it deserves — something a flat email newsletter never could.

MAI-China Design & Research newsletter website — cinematic scroll-driven experience with WebGL backgrounds

The website received a "Nicely done" email from Zhang Qi — Microsoft Corporate Vice President and President of Microsoft AI Asia Pacific. A three-day vibe-coding side project, noticed at the CVP level.

The pattern: Each tool I built revealed the next friction point. Solve one problem, and the improved workflow exposes the next bottleneck. Plugin → plugin → website wasn't a plan. It was a chain reaction.


Chapter 3 — Drawell: Behavior as Medium

By this point, I was comfortable building functional software with AI assistance. Drawell was the first project where I wasn't solving a workflow problem — I was making something that didn't need to exist but felt important.

The premise

There's a version of the near future where designers don't drag, click, or draw frames anymore. AI generates layouts from intent. Voice replaces cursor. The physical choreography of design work — the muscle memory of option-dragging a component, the rhythm of cmd-Z, the pause before a decision — disappears.

What if we could preserve that choreography before it's gone?

Not as a screen recording. As art.

What it does

Drawell is a macOS app that silently records your input behavior — mouse movement, keyboard rhythm, pauses, acceleration — on any window you choose. After a set duration (5, 10, or 25 minutes), it generates a single artwork from that session. No live preview. No editing. No parameter tuning. Like a film camera: you shoot, and you see what you get.

Each input dimension maps to a visual quality (a rough mapping is sketched after this list):

  • Speed → stroke width and opacity
  • Acceleration → trajectory and splatter
  • Pause/dwell → ink pooling, water stains
  • Keyboard rhythm → texture and pattern
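Drawell itself is built in Swift and Core Graphics, but the mapping reads the same in any language. A sketch in TypeScript, with illustrative constants and curves rather than Drawell's actual tuning:

```typescript
// Language-neutral sketch of how one input sample might map to stroke
// parameters. Constants are made up for illustration.

interface InputSample {
  x: number;
  y: number;
  speed: number;        // px/s, from consecutive pointer positions
  acceleration: number; // px/s^2
  dwellMs: number;      // time spent within a small radius of this point
}

interface StrokeParams {
  width: number;
  opacity: number;
  splatter: number; // 0..1, likelihood of flicked droplets along the trajectory
  pooling: number;  // 0..1, how much ink spreads at this point
}

function mapSample(s: InputSample): StrokeParams {
  // Fast movement reads as thin, dry, translucent; slow movement as heavy and wet.
  const speedNorm = Math.min(s.speed / 2000, 1);
  return {
    width: 2 + 18 * (1 - speedNorm),
    opacity: 0.9 - 0.6 * speedNorm,
    // Sharp acceleration reads as a flick, which throws splatter.
    splatter: Math.min(Math.abs(s.acceleration) / 8000, 1),
    // Dwelling in one spot lets ink pool, like resting a loaded brush on paper.
    pooling: Math.min(s.dwellMs / 1500, 1),
  };
}
```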

Five rendering styles across two series — Traditional Media (Ink Wash, Watercolor, Charcoal) and Artist Series (Mondrian, Pollock) — each interpreting the same behavioral data through a different visual language.

Where it is now

Feature-complete. The capture pipeline, rendering engine, gallery (dark museum aesthetic, British Museum inspiration), and menubar integration all work. The current focus is rendering quality — closing the gap between the generated output and traditional media reference. The ink wash style, for example, still needs more convincing wet-edge behavior, flying white (飞白), and ink pooling effects.

It's unfinished, and I'm okay showing it that way. The interesting part isn't the polish — it's the question it asks about what we lose when our tools evolve past us.

Drawell rendering samples — Ink Wash, Watercolor, and Mondrian styles generated from real design sessions


Chapter 4 — The AI-Native Stack

Looking back, the trajectory has a shape:

June: First Figma plugin with Cursor. Learning that AI can help me build, not just design.

June–August: Three plugins, one website. Learning that the loop of "encounter friction → build solution → discover next friction" compresses from weeks to hours when AI assists.

September–October: Drawell (Draw + Well). Learning that AI-assisted building lets a designer ship a native macOS app — something that used to require a dedicated engineering skillset.

January 2026: OpenClaw + Friday. Learning that AI isn't just a building tool — it's a collaborator, a memory system, a presence.

Each phase expanded what "designer" means. Not designer + engineer. Not designer who codes. A designer who treats AI as native infrastructure — the way a previous generation treated Figma as native infrastructure, or the generation before that treated Photoshop.

The tool landscape

I don't use one AI tool. I use many, each in its place:

  • Cursor — for rapid prototyping, plugins, websites
  • Claude Code — for complex systems, native apps, architecture decisions
  • VS Code Copilot — for in-editor autocomplete during focused coding
  • ChatGPT — for research, brainstorming, quick questions
  • Gemini / Google AI Studio — for multimodal tasks, image generation
  • Groq — for fast inference when speed matters
  • OpenClaw — for persistent AI presence, memory, autonomous tasks

This isn't tool-hopping. It's tool literacy. Knowing which AI to reach for in which context is a skill in itself — and it's one that most designers haven't developed yet.

AI-native tool stack diagram — how Cursor, Claude Code, ChatGPT, Groq, and OpenClaw fit into the workflow


Reflection

Eight months ago, I was a product designer who used AI chatbots for research and brainstorming. Now I ship software, manage a multi-agent system, and think about design problems differently because I know what's buildable.

The biggest shift isn't technical — it's psychological. Once you've built a tool that solves your own problem in two hours, you stop accepting friction as permanent. You start seeing every repetitive workflow as a plugin waiting to be written, every manual process as an automation waiting to happen.

That mindset change is the real product of this journey. The plugins, the app, the systems — those are artifacts. The instinct to build instead of tolerate is what stays.


<!-- Tools: Cursor, Claude Code, VS Code Copilot, ChatGPT, Gemini, Groq, OpenClaw | Stack: Figma Plugin API, Swift/SwiftUI/Core Graphics, HTML/CSS/JS, Node.js | Status: Plugins shipped and in use. Drawell feature-complete, rendering in refinement. AI-native workflow ongoing. -->