Overview
Role: Product Designer, Microsoft Bing — Studio 8 Multimedia & AI
Duration: Oct 2024 – Present
Domain: Bing Image Search — SERP Image Answer & Image Vertical
Scope: Layout experimentation, AI-driven component system design, cross-surface consistency
Over 18 months, I led the design evolution of Bing Image Search from traditional layout optimization to an AI-first component system. This involved 9 rounds of UX research with 50+ concepts, a high-leverage micro-intervention that demonstrated ~$1.4M in annualized revenue impact in flight, and the definition of a reusable component architecture that now powers Image Vertical vNext and extends to SERP and GenIE experiences.
Chapter 1 — The Layout Ceiling
When I joined Bing Image Search, the SERP image answer was a standard waterfall grid. My first mandate was straightforward: make it better.
Baseline SERP image answer mock: standard waterfall grid with related-search chips and a See more link (example query: "Balcony Ideas")
What we tried
We started with incremental adjustments — padding, margins, density tuning. Then, as other SERP segments began adopting magazine-style layouts, we followed.
But magazine layouts introduced a new constraint: they work best when images are organized around a theme. A flat stream of search results doesn't naturally have one. So we started experimenting with ways to create structure — splitting ambiguous queries into semantic clusters, introducing in-card titles, testing row-based vs. waterfall arrangements.
Over the next several months, I led 9 rounds of UX Labs testing 50+ design concepts, comparing:
- Waterfall vs. magazine vs. row-based vs. masonry
- Pole position treatments
- RS-entity and RS-visual magazine variants (RS = Related Searches)
- Mixed image + video layouts
What we learned
There was no clear winner. Users wanted contradictory things:
- Waterfall for "larger, clearer, vivid, high quality" images
- Magazine for "organized, easy to navigate and compare, easy to browse"
Every layout had trade-offs. Incremental improvements weren't converging on anything shippable. Containers compliant with the Answer Card Framework (ACF) showed engagement regressions. The team spent significant energy just making existing layouts behave — responsiveness, reflow, image-dropping logic at smaller viewport widths.
The breakthrough (and the pivot)
We eventually found a new approach: pure image cards with no in-card titles, filling the entire card surface with imagery. This was a meaningful step forward from the traditional magazine format.
Once this pattern proved itself, we expanded it into a full card-size system — from 2×2 (200×200px) all the way to 11×4 (1208×424px) — building a complete library of image card assets. These weren't just for our own Image Answer; we made them available to other SERP segments as a shared component. What started as a layout experiment became an infrastructure contribution.
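To make the sizing rule concrete, here is a minimal sketch that reproduces both stated endpoints; the 88px base cell and 24px gutter are inferred from the 2×2 (200×200px) and 11×4 (1208×424px) dimensions rather than taken from the spec.

```typescript
// Card dimensions implied by the 2x2 -> 200x200 px and 11x4 -> 1208x424 px
// examples above. CELL and GUTTER are inferred constants, not spec values.
const CELL = 88;   // px per grid unit
const GUTTER = 24; // px between grid units

const cardSize = (cols: number, rows: number) => ({
  width: cols * CELL + (cols - 1) * GUTTER,
  height: rows * CELL + (rows - 1) * GUTTER,
});

console.log(cardSize(2, 2));  // { width: 200, height: 200 }
console.log(cardSize(11, 4)); // { width: 1208, height: 424 }
```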
We also solved an engagement problem that had persisted through the layout transitions. On the pure image cards, I added an overlay See More button on the last image in the card — a small, contextual nudge rather than a separate UI element. The results were significant:
- Adjusted DAU +0.25% (Triggered Desktop)
- Short-term SUU +0.74% (Triggered Mobile)
- Overall PCR +0.96%
- Image PCR +16.31% (Filtered Desktop)
- Click-through to Image Vertical +2.67% (Filtered Desktop)
This became a permanent design pattern that we've kept ever since — proof that the right micro-interaction in the right place can move macro metrics.
But the bigger shift was coming. By September–October 2025, we started exploring AI-driven layouts — structured top answers, semantic categorization, intent-based rendering. This became vNext.
The insight I took from the layout experiments wasn't just "waterfall vs. magazine doesn't matter." It was that layout-level optimization had hit its ceiling. The real opportunity was in how we structure answers, not how we arrange images.
Chapter 2 — Small Change, Big Leverage: The See More Story
Before vNext fully kicked off, a crisis hit. The ACF transition caused metric regressions across Image Answer. By November 2025, the team identified the primary regression hotspot: the See More button in Mainline.
This was a different context from Chapter 1. Everything above — the layout experiments, the overlay See More — happened in TopWide, the premium slot above algorithmic results. Chapter 2 is about Mainline, where Image Answer lives alongside other verticals. When ACF rolled out across Mainline, it broke things that had been working fine.
My intervention
Instead of proposing a layout redesign, I focused on the behavioral bottleneck. The See More button had become visually recessive — small font, theme-colored text that blended into the page.
I proposed the minimum viable change:
- Font size: 13px → 16px
- Color: theme color → always-blue (still using ACF tokens)
Two CSS properties. That's it.
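As a rough illustration, the entire change can be expressed as a two-property override; the selector and token name below are hypothetical placeholders, since the real change lived in the answer's ACF-token-based styles.

```typescript
// Sketch of the two-property override; selector and custom-property names
// are hypothetical, not Bing's actual identifiers.
const SEE_MORE_SELECTOR = ".imageAnswer .seeMoreLink";

document.querySelectorAll<HTMLElement>(SEE_MORE_SELECTOR).forEach((link) => {
  link.style.fontSize = "16px";                       // was 13px
  link.style.color = "var(--acf-link-blue, #0078d4)"; // always-blue, still token-driven
});
```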
The result
After flighting (results reported December 11, 2025):
- APSAT improved (user satisfaction)
- PCR increased (click-through)
- Image vertical revenue +1.01% (~$1.4M ARR)
I presented the results at the Search Design Council Review and requested to ship.
Mainline See More button — before/after with metric uplift
The twist
ACF coherence reviewers didn't sign off. The always-blue treatment, despite being built with ACF tokens, was deemed inconsistent with the broader design system direction. The change remains in flight limbo — effective but unshipped.
Why this matters for design judgment
This isn't a story about a button. It's about knowing where to act. In a complex system-wide regression with dozens of potential causes, I identified the single highest-leverage point and proved it with the smallest possible intervention. Sometimes the most impactful design decision is choosing not to redesign.
Chapter 3 — vNext: Defining the AI-First Component System
With layout optimization reaching its ceiling, the team pivoted to a fundamentally different approach. Image Vertical vNext would introduce AI-generated top answers with structured, intent-aware modules.
Image Vertical served as our testing ground — SERP has strict ACF coherence requirements where every change needs review, so we experimented in Vertical first and brought proven patterns to SERP later.
I was responsible for defining the core components in the early stage. This wasn't about pixel-perfect screens — it was about establishing what these components are, how they relate to each other, and how they'd work across surfaces.
Hero — The AI Top Answer
The most contentious component. The natural instinct was to treat Hero as a "bigger grid item" — fill it with more images, more metadata, more everything.
I pushed back. Hero's job is to be a high-impact answer, not a denser grid. Blindly filling whitespace creates clutter and dilutes the very thing that makes a top answer feel like an answer.
I proposed removing metadata (title/domain) at rest, showing the source only on hover (desktop) or as an overlay (mobile). The rationale: reduce noise, increase the sense of authority. This was validated through UX labs.
Hero component — AI top answer with hover-reveal metadata
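A minimal sketch of that at-rest / on-hover behavior for the desktop case follows; the class names and markup structure are illustrative assumptions, not the shipped implementation.

```typescript
// Hover-reveal source metadata on the Hero card (desktop case).
// Class names are illustrative; mobile shows the source as an overlay instead.
const hero = document.querySelector<HTMLElement>(".heroAnswer");
const sourceMeta = hero?.querySelector<HTMLElement>(".heroSourceMeta");

if (hero && sourceMeta) {
  sourceMeta.hidden = true; // metadata hidden at rest to reduce noise
  hero.addEventListener("mouseenter", () => { sourceMeta.hidden = false; });
  hero.addEventListener("mouseleave", () => { sourceMeta.hidden = true; });
}
```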
For wide-screen responsiveness, instead of one "correct" solution, I presented four directions at different risk/reward levels:
- T1 Constrained Center — safest, minimal change
- T2 Expanded Gallery Grid — images fill the space
- T3 Three-Column Split — gallery becomes an independent column
- T4 Enriched Content Panel — most aggressive, introduces new modules
Hero responsive directions — T1 Constrained Center → T4 Enriched Content Panel
I also raised a question nobody else was asking: Why would we use Related Searches as a fallback when Visual Gallery is unavailable? We already have RS at the top of the page. What's the rationale for repeating it? This kind of structural questioning — checking that each module has a distinct job — is what separates component architecture from page decoration.
Category Selector — Structure, Not Decoration
Category Selector was initially treated as a visual garnish — a nice-to-have below the AI answer. I pulled it back to its structural role: a navigation element that bridges the AI top answer and image exploration.
By defining it as structural navigation rather than decorative, I ensured it could be reused across Vertical, SERP, and GenIE contexts without losing its purpose.
Swimlane / Carousel — The Reusable Container
Swimlanes needed to work as row-based containers for different content types (images, topics, mixed) while staying consistent with SERP patterns. I focused on defining the structural logic — what goes in, how it scrolls, how it relates to other modules — providing the foundation for subsequent refinement.
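One way to picture that structural logic is as a small content contract; the type and field names below are assumptions for illustration, not the shipped schema.

```typescript
// Illustrative swimlane contract: a row-based container that can hold
// images, topics, or a mix, and is shared across the three surfaces.
type SwimlaneItem =
  | { kind: "image"; src: string; alt: string }
  | { kind: "topic"; label: string; query: string };

interface Swimlane {
  id: string;
  heading?: string;                        // optional row title
  items: SwimlaneItem[];                   // mixed content is allowed
  scroll: "horizontal";                    // single-axis, row-based scrolling
  surface: "vertical" | "serp" | "genie";  // where this instance renders
}
```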
The handoff
I defined the component system and structural logic; my colleague Dazhou continued with mobile execution, AI filters/Smart Filters, and detailed refinement. This was a deliberate split: early architecture vs. late implementation. The components I defined are now live in Image Vertical vNext.
Chapter 4 — Scaling: From Vertical to SERP GenIE
Once the vNext components proved themselves in Vertical, the next question was natural: could these AI capabilities work on SERP?
This wasn't handed to me — I proposed it. At the Search Design Council Review, I made the case:
- Competitors were pushing AI mode experiences
- Image Vertical was already AI-first
- SERP had the same image queries but none of the AI enhancement
Bridging two worlds
The challenge wasn't just "put Vertical components on SERP." SERP has different constraints — ACF coherence requirements, different viewport contexts, mobile considerations, card-size limitations.
I provided two structural directions:
- ACF-based — built on existing SERP patterns
- Vertical-derived — adapting vNext components to SERP constraints
For Hero and Carousel, both directions were viable. Category was the hardest — it doesn't have a natural SERP equivalent, and I flagged this as requiring team alignment rather than pretending it was solved.
I also mocked Grid as a potential Category replacement, specifically to prevent the team from locking into a single direction prematurely.
Component reusability diagram — Hero / Category / Swimlane across Vertical, SERP, GenIE
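The reuse story can also be summarized as a simple adoption map; the values below are a hedged reading of the state described in this chapter (Category on SERP still an open question), not a shipped matrix, and the GenIE entries in particular are assumptions.

```typescript
// Hedged adoption map for the shared components across surfaces.
// "adapt" = reuse with surface-specific constraints; values are illustrative.
type Surface = "vertical" | "serp" | "genie";
type CoreComponent = "hero" | "category" | "swimlane";
type Status = "adopted" | "adapt" | "open-question";

const adoptionMap: Record<CoreComponent, Record<Surface, Status>> = {
  hero:     { vertical: "adopted", serp: "adapt", genie: "adapt" },
  category: { vertical: "adopted", serp: "open-question", genie: "adapt" }, // no natural SERP equivalent yet
  swimlane: { vertical: "adopted", serp: "adapt", genie: "adapt" },
};
```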
Cross-team collaboration
This work touched everyone:
- PMs (Nevin, Narendra) relied on my component expertise, explicitly requesting mocks and design recommendations
- Engineering (Sergei) coordinated implementation feasibility
- Design (Eesha, Heba, Dazhou) aligned on citations, filters, dark mode, accessibility
The weekly design syncs became the coordination layer where component decisions were pressure-tested across all three surfaces.
Chapter 5 — Impact & Reflection
What shipped
- Image Vertical vNext — live, with AI top answers (Hero, Category, Swimlane)
- Accessibility gate passed — Sev1/Sev2 issues tracked and resolved before flighting
- Component system reusable across 3 surfaces (Vertical / SERP / GenIE)
What was proven but not shipped
- See More button — +1.01% revenue (~$1.4M ARR) in flight, pending ACF coherence sign-off
By the numbers
- 9 UX Lab rounds, 50+ concepts tested
- 4 responsive Hero directions proposed
- 3 core components defined (Hero / Category / Swimlane)
- 3 surfaces unified (Vertical / SERP / GenIE)
Design Principles (emerged from this work)
1. Intent-driven hierarchy: Use user intent to determine CTA priority and information structure — not component specs or convention.
2. Cross-surface coherence over local optimization: SERP, Vertical, and GenIE are one system with different constraints, not three separate products.
3. Reduce noise, preserve answer intensity: Not all whitespace needs filling. A top answer's job is to feel authoritative, not comprehensive.
4. Evidence over preference: When a design decision is ambiguous, push it into a flight or UX lab. Opinions don't ship — data does.
<!-- Visual assets: layout comparison, See More iterations, Hero T1-T4, component diagram, GenIE mocks, UX Lab samples -->