MSPH Batch Video Upload

Designing State as Language

Overview

Role: Product Designer, Microsoft Partner Hub (MSPH) — Studio 8
Duration: Sep 2023 – Mar 2024
Domain: B2B Content Publishing Platform
Team: 1 designer, 3 PMs, 3 developers, 1 QA engineer; reporting to design mentor Chen Cheng
Scope: End-to-end batch video upload system — file selection through publish result

Microsoft Partner Hub is a B2B content management platform where publishers create, upload, and distribute media at scale. My primary project was redesigning the video upload experience from a single-file workflow into a batch system capable of handling high-volume publishing operations.

Key Impact: 70%+ average growth in onboarded push videos. Weekly video volume grew from 6–10 to 140+ — roughly a 20× increase.


Chapter 1 — The Problem: One at a Time Doesn't Scale

When I joined MSPH, publishers were uploading videos one by one. For someone managing a handful of videos a week, this was tolerable. But the platform was growing, and publishers were starting to push 10, 20, even 50+ videos in a single session.

The single-file workflow had no concept of batch. No way to stage multiple files before committing. No structured feedback on what happened after upload. No error transparency — if something went wrong with one file, the user had to mentally reconstruct which uploads succeeded and which didn't.

What was actually broken

The problem wasn't speed. Uploading was slow, sure, but the real issue was legibility. At scale, the system became opaque:

  • No batch staging — publishers couldn't review or edit metadata across multiple files before uploading
  • No status architecture — "uploading" and "done" were the only states. There was no way to distinguish between a file that was still processing, one that had been submitted, one that failed to upload, and one that uploaded but was rejected during publishing
  • No error granularity — when an upload failed, the user got a generic error with no actionable detail — no distinction between a network issue, a format rejection, or a server-side processing failure
  • No scale consideration — nobody had designed for what the UI should do when 50+ files are in play

This wasn't a "make upload faster" brief. It was a "make the system comprehensible at any scale" problem.

Before/after — single-file upload vs. batch experience side by side


Chapter 2 — The State Architecture: Making the System Honest

The core design decision — the one that shaped everything else — was defining a state model that mapped to user cognition, not engineering implementation.

Why this mattered

Engineers think in terms of process states: a file is in a queue, being processed, or done. But users think in terms of meaning: did my thing work? Do I need to do something? What went wrong?

These aren't the same question. An engineer's "done" might mean "uploaded to server" while the user's "done" means "live and published." An engineer's "failed" might be a network timeout, but the user needs to know: should I retry, or is the file itself rejected?

The states I defined

I split the pipeline into states that each answered a distinct user question:

  • Pending — selected but not yet started ("I'm in the queue")
  • Uploading — actively transferring ("it's happening")
  • Submitted — uploaded to server, awaiting publish pipeline ("it's in your hands now")
  • Success — published and live ("done, for real")
  • Failed — upload or publish failed, with reason ("here's what went wrong")
  • Partial Success — some files succeeded, some didn't ("here's the honest picture")
  • Retry Available — failed but recoverable ("you can try again")

The critical distinction: Uploading ≠ Submitted. Upload Failed ≠ Publish Rejected. Partial Success ≠ Failed.

Each of these used to be collapsed into a binary. Expanding them meant the system could speak precisely to the user at every step.
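The per-file model above can be sketched as a small state machine. This is an illustrative sketch in TypeScript, not the production implementation; the state names follow the list above, and the transition map is my reading of the pipeline. Note that Partial Success is deliberately absent here: it is a batch-level outcome derived from many files, not a state any single file can be in.

```typescript
// Illustrative sketch of the user-facing, per-file state model.
type UploadState =
  | "pending"    // selected but not yet started
  | "uploading"  // actively transferring
  | "submitted"  // on the server, awaiting the publish pipeline
  | "success"    // published and live
  | "failed";    // upload or publish failed, with a reason attached

// Legal transitions: a file can only move to states the user would
// expect, so the UI never "jumps" in a confusing way.
const transitions: Record<UploadState, UploadState[]> = {
  pending:   ["uploading"],
  uploading: ["submitted", "failed"],  // upload failure happens here
  submitted: ["success", "failed"],    // publish rejection happens here
  success:   [],
  failed:    ["pending"],              // "Retry Available" re-queues the file
};

function canTransition(from: UploadState, to: UploadState): boolean {
  return transitions[from].includes(to);
}
```

Keeping Uploading and Submitted as distinct states is what lets the system attach the right failure reason: a transition out of "uploading" is a transfer problem, while a transition out of "submitted" is a publish rejection.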

State architecture diagram — all states and transitions across the upload pipeline

The pushbacks

Three design decisions required pushing against default engineering and product assumptions:

1. "Partly failed" pages shouldn't show "0 videos failed"

The engineering default was to render a template uniformly — the failure summary page would always display "X videos failed," even when X was zero. My position: if nothing failed, don't show a failure context. State language must match reality. Showing "0 failed" on a failure-styled page creates cognitive dissonance — the frame says something went wrong, but the content says everything's fine.
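The fix amounts to a rendering guard rather than a template fill. A minimal sketch (function and field names are hypothetical, not MSPH's):

```typescript
// Hypothetical sketch: only produce failure framing when failures exist.
interface BatchResult {
  succeeded: number;
  failed: number;
}

function failureSummary(result: BatchResult): string | null {
  // If nothing failed, return no failure message at all. Rendering
  // "0 videos failed" inside a failure-styled frame contradicts itself.
  if (result.failed === 0) return null;
  return `${result.failed} video${result.failed === 1 ? "" : "s"} failed`;
}
```

The caller treats `null` as "skip the failure section entirely," so the page frame and its content always agree.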

2. Submit All tooltip should be intent-based, not automatic

The initial implementation showed a tooltip for the "Submit All" action immediately on page load. My position: tooltips should be triggered by hover or demonstrated intent, not forced as system narration. Help should be discoverable when the user needs it, not broadcast as ambient noise. This is the difference between guidance and interruption.

3. 50+ videos must not break the layout

When I raised the question of what happens at 50+ files, the initial response was that it's an edge case — handle it later. My position: if you ship a batch feature, you're making a promise about scale. 50 files isn't an edge case; it's the reason the feature exists. If the UI degrades at the volumes it was designed for, the feature hasn't shipped — it's just been demoed.

The design principle underneath

All three pushbacks pointed to the same principle: status is not an implementation artifact — it's a user interface. Every state, every label, every page frame is a sentence the system speaks to the user. If the sentence is wrong, vague, or contradicts itself, the user stops trusting the system. In B2B tools where people's jobs depend on the output, that trust is everything.


Chapter 3 — Designing for Scale

With the state model defined, the design work expanded into the full upload experience across all scales.

The flow

File selection → Staging → Metadata editing → Upload → Result

The staging area was the key addition. Before batch, users went straight from file picker to upload. Now they had a workspace where they could:

  • Review all selected files before committing
  • Edit metadata inline, per file
  • Bulk-select and batch-apply metadata (tags, categories, descriptions)
  • Remove files before uploading
  • See estimated upload scope

This staging step serves two purposes: it gives users control before the irreversible action, and it reduces errors by making the batch visible as a whole before submission.
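The bulk-apply behavior in the staging area has one subtle requirement: applying shared metadata must not clobber per-file edits the user already made. A hypothetical sketch of that merge rule (types and names are mine, for illustration):

```typescript
// Hypothetical sketch of batch-applying metadata in the staging area.
interface StagedFile {
  name: string;
  tags: string[];
  category?: string;
}

// Apply shared metadata to every selected file. Tags are merged
// (deduplicated), not replaced, so per-file edits survive a bulk apply.
function bulkApply(
  files: StagedFile[],
  selected: Set<string>,
  patch: { tags?: string[]; category?: string }
): StagedFile[] {
  return files.map((file) => {
    if (!selected.has(file.name)) return file;
    return {
      ...file,
      tags: patch.tags ? [...new Set([...file.tags, ...patch.tags])] : file.tags,
      category: patch.category ?? file.category,
    };
  });
}
```

Unselected files pass through untouched, which keeps bulk edit a deliberate, scoped action rather than a batch-wide overwrite.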

Batch upload flow — file selection → staging area → upload → result

Result as audit trail

The result page wasn't designed as a summary — it was designed as an audit trail. After a batch upload, users should be able to look at the result and know exactly what happened to every file without guessing.

For each file: status, error type if applicable, retry option if available. For the batch as a whole: total succeeded, total failed, total pending. For the partial success case — the one I specifically designed for — the page clearly separates successes from failures, with actionable next steps for each group.
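This is also where Partial Success lives in the model: it is computed from per-file results, never asserted directly. A minimal sketch of that derivation, assuming simplified per-file statuses:

```typescript
// Hypothetical sketch: the batch outcome is derived from per-file
// results, so "partial success" is computed, never guessed.
type FileStatus = "success" | "failed" | "pending";
type BatchOutcome = "success" | "failed" | "partial" | "pending";

function batchOutcome(statuses: FileStatus[]): BatchOutcome {
  const done = statuses.filter((s) => s !== "pending");
  if (done.length < statuses.length) return "pending"; // batch still running
  const failures = done.filter((s) => s === "failed").length;
  if (failures === 0) return "success";
  if (failures === done.length) return "failed";
  return "partial"; // some succeeded, some didn't: show both groups
}
```

The result page then branches on this outcome: "partial" renders the two separated groups with their own next steps, rather than a single blended summary.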

Result page — partial success scenario with successes and failures clearly separated

Scale stress-testing

I designed for the realistic ceiling: batches of 50+ files. This meant:

  • Scroll performance with long file lists
  • Metadata editing at scale (bulk edit vs. per-file)
  • Visual grouping to prevent the list from becoming a wall
  • State indicators that remain scannable at high counts
  • Responsive behavior across viewport sizes

The Figma deliverable was 30+ pages covering interaction states, error paths, edge cases, and responsive variations.

50+ file batch UI — showing scale stress-test with visual grouping and scannable state indicators


Chapter 4 — Impact & Reflection

Metrics

  • 70%+ average growth in onboarded push videos
  • 6–10 → 140+ videos/week (~20× volume increase)
  • Design approach shared in weekly team meetings, influenced the team's approach to state modeling in other features

What I learned

In B2B tools, trust comes from precision. Vague success is almost as bad as hidden failure. When a publisher uploads 40 videos and the system says "done" without specifics, they don't feel confident — they feel anxious. Granular state feedback isn't a nice-to-have; it's what makes the tool trustworthy enough to rely on.

"Edge cases" in batch systems are the center of the value proposition. The whole point of batch is scale. If you don't design for the high end, you haven't designed the feature — you've designed a demo.

State architecture is a design artifact. If you let engineering define states, they optimize for implementation simplicity. If design defines states, you optimize for user comprehension. Both are valid concerns, but the user-facing states should be authored by the person who understands what users need to believe about the system at each moment.

What I'd do differently

[To be added — Yi noted the logic may have been too complex in places; to revisit during deep-dive discussion]


<!-- Visual assets to prepare: batch upload flow, state architecture diagram, before/after comparison, result page partial success, 50+ file UI at scale -->