Overview
VORTEX had long operated under a "feature-stacking" development model, resulting in a fragmented interface, inconsistent copy, and a steep learning curve for users. This directly impacted frontline sales conversion. In 2025, I led an in-depth interview with a key U.S. system integrator whose candid negative feedback became the catalyst for change. After I compiled the findings into a presentation for management, this frontline market signal became the key evidence that convinced leadership to invest in a platform redesign. Once the project kicked off, I was responsible for driving the full implementation of the design system, establishing a component review process, and managing the design-to-dev handoff workflow. Beyond my core scope, I also initiated the creation of a UX Writing framework for VORTEX, with the goal of using this opportunity to eliminate the design debt that had accumulated over the years.
Time
2025.11 - 2026.03 (Ongoing)
Our team
Designer x2
Developer x4
PM x1
My responsibilities
* Established the component review process and ran stress tests across all pages.
* Restructured information architecture and defined modular layout patterns.
* Initiated and authored component usage guidelines.
* Initiated and built the UX Writing framework.
* Managed the design-to-development handoff workflow.
Platform
Web
"Compared to competitors in the security industry, VORTEX always feels harder to use…"
The problem with VORTEX wasn't a single missing feature — it was the systemic result of years of "feature-stacking": designers had no component usage framework, engineers relied on manual color-picking and hard-coded CSS, and the product was perpetually driven by urgent feature requests, making UX the first thing to get deprioritized.
This was especially true in a hardware-sales-driven organization, where software refinement rarely made it onto the priority list.
Accumulated client complaints and an SI interview became the turning point
In October 2025, I led an in-depth interview with the head of a key U.S. system integrator and received a clear market warning:
"I think that people now that there's a lot of saturation in the market right now, there's a lot of companies popping up that have similar features functionality cameras. And for most general users. They're going off feeling."
"When they see a complex looking portal, think that makes them nervous."
"If everybody has similar features, which one would you choose?"

These quotes surfaced a structural problem: in the VSaaS security market, the gap in software capabilities is narrowing. When competing products converge on features, the interface experience is no longer just a cost of doing business; for resellers, it becomes the deciding factor in closing deals.
In today's VSaaS market, experience is a critical factor in winning the sale.
The product needs to evolve from a functional tool into a security ecosystem with a consumer-grade experience.

"Consumer-grade experience" in B2B SaaS means hiding enterprise-level complexity behind intuitive workflows. Notion is a clear example: it didn't simplify project management features, but it made writing a corporate spec feel like jotting down a personal note. VORTEX needed to move in the same direction.
After I compiled the interview findings into a presentation for management, the frontline market signal earned leadership's attention and led to the decision to invest resources in a full software rebuild and experience upgrade. The first round of validation would take place at ISC West 2026, where we would demo to clients on-site and use their feedback as the initial benchmark.
1
Without success metrics, a rebuild easily becomes a visual reskin
The project launched without any quantified acceptance criteria, and the company had no analytics infrastructure — we couldn't build a user behavior funnel to pinpoint specific experience breakpoints. All design decisions had to be grounded in qualitative feedback from the SI interview, competitive benchmarking, and consistency issues identified through a system audit. I assessed this as a high risk: without data, the redesign could easily be steered by management's subjective preferences, ending up as just a new visual skin while the product's underlying debt remained untouched.
So I proactively aligned with the PM and other stakeholders to define the value of this redesign across three levels:
Eliminate frontend tech debt and visual inconsistencies in one pass, and establish a scalable development foundation.
Give the design team a framework to follow so decisions are no longer based on individual judgment, preventing the product from repeating past mistakes.
Elevate interface quality to directly impact frontline sales confidence and clients' first impressions.
For the short-term acceptance benchmark, we anchored on client feedback from ISC West. While this wasn't the ultimate business metric, it gave the team a shared goal to align around and work toward during the project.
2
Frequent management tweaks put Token system progress at risk
Once the redesign entered the development sprint, management began requesting frequent visual adjustments, some of which didn't align with modern web standards. Every intervention directly threatened progress on the Token system.
At that point, I chose to be upfront with management about the trade-off: frequent visual changes were doable within the limited timeline, but the cost would be development delays that could prevent us from shipping a product that met everyone's expectations before the expo.
I ultimately defined a development priority sequence that all stakeholders could accept, giving visual adjustments room to happen without derailing the system-level goals:
Token definition and development completed first
Component definition and core feature pages delivered (ensuring the expo could showcase key functionality)
Post-expo component refinements and Storybook buildout based on feedback and visual requests
To accommodate management's frequent visual adjustments, I negotiated a dual-version buffer mechanism with the engineering team: engineers could proceed with development based on the delivered Token v1.0, while visual revisions would be tracked separately as v2.0 and evaluated for integration only after all core feature pages were completed.
After this alignment, the frequency of management's adjustments noticeably decreased, and development progress stayed on track.
Token implementation → Initial component draft → Stress-test components on feature pages → Feature pages handed off to dev → Visual adjustments for v2.0 if time permits → Expo demo
*An additional goal: "Making the design system AI-readable (AI-Ready)"
While the design system was being built, another team at the company was exploring the feasibility of using AI to generate designs and code. I recognized an opportunity to integrate early: by structuring the design system's specifications into a machine-readable format during this rebuild, we could eventually feed in requirements and generate brand-compliant frontend output directly.
So I initiated a parallel workstream: beyond defining component frameworks and usage guidelines, I also conducted stakeholder interviews to consolidate the UX Writing framework. The goal was to get the product's rules well-defined so that AI could accelerate spec generation, freeing designers to spend more time on user research and product testing.
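To make "machine-readable" concrete, here is a minimal sketch of how the token layer could be structured, assuming TypeScript as the format; the token names, values, and helper function are illustrative and not the actual VORTEX palette or tooling.

```ts
// Minimal sketch of machine-readable design tokens (illustrative names and values,
// not the actual VORTEX palette). Keeping tokens in a typed, structured format
// lets engineers and AI tooling consume the same source of truth.

type TokenValue = string | number;

interface DesignToken {
  value: TokenValue;        // raw value, e.g. "#1A73E8" or "8px"
  type: "color" | "dimension" | "fontSize" | "duration";
  description?: string;     // usage context, readable by humans and AI alike
}

const tokens: Record<string, DesignToken> = {
  "color.primary.500":     { value: "#1A73E8", type: "color", description: "Brand primary" },
  "color.surface.raised":  { value: "#FFFFFF", type: "color", description: "Cards, side panels" },
  "color.feedback.danger": { value: "#D93025", type: "color", description: "Destructive actions, alerts" },
  "space.200":             { value: "8px", type: "dimension", description: "Default gap inside components" },
  "font.size.body":        { value: "14px", type: "fontSize", description: "Body copy in tables and panels" },
};

// Example consumer: generate CSS custom properties from the same token source,
// so hard-coded hex values and manual color-picking disappear from the codebase.
export function toCssVariables(source: Record<string, DesignToken>): string {
  return Object.entries(source)
    .map(([name, token]) => `  --${name.replace(/\./g, "-")}: ${token.value};`)
    .join("\n");
}
```

The same structured source is what downstream AI tooling would read when generating brand-compliant frontend output.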
Division of work and starting point
To ensure we could deliver within the limited timeline, I split responsibilities clearly with another designer: she owned the visual language and design style from scratch, while I owned system implementation and structural integrity — stress-testing the visual language across all key VORTEX 2.0 screens to verify whether the components could truly handle every feature-level requirement.
1
Component audit: from 36 down to 28
In v1.0, there was no component review mechanism. Every designer who touched VORTEX would add new components based on immediate needs, causing functionally overlapping components to pile up and the design framework to gradually lose its boundaries.
To prevent this from recurring in 2.0, I established a structured component review decision tree endorsed by the team — covering semantic validation, Token-layer differentiation, variant rationality, and reuse scope assessment. Every new component request had to pass through this process before entering the system. After running all 2.0 screens through this mechanism, we consolidated from 36 to 28 components, removing functionally redundant state variants and custom components that lacked clear use cases. Every component that remained has a defined usage context and boundary, and every excluded request has a corresponding resolution path.
*Try walking through the review process using the interactive decision tree below*
Taking the Tab component as an example — v1.0 had 5 variants, which we consolidated to 2 in v2.0, each with a distinct pattern and purpose.
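For readers who want the review logic in a non-interactive form, the sketch below condenses the four gates described above into code; the field names, thresholds, and resolution wording are illustrative assumptions rather than the team's exact checklist.

```ts
// Condensed sketch of the component review decision tree (field names illustrative).
// A request must survive every gate before a new component enters the system.

interface ComponentRequest {
  name: string;
  semanticPurpose: string;        // what user intent does it express?
  closestExisting?: string;       // nearest component already in the library
  differsOnlyByTokens: boolean;   // could color/spacing/typography tokens cover the difference?
  coversNewStates: boolean;       // does it introduce states no existing variant handles?
  expectedUsageSites: number;     // how many screens would realistically use it?
}

type ReviewOutcome =
  | { decision: "reject"; resolution: string }
  | { decision: "extend-existing"; target: string }
  | { decision: "accept-new" };

export function reviewComponent(req: ComponentRequest): ReviewOutcome {
  // 1. Semantic validation: no clear purpose, no component.
  if (!req.semanticPurpose.trim()) {
    return { decision: "reject", resolution: "Clarify the user intent before requesting a component." };
  }
  // 2. Token-layer differentiation: visual-only differences stay in the token layer.
  if (req.closestExisting && req.differsOnlyByTokens) {
    return { decision: "reject", resolution: `Restyle ${req.closestExisting} via tokens.` };
  }
  // 3. Variant rationality: new states become variants of an existing component.
  if (req.closestExisting && req.coversNewStates) {
    return { decision: "extend-existing", target: req.closestExisting };
  }
  // 4. Reuse scope: single-screen needs stay as local compositions, not library components.
  if (req.expectedUsageSites < 2) {
    return { decision: "reject", resolution: "Compose from existing primitives at the page level." };
  }
  return { decision: "accept-new" };
}
```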
2
Modular layouts: reusing the same structure across pages
After component consolidation, I went further to define layout patterns for feature pages, with the goal of enabling the same structure (UI pattern) to be reused across pages with different information hierarchies, reducing the cost of frontend teams rebuilding each page from scratch.
The core layouts we defined:
Search + Filters pattern (used in Devices, Message, Archive)
Designed for pages with broad data search needs. In v1.0, Devices, Message, and Archive each used different UI patterns for filtering, forcing users to relearn the interaction every time they switched pages. The decision to consolidate into a single pattern was based on two factors:
Reducing cross-page learning cost.
Aligning with established interaction conventions from the most experience-mature competitors in the VSaaS market.
New vs. old Message page
New vs. old Archive page
Table + Side panel pattern (used in Devices, Message)
Designed for management pages where users need to scan quickly and view details on a key item immediately. In v1.0's table-only mode, viewing a single device's details required navigating away or opening a modal, breaking the user's list context. Tables also tended to pack in too many columns, making horizontal scanning difficult.
The side panel allows the core workflow of "scan list → view details" to happen within the same view, reducing context-switching cost. This pattern has been widely validated in the most experience-mature products in the category.
New vs. old Devices page
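As a rough illustration of what reusing the same structure means at the code level, below is a minimal sketch of a Table + Side panel shell, assuming a React + TypeScript frontend; the component and prop names are hypothetical and not taken from the VORTEX codebase.

```tsx
// Minimal sketch of a reusable Table + Side panel shell (hypothetical names,
// assuming a React + TypeScript stack). Devices and Message would each supply
// their own columns, rows, and detail renderer but share the same structure,
// so "scan list → view details" behaves identically across pages.
import { ReactNode, useState } from "react";

interface TableWithSidePanelProps<Row> {
  rows: Row[];
  rowKey: (row: Row) => string;
  renderRow: (row: Row) => ReactNode;      // one table row; columns decided per page
  renderDetail: (row: Row) => ReactNode;   // side panel content for the selected row
}

export function TableWithSidePanel<Row>({ rows, rowKey, renderRow, renderDetail }: TableWithSidePanelProps<Row>) {
  const [selected, setSelected] = useState<Row | null>(null);

  return (
    <div className="layout-table-sidepanel">
      <table>
        <tbody>
          {rows.map((row) => (
            <tr key={rowKey(row)} onClick={() => setSelected(row)}>
              {renderRow(row)}
            </tr>
          ))}
        </tbody>
      </table>
      {/* Detail view opens in place, keeping the list context visible. */}
      {selected && <aside className="side-panel">{renderDetail(selected)}</aside>}
    </div>
  );
}
```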
3
Usage guidelines: ensuring the system rules can be inherited and enforced
After design work was completed and engineering entered the sprint phase, I used AI to generate initial drafts of the usage guidelines, then revised and supplemented each one based on the component's actual usage context and boundary conditions. Every component's Do's & Don'ts went through cross-review with both the design team and engineering, which accelerated the completion of the full Usage Principles — covering all 28 components' Do's & Don'ts and interaction logic.
When new designers join or features expand beyond the current scope, these rules need to function correctly without the original designer being present.

Proactively addressing inconsistent copy tone
The design system solved the visual layer's inconsistencies, but while compiling component usage guidelines, I discovered that VORTEX's copy problems were equally severe — UI copy produced by different designers varied wildly in style, some overly engineering-oriented, others too verbose, all because there was no framework to follow. Although this fell outside my original scope (a gray area), I determined that if we didn't address it now, the product's experience would still carry debt and rough edges.
Interviewed five stakeholders to establish product voice consensus
I interviewed five stakeholders across CS, Marketing, and PM, assigning different decision-making weights based on each role's area of responsibility — for example, brand tone was led by Marketing and Design, technical rigor referenced PM's judgment, and CS's frontline feedback focused on emotional warmth and interactivity.
After tallying scores by weight, we arrived at a voice baseline defined across four dimensions:
The "Seriousness" dimension score was a surprise. I had assumed the tone facing end-users should lean approachable, but the product team's consensus was clear: professional credibility cannot be sacrificed.
Contextual adjustments: three tone modes
Beyond the baseline, after further analysis I identified several scenarios specific to the surveillance industry that require dynamic adjustments on top of the baseline. These rules were validated one by one during interviews and formally incorporated into the framework after reaching stakeholder consensus.
Framework output and AI integration
After interview findings and contextual rules were confirmed, I consolidated the following into a complete framework:
Voice & Tone dimension definitions and contextual adjustment rules
Product terminology glossary (covering industry jargon and company-specific terms)
UI component copy guidelines (integrated with the design system components, with reference examples)
Golden examples (proofread with the product team's writer, covering five common scenarios)
Punctuation and sentence structure rules

The framework was packaged as Skills.md and is currently being piloted by the design team. The goal is for AI-generated first drafts to score 80+ on consistency and accuracy without the original author's involvement.
After the framework was published, the design team began testing it in active projects. The System Settings and Device Settings pages benefited most directly — these pages had long suffered from verbose, engineering-oriented descriptions where the copy conveyed status but lacked guidance.
The Skills.md design ensures the AI identifies which persona a feature primarily serves before generating copy, then outputs a draft calibrated to the corresponding tone dimensions. The team's feedback after testing: "The existing copy really does have a lot of fluff. After running it through the framework, the quality improved noticeably."
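To show the shape of those rules without reproducing the framework itself, the sketch below illustrates how a persona-aware tone profile could be encoded; the dimensions, contexts, and scores are placeholders rather than the framework's actual values.

```ts
// Simplified sketch of the persona/context-to-tone mapping behind the copy framework
// (dimensions, contexts, and scores are placeholders, not the actual framework values).
// An AI assistant first resolves the context a feature serves, then writes against
// that tone profile instead of a single global voice.

type ToneDimension = "seriousness" | "technicality" | "warmth" | "verbosity";

type ToneProfile = Record<ToneDimension, number>; // 1 (low) to 5 (high)

const baseline: ToneProfile = { seriousness: 4, technicality: 3, warmth: 3, verbosity: 2 };

// Contextual adjustments layered on top of the baseline for specific scenarios.
const contextOverrides: Record<string, Partial<ToneProfile>> = {
  "critical-alert": { seriousness: 5, verbosity: 1 },  // e.g. camera offline, storage failure
  "onboarding":     { warmth: 4, technicality: 2 },
  "settings-copy":  { technicality: 4 },
};

export function resolveTone(context?: keyof typeof contextOverrides): ToneProfile {
  return { ...baseline, ...(context ? contextOverrides[context] : {}) };
}
```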
ISC West expo feedback
In March 2026, VORTEX 2.0 had its first external showing at ISC West in a private demo room.
According to frontline reports, multiple clients proactively asked about test access and launch timelines after the demo.
The signal was clear: the interface changes had already begun to influence sales confidence and clients' purchase intent — precisely the root issue surfaced in the 2025 SI interview.

Current limitations and next steps
This feedback mechanism was built under resource constraints, with questions delivered by frontline sales, limiting the depth of insight. We plan to integrate a structured survey with test access links so that future feedback can cover the full product experience, not just first impressions.
Defining long-term success metrics
SI conversion rate improvement — whether post-demo purchase rates increase
Existing client retention — how smoothly clients adapt after migration to the redesign
Design system fully AI-ready, with specifications that can be accurately read and executed by AI
Significant reduction in handoff gaps, with improved implementation efficiency and output quality
Implementation of AI-Driven Design Ops
Leveraged AI to cross-reference prototypes with functional specifications, establishing a foundation for automated End-to-End test case generation.
Shifted from "static images + redlines" to "interactive .html prototypes." These production-ready assets support version control and seamless sharing.
Implementing AI to audit Design Tokens against frontend code, replacing manual UI inspection with automated visual regression and logic audits (a simplified sketch of this audit follows this list).
Utilizing RAG to synthesize design systems and product context, enabling AI-assisted generation of technical specifications.
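To ground the token audit item above, the sketch below shows its simplest deterministic slice: scanning compiled CSS for hard-coded color values that bypass the token layer. The file path and token values are assumptions for illustration, and the AI layer described above would sit on top of findings like these rather than replace them.

```ts
// Minimal sketch of a Design Token audit: flag hard-coded hex colors in compiled CSS
// that should be referencing token variables instead. The file path and token values
// are illustrative; a real audit would also cover spacing, typography, and logic checks.
import { readFileSync } from "node:fs";

const tokenColors = new Set(["#1a73e8", "#ffffff", "#d93025"]); // values owned by the token layer

export function auditCss(cssPath: string): string[] {
  const css = readFileSync(cssPath, "utf8");
  const findings: string[] = [];
  // Match raw 3- or 6-digit hex values that bypass var(--token-name) references.
  for (const match of css.matchAll(/#(?:[0-9a-fA-F]{6}|[0-9a-fA-F]{3})\b/g)) {
    const value = match[0].toLowerCase();
    if (tokenColors.has(value)) {
      findings.push(`Hard-coded ${value}: replace with the matching token variable.`);
    } else {
      findings.push(`Unknown color ${value}: not defined in the token layer.`);
    }
  }
  return findings;
}

// Example usage (path is hypothetical):
// console.log(auditCss("dist/vortex.css"));
```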
This project started with a straightforward goal — bring VORTEX up to the baseline a normal SaaS product should meet. But as the work progressed, I came to realize two things that extended beyond the design system itself:
1
AI as a way to redistribute design time
While exploring how to make the design system support AI workflows, I started seeing a bigger problem: designers have long spent disproportionate time on spec writing and handoff, with little time left for the work that actually shapes product direction — user research, success metric definition, and interview process design.
-
If AI can take on the execution side of handoff, designers' time should shift toward higher-value activities for the company.
-
This realization led me to reposition Skills.md from a "personal efficiency tool" to "team workflow infrastructure." It's not about making one designer faster — it's about freeing the entire team to invest time in user research, testing, and metric development. The team is currently piloting the framework in active projects, and initial feedback has been positive. But to truly validate whether designers' time has shifted toward higher-value work, we'll need a longer observation period and concrete efficiency data.
2
Without data anchors, design impact stays unproven
As I mentioned in the challenges, this redesign was carried out without user behavior data as a decision-making foundation. Looking back, even without formal analytics resources, I could have tried to piece together a rough behavior funnel from support tickets, sales feedback, or even manual tracking — identified the steepest drop-off point, then decided where to invest design resources. It wouldn't have been perfect data, but it would have been better than no data. And with that foundation, the conversation with management about investing in design would have been fundamentally different.



