Redesigning a customer care platform

UX/UI Design

Project Overview

Twilio serves as a pivotal digital platform for customer care agents across our brands: Neiman Marcus, Bergdorf Goodman, and Horchow. As part of an upcoming upgrade in 2024 with a brand new design system, an opportunity arose to enhance the interface and experience for agents. My role was to lead this effort, aiming to decrease call handling time and enhance workflow efficiency.

A snapshot of the current state

My Role

In collaboration with a colleague, I took on a multifaceted role encompassing discovery, problem definition, design, and delivery of the solution.

Tools Used

- FigJam

- Figma

- Wrike

- Jira

- Teams


Every additional second in customer care communication carries a cost. Since migrating to Twilio, average handle time has increased by approximately 30-40 seconds. Our primary goal was to reduce this metric: saving even one second per call could yield annual savings of $33,000.
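To make the stakes concrete, the $33,000-per-second figure can be scaled up to the observed handle-time increase. A minimal sketch, where only the $33,000 figure and the 30-40 second range come from the case study:

```python
# Only the $33,000-per-second-per-year figure and the 30-40 s
# handle-time increase come from the case study; everything else
# is simple scaling.

SAVINGS_PER_SECOND_PER_YEAR = 33_000  # from the case study

def annual_cost_of_extra_seconds(extra_seconds: float) -> float:
    """Projected annual cost of a sustained increase in average handle time."""
    return extra_seconds * SAVINGS_PER_SECOND_PER_YEAR

# The 30-40 second increase observed since the Twilio migration:
low = annual_cost_of_extra_seconds(30)   # $990,000 per year
high = annual_cost_of_extra_seconds(40)  # $1,320,000 per year
```

At this scale, even marginal workflow improvements justify a redesign effort.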

During the initial stages of this redesign, we pinpointed several key strategies to reduce handle time:

1. Streamline workflows for agents: Simplify processes and eliminate unnecessary steps to make agent tasks more efficient

2. Improve accessibility of tools for agents: Ensure that necessary tools are easily accessible, reducing the time agents spend searching for resources

3. Reduce resolution time: Equip agents with the tools and information needed to resolve issues promptly, minimizing the time spent on each case

4. Boost the number of resolutions per agent: Enhance agent effectiveness in resolving customer queries to increase productivity

5. Create a visually calming interface: Design an interface that reduces cognitive load and stress for agents, enabling them to focus on their tasks more effectively

Communicating the value of the UX design process and highlighting the need for change


Our research commenced with observing customer care agents in action across various communication channels. We conducted live observations of calls, texts, emails, and SMS interactions. Additionally, user interviews and an analysis of previous platform data were conducted. We also explored the tool firsthand to grasp its functionality.

Part of our research involved mapping the existing process and communicating our key recommendations after observing live agents.

The Problem

When we spoke to the agents, we uncovered that the current layout nests actions behind many clicks, slowing them down. The layout is also quite rigid: agents cannot customize it to meet their needs, and often have to layer other applications on top of or alongside it. Further, Twilio fails to differentiate tasks visually, so the dashboard looks monochromatic and users spend longer locating what they're searching for.

Ideation and Conceptualization

After gaining insights into agent usage patterns, we conducted a brainstorming session where we generated low-fidelity wireframes with many different solutions. Our brainstorming sessions led to two distinct design approaches: one refining the existing interface for efficiency and another introducing advanced customization options.

Testing and Iterating

We developed user testing scripts and task-based prototypes to assess the feasibility of our designs. Following an A/B testing format, we evaluated both designs with agents, seeking feedback on usability and intuitiveness. The tasks selected for testing were based on the most common activities, such as locating order numbers, accessing prewritten responses, and navigating between multiple chats.

User Testing Report

A comprehensive report synthesized findings from user testing sessions, providing stakeholders with actionable insights and emphasizing the value of UX processes.

The first user testing report, outlining methodology, key verbatims, and next steps

The key findings from A/B testing two different designs

Results and Impact

Given that every second costs, we counted the number of clicks required to perform tasks in the current interface and compared it to the number required in the new design. From this comparison, we projected significant time savings and potential cost reductions.
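The click-to-cost projection above can be sketched as a simple calculation. Only the $33,000-per-second figure comes from the case study; the seconds-per-click estimate and the example click counts are hypothetical:

```python
# Hypothetical sketch of the click-count comparison. The per-click
# time and the example click counts are illustrative assumptions;
# only the $33,000-per-second figure comes from the case study.

SECONDS_PER_CLICK = 1.5               # assumed average time per click
SAVINGS_PER_SECOND_PER_YEAR = 33_000  # from the case study

def projected_annual_savings(clicks_before: int, clicks_after: int) -> float:
    """Translate a reduction in clicks per call into projected annual savings."""
    seconds_saved = (clicks_before - clicks_after) * SECONDS_PER_CLICK
    return seconds_saved * SAVINGS_PER_SECOND_PER_YEAR

# e.g. a common task that drops from 6 clicks to 2:
# 4 clicks * 1.5 s = 6 s saved per call -> $198,000 per year
savings = projected_annual_savings(6, 2)
```

Counting clicks rather than timing agents keeps the projection independent of the learning curve on a new interface, which is the rationale given below.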

You may wonder why we didn't time agents to get a better gauge of the time saved. We recognized there is always an adjustment period when moving to a new interface, so timing agents wouldn't accurately reflect the ease of use and time reduction offered by the new designs.


Using the results from the first round of testing, I iterated on the design preferred by the majority of agents, refining it based on their feedback. From there, I created one final design that combined the top-performing elements from both designs and conducted a second round of user testing. This round asked agents to complete a series of tasks using a high-fidelity, fully functioning prototype of the new design. Observing agents and their think-aloud feedback throughout, we counted the number of clicks required to complete each task.

It was undeniable how much faster actions were to complete in the new design, and beyond the speed, agents strongly appreciated that the design surfaced all the necessary information without them having to click around to find it.

Lessons Learned

- Testing designs is essential for aligning with user needs

- Designing for diverse user profiles necessitates flexibility

- Small design adjustments can yield substantial improvements

- Separate ideation sessions foster innovative solutions

- Creating a supportive testing environment encourages candid feedback