Mixed methods research provides a richer picture of the topic under study by combining quantitative and qualitative methods, leveraging each method's strengths while mitigating its weaknesses.
🕵 UX Research Lead
🕒 4 months
A global leader in assurance, tax, transaction and consulting needed to reduce reliance on costly, complex channels for Talent/HR-related support by ensuring satisfactory resolution of transactional support queries through technology-based solutions at the point of need.
Over 3 weeks, we interviewed 70 employees across 6 countries and 3 cohorts. To ensure adequate representation, we chose participants based on their role within the service design ecosystem, tenure, rank, and other internally significant demographics. This qualitative method was chosen to increase understanding of the current-state mental model, pain points, and expectations. By sitting with participants as they shared their screens, we observed and asked questions as they used tools and systems in real time.
A Qualtrics survey (CES, NPS, CSAT, multiple-choice, ranked choice, open text) was conducted with the general EY employee population (1700 responses). This quantitative method was recommended to increase confidence in qualitative findings from the interviews on a larger scale and to confirm assumptions about the high-level mental model. To help speed analysis, we used AI to assist with first draft coding of open-ended responses.
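As a rough illustration of what first-draft coding of open-text responses can look like, here is a minimal keyword-matching sketch. The theme names, keywords, and sample responses are hypothetical, not the project's actual codebook, and the last example shows exactly the kind of sarcasm that slips past naive matching and needs human review.

```python
# Hypothetical first-draft codebook; theme names and keywords are illustrative only.
THEME_KEYWORDS = {
    "findability": ["can't find", "search", "where do i"],
    "speed": ["slow", "waiting", "takes too long"],
    "tone": ["rude", "helpful", "friendly"],
}

def draft_code(response: str) -> list[str]:
    """Assign first-draft theme codes by keyword match;
    flag the response for human review if nothing matches."""
    text = response.lower()
    themes = [theme for theme, kws in THEME_KEYWORDS.items()
              if any(kw in text for kw in kws)]
    return themes or ["needs_human_review"]

responses = [
    "The portal is so slow, I'm always waiting on hold",
    "I can't find the right form anywhere",
    "Oh sure, the chatbot was *super* helpful...",  # sarcasm is mis-coded as positive "tone"
]
coded = {r: draft_code(r) for r in responses}
```

In practice the AI draft replaces the keyword lists, but the human-review fallback stays: every machine-assigned code is treated as provisional until a researcher confirms it.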
Findings were presented to stakeholders at regular touchpoints via Teams to build trust and ensure alignment. A supplementary Power BI dashboard was provided for quick reference of survey data and future triangulation efforts.
As Research Lead, I worked directly with the Analytics Lead to combine survey data and interview insights with employee telemetry data to understand how attitudes and sentiment matched actual user behavior.
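A simplified sketch of that attitude-versus-behavior triangulation, assuming a hypothetical schema: survey sentiment and telemetry joined on an anonymized employee ID, then flagged where stated satisfaction and actual support-channel usage disagree. All IDs, fields, and thresholds below are invented for illustration.

```python
# Hypothetical data: CSAT from the survey, support-channel usage from telemetry,
# keyed by an anonymized employee ID. Not the project's actual schema.
survey = {"a1": {"csat": 2}, "a2": {"csat": 5}, "a3": {"csat": 3}}
telemetry = {"a1": {"tickets_90d": 7}, "a2": {"tickets_90d": 1}, "a3": {"tickets_90d": 4}}

def join_and_flag(survey, telemetry, csat_max=2, tickets_min=5):
    """Inner-join the two sources on employee ID and flag employees whose
    low stated satisfaction coincides with heavy use of costly support channels."""
    flagged = []
    for emp_id in survey.keys() & telemetry.keys():  # IDs present in both sources
        row = {**survey[emp_id], **telemetry[emp_id]}
        if row["csat"] <= csat_max and row["tickets_90d"] >= tickets_min:
            flagged.append(emp_id)
    return sorted(flagged)

flagged = join_and_flag(survey, telemetry)
```

The flagged mismatches are where triangulation earns its keep: they point to employees whose behavior contradicts their survey sentiment, which is exactly where a follow-up interview is most informative.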
We developed a taxonomy to support a more accurate and meaningful analysis as well as inform the information architecture for the concept design.
This information was also integral during our team's in-person cross-functional workshop where we connected the dots between journey mapping, research and service design best practices to co-create our first draft concept for testing.
We conducted two rounds of testing on progressive iterations of a medium-fidelity prototype to gauge initial reactions and discover any show-stopping issues for further iteration before presentation to stakeholders.
Based on survey responses, we selected a diverse group of individuals for each round who varied in role, sentiment, and goals.
I worked closely with the designers to ensure the discussion guide would yield actionable insights. To maintain project momentum, we provided 'first look' feedback when each round's interviews were 80% complete and flagged any strong deviations immediately.
This project started off slow, with functions that were accustomed to working in silos but were now being asked to knit together seamlessly at lightning speed. As a collaboration champion, I worked to encourage cross-functional ways of working …
The use of AI to assist in first-draft coding of themes from open-ended survey questions was an interesting experiment. Ideally, we could have spent more time developing the initial keywords for coding as the AI didn't have much knowledge of internal vernacular and was very bad at detecting sarcasm!
With refinement and more effort upfront, I'm sure this kind of solution will revolutionize the speed of feedback, but for now, we must remain vigilant about misinterpretation and plan for human review of any initial AI-assisted coding.
I wish I could share more about our findings and the final concept, but ultimately, what I can tell you is that our empathy-fueled service design concept was so well-received that our leadership was asked to roadshow our final presentation to top leadership as the basis for a new service model of employee support.