
Multi-Method UX Research Toolkit

  • jonwalmsley
  • May 15
  • 7 min read

Updated: Jun 19

Over the years, I have regularly utilised a variety of research methodologies across banking, e-commerce, charities, the public sector and marketing to uncover user needs, validate design decisions, spark ideas and benchmark usability. Not only have I spent time integrating research into our workflows, I have also mentored and trained colleagues and junior team members in the processes and value of UX research in order to upskill teams and businesses.


Research Objectives

The objectives of user research vary, depending on the project, client, users, stakeholders, budgets, time or teams involved. Choosing the appropriate techniques and when to use them is vital.


Be that for:

  • Surfacing pain points in existing flows

  • Validating and reshaping information architecture

  • Generating rapid ideation with teams and stakeholders

  • Quantifying usability improvements before and after launch


Research Plan and Phases

Different phases of a project call for different approaches to research.

1. Exploration and Ideation

Here is where we start off. Below are some research methods used during the early discovery phases of projects alongside brief descriptions of where I have used them.


Crazy-8s Ideation

Photo of multiple Crazy 8 sheets pinned to a wall. The variety in styles shows people of all artistic abilities can still contribute to these concepts.

This particular use of Crazy 8s Ideation was during a workshop with the Guide Dogs charity. They were looking for new ideas for fundraising, taking into consideration the whole infrastructure around that charity (the dogs themselves, the volunteers who train and walk the dogs, the systems in place for rehoming old dogs...).


  • Everyone in the session was given paper and pens and, over the course of 8 minutes, had to generate 8 different ideas, no matter how wild and left-field.

  • Each individual then explained their proposals, which were summarised on a whiteboard, and all participants dot-voted to identify the most popular ideas.



Competitive Review


During a project for Royal Mail I was looking to identify website and app trends among postal service providers the world over, to establish what users expect to see from such an offering (as well as gathering ideas to inspire our approach). A competitive review was an ideal form of research for such a task.


Contextual Inquiry

You can have all the workshops, focus groups and interviews you need, but nothing beats sitting with people as they work to understand how they actually deal with their situations.


In order to streamline the process by which the employees of Omnicom Media Group plan and buy their media campaigns, I spent a week 'on the ground' with them, sitting with individual planners/buyers to see how they interact with different systems, what problems they come across, how they transfer data from one system to another, what admin they're required to complete...


This helped in understanding many problems and opportunities that would otherwise be hard to identify, such as:

  • Workarounds

  • Points of friction

  • Context switching

  • Infrastructure blockers


A fantastic research opportunity to uncover situations that users experience - but may not even be aware of - that we can help to solve.


Traditional Research

And of course there are multiple further forms of research I have called upon in these early phases of projects:

  • Interviews & Stakeholder Workshops

  • Heuristic & Cognitive Reviews

  • Surveys

  • Data Reviews


2. Structure & Validation


Identifying problems and opportunities is one thing, but eventually you have to start getting ideas down on paper (and on screen) to see if they hold water.


Card Sorting

While traditionally used to help identify logical information architecture and site flows, Card Sorting can be used in a variety of other ways too.

Card Sorting example of a selection of personal banking panels, laid out by a participant in order of the cards most relevant to them.

In this situation I used Card Sorting when working on redesigning the logged-in state of an HSBC account page. What do users actually want to see on this dashboard view? A large selection of possible data views was identified and mocked up, and in workshop sessions with a variety of users I got them all to lay out the views most relevant to them (with post-its to add any additional ones they would also like - no UX research session is complete without post-it notes).


This helped us to identify how much personalisation was required for individual users, what was consistently prioritised and which views, though previously thought useful, were not actually of benefit.
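
For anyone wanting to analyse this kind of ranked sort quantitatively rather than by eye, one simple approach (a sketch, not how the physical HSBC sessions were scored) is to average each card's position across participants - the lower the average rank, the more consistently prioritised the panel. The panel names and orderings below are invented for illustration:

```python
from collections import defaultdict

# Invented ranked card-sort results: each participant lays out the
# dashboard panels in order of relevance (position 0 = most relevant).
participant_sorts = [
    ["Balance", "Recent transactions", "Quick transfer", "Offers"],
    ["Recent transactions", "Balance", "Quick transfer", "Offers"],
    ["Balance", "Quick transfer", "Recent transactions", "Offers"],
]

# Collect each card's rank positions across participants.
ranks = defaultdict(list)
for sort in participant_sorts:
    for position, card in enumerate(sort):
        ranks[card].append(position)

# Report cards from most to least consistently prioritised.
for card, positions in sorted(ranks.items(), key=lambda kv: sum(kv[1]) / len(kv[1])):
    print(f"{card}: average rank {sum(positions) / len(positions):.2f}")
```

Cards that always land near the bottom (like 'Offers' in this invented example) are exactly the 'thought useful but not actually of benefit' candidates.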


Laboratory 1:1 Usability Testing

Whether using actual laboratory settings (participant and facilitator in one room, with screen recording and stakeholders viewing live user behaviour from behind a two-way mirror), sitting together in a quiet part of an office, or running sessions remotely over Teams/Zoom, I have used the 1:1 lab testing approach very successfully for gauging user behaviours, reactions and opinions on products either in development or long established.

Image from a presentation deck from a Bosch usability testing session

This particular image is from a testing session I ran with potential customers of the live Bosch website, viewing on both desktop and mobile (mobile in this example) as part of a wider UX, accessibility and CRO audit. It showcases how users can stumble in areas not generally expected to cause issues - something you only fully appreciate when you see it happening in person.


Guerrilla Testing

This technique can be run at various points of a project's development, from initial discovery (if a system is already in place and in need of improvement) to - as is the case here - validating decisions made during the design and development process.

Photograph of a pop-up guerrilla testing booth for HSBC design validation

This particular example was when I was testing new processes for Move Money within the HSBC banking website.


This involved setting up a regular (fortnightly) desk in the client’s office to utilise passing foot-traffic to test in-process designs and prototypes. To encourage participation, I offered refreshments and leveraged natural curiosity.


3. Quantification and Benchmarking

Once something is built, that isn't the end of the process (well, unless you're deep into the agency stack and have to just 'over-the-wall' one client project before moving onto the next!). No, it's a good opportunity to learn what went well, what improvements could be made for phase 2 and to interact with the userbase again to help ensure the tools are as well received as possible.


System Usability Scale

Mock-up of a typical System Usability Scale (SUS)

For an in-house blockchain employee-rewards tool developed within the creative agency I worked for, I wanted to understand how intuitive and usable the tool would be, so I ran a traditional System Usability Scale (SUS) survey - both to check that we had made sensible design decisions and to gather quantitative metrics to play back to internal management in support of those decisions.
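
For reference, SUS scoring is a fixed piece of arithmetic: ten items are each answered on a 1-5 scale, odd-numbered (positively worded) items contribute (response - 1), even-numbered (negatively worded) items contribute (5 - response), and the sum is multiplied by 2.5 to give a score out of 100. A minimal sketch, with made-up sample responses:

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 responses.

    Odd-numbered (positively worded) items contribute (response - 1);
    even-numbered (negatively worded) items contribute (5 - response).
    The summed contributions are scaled by 2.5 onto a 0-100 range.
    """
    assert len(responses) == 10, "SUS uses exactly ten items"
    total = sum((r - 1) if i % 2 == 1 else (5 - r)
                for i, r in enumerate(responses, start=1))
    return total * 2.5

# One participant's (made-up) responses to items 1-10:
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # 85.0
```

A commonly cited benchmark from the SUS literature is that scores above roughly 68 sit above average, which gives stakeholders an immediate frame of reference.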


Analytics

Quantifying something as human as 'user experience' is difficult, and analytics only tells one side of the story (what actually happened, not why - and especially not 'why didn't this happen?'), but in combination with qualitative insights it can help tell the wider story about how tools and products are received.

Whether it's traditional metrics such as Google Analytics for identifying user-flow behaviour and drop-out points, or tools such as Microsoft Clarity or Hotjar for screen recordings and heatmaps, they all play their part. Even internal tools developed for only ~50 users can impart valuable details about how they are being used.
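
As a simple illustration of the funnel analysis these tools support (the step names and counts below are invented), step-to-step drop-off falls straight out of exported page-view or event counts:

```python
# Invented funnel counts, as exported from an analytics tool.
funnel = [
    ("Landing page", 10_000),
    ("Product page", 6_200),
    ("Checkout", 1_900),
    ("Confirmation", 1_400),
]

# Report where users drop out between consecutive steps.
for (prev_step, prev_n), (step, n) in zip(funnel, funnel[1:]):
    print(f"{prev_step} -> {step}: {1 - n / prev_n:.0%} drop-off")

print(f"Overall conversion: {funnel[-1][1] / funnel[0][1]:.1%}")
```

The numbers tell you where to look (the Product page -> Checkout step, in this invented example); the 'why' still comes from qualitative methods like those above.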


Drop-in Sessions

Following the deployment of most internal tools developed at Omnicom Media Group, I wanted to find out how they were being received. But people are busy, and it's often very hard to get time with them. So I added a voluntary drop-in session every fortnight on a Friday for people to dial into if they had the chance and the desire, where they could raise concerns, ask questions and feed back directly to the people who helped build the tools.

When people dialled in, we would invariably get useful suggestions for improvements to the tools, while putting minimal pressure on the userbase.


Training & Mentoring

Having led research initiatives for years, I’ve also partnered with junior colleagues and eager teammates to bring them up to speed on user research. Whether guiding them through interview moderation, desk-based analysis, or critiquing their research plans, I ensure they gain hands-on experience and confidence.


This has involved:

  • Co-moderating 1:1 usability tests and interviews, coaching on facilitation and note-taking.

  • Mentoring teammates in desk-based research - competitive reviews, heuristic evaluations and accessibility audits.

  • Serving as the go-to reviewer for research plans, scripts and findings decks before any project kickoff or debrief.

  • Running targeted workshops on core methods to upskill the team and embed research into every sprint.


Challenges & Learnings


The Ever-Present Hurdle of Participant Recruitment

One of the most consistent challenges across nearly every user research initiative, regardless of methodology, has been participant recruitment and resourcing. Whether it's coordinating schedules for stakeholders and team members to attend generative workshops, securing follow-up feedback from busy users post-launch, or finding suitable participants for 1:1 usability testing sessions, gaining consistent access to the right people often proves to be the biggest logistical hurdle.


This reality has reinforced a crucial learning for me:

Any user feedback is better than no user feedback. 

While ideally we aim for robust sample sizes and diverse participant pools, practical constraints often mean making the most of what's available. For instance, even when I've only been able to recruit 2 or 3 participants for a usability test, the insights gained have invariably been invaluable. Observing just a few users navigate a product often uncovers critical pain points, unexpected behaviours and major opportunities for improvement that would otherwise remain hidden. It's about maximising the learning from every interaction, even when resources are constrained, ensuring that design decisions are still grounded in real user needs rather than assumptions.
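
There is a well-known model behind this intuition: Nielsen and Landauer's problem-discovery formula estimates the proportion of usability problems found by n test users as 1 - (1 - L)^n, where L is the probability that a single user surfaces a given problem (commonly cited at around 0.31, though it varies by product and task). A quick sketch of what that predicts:

```python
# Nielsen & Landauer's problem-discovery model: proportion of usability
# problems found by n test users, assuming each user independently
# surfaces a given problem with probability L (~0.31 is the commonly
# cited average; real values vary by product and task).
L = 0.31

for n in (1, 2, 3, 5, 10):
    print(f"{n:>2} users: ~{1 - (1 - L) ** n:.0%} of problems found")
```

On those assumptions, even 3 participants surface roughly two-thirds of the problems, which matches my experience that small sessions are still invaluable.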


In Summary

In every engagement - whether remapping a complex banking dashboard, streamlining a charity’s fundraising flow or overhauling a public-sector portal - I’ve matched the right research tool to the challenge at hand. By blending immersive observation with rapid ideation, rigorous validation and hard-metric benchmarking, I ensure we’re not just building features, but solving real user problems in the most efficient, evidence-backed way.





Jon Walmsley is a UX designer with over fifteen years of experience, driven by user research and a passion for creating accessible, user-friendly designs through collaboration.

  • LinkedIn
