6 Usability Testing Examples & Case Studies

Interested in analyzing real-world examples of successful usability tests?

In this article, we’ll examine six examples of usability testing that delivered substantial results.

Conducting usability testing takes only seven simple steps and doesn’t demand a massive budget. Yet it can achieve remarkable results for companies across all industries.

If you’re someone who cannot be convinced by theory alone, this is the guide for you. These are tried-and-tested case studies from well-known companies that showcase the true power of a successful usability test.

Here are the usability testing examples and case studies we’ll be covering in this article:

  • Ryanair
  • McDonald’s
  • SoundCloud
  • AutoTrader.com
  • Udemy
  • Halo: Combat Evolved

Example #1: Ryanair

Ryanair is one of the world’s largest airline groups, carrying 152 million passengers each year. In 2014, the company launched Ryanair Labs, a digital innovation hub seeking to “reinvent online traveling”. To make this dream a reality, they went on a recruiting spree that resulted in a team of 200+ members. This team included user experience specialists, data analysts, software developers, and digital marketers – all working towards a common goal of improving the user experience of the Ryanair website.

What made matters more complicated, however, was that Ryanair’s website and app together received 1 billion visits per year. Working with a website this large, combined with the airline industry’s paper-thin profit margins of around 5%, Ryanair had no room for error. To make matters even more stressful, one of the new team’s first missions was to launch an entirely new website with a superior user experience.

To give you a visual idea of what they were up against, take a look at their old website design:

Ryanair’s old website design

Not great, not terrible. But the website undoubtedly needed a redesign for the 21st century.

This is what the Ryanair team set out to accomplish:

  • Reducing the number of steps needed to book a flight on the website;
  • Allowing customers to store their travel documents and payment cards on the website;
  • Delivering a better mobile device user experience for both the website and app.

With these goals in mind, they chose remote, unmoderated usability testing for their user tests. This by itself was a change, as Ryanair had relied on in-lab, face-to-face testing until that point.

Collaborating with the UX agency UserZoom, however, opened up new opportunities for Ryanair. With UserZoom’s massive roster of user testers, Ryanair could access large amounts of qualitative and quantitative usability data, data they badly needed during the design process of the new website.

By going with remote unmoderated usability testing, the Ryanair team managed to:

  • Reduce the time spent on usability testing;
  • Conduct simultaneous usability tests with hundreds of users and without geographical barriers;
  • Increase the overall reach and scale of the tests;
  • Carry out tests across many devices, operating systems, and multiple focus groups.

With continuous user testing, the new website was taken through alpha and beta testing in 2015. The end result of all this work was the vastly improved look, functionality, and user experience of the new website:

Ryanair's new website design

Even before launch, Ryanair knew that the new website was superior. Usability tests had shown that to be the case and they had no need to rely on “educated guesses”. This usability testing example demonstrates that a well-executed testing plan can give remarkable results.

Source: Ryanair case study by UserZoom

Example #2: McDonald’s

McDonald’s is one of the world’s largest fast-food restaurant chains, with a staggering 62 million daily customers. Yet McDonald’s was late to embrace the mobile revolution: its smartphone app launched rather recently, in August 2015. In comparison, Starbucks’ smartphone app was already a booming success, accounting for 20% of its overall revenue in 2015.

Considering the competition, McDonald’s had some catching up to do. Before the launch of their app in the UK, they decided to hire UK-based SimpleUsability to identify any usability problems before release. The test plan involved conducting 20 usability tests, with task scenarios covering the entire customer journey end to end. In addition, the test plan included 225 end-user interviews.

Not exactly a large-scale usability study considering the massive size of McDonald’s, but it turned out to be valuable nonetheless. A number of usability issues were detected during the study:

  • Poor visibility and interactivity of the call-to-action buttons;
  • Communication problems between restaurants and the smartphone app;
  • Lack of order customization and favoriting impaired the overall user experience.

Here’s what the McDonald’s mobile app looks like today:

McDonald’s mobile app today

This case study demonstrates that investing even a tiny percentage of a company’s resources into usability testing can result in meaningful insights.

Source: McDonald’s case study by SimpleUsability

Example #3: SoundCloud

SoundCloud is the world’s largest music and audio distribution platform, with over 175 million unique monthly listeners. In 2019, SoundCloud hired test IO, a Berlin-based usability testing agency, to conduct continuous usability testing for the SoundCloud mobile app. With SoundCloud’s rigorous development schedule, the company needed regular human user testers to make sure that every new update worked across all devices and OS versions.

The key research objectives for SoundCloud’s regular usability studies were to:

  • Provide a user-friendly listening experience for mobile app users;
  • Identify and fix software bugs before wide release;
  • Improve the mobile app development cycle.

In the very first usability tests, more than 150 usability issues (including 11 critical ones) were discovered. These issues likely wouldn’t have surfaced through internal bug testing alone, because the user testers tried the app on a plethora of devices and from many geographical locations (144 devices across 22 countries). Without remote usability testing, a testing effort of this scale would have been very difficult and expensive to achieve.

Today, SoundCloud’s mobile app looks like this:

SoundCloud usability testing example

This case study demonstrates the power of regular usability testing in products with frequent updates. 

Source: SoundCloud case study (.pdf) by test IO

Example #4: AutoTrader.com

AutoTrader.com is one of the world’s largest online marketplaces for buying and selling used cars, with over 28 million monthly visitors. The mission of AutoTrader’s website is to empower car shoppers during the research process by giving them all the tools necessary to make informed decisions about vehicle purchases.

Sounds fantastic.

However, with competitors such as CarGurus gaining ever more market share in the online car shopping industry, AutoTrader had to reinvent itself to stay competitive.

In e-commerce, competitors with a superior website can gain massive followings in an instant. Fifty years ago this was not the case: well-established car marketplaces had massive car parks all over the country, and a newcomer had few ways to compete.

Nowadays, however, it’s all about user experience. Digital shoppers will flock to whichever site offers a better user experience. Websites unwilling or unable to improve their user experience over time will get left in the dust. No matter how big or small they are.

Going back to AutoTrader, the majority of its website traffic comes from organic Google search, meaning that in addition to website usability, search engine optimization (SEO) is a major priority for the company. According to John Mueller from Google, changing the layout of a website can affect rankings, which is why AutoTrader had to be careful about making any large-scale changes to its website.

AutoTrader did not have a large team of user researchers or a massive budget dedicated to usability testing. But they did have Bradley Miller, Senior User Experience Researcher at the company. To test the usability of AutoTrader, Miller decided to partner with UserTesting.com to conduct live user interviews with AutoTrader users.

Through these live user interviews, Miller was able to:

  • Find and connect with target personas;
  • Communicate with car buyers from across the country;
  • Reduce the costs of conducting usability tests while increasing the insights gained.

From these remote live interviews, Miller learned that the customer journey almost always begins from a single source: search engines. Here, it’s important to note that search engines rarely direct users to the homepage. Instead, they drive traffic to the inner pages of websites. In the case of AutoTrader, for example, only around 20% of search engine traffic goes to the homepage (data from SEMrush).

These insights helped AutoTrader redesign their inner pages to better match the customer journey. They no longer assumed that an inner-page visitor already had broader contextual knowledge of the website. Instead, they started to treat each page as if it were the initial point of entry, providing more contextual information right there on the inner page.

This usability testing example demonstrates not only the power of user interviews but also the importance of understanding your customer journey and SEO.

Source: AutoTrader case study by UserTesting.com

Example #5: Udemy

Udemy is one of the world’s largest online learning platforms, with over 40 million students across the world. The e-learning giant also has a massively popular smartphone app, and the usability testing example in question targeted Udemy’s smartphone users.

To find out when, where, and why Udemy users opted for the mobile app rather than the desktop version, Udemy conducted user tests. As Udemy is a 100% digital company, they chose fully remote, unmoderated user testing as their testing method.

Test participants were asked to record short videos showing where they were located and what tasks they were focused on at the time of learning and recording.

What the user researchers found was that their initial theory, “users prefer using the mobile app while on the go,” was false. Instead, they found that the majority of mobile app users were stationary. Udemy users, for various reasons, used the mobile app at home on the couch or in a cafeteria. The key findings of this user test informed the next year’s product and feature development.

This is what Udemy’s mobile app looks like today:

Udemy’s mobile app today

This usability testing case study demonstrates that a company’s perception of target audience behavior does not always match the behavior of real end users. That is exactly why user testing is crucial.

Source: Udemy case study by UserTesting.com

Example #6: Halo: Combat Evolved

“Halo: Combat Evolved” was the first video game in the massively popular Halo franchise. It was developed by Bungie and published by Microsoft Game Studios in 2001. Within 10 years of its release, the Halo games had sold more than 46 million copies worldwide and generated more than $5 billion in video game and hardware sales for Microsoft. Owing it all to the usability test we’re about to discuss may be a bit of a stretch, but usability testing the game during development was undeniably one of the factors that helped the franchise take off like a rocket.

In this usability study, the Halo team gathered a focus group of console gamers to try out the game’s prototype and see if they had fun playing it. And if they did not have fun, the team wanted to find out what prevented them from doing so.

In the usability sessions, the researchers placed test subjects (players) in a large outdoor environment with enemies waiting for them across the open space.

The designers of the game expected the players to sprint closer to the enemies, sparking a massive battle full of action and excitement. But the test participants had a different plan in mind. Instead of putting themselves in danger by sprinting closer, they would stay at a maximum distance from the enemies and shoot from far across the outdoor space. While this was a safe and effective strategy, it proved rather uneventful and boring for the players.

To entice players into enjoying combat up close, the user researchers decided that changes would have to be made. Their solution was to change the size and color of the aiming indicator in the center of the screen to notify players when they were too far away from enemies.

Here, you can see the finalized aiming indicator in action:

The finalized aiming indicator in Halo: Combat Evolved

Subsequent usability tests proved these changes to be effective, as the majority of user testers now engaged in combat from a closer distance.

User testing is not restricted to any particular industry, OS, or platform. Testing user experience is an invaluable tool for any product – not just for websites or mobile apps. 

This example of usability testing from the video game industry shows that players (users) will optimize the fun out of a game if given the chance. It’s up to the designers to bring the fun back through well-designed game mechanics and notifications.

Source: “Designing for Fun – User-Testing Case Studies” by Randy J. Pagulayan


5 Real-life usability testing examples & approaches to apply

Get a feel for what an actual test looks like with five real-life usability test examples from Shopify, Typeform, ElectricFeel, Movista, and Trint. You'll learn about these companies' test scenarios, the types of questions and tasks these designers and UX researchers asked, and the key things they learned.

If you've been working through this guide in order, you should now know pretty much everything you need to run your own usability test. All that’s left is to get your designs in front of some users.

Just arrived here? Here’s a quick recap to make sure you have the context you need:

  • Usability testing is the practice of conducting tests with real users to see how easily they can navigate your product, understand how to use it, and achieve their goals
  • There are many usability testing methods. Picking the right one is crucial for getting the insights you need.
  • Qualitative usability testing involves more open-ended questions, and is good for sourcing ideas or validating early assumptions
  • Quantitative testing is good for testing a higher number of people, which is useful for fine-tuning your design once you have a high-fidelity prototype
  • If it’s too difficult to organize in-person tests, remote usability testing is a fast and cost-effective way to get the info you need
  • Guerrilla usability testing is a great option for some fast, easy insights from real people
  • Ask usability testing questions before, during, and after your test to give more context and detail to your results

Why you need usability testing studies & examples

While it’s essential to learn about each aspect of usability testing, it can be helpful to get a feel for what an actual test looks like before creating your first test plan. Creating user testing scenarios to get the feedback you need comes naturally after you’ve run a few tests, but it’s normal to feel less confident at first. Remember: running usability tests isn’t just useful for identifying usability problems and improving your product’s user experience—it’s also the best way to fine-tune your usability testing process.

For inspiration, this chapter contains real-world examples of usability tests, with some advice from designers and UX researchers on writing usability tasks and scenarios for testing products.

If you’re not sure whether you are at the right stage of the design process to conduct usability studies, the answer is almost certainly: yes!

It’s important to test your design as early and often as possible. As long as you have some kind of prototype, running a usability test will help you avoid relying on assumptions by involving real users from the beginning. So start testing early.

The scenarios, questions, and tasks you should create, as well as the overall testing process, will vary depending on the stage you’re at. Let’s look at five examples of usability tests at different stages in the design process.


Discovery phase usability test example: Shopify

The Shopify Experts Marketplace is a platform that connects Shopify merchants with trusted Shopify experts who have demonstrated proven expertise in the services they offer. All partners on the Experts Marketplace are experienced and skilled Shopify partners who help merchants grow their businesses by providing high-quality services.

Feature being tested

When Shopify merchants look for a Shopify-recommended service provider, the first page they find is the Expert profile. There, they can find an overview of services provided, recent client testimonials, examples of past work, and more. If a merchant finds the expert profile page easy to navigate, they’re more likely to reach out to experts and potentially hire them.

Usability testing approach

The Shopify team wanted to make sure they were including all the relevant information in the right place. To do so, they first gathered insights about what merchants would need to know about Experts from generative user interviews.

Once they knew what information was most important, they moved on to evaluative research and conducted card sorting and tree testing studies to evaluate the information architecture of the product.

At that stage of the research process, usability testing was the best way to understand how Expert profiles could create more value for users. Melanie Buset, User Experience Researcher at Spotify and former User Experience Researcher at Shopify, explains:

Now that we knew what information we needed to surface, we needed to evaluate how and where we surfaced this information. Usability testing provided us with insight into how well we were meeting user’s expectations.

Melanie Buset, User Experience Researcher

Melanie worked closely with the designer on the team to identify what the research questions should be. Based on these questions, the team created a UX research plan and a discussion guide for the usability test. After piloting the test plan with coworkers, they recruited the participants and ran the actual test.

Through usability testing, Melanie and the team were able to gather actionable feedback and implement changes quickly. They continued to test until they reached a point where users felt they had access to the most relevant information about Experts and felt ready to potentially hire them.

Test scenario

"Imagine that you’re interested in hiring a Shopify Expert to help with developing a marketing campaign.”

The team wanted to recreate a scenario that would be as close to the real world as possible. For this purpose, they selected participants who had previously been interested in hiring a Shopify Expert.

Task and question examples

Participants were first given a set of general tasks and asked to think aloud as much as possible and to share any feedback throughout the session. Melanie would ask them to show her how they would go about trying to find someone to hire via the Experts Marketplace and go through the process as if they were ready to hire someone.

If the participants got lost or weren't sure how to proceed, she would gently encourage them to try to find another way to accomplish their goal or to share what they would expect to do or see.

The team also asked participants more specific questions, such as:

  • What information is helping you determine if an Expert is a good fit for your needs?
  • What does this button mean? What happens if you click on it?

Unsure about what to ask in your usability test? Take a look at our guide to writing usability questions + examples 💭

The key thing they learned

After testing, we learned so much about what’s important to people when they’re looking to hire a freelancer for their business and specific projects. For example, people want to know upfront how freelancers will communicate with them, and they prefer profiles that feel more human and less transactional.


Early-stage usability test example: ElectricFeel

ElectricFeel’s product is a software platform for entrepreneurs and public transport companies to launch, grow, and scale fleets of shared electric bicycles and mopeds. It includes a mobile app for riders to rent vehicles and a system for mobility companies to run day-to-day fleet operations.

When a new rider signs up to the ElectricFeel app, a fleet management team member from the mobility company has to verify their personal info and driver’s license before they can rent a vehicle.

The ElectricFeel team hypothesized that if they could make this process smoother for fleet management teams, they could reduce the time between someone registering and taking their first ride. This would make the overall experience for new riders more frictionless.

The idea to improve the rider activation process came from a wider user testing initiative, which the team saw as a vital first step before they started working on new designs. Product designer Gerard Marti explains:

To address the gap between how you want your product to be received and how it is received, it’s key to understand your users’ day-to-day experience.

Gerard Marti, Product Designer at ElectricFeel

After comparing the results of user persona workshops conducted both within the company and with real customers, the team used the insights to sketch wireframes of the new rider activation user interface.

Then Gerard ran some usability tests with fleet managers to validate whether the new designs actually made it easier to verify new riders, tweaking the design based on people’s feedback.

The next step in their process is conducting quantitative tests on alternative designs, then iterating on the winning option with further testing. Gerard sees quantitative testing as a vital step towards validating designs with real human behavior:

What people say and what they actually end up doing is not always the same. While opinions are important and you should listen to them, behavior is what matters in the end.

“You have four riders in the pipeline waiting to be accepted.”

Gerard would often leave the scenario at just this, as he wanted to observe the order in which users perceive each element of the design without sending them in a direction with a goal.

When testing early versions of designs, leaving the usability test scenario open lets you find out whether users naturally understand the purpose of the screen without prompting.

To generate a conversational and open atmosphere with participants, Gerard starts with open questions that don’t invite criticism or praise from the user:

  • What do you see on the screen?
  • What do you think this is for?

He then moves on to asking about specific elements of the design:

  • What information do you find is most valuable?
  • Are pictures or text more important for you?

By asking users to evaluate individual elements of the design, Gerard invites participants to give deeper consideration to their thought process when activating riders. This yields crucial insights on how the fundamentals of the interface should be designed.

After testing, we realized that people scan the page, look for the name, then check the image to see if it matches. So while we assumed the picture ID should be on the right, this insight revealed that it should be on the left.

Mid-stage usability test example: Typeform

Typeform is a people-friendly online form and survey maker. Its unique selling point is its focus on design, which aims to make the experience for respondents as smooth and interactive as possible. As a result, typeforms have a high completion rate.

Since completion rates are a big deal for Typeform users, being able to see the exact questions where people leave your form was a highly requested feature for a long time. Typeform’s interface asks respondents one question at a time, so this is especially important. The feature is now called ‘Drop-off Analysis’.

Product tip ✨

Before you even start designing a prototype for a usability test, do research to discover the kind of products, features, or solutions that your target audience needs. Maze Discovery can help you validate ideas before you start designing.

Yuri Martins, Product Designer at Typeform, explains the point at which his team felt it was time to test their designs for the new Drop-off Analysis feature:

We had a lot of different ideas and drawings for how the feature could work. But we felt like we couldn’t commit to any of them without input from users to see things from their perspective.

Yuri Martins, Product Designer at Typeform

Fortunately, they had already contacted users and arranged some moderated tests one or two weeks before this point, anticipating that they’d need user feedback after the first design sprints. By the time the tests rolled around, Yuri had designed “a few alternative ways that users could achieve their objectives” in Figma.

Since the team wanted to iterate the design fast, they tested each prototype, then created a new updated version based on user feedback for the next testing session a day or two later. Yuri says they “kept running tests until we saw that feedback was repeating itself in a positive way.”

Finding participants is often the biggest obstacle to conducting usability tests. So schedule them in advance, then spend the following weeks refining what you’d like to test.

“One of your typeforms has already collected a number of responses. The info you see appears in the ‘Results’ page.”

Typeform’s Drop-off Analysis prototype

This scenario was designed to be relatable for Typeform users who had already:

  • Made a typeform
  • Shared it and collected responses
  • Visited the ‘Results’ page to check on their responses

Choosing a scenario that appeals to this group of users ensured the feedback was as relevant as possible, as the people being tested were more likely to use the Drop-off Analysis feature to analyze their typeform’s results further.

Typeform’s Drop-off Analysis prototypes only existed in Figma at this point, which meant that users couldn’t interact with the design to complete usability tasks.

Instead, Yuri and the team came up with broader, more open-ended tasks and questions that aimed to test their assumptions about the design:

  • Tell us what you understand about the information on this page.
  • Describe anything missing that you would need to fully interpret the interface.

After the general questions, they asked questions about specific elements of the design to get feedback where they needed it most:

  • At the drop-off point, what do you understand?
  • What would you expect to see here?
  • Does this information make sense to you?

This example shows that you don’t need a fully functional prototype to start testing your assumptions. For useful qualitative feedback midway through the design process, tweak your questions to be more open-ended.

Maze is fully integrated with Figma, so you can easily upload your designs and create an unmoderated usability test with your Figma prototype. Learn more.

We’d assumed that people would want to know how many respondents dropped off at each question. But by usability testing, we discovered that people were much more concerned with the percentage of respondents who dropped off—not the total number.
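To make the distinction concrete, here’s a minimal sketch of that kind of drop-off calculation in Python. The question count and response numbers are invented for illustration; only the arithmetic matters:

```python
# Hypothetical counts of respondents still active at each of five questions.
respondents_at_question = [1000, 700, 650, 320, 300]

for i in range(1, len(respondents_at_question)):
    before = respondents_at_question[i - 1]
    after = respondents_at_question[i]
    dropped = before - after
    # Percentage view: the share of *remaining* respondents lost at this step.
    drop_rate = dropped / before * 100
    print(f"Q{i} -> Q{i + 1}: {dropped} respondents lost ({drop_rate:.1f}% drop-off)")
```

In absolute terms, the drops after Q1 (300 respondents) and Q3 (330 respondents) look comparable; as percentages of the remaining audience they are 30% versus roughly 51%, which is exactly the signal Typeform’s users said they cared about.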

Late-stage usability test example: Movista

Movista is workforce management software used by retail and manufacturing suppliers. It helps users coordinate and execute tasks both in-store and in the field with a mobile app.

As part of a wider design update across their entire product, Movista is about to launch a new product for communications, messaging, chats, and sending announcements. This will let people in-store communicate better with people out in the field.

Movista’s new comms feature is at a late stage of the design process, so the team tested a high-fidelity prototype. Product designer Matt Elbert explains:

For the final round of usability testing before sending our designs to be developed, we wanted to test an MVP that’s as close as possible to the final product.

Matt Elbert, Product Designer at Movista

By this point, the team was confident about the fundamental aspects of the design. These tests were to iron out any final usability issues, which can be harder to identify earlier in the process. By testing with a higher number of people, they hoped to get more statistically significant results to validate their designs before launch.
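To illustrate why a higher number of participants matters at this stage, here’s a hedged sketch, not taken from the Movista study itself, of putting a confidence interval around a task completion rate. It uses the adjusted-Wald interval, a common choice for small-sample usability data, and invented numbers:

```python
import math

def adjusted_wald_ci(successes: int, n: int, z: float = 1.96):
    """95% adjusted-Wald interval for a task completion rate."""
    # Add z^2/2 pseudo-successes and z^2 pseudo-trials to stabilize small samples.
    p_adj = (successes + z ** 2 / 2) / (n + z ** 2)
    margin = z * math.sqrt(p_adj * (1 - p_adj) / (n + z ** 2))
    return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

# The same observed 80% completion rate at two sample sizes.
for successes, n in [(8, 10), (40, 50)]:
    low, high = adjusted_wald_ci(successes, n)
    print(f"{successes}/{n} completed: 95% CI {low:.0%} to {high:.0%}")
```

With ten users, an observed 80% completion rate is statistically compatible with anything from roughly 48% to 95%; with fifty users, the interval narrows to roughly 67% to 89%, which is far more useful for a go/no-go decision before launch.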

The team used Maze to conduct remote testing with their prototype, which included an overall goal broken down into tasks, and questions to find out how easy or difficult the previous step was.

“You have received new messages. Navigate to your messages.”

The usability tests would often begin in different parts of the product, with participants given a clear navigational goal. This prompts people to act straight away—without getting sidetracked by other areas of the app.

Matt advises people to be specific when using testing tools for unmoderated tests, as you won’t be there to make sure the user understands what you’re asking them to do.

The general format of the usability test was giving people a very specific task, then following up with an open question to ask participants how it went.

  • How would you delete the message "yeah, what’s up?" that you sent to Mark Fuentes?
  • How did you find the experience of completing that task?

Matt and the team would also sometimes ask questions before a task to see if their designs matched users’ expectations:

  • What options would you expect to be available in the menu on the top-right corner of the message?

“Questions like this are super useful because this is such a new feature that we don’t know for sure what people’s priorities are," said Matt. The team would rank people’s responses, then consider including different options if there was consistent demand for them.

Finally, Matt says it’s important to always include an invitation for participants to share any last thoughts at the end:

Some people might take a long time to complete a task because they’re checking out other areas of the product—not because they found it difficult. Letting people express their overall opinion stops these instances from skewing your test results.

Based on the insights we got from final results and feedback, we ended up shifting the step of selecting a recipient to much earlier in the process.

Live website usability test example: Trint

Trint is a speech-to-text platform for transcription and content creation. The tool uses artificial intelligence to automatically transcribe audio and video from different file formats and generate editable and shareable transcripts.

The ultimate goal of any B2B website is to attract visitors and convert them into loyal customers. The Trint team wanted to optimize their conversion funnel, and testing the website for usability was the best way to diagnose problems and find the right solutions.

The product team at Trint was already using quantitative data to understand what was happening on the website. They used Mixpanel to look at the conversion rates at every step of the funnel. However, that alone was never enough information to make design decisions. Lidia Sambito, UX Researcher at Trint, explains:

We had to use other pieces of evidence like usability testing to learn how people experienced our marketing funnel and how they felt throughout the customer journey before we were in a position to make the right changes.

Lidia Sambito, UX Researcher at Trint

Lidia worked closely with the product manager and the designer to identify the research questions and plan the sessions. She then recruited the participants and ran the usability test.

The test was run using Zoom. Lidia asked the participants to share their screens and moderated the sessions while the product designer took notes. All the sessions were recorded, and the observers could leave their comments using the Realtime Transcription feature in Trint.

After each session, there was a 30-minute debrief with the team to discuss key takeaways, issues, and surprises. This helped the team reflect on what happened during the session and lay the groundwork for the larger synthesis.

To successfully synthesize the research findings, Lidia listened to the sessions, transcribed them using Trint, and then coded the data using different tags, such as pain points, needs, or goals. Finally, she held a workshop with the designer, engineer, and data scientist to identify common themes for each page of the onboarding process.

This research helped us understand how potential users move across the acquisition funnel and the most painful points of their experience. We identified the main problems and tackled them through ideation, prototyping, and more research.

"You have many files to transcribe and your colleague mentioned a software called Trint. He suggested you take a look at it."

Lidia and the team wanted to make the scenario as realistic as possible. They decided to use an open-ended scenario, giving participants minimal explanation about how to perform the task. The key was to see how users would spontaneously interact with the website.

During the test, the participants were asked to share their comments and thoughts while thinking out loud. The main tasks were:

  • Walk me through how you would use Trint for the first time
  • Show me what you would do next

Lidia would also ask participants more specific questions to get deeper insights. Here are some examples:

  • What information is helping you determine if Trint is a good fit for your needs?
  • Tell us what you understand about the information on this page
  • Are pictures, videos, or text important for you?

We saw that the participants wanted to see and try out the product as early as possible. Still, it took several screens to get to the product. I recommended removing the onboarding survey. We also worked on the website's content to make it easier for people to understand what Trint is about.

Key usability testing takeaways

The examples above offer a heap of insight into how to conduct your usability test, so let’s end with a rundown of the main takeaways:

  • Conduct usability testing early, and often: Users want to try a product out ASAP, and while it may be nerve-wracking to send a fresh product out there, it’s a great opportunity to gather feedback early in the design process. But don’t let that be your only usability test! Take the feedback, iterate, and test again.
  • Check your biases, and be open to change: Don’t go into your usability test with opinions and expectations set in stone. Like any user research or testing, it’s a good idea to record your assumptions ahead of time. That way, if something comes up unexpectedly—for example, users don’t navigate the platform in the way you expect—you can run with it and consider new options, rather than feeling stuck in your ways or heartbroken over an idea. Remember, the user should always be at the center of the design.
  • Don’t be afraid of a practice run: Usability tests are most effective when they run smoothly, so iron out any wrinkles by conducting a dry run before the real thing. Use colleagues or connections to double-check your test, including any questions or software used. A test run may feel like an additional step, but it’s a lot quicker and cheaper than redoing your real test when an error occurs!

Frequently asked questions about usability testing examples

What is an example of usability testing?

Usability testing is a proven method to evaluate your product with real people by getting them to complete a list of tasks while observing and noting their interactions. For example, if you're designing a website for an e-commerce store that sells beauty products, a good way to test your design would be to ask the users to try to buy a particular hair care product.

By observing how users interact with your product, where they click, how long it takes them to select the specific product, and by listening to their feedback, you will be able to identify usability issues and areas of improvement.

How is usability testing performed?

Typically, during a usability test, users complete a set of tasks with a prototype or live product while observers watch and take notes of their interactions. The ultimate goal is to discover usability problems and see how users experience your product.

To run a successful usability test, you need to create a prototype and write an effective usability testing script to outline the goal of your research and the questions and tasks you're going to ask the users. You also need to recruit the participants, run the test, and finally analyze and report your test results.

What is usability testing?

Usability testing is the process of testing your product with real users, by asking them to complete a list of tasks while noting their interactions. The purpose of usability testing is to understand whether your product is usable for people to navigate and achieve their goals.

How do you carry out usability testing?

Usability testing can be carried out in a number of ways. The most common methods include using online usability testing platforms, guerrilla testing, lab usability testing, and phone or video interviews.


Usability Testing Website Example: Top 5 Case Studies

Insight7


Usability Testing Case Studies play a crucial role in enhancing the online experience for users. Picture a website that’s visually appealing yet confusing to navigate; this can frustrate users and drive them away. By examining usability testing case studies, we uncover valuable insights into how different design elements impact user behavior. These studies typically highlight what works, what doesn’t, and the rationale behind these findings. Understanding these dynamics can significantly improve website usability and overall customer satisfaction.

In analyzing usability testing case studies, it becomes apparent that user feedback is invaluable. These case studies showcase real-life experiences, illustrating how user interactions with websites reveal opportunities for improvement. By studying various scenarios, web designers and developers can identify common pitfalls and design solutions that prioritize user needs. As we delve into the top case studies, we will explore specific examples that demonstrate effective strategies and lessons learned. Each case offers a unique perspective on optimizing user experience, ultimately improving website performance.

Top 5 Usability Testing Case Studies in Website Design

Usability Testing Case Studies reveal valuable insights that enhance website design and user experience. By examining real-world examples, we can understand how specific testing methodologies can significantly improve user interactions and site efficiency. These case studies often illustrate common challenges faced by users and highlight effective solutions that lead to intuitive design changes.

Several key elements stand out in these case studies. First, user feedback plays a crucial role in identifying usability issues. Observations from actual users help designers pinpoint areas of confusion. Next, the iterative testing processes employed ensure that design adjustments are effective and resonate with the target audience. Finally, the measurable outcomes demonstrate the impact of usability testing on overall site performance, enhancing user satisfaction and engagement. Together, these insights illustrate how usability testing can transform website design into a more user-friendly experience.

Usability Testing Case Study 1: E-commerce Platform Optimization

In this usability testing case study, the goal was to optimize an e-commerce platform for enhanced user engagement and increased conversions. The study began by identifying areas of friction within the user journey, which included confusing navigation and unclear messaging. Participants were observed as they interacted with the platform, providing valuable insights into their behaviors and preferences.

Key issues were documented, leading to actionable recommendations. First, simplifying the checkout process resulted in improved user satisfaction. Second, refining the product categorization made it easier for users to find what they were looking for, significantly reducing dropout rates. Finally, enhancing product descriptions and images led to higher customer confidence and increased purchase likelihood. These changes highlight the vital role usability testing plays in identifying user needs and optimizing online experiences, showcasing the effectiveness of usability testing case studies in informing design decisions.

Usability Testing Case Study 2: Improving User Engagement on a Social Media Website

Improving user engagement on a social media website is crucial for retaining active users. This case study delves into the series of usability testing efforts initiated to enhance user interaction and satisfaction. The primary goal was to identify pain points in the user experience that could hinder engagement. Observations revealed that many users found navigation unclear, which led to a frustrating experience. Furthermore, it became evident that users desired more personalized content, which would encourage prolonged visits to the site.

To address these issues, usability testing involved several key strategies. First, user interviews were conducted to gather insightful feedback on their experiences. This was followed by prototype testing, where changes in the layout and content presentation were evaluated to gauge user reactions. The findings led to improved navigation menus and personalized content features, significantly enhancing overall engagement. This case study exemplifies how effective usability testing case studies can lead to actionable insights that drive user engagement on digital platforms.

Key Takeaways from Usability Testing Case Studies

Usability Testing Case Studies reveal essential insights into improving user experiences across various digital platforms. One key takeaway is the importance of understanding user expectations. Evaluating how users interpret elements like field labels and messaging can significantly enhance the clarity and accessibility of an interface. For example, studies often highlight the need for intuitive designs that align with user mental models, ensuring consistency throughout the experience.

Another critical lesson is the value of engaging real users in the testing process. Observing actual customers interact with the platform provides invaluable feedback that generic testing cannot match. Testing with users who represent your target audience allows you to identify pain points and areas for improvement directly related to their behaviors and preferences. By applying these takeaways from usability testing case studies, organizations can make informed design decisions that lead to better user satisfaction and overall effectiveness.

Common Challenges and Solutions in Website Usability Testing

In the realm of usability testing, various challenges often surface, making it essential to recognize effective solutions. One common difficulty is the recruitment of participants. Many testers struggle to find individuals who accurately represent the target audience. To tackle this, defining clear user profiles and utilizing targeted outreach methods can help assemble a suitable participant pool.

Another challenge relates to interpreting the data collected. Often, testers may face bias in their analysis, leading to misleading insights. Utilizing established frameworks for feedback evaluation can significantly enhance the objectivity of the findings. Moreover, continuous iterations of usability testing can refine the website's design, ensuring it evolves based on genuine user experiences. By addressing these challenges, teams can gain invaluable insights from usability testing case studies that drive website improvements and enhance the overall user experience.

Implementing Usability Testing Results for Better User Experience

Implementing usability testing results is vital for enhancing user experience. By analyzing the findings from usability testing case studies, teams can identify pain points that users encounter while interacting with a website. Once these issues are pinpointed, actionable steps can be established to improve the user interface and experience. This continuous feedback loop creates a more intuitive and user-friendly online environment.

Key steps to implement usability testing results include prioritizing identified issues based on user impact, employing design changes for better navigation, and testing those changes for effectiveness. Another important aspect is integrating user feedback into design iterations to create a more seamless experience. Tracking the success of these implementations through user metrics ensures that improvements are making a noticeable difference. Through this iterative process, websites can transform into platforms that genuinely cater to user needs, resulting in higher satisfaction and engagement.

Conclusion: Lessons Learned from Usability Testing Case Studies

Usability Testing Case Studies provide valuable insights that can significantly enhance user experiences. Through various case studies, we can identify common themes, such as the importance of clear messaging and intuitive interfaces. These elements are crucial in guiding users through the application process, making their journey smoother and more enjoyable.

Moreover, analyzing user feedback is essential for identifying issues. Each case study underscores how different testing methods can uncover specific problems, leading to effective solutions. Ultimately, these lessons highlight the need to prioritize user-centered design, ensuring that websites meet the needs of their audience and improve overall satisfaction.


The Ultimate Step-by-Step Guide on Website Usability Testing


16.2% of high-tech companies are shifting their priority to customer engagement, and 15.5% plan to increase investment in customer experience improvements. What does that tell you?

That tells you user experience is leaving the domain of designers and becoming a mainstay for any business with an online presence, whether they’re brand new or have been on the block for years. This is probably why so many big-name companies are turning to usability testing to ferret out problems and improve their products and websites.

  • Sales Hacker used user feedback to increase engagement, breed loyalty, and improve their content strategies with user-provided insights.
  • An online sports gambling operator, Stan James (now Unibet), used the results of usability testing to double their conversion rate from 1.5% to 3% month-to-month.
  • The list of successes goes on and on. Feel free to check out more case studies.

So if you’re not already on the usability train, it’s time to board. In this guide, we’ll take you step-by-step through everything you need to know to conduct website usability testing.

  • Step 1: Determine Metrics and Create Task Analyses
  • Step 2: Identify Best Test Type
  • Step 3: Find Valid Participants
  • Step 4: Decide When, Where, and Who
  • Step 5: Rinse and Repeat

First, you need to figure out your metrics. Usability testing can unearth a whole host of issues, but if it’s not being targeted to determine specific metrics, it’s not going to be an effective use of your time – or dime.

There are three indicators typically agreed upon in usability testing:

  • Effectiveness
  • Efficiency
  • Satisfaction

Here’s a quick breakdown of what these metrics usually entail:

Measuring usability

While on its face these three measures seem simple, each is its own nesting doll of questions, and there are no universal answers.

  • What are the user’s goals?
  • What steps must be taken to meet those goals?
  • How is the effort being measured?

Fortunately, there’s an easy way to answer these questions by building what is called a task analysis. Task analyses are popular because they allow you to measure two of the most popular metrics in usability directly: task completion rates and time on task.
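Both metrics fall straight out of simple session records. Here’s a minimal sketch in Python, with made-up data, showing how little tooling the calculation itself requires:

```python
from statistics import mean

# Hypothetical session records: (participant, completed_task, seconds_on_task)
sessions = [
    ("p1", True, 74), ("p2", True, 102), ("p3", False, 210),
    ("p4", True, 88), ("p5", False, 195),
]

completion_rate = sum(1 for _, done, _ in sessions if done) / len(sessions)
# Time on task is conventionally reported for successful attempts only.
times_on_success = [secs for _, done, secs in sessions if done]

print(f"Task completion rate: {completion_rate:.0%}")                   # 60%
print(f"Mean time on task (successes): {mean(times_on_success):.0f}s")  # 88s
```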

Task analysis

Here’s how it works. You begin with an overarching goal. In the below example, the goal is to take daily medication.

Example of hierarchical task analysis

You then break the goal down into the steps needed to complete the objective. These are also called subtasks. No step is too small.

When done, you have a theoretical picture of the path a user takes to complete a goal. You can then use your task analysis to set baseline effectiveness and efficiency metrics. Let’s try making one of our own.

To start with, head over to Lucidchart. Sign up for an account by clicking the "Sign up free" button in either the top right corner or the middle of the screen. You’ll be taken to this page and given the opportunity to start a trial. For now, scroll down until you see the blue "start free account" button and select that.

Lucidchart sign up

After completing the sign-up, you’ll come to this dashboard screen. There are a lot of templates to choose from, but for task analysis, it’s often easiest to start with a flowchart. Choose the second option to get a look at the available flowchart templates.

Lucidchart flowchart

Choose the first option, “blank diagram.”

Lucidchart blank diagram

The next page will feature a blank canvas. Use the left-hand panel to add shapes. If you’re not sure which shape to use, hover over the option until you see a pop-up.

In most cases, a task analysis only uses a square (process) and diamond (decision) shape, but as you build more complicated task flows, it may be useful to review the other shapes.

Lucidchart process

Start putting down shapes to build your task. Here, we’re starting with a simple task flow for building this walkthrough. To add text, double-click somewhere within the shape boundaries and start typing.

When you’re ready to connect shapes, click on the white circles on the sides and drag your mouse to the next shape. If it’s a diamond (decision) shape, it’ll automatically add "yes" and "no" to your lines.

Yes or no in flowchart

Note that if you need to move a shape, the lines will automatically reposition to stay connected, so don’t be afraid to reorganize your chart as you build it out.

If you want to change the color, font, stroke, arrow style, or lines, use the top panel highlighted above.

Flowchart fonts and colors

After you build your task flow, go up to "file" to see your share and download options. You can export your task flow with a transparent background, as a vector graphic (SVG), or as a PDF, among other options.

Flowchart download as vector

And there you have it! You now have a task analysis to use when conducting tests and setting up your metrics. Chances are that you’ll amend your task analysis once you’re actually testing, but having at least two or three tasks ahead of time allows you to guide your usability tests.

Task analysis flowchart

After you’ve built your tasks, it’s time to figure out the best test type for your website.

Usability testing can take many forms and range in terms of difficulty and investment requirement. What type of test is best for your website depends on the metrics and tasks you’ve built out in the first step.

Below, we’ll cover three common types of usability tests and what they’re suited for, as well as some honorable mentions.

1. Card Sorts

By far the easiest and fastest usability test around, a card sort is an instrumental test for site architecture.

Card sort

If you remember playing card matching games, card sorts are similar. Here’s how it works. A card sort can be "open," in which users create their own categories to sort cards into, or "closed," in which all categories are predefined and inflexible. Alternatively, as seen in the above illustration, a card sort can be a "hybrid," where users are free to add their own categories but also have the use of predefined categories.

Users then sort the cards, usually individual web pages or steps in a process, under the categories that fit their mental models best. Most experts recommend sticking with 30 to 60 cards. Performing a card sort is excellent for seeing how your users’ mental models match your site’s architecture and task process. It can reveal significant issues early in the testing process, as it did for Pottery Barn’s redesign (as seen below).

Card sort example in redesign

You can conduct card sorts remotely or in person. The most significant benefit of using a card sort test is the speed at which the test can be conducted, although the analysis stage itself can be time-consuming.
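Much of that analysis time goes into tallying how often participants grouped the same cards together. Here’s a small sketch of that core step, counting pairwise co-occurrences in open-sort results; the card names and groupings are invented for illustration:

```python
from collections import Counter
from itertools import combinations

# Hypothetical open card sort results: each participant's groupings.
sorts = [
    [{"Shipping", "Returns"}, {"Sofas", "Lamps"}],
    [{"Shipping", "Returns", "Lamps"}, {"Sofas"}],
    [{"Shipping", "Returns"}, {"Sofas", "Lamps"}],
]

co_occurrence = Counter()
for participant in sorts:
    for group in participant:
        for pair in combinations(sorted(group), 2):
            co_occurrence[pair] += 1

# Cards grouped together most often likely belong under one category.
for pair, count in co_occurrence.most_common():
    print(f"{pair}: grouped together by {count} of {len(sorts)} participants")
```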

However, because card sorts allow for only limited user intervention and feedback, they shouldn’t be your choice for testing the satisfaction or effectiveness of a site. Instead, consider card sorts your front-line test for efficiency, leaving other metrics to more robust types of usability testing, such as the field study.

2. Field Studies

A field study for usability testing is exactly what it sounds like. You travel to your users’ natural habitat – wherever they’re most likely to be using your website – and have them walk you through their process in a semi-structured interview while you watch their screen.

This study is also called a contextual inquiry.

Contextual Inquiry

Don’t worry, the awkwardness of staring over someone’s shoulder fades pretty quickly. 😉

Field studies are much more time-intensive than card sorts, but they’re ideal for testing tasks and getting direct user feedback. For those who want to dig deep into usability issues, field studies are the way to go. Contextual inquiries are the first strategy deployed by user experience design studio MELEWI for enterprise or “limited-users” products.

Avik Ganguli, a UX design consultant at MELEWI, explains:

The MELEWI Contextual Enquiry Sprint is designed to get embedded directly in the user’s context: observing what the participants did, what they nearly did, and what they didn’t do.

You can see how MELEWI categorizes and explains the benefits of embedding in the contextual side of a usability study below.

Contextual side

Note that field studies can be conducted remotely, but they often lose out on data richness. Nicole Fenton and Jamie Albrecht of 18F, the digital services agency within the US government, highlight this point:

…For example, contextual inquiry is most valuable when you can observe people in their typical physical environment. Don’t skip out on face-to-face time between your users and fellow researchers.

3. Eye Tracking

If you’ve ever seen a heat map of a website, you’re already familiar with the output for eye-tracking tests.

Heatmap analysis

Eye-tracking studies are used to determine where a user is looking on the page and in what order. The deeper the color of the heat map, the more time the user spent looking at that section of the screen.
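Under the hood, a heat map is essentially a 2D histogram of gaze samples. As a toy illustration only (real data would come from an eye tracker or a webcam-based service, not random numbers), here’s a minimal sketch using NumPy and Matplotlib:

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic gaze samples in pixel coordinates; a real eye tracker or
# webcam-based service would supply these.
rng = np.random.default_rng(0)
x = rng.normal(640, 150, 5000)   # clustered near the horizontal center
y = rng.normal(300, 120, 5000)   # clustered near the top fold

# Bin the samples into a 2D histogram: more samples = "hotter" region.
heat, _, _ = np.histogram2d(x, y, bins=(64, 36), range=[[0, 1280], [0, 720]])

plt.imshow(heat.T, origin="upper", extent=[0, 1280, 720, 0], cmap="hot")
plt.title("Gaze heat map (synthetic data)")
plt.colorbar(label="gaze samples per bin")
plt.show()
```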

This makes eye tracking great for determining where and when users disengage from your website. It also highlights when content is irrelevant, as it did for e-commerce site Pronto.

Eye tracking

By revealing the areas that Pronto’s users cared about most, the eye-tracking study enabled Pronto to redesign its homepage, increasing leads by 24% and click-throughs by 17%.

Like the previous tests, eye tracking can be done in person with specialized equipment or remotely with a web camera. However, remote testing is not without its pitfalls.

Here are the pros and cons of using a webcam for eye tracking:

Pros and cons of webcam-based eye tracking

This type of usability test isn’t as rewarding as a field study for gauging user satisfaction, but it can yield rich data on efficiency and effectiveness. It can also surface navigation issues by highlighting what users’ eyes skip over or miss – essentially putting your website through the lens of your users’ eyes. Based on that feedback, you can improve your design, site structure, website navigation, calls to action, and so on.

Honorary Mentions

Card sorts, field studies, and eye-tracking aren’t the only usability tests in the game.

Focus groups, A/B tests, and surveys are all viable forms of user testing that can yield useful feedback, but on their own they aren’t robust enough to drive a major redesign.

Now, after identifying the best types of tests for your goals, you have to find people to test. It may surprise you how many you need.

How many users do you need to conduct a usability test? The industry standard is around five. According to a survey from UserTesting, 33% of companies recruit five or fewer users and 41% recruit between six and ten.

Users per usability study
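The five-user convention traces back to Nielsen and Landauer’s problem-discovery model: the share of usability problems found by n testers is roughly 1 − (1 − L)^n, where L is the proportion a single tester uncovers (about 31% in their data). A quick back-of-the-envelope in Python:

```python
# Nielsen & Landauer problem-discovery model: the share of usability
# problems found by n testers is 1 - (1 - L)**n, where L is the share
# a single tester uncovers (~31% in their published data).
L = 0.31

for n in (1, 3, 5, 10, 15):
    found = 1 - (1 - L) ** n
    print(f"{n:>2} users: ~{found:.0%} of problems found")
```

The curve flattens quickly, which is why many teams prefer several small rounds of five (fix what the first round finds, then test again) over one big study.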

The key to sourcing users for your usability testing is ensuring they’re valid approximations of real users. Failing to validate design changes with your unique user base can have drastic impacts. That’s what the designers behind Icons8 discovered after rolling out a redesign and losing almost half their users.

It’s also why here at Kinsta we made user feedback a direct part of our redesign process.

That being said, there are times when finding representative users is too time-consuming or expensive for the goals of the test. In those instances, internal testers can be useful. Referred to as “dogfooding” (as in, eating your own dog food), this method of testing allowed The Boston Globe to get valuable qualitative feedback about new navigation features. Check this out:

Qualitative feedback

So, internal testing definitely has its uses, but only when your need for quick feedback outweighs concerns about external validity. For valid tests that yield both quantitative and qualitative data, you need testers who are as close to your real users as possible.

One potential way to reach these users is a quick survey sent to your business’s email list to screen potential participants. If you couple the survey with an incentive – even one that isn’t guaranteed, like a lottery entry – your participation rates will go up, and you’ll get a pool of real users of your products to test.

Even just a $5 incentive can seriously boost your participation rates, Gallup research finds.

Web survey response rates

You can use survey platforms like SurveyMonkey and Google Forms, or if you’re using a WordPress site, you can use form builders to capture information from potential participants. We are big fans of Hotjar and use it at Kinsta.

That said, there are many reasons that real website users may not be viable. For that, there are paid platforms where you can source proxy users for a small fee. Let’s take a look at some of them.

UserTesting

Used by some of the biggest names in the design industry and beyond, UserTesting offers a testing platform for usability professionals, marketers, business owners, game developers, and more.

Robust and fully featured, this testing platform can connect you with user proxies and deliver results in as little as two hours. Note that this is one of the only platforms where you can arrange live testing.

UserTesting

Userlytics

Used by Google, Userlytics is another robust platform with middle-of-the-road prices and unparalleled quality. Userlytics provides recorded videos, and its testing panel is over 200,000 users strong, making it a cinch to find ideal user proxies.

What sets Userlytics apart from its competition is its range of customizability. Uniquely offering branch logic, tests through Userlytics can be structured to carry out multiple, divergent task flows.

Userlytics

Now, it’s time to run your test – which means it’s time to decide where, when, and who will be involved.

You can take a breath at this step: the hard part is over. From here, you just need to make a few more decisions before carrying out your test. These decisions are:

  • Remote or in person?
  • Moderated or unmoderated?

Check out how it breaks down in the industry:

Usability testing

As you can see, remote and moderated testing are both widely used across the industry, though moderated testing is on the decline. If you source your user testers from the platforms mentioned in the previous section, these decisions are already made for you, and you can skip down to step five.

If not, let’s back up for a moment: what exactly is moderated testing, and why should you use it?

Moderated testing refers to having a moderator or tester present who can answer questions and guide the user. This puts more control in your hands, but it adds significant logistical challenges for both testers and users, especially if you’re conducting tests in person.

Because moderating a usability test is its own art form, moderated tests are best carried out with a usability specialist, as you can see below.

Usability specialist

Typically, moderated tests are only necessary for incomplete interfaces or in instances where security is a primary concern. Unmoderated tests, on the other hand, are more flexible, as the user needs only to log in and perform the designated tasks at their convenience.

Note that remote tests can be both moderated and unmoderated, depending on your platform and objectives.

For in-person tests, a designated usability lab or the user’s natural environment (as we saw in the field study) is ideal to avoid artificial conditions.

Use the following free platforms for remote tests:

Skype

If you remember MELEWI from our earlier sections, Skype is their tool of choice for remote usability tests. Skype’s most significant benefits are its familiarity to users and native screen sharing. The downside is its lack of built-in screen recording.

Skype

Google Hangouts

Available to any user with a Google account, Google Hangouts is another free platform with screen-sharing abilities, though only certain types of Google Workspace accounts can record video natively. The advantages of Hangouts over Skype come down to preference and what your users are more comfortable with. Both perform similar functions, and both will require a screen recorder if you’re not set up with a Google Workspace Enterprise account.

Google Hangouts

Zoom

Rounding out our list of platforms for remote usability testing is Zoom. This platform has a significant advantage over both Skype and Hangouts: native screen and video recording. In fact, we use Zoom here at Kinsta.

However, it also has one large disadvantage: if your video meeting involves more than one other participant, free accounts are limited to 40 minutes.

Zoom

Note that you should record tests whenever possible. You’ll need the recordings to review findings with your team, and they help keep the people, rather than the data, at the center of your usability tests.

Time for our last step: iterating.

Iterative testing is the key to great usability, though there’s some argument about its role in innovation. What does iteration mean in a design context? It means your process is never over: after launching a website, you continuously test, tweak, and improve it.

Iterative and Cyclical

So, once you’ve conducted your tests and gathered your results, it’s time to review, implement, and then do it all over again. Enginess, a digital consultancy, illustrates the value of iterative design nicely:

…a living project that you should regularly tweak and improve upon as you go, rather than building it in one fell swoop and being done for good.

With that in mind, how you review and implement the results of your usability testing will vary significantly based on what type of data you’ve gathered and what your original goals were. Unlike a chemistry lab, usability testing tends to produce a mixed bag of quantitative (“hard”) and qualitative (“soft”) data.

Both are important, and how they’re used will change from project to project and website to website. That said, qualitative data tends to be the friendliest for data visualization. One of the most popular modes of visualization is the journey map.

Journey map

A journey map provides a visual overview of the different steps your user testers take throughout their journey on your website. If done well, it includes emotional parameters as well as usability issues, but that, like your review process, will ultimately be determined by the type of test you’ve conducted.

Journey maps are especially useful at the conclusion of usability tests because they help unveil hidden insights. After creating a few journey maps, trends start to become apparent, and areas of potential improvement are easier to visualize.
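If you’ve collected per-step ratings from participants, a rough, first-pass journey map can even be aggregated in a few lines. A minimal sketch in Python; the steps and scores below are hypothetical:

```python
from statistics import mean

# Hypothetical per-participant satisfaction scores (1-5) at each step
# of the journey; real scores would come from your test sessions.
journeys = {
    "Land on homepage":   [4, 5, 4],
    "Search for product": [3, 4, 2],
    "Review cart":        [2, 2, 3],
    "Checkout":           [1, 3, 2],
}

print("Step                  Avg  Chart")
for step, scores in journeys.items():
    avg = mean(scores)
    print(f"{step:<21} {avg:.1f}  {'#' * round(avg * 2)}")
```

The low-scoring steps are your pain points, and a quick text chart like this is easy to drop into a team write-up.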

Plus, having a unified piece of information to share among team members makes the iteration process much easier. So, what are you waiting for? Head back to the first step to keep pushing your website’s usability higher. The sky’s the limit.

Usability testing is an absolute must-have for any business with an online presence, but it’s a broad field. The three metrics generally agreed upon for usability testing are satisfaction, efficiency, and effectiveness. Which of these metrics matters most to you will determine the best type of usability test to run for your website.

After you narrow down the type of test that best suits your needs, it’s time to find valid participants. Avoid internal testing if you can, and look for close proxies if your own customers aren’t viable. Next, decide between moderated and unmoderated testing, then between remote and in-person. Each has its own uses and restrictions, so consider your options carefully.

Once your usability testing concludes, go back and review the results to see what changes your website needs. Then implement those changes and do it all over again: iteration is the difference between “meh” usability and face-melting usability stardom.

The next time you’re considering a redesign or want to improve your users’ experience, refer back to this guide for the ultimate step-by-step to website usability testing.


Brian has a huge passion for WordPress, has been using it for over a decade, and even develops a couple of premium plugins. Brian enjoys blogging, movies, and hiking. Connect with Brian on Twitter .



Usability Testing: Everything You Need to Know (Methods, Tools, and Examples)

As you crack into the world of UX design, there’s one thing you absolutely must understand and learn to practice like a pro: usability testing.

Precisely because it’s such a critical skill to master, it can be a lot to wrap your head around. What is it exactly, and how do you do it? How is it different from user testing? What are some actual methods that you can employ?

In this guide, we’ll give you everything you need to know about usability testing—the what, the why, and the how.

Here’s what we’ll cover:

  • What is usability testing and why does it matter?
  • Usability testing vs. user testing
  • Formative vs. summative usability testing
  • Attitudinal vs. behavioral research

  • Five essential usability testing methods: performance testing, card sorting, tree testing, the 5-second test, and eye tracking

  • How to learn more about usability testing

Ready? Let’s dive in.

1. What is usability testing and why does it matter?

Simply put, usability testing is the process of discovering ways to improve your product by observing users as they engage with the product itself (or a prototype of the product). It’s a UX research method specifically trained on—you guessed it—the usability of your products. And what is usability? Usability is a measure of how easily users can accomplish a given task with your product.

Usability testing, when executed well, uncovers pain points in the user journey and highlights barriers to good usability. It will also help you learn about your users’ behaviors and preferences as these relate to your product, and to discover opportunities to design for needs that you may have overlooked.

You can conduct usability testing at any point in the design process once you’ve turned initial ideas into design solutions, but the earlier the better. Test early and test often! You can conduct some kind of usability testing with low- and high-fidelity prototypes alike—and testing should continue after you’ve got a live, out-in-the-world product.

2. Usability testing vs. user testing

Though they sound alike and share a somewhat similar end goal, usability testing and user testing are two different things. We’ll look at the differences in a moment, but first, here’s what they have in common:

  • Both share the end goal of creating a design solution to meet real user needs
  • Both take the time to observe and listen to the user to hear from them what needs/pain points they experience
  • Both look for feasible ways of meeting those needs or addressing those pain points

User testing essentially asks if this particular kind of user would want this particular kind of product—or what kind of product would benefit them in the first place. It is entirely user-focused.

Usability testing, on the other hand, is more product-focused and looks at users’ needs in the context of an existing product (even if that product is still in prototype stages of development). Usability testing takes your existing product and places it in the hands of your users (or potential users) to see how the product actually works for them—how they’re able to accomplish what they need to do with the product.

3. Formative vs. summative usability testing

Alright! Now that you understand what usability testing is, and what it isn’t, let’s get into the various types of usability testing out there.

There are two broad categories of usability testing that are important to understand—formative and summative. These have to do with when you conduct the testing and what your broad objectives are—what overarching impact the testing should have on your product.

Formative usability testing: 

  • Is a qualitative research process 
  • Happens earlier in the design, development, or iteration process
  • Seeks to understand what about the product needs to be improved
  • Results in qualitative findings and ideation that you can incorporate into prototypes and wireframes

Summative usability testing:

  • Is a research process that’s more quantitative in nature
  • Happens later in the design, development, or iteration process
  • Seeks to understand whether the solutions you are implementing (or have implemented) are effective
  • Results in quantitative findings that can help determine broad areas for improvement or specific areas to fine-tune (this can go hand in hand with competitive analysis)

4. Attitudinal vs. behavioral research

Alongside the timing and purpose of the testing (formative vs. summative), it’s important to understand two broad categories that your research (both your objectives and your findings) will fall into: behavioral and attitudinal.

Attitudinal research is all about what people say—what they think and communicate about your product and how it works. Behavioral research focuses on what people do—how they actually interact with your product and the feelings that surface as a result.

What people say and what people do are often two very different things. These two categories help us define those differences, choose our testing methods more intentionally, and categorize our findings more effectively.

5. Five essential usability testing methods

Some usability testing methods are geared more toward uncovering either behavioral or attitudinal findings, but many have the potential to surface both.

Of the methods you’ll learn about in this section, performance testing has the greatest potential for targeting both—and will perhaps require the greatest amount of thoughtfulness regarding how you approach it.

Naturally, then, we’ll spend a little more time on that method than the other four, though that in no way diminishes their usefulness! Here are the methods we’ll cover:

  • Performance testing
  • Card sorting
  • Tree testing
  • The 5-second test
  • Eye tracking

These are merely five common and/or interesting methods—not a comprehensive list of every method you can use to get inside the hearts and minds of your users. But it’s a place to start. So here we go!

Performance testing

In performance testing, you sit down with a user and give them a task (or set of tasks) to complete with the product.

This is often a combination of methods and approaches that will allow you to interview users, see how they use your product, and find out how they feel about the experience afterward. Depending on your approach, you’ll observe them, take notes, and/or ask usability testing questions before, after, or along the way.

Performance testing is by far the most talked-about form of usability testing—especially as it’s often combined with other methods. It’s what most commonly comes to mind in discussions of usability testing as a whole, and it’s what many UX design certification programs focus on—because it’s so broadly useful and adaptable.

While there’s no one right way to conduct performance testing, there are a number of approaches and combinations of methods you can use, and you’ll want to be intentional about it.

It’s a method that you can adapt to your objectives—so make sure you do! Ask yourself what kind of attitudinal or behavioral findings you’re really looking for, how much time you’ll have for each testing session, and what methods or approaches will help you reach your objectives most efficiently.

Performance testing is often combined with user interviews . For a quick guide on how to ask great questions during this part of a testing session, watch this video:

Even if you choose not to combine performance testing with user interviews, good performance testing will still involve some degree of questioning and moderating.

Performance testing typically results in a pretty massive chunk of qualitative insights, so you’ll want to devote a fair amount of intention and planning to it before you jump in.

Maximize the usefulness of your research by being thoughtful about the task(s) you assign and how you approach moderating the sessions. As participants work through those tasks, you’ll watch, take notes, and ask questions either during or after the test—depending on your approach.
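Whichever approach from the next section you choose, timestamped note-taking is a practical aid: logging observations against a task clock lets you compute time on task later and pair each note with the moment it happened. A minimal sketch in Python; the class name and events are hypothetical:

```python
import time

class SessionLog:
    """Timestamped notes for one participant's performance test."""
    def __init__(self):
        self.start = time.monotonic()
        self.events = []

    def note(self, text):
        # Record seconds elapsed since the session started.
        self.events.append((time.monotonic() - self.start, text))

log = SessionLog()
log.note("Task 1 started: find the refund policy")
log.note("Hesitated at main navigation")
log.note("Task 1 completed")

for seconds, text in log.events:
    print(f"[{seconds:7.2f}s] {text}")
```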

Four approaches to performance testing

There are four ways you can go about moderating a performance test, and it’s worth choosing your approach (or combination of approaches) carefully and intentionally. As you choose, take time to consider:

  • How much guidance the participant will actually need
  • How intently participants will need to focus
  • How guidance or prompting from you might affect results or observations

With these things in mind, let’s look at the four approaches.

Concurrent Think Aloud (CTA)

With this approach, you’ll encourage participants to externalize their thought process—to think out loud. Your job during the session will be to keep them talking through what they’re looking for, what they’re doing and why, and what they think about the results of their actions.

A CTA approach often uncovers a lot of nuanced detail in the user journey, but if your objectives include anything related to accuracy or time on task, you might be better off with a Retrospective Think Aloud.

Retrospective Think Aloud (RTA)

Here, you allow participants to complete their tasks and recount the journey afterward. They can complete tasks with a more realistic time frame and degree of accuracy, though you’ll certainly miss out on some of the nuanced details of participants’ in-the-moment thoughts and feelings.

Concurrent Probing (CP)

With Concurrent Probing, you ask participants about their experience as they’re having it, prompting them for details on their expectations, reasons for particular actions, and feelings about results.

This approach can be distracting, but combined with CTA, you can let participants complete their tasks and prompt only when you notice a particularly interesting aspect of their experience that you’d like to know more about. Again, if accuracy and timing are critical objectives, you might be better off with Retrospective Probing.

Retrospective Probing (RP)

If a participant says or does something interesting while completing their task(s), you can note it and ask them about it later—this is Retrospective Probing. It’s very often combined with CTA or RTA so you can capture those nuanced details of the experience without distracting participants from actually completing the task.

Whew! There’s your quick overview of performance testing. To learn more, see the final section of this article: How to learn more about usability testing.

With this under our belts, let’s move on to our other four essential usability testing methods.

Card sorting

Card sorting is a way of testing the usability of your information architecture. You give users blank cards (open card sorting) or cards labeled with the names and short descriptions of the main items/sections of the product (closed card sorting), then ask them to sort the cards into piles according to which items seem to go best together. You can go even further by asking them to sort the cards into larger groups and to name the groups or piles.

Rather than structuring your site or app according to your understanding of the product, card sorting allows the information architecture to mirror the way your users are thinking.

This is a great technique to employ very early in the design process as it is inexpensive and will save the time and expense of making structural adjustments later in the process. And there’s no technology required! If you want to conduct it remotely, though, there are tools like OptimalSort that do this effectively.

For more on how to conduct card sorting, watch this video:

Tree testing

Tree testing is a great follow-up to card sorting, but it can be conducted on its own as well. In tree testing, you create a visual information hierarchy (or “tree”) and ask users to complete a task using the tree. For example, you might ask users, “You want to accomplish X with this product. Where do you go to do that?” Then you observe how easily users are able to find what they’re looking for.

This is another great technique to employ early in the design process. It can be conducted with paper prototypes or spreadsheets, but you can also use tools such as TreeJack to accomplish this digitally and remotely.
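Scoring a tree test is mechanical enough to script: record each participant’s click path and check it against the correct destination, distinguishing direct from indirect success. A minimal sketch in Python with a hypothetical tree and hypothetical paths:

```python
# Score a hypothetical tree test: did each participant reach the right
# node, and did they get there without backtracking?
correct_destination = "Refund policy"
correct_path = ["Home", "Support", "Refund policy"]

# Recorded click paths from three hypothetical participants.
paths = [
    ["Home", "Support", "Refund policy"],                      # direct success
    ["Home", "Products", "Home", "Support", "Refund policy"],  # indirect success
    ["Home", "Products", "Pricing"],                           # failure
]

direct = sum(path == correct_path for path in paths)
overall = sum(path[-1] == correct_destination for path in paths)
print(f"Direct success: {direct}/{len(paths)}")
print(f"Overall success: {overall}/{len(paths)}")
```

A high overall-but-low-direct success rate usually means the right page exists but sits under a label users don’t expect.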

The 5-second test

In the 5-second test, you expose your users to one portion of your product (one screen, probably the top half of it) for five seconds and then interview them to see what they took away regarding:

  • The product/page’s purpose and main features or elements
  • The intended audience and trustworthiness of the brand
  • Their impression of the usability and design of the product

You can conduct this kind of testing in person rather simply, or remotely with tools like UsabilityHub .

Eye tracking

Eye tracking may seem somewhat new, but it’s been around for a while—though the tools and technology around it have evolved. Eye tracking on its own isn’t enough to determine usability, but it’s a great complement to your other usability testing measures.

In eye tracking, you literally track where most users’ eyes land on the screen you’re designing. This matters because you want to make sure that the elements users’ eyes are drawn to are the ones that communicate the most important information. It’s difficult to conduct in any kind of analog fashion, but there are plenty of tools that make it simple—CrazyEgg and HotJar are both great places to start.

6. How to learn more about usability testing

There you have it: your 15-minute overview of the what, why, and how of usability testing. But don’t stop here! Usability testing and UX research as a whole have a deeply humanizing impact on the design process. It’s a fascinating field to explore, and this kind of work has the power to keep companies, design teams, and even the lone designer accountable to what matters most: the needs of the end user.

If you’d like to learn more about usability testing and UX research, take the free UX Research for Beginners course with CareerFoundry. It’s jam-packed with information that will give you a deeper understanding of the value of this kind of testing, as well as a number of other UX research methods.

You can also enroll in a UX design course or bootcamp to get a comprehensive understanding of the entire UX design process (of which usability testing and UX research are an integral part). For guidance on the best programs, check out our list of the 10 best UX design certification programs. And if you’ve already started learning and are thinking about the job hunt, here are the top 5 UX research interview questions to be ready for.

For further reading about usability testing and UX research, check out these other articles:

  • How to conduct usability testing: a step-by-step guide
  • What does a UX researcher actually do? The ultimate career guide
  • 11 usability heuristics every designer should know
  • How to conduct a UX audit
