
UX Research Methodology

TPUX’s Approach to UX Research

We redefine possibility by combining imagination with precision to shape extraordinary experiences.

Our Approach

This section outlines the core methodologies behind TPUX’s extensive UX research program, developed and refined over more than 120,000 hours of practical investigation and real-world testing. Our research framework is built on a combination of qualitative and quantitative methods designed to uncover deep, actionable insights.


How We Work

Join us in exploring a digital creative process where simplicity enhances the beauty and efficacy of every design endeavor.

01

Moderated Usability Testing

We’ve conducted 79 iterative rounds of one-on-one usability testing, involving over 3,800 individual user–site sessions. These sessions followed the “Think Aloud” technique, where users articulate their thoughts while completing tasks. Testing was carried out across a diverse range of participants from the US, Canada, Australia, UK, France, Italy, Spain, the Philippines, Vietnam, Russia, Ukraine and India to ensure broad representation.

02

Benchmarking Against UX Standards

Across 61 benchmarking rounds, we have manually evaluated 590 of the highest-performing e-commerce platforms in the US, Europe, Canada and India. Each site was assessed against a comprehensive set of 3,200+ UX criteria. This process has tracked more than 587 different parameters in real-world implementations and produced over 315,000 individual UX performance data points.

03

Eye-Tracking Studies

In controlled lab environments, we applied eye-tracking technology to study how users visually interact with digital interfaces. This offers valuable insight into attention patterns, navigation behavior, and layout effectiveness.

04

Quantitative Surveys

Our research also includes 29 large-scale quantitative studies involving 23,685 participants. These surveys were designed to validate user behavior trends, preferences, and pain points at scale.

05

Heatmap Testing & Analysis

To supplement our insights, we utilize heat map visualization tools that reveal how users engage with content and interact across various page elements — helping us pinpoint areas of friction or opportunity.

The following sections dive deeper into each research stream, detailing how each method contributes to the broader UX insights we provide.

1. Individual Moderated Usability Testing (Think-Aloud Method)

A central part of our research approach involves one-on-one, qualitative usability testing, conducted over 60 separate test rounds with more than 9,000 sessions across real users and real websites.

These studies were carried out in the US, Canada, Australia, UK, France, Italy, Spain, the Philippines, Vietnam, Russia, Ukraine and India. By using the Think-Aloud technique, participants were encouraged to express their thoughts as they navigate different digital experiences — revealing how users interpret information, make choices, and interact with interfaces.

Although qualitative in nature, this method provides statistically meaningful insights. A binomial model shows that just 30 test users can help uncover 96% of usability problems that affect 35% or more of users. While our aim isn’t to produce exact figures for individual issues, this method helps us detect the kinds of challenges a significant portion of users are likely to face — and more importantly, identify the solutions that consistently support a smoother experience.
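For readers who want the arithmetic behind this claim, here is a minimal sketch of the standard binomial detection model in Python. The exact parameters behind the 96% figure are not specified above, so the values in the example are illustrative only.

```python
# Standard binomial detection model for usability testing
# (illustrative parameters; not TPUX's exact calculation).

def detection_probability(p: float, n: int) -> float:
    """Probability that an issue affecting a fraction p of users
    is observed at least once across n independent test sessions."""
    return 1 - (1 - p) ** n

n = 30  # test users, as in the passage above
for p in (0.05, 0.10, 0.20, 0.35):
    print(f"issue frequency {p:.0%}: detected with "
          f"probability {detection_probability(p, n):.2%}")
```

The model treats each session as an independent chance to surface a given issue, which is why detection probability climbs quickly even at modest sample sizes.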

1.1. Real-World Testing. Real User Behavior.

Participants in our sessions were given practical, scenario-based tasks that reflect everyday shopping behavior, such as:

  • Finding seasonal outerwear

  • Managing account settings (e.g. changing a password)

  • Searching for a product that fits their device

  • Browsing for an outfit for an upcoming event


Each session lasted around one hour and included 2–5 tasks based on complexity and pacing. Users were instructed to behave naturally — they could abandon the site, visit a competitor, or search for help externally if they wanted to. This allowed us to observe how users behave in real-world conditions, not just in a controlled setting.

To better understand their experience, open-ended prompts were used throughout testing:

  • “Why did you click there?”

  • “What did you expect to happen?”

  • “Why did you close the tab?”

  • “What are you thinking at this point?”


If users became stuck, moderators observed how they tried to recover before offering minimal assistance to continue. Tasks were marked as failed if users couldn’t proceed, required intervention, or misinterpreted product details due to unclear design — for example, misunderstanding the size or feature of a product based on the way it was presented.


1.2. Over 90,000 Usability Issues. 3,800+ Guidelines.

Across all sessions, users encountered over 90,000 distinct usability issues. These findings were analyzed and distilled into over 3,800 UX guidelines — all based on recurring patterns and validated through repeated testing.

To protect privacy, any personal data shared during sessions was anonymized or replaced in our reports and visuals. We also rotated test sites across participants to ensure balanced exposure and reliable results.


This moderated testing process continues to serve as a core input into our UX audits, benchmarking studies, and product design strategies.


2. UX Benchmarking Methodology

Another foundational element of our research approach is an in-depth UX benchmarking study. This involves 61 structured rounds of manual evaluation across 590 of the highest-performing e-commerce sites in the US and Europe. Each site was assessed against 3,200+ user experience guidelines — all derived from our large-scale qualitative testing — to create a robust, comparative data set.

These evaluations are conducted as heuristic reviews, using our full set of UX principles as scoring criteria. Each guideline is treated as an individual heuristic, and every site is rated using a 7-point scale. To ensure accurate representation, each score is weighted according to how frequently a given issue appears in testing and the severity of its impact on the user experience.
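As a rough illustration of how frequency and severity weighting can shape a final score, here is a hedged Python sketch. The field names, weight formula, and mapping onto a 0-to-100 score are hypothetical choices for the example, not TPUX’s actual scoring algorithm.

```python
# Hypothetical frequency- and severity-weighted heuristic score.
# The weighting scheme below is an assumption for illustration only.

from dataclasses import dataclass

@dataclass
class GuidelineRating:
    rating: int        # heuristic rating on the 7-point scale (1..7)
    frequency: float   # how often the issue appears in testing (0..1)
    severity: float    # impact on the user experience (0..1)

def weighted_ux_score(ratings: list[GuidelineRating]) -> float:
    """Frequency- and severity-weighted mean, mapped onto 0..100."""
    weights = [r.frequency * r.severity for r in ratings]
    total = sum(weights)
    if total == 0:
        return 0.0
    mean = sum(r.rating * w for r, w in zip(ratings, weights)) / total
    return (mean - 1) / 6 * 100  # map the 1..7 scale onto 0..100

print(weighted_ux_score([
    GuidelineRating(rating=6, frequency=0.40, severity=0.9),
    GuidelineRating(rating=3, frequency=0.10, severity=0.5),
    GuidelineRating(rating=7, frequency=0.25, severity=0.7),
]))  # ≈ 84.0
```

A multiplicative weight means a guideline that is both common and severe dominates the score, which matches the intent described above.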

As a result, we’ve developed a large-scale benchmark database containing more than 315,000 manually assigned UX performance ratings. In addition, more than 48,000 hours of real-world lag detection were carried out.

The overall UX score assigned to each site reflects how well it performs in delivering a smooth and intuitive experience for a first-time user, based on the full set of 3,200+ UX criteria.

All reviews were conducted by TPUX researchers and were approached from the perspective of a new user. To simulate a first-time experience, reviewers used clean sessions with no login history, except when specifically benchmarking logged-in areas like account features or self-service flows.

Where necessary, location-based testing was applied — using US, Canada, Australia, UK, France, Italy, Spain, the Philippines, Vietnam, Russia, Ukraine and India addresses for the corresponding sites. When a platform offered multiple regional or language versions, the US or UK version was selected for consistency in analysis.

Each review involved examining 25 to 40 supporting pages from the site to fully assess the relevant UX guidelines. These were also used to produce annotated screenshots that visually document the strengths and weaknesses observed.


Additional notes on specific benchmarking types:

  • Mobile Web & App Testing: Mobile benchmarks were carried out using the latest version of iOS at the time of testing to reflect current device behavior and standards.


  • Product Page Evaluation: For product detail pages (PDPs), each site was assessed across 111+ UX parameters. Between 5 and 12 product pages — chosen from top-selling or prominently featured items — were used as the basis for evaluation and screenshot documentation.


  • Checkout Flow Analysis: Only the shortest possible path for a new user (typically the guest checkout option) was used for evaluation to ensure a consistent and relevant comparison across platforms.


This benchmark framework helps us not only measure UX performance in a structured way, but also identify design patterns and solutions that are consistently associated with top-performing experiences.


3. Eye-Tracking Research

As a complement to our moderated usability testing using the Think-Aloud method, we also incorporated eye-tracking studies to gain deeper insight into user attention and visual behavior.

This part of the study involved 44 participants, each tested in a controlled lab environment using a Tobii eye-tracking system. A moderator was present during the sessions to provide support with technical issues or task clarification only, ensuring that the users’ interactions remained as natural as possible. Each session lasted between 15 and 40 minutes.

Participants were asked to interact with five different e-commerce websites to complete realistic shopping tasks. A typical prompt might be: “Browse the product list, choose a pair of accessories you like, and go ahead with purchasing it.” This setup helped us observe how users visually scan product listings, evaluate items, and engage with key interface elements.

At the beginning of the session, participants could choose whether to use their real personal information or a temporary ID provided by the research team. Most participants chose the latter. Any identifying information captured during testing was either anonymized or replaced with placeholder data in all report visuals and audit materials.

These eye-tracking studies offered valuable supporting data, allowing us to cross-reference users’ gaze patterns with their decision-making and navigation behavior — helping to further validate our broader usability findings.



4. Heat Map Visualization Testing

Heat maps provide critical insights into how users interact with web pages by visualizing their gaze patterns in response to interface elements. This testing method was used to understand how different signifiers affect user attention and behavior.

The study involved modifying real-world web pages to create two distinct versions of each: one with strong signifiers and one with weak or absent signifiers. These versions were carefully designed to retain the same layout, content, and visual style, ensuring that the only variable was the strength of the visual cues on interactive elements such as buttons, links, tabs, and sliders.

Heat maps are powerful visual tools that aggregate eye fixation data, revealing where users focus their attention. The test involved at least 1,200 participants, providing a sufficient sample size for reliable results. The heat maps use color coding to highlight areas based on the intensity and duration of eye fixations: red areas received the most attention, while orange, yellow, and purple zones had less, and areas with no color were not viewed at all.
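To make the aggregation concrete, here is a minimal Python sketch of how fixation data can be binned into a heat grid. The (x, y, duration) input format, grid cell size, and normalization are assumptions for illustration, not the actual tooling used in the study.

```python
# Minimal fixation-to-heatmap binning (illustrative; not the study's tooling).
import numpy as np

def fixation_heatmap(fixations, width, height, cell=40):
    """Sum fixation durations per grid cell; hotter cells = more attention."""
    grid = np.zeros((height // cell + 1, width // cell + 1))
    for x, y, duration in fixations:
        grid[y // cell, x // cell] += duration
    peak = grid.max()
    return grid / peak if peak > 0 else grid  # normalize to 0..1

# Example: three fixations on a hypothetical 1280x800 page.
heat = fixation_heatmap([(200, 150, 0.8), (210, 160, 1.2), (900, 600, 0.3)],
                        width=1280, height=800)
print(heat.shape, heat.max())  # (21, 33) 1.0
```

The red-through-purple color scheme described above is then simply a rendering of these normalized intensities.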

In this experiment, nine live web pages were selected, and both modified versions were tested. The study served two main purposes:

  1. To evaluate the existing designs: By measuring the attention users gave to specific elements, we were able to assess the effectiveness of the original live pages.


  2. To compare design modifications: The study aimed to test whether stronger or weaker signifiers in the modified designs resulted in better user engagement. This comparison offered valuable insights into which design approach could outperform the original live versions and potentially serve as a more effective solution in future A/B testing scenarios.


Through this testing, we gained valuable data on how different visual cues can shape user experience and inform design decisions that lead to higher engagement.

5. Quantitative Research Studies

The final key component of TPUX’s comprehensive research methodology is our extensive quantitative studies. These studies form a critical part of our data-driven insights and consist of 29 large-scale quantitative studies, engaging a total of 23,685 participants.

The purpose of these studies was to gather actionable data on specific user behaviors and perceptions, focusing on areas that directly influence the user experience and conversion rates. The studies addressed various topics, including:

  • Reasons for Checkout Abandonment and Privacy Concerns: Four separate studies involving 8,324 participants from the US aimed to investigate why users abandon their shopping carts, privacy-related concerns, and issues related to CAPTCHA error rates. Participants were recruited through Google Consumer Insights and SurveyMonkey Audience, ensuring they closely matched the demographic profile of the US internet user base.


  • Trust in Site Seals and SSL Logos: Twelve studies with a total of 9,128 participants explored the impact of trust signals like site seals and SSL logos on user confidence. Participants were recruited from the US, with demographic targeting to mirror the broader US online audience.


  • A/B Testing of Free Shipping Design Options: This study involved 10,882 participants, split into two groups to test and compare two different versions of ‘guest checkout’ designs. Like the previous studies, participants were US-based and recruited via Google Consumer Insights (a comparison sketch follows this list).


  • Self-Service Account Features: A study involving 6,211 participants from both the US and UK explored which self-service account features users rely on most. Participants were selected to match the demographic characteristics of online users in these regions, using SurveyMonkey Audience for recruitment.


  • Follow-up Studies: In addition to the specific tests mentioned above, we conducted five follow-up studies to gather deeper insights into these topics. These studies included 3,200 to 5,896 participants each, all recruited and incentivized through SurveyMonkey Audience, with demographics matched to the internet population of the US.
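As referenced in the A/B testing item above, here is a hedged sketch of how two design variants from a split study might be compared statistically. The conversion counts are invented for illustration, and this is a standard two-proportion z-test, not TPUX’s actual analysis code.

```python
# Two-proportion z-test for comparing two design variants
# (invented counts; illustrative only).
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test; returns (z, p_value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal tail
    return z, p_value

# Hypothetical: 10,882 participants split evenly between variants A and B.
z, p = two_proportion_z(612, 5441, 701, 5441)
print(f"z = {z:.2f}, p = {p:.4f}")  # z = -2.62, p ≈ 0.0088
```

A result like this would indicate the difference between the two designs is unlikely to be sampling noise.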


Each of these studies was designed to address critical questions about user behavior and trust, helping us better understand the elements that impact conversion, engagement, and overall user satisfaction.




Benchmarking & Scoring Methodology

Our UX performance benchmarking is based on heuristic evaluations of 12,000 e-commerce sites. Unlike traditional methods that rely on 30 to 40 general usability principles, we apply 3,200+ highly detailed and weighted usability guidelines derived from over 120,000 hours of large-scale qualitative testing conducted by TPUX.

Each of the 3,200+ guidelines is evaluated based on its observed impact during usability testing, ensuring that the performance score reflects real-world user interactions. To provide a comprehensive assessment, every site is graded on a 9-point scale across all 3,200+ guidelines.

The performance scores for specific themes and topics are calculated using a multi-parameter weighted algorithm with self-healing normalization.
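Since the exact algorithm is not published, here is only a speculative Python sketch of one plausible reading of it: a weighted roll-up that renormalizes when some guidelines do not apply to a given site. Every name and formula below is an assumption for illustration.

```python
# Speculative weighted roll-up with renormalization over missing guidelines
# (one plausible reading of "self-healing normalization"; not TPUX's algorithm).

def theme_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted mean over the guidelines actually rated for a theme.

    Weights for unrated guidelines are dropped and the rest renormalized,
    so theme scores stay comparable even when some guidelines don't apply."""
    rated = {g: w for g, w in weights.items() if g in scores}
    total = sum(rated.values())
    if total == 0:
        return float("nan")
    return sum(scores[g] * w for g, w in rated.items()) / total

weights = {"g1": 0.5, "g2": 0.3, "g3": 0.2}
print(theme_score({"g1": 7.0, "g3": 4.0}, weights))  # g2 unrated; ≈ 6.14
```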

This approach guarantees that our benchmarking process not only accounts for the evolving nature of the e-commerce landscape but also adapts to the increasing expectations of users. 

The scoring system is updated multiple times a year to reflect these changes, ensuring that our evaluations remain current and accurate.


Designing Success

See how we help founders and businesses uncover gaps and failure points — and better understand the thinking behind successful product designs that truly make an impact.
