Angelika Zerbe


Measuring and analyzing products using quantitative tests

We at coeno have a lot of experience with qualitative test methods, such as the usability test. In addition, we now want to expand our knowledge and skills in quantitative testing methods. Quantitative tests have the great advantage that a large number of users can be surveyed remotely and with little time expenditure. The result is also tangible: it provides concrete figures that can be analyzed and used for comparison.

Our experiment

To study questionnaires, we started an experiment: we want to create a comparative UX benchmark for an industry of our choice. First we have to decide what and how we want to test. But let's start at the beginning: what is a benchmark anyway?

How benchmarks work

In a quantitative study, data is collected first. This is usually done with a questionnaire that is sent to the test participants and filled out unmoderated. A benchmark is essentially an extension of this: more data is collected so that the results can subsequently be compared with one another. In concrete terms, this means:

An operator of a website for beauty products uses a questionnaire to measure whether customers would recommend his website to others. He does the same with customers of a competing website. The answers are converted into a number; questionnaires usually provide a calculation for this. So at the end of his study, he has two numbers and knows which website is currently better received. If his site performs worse, he can choose a test method in the next step that helps him find out why it is perceived that way.
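A recommendation question like the one above is commonly scored as a Net Promoter Score: answers on a 0-10 scale are grouped into detractors (0-6), passives (7-8) and promoters (9-10), and the score is the percentage of promoters minus the percentage of detractors. A minimal sketch (the sample ratings are made up):

```python
def net_promoter_score(answers):
    """Compute the NPS from 0-10 recommendation ratings.

    Promoters rate 9-10, detractors 0-6; the score is the
    percentage of promoters minus the percentage of detractors.
    """
    promoters = sum(1 for a in answers if a >= 9)
    detractors = sum(1 for a in answers if a <= 6)
    return 100.0 * (promoters - detractors) / len(answers)

# Made-up ratings for our site and a competitor
ours = [9, 10, 7, 6, 9, 8, 10, 3]
theirs = [7, 8, 9, 6, 5, 8, 7, 10]
print(net_promoter_score(ours), net_promoter_score(theirs))  # → 25.0 0.0
```

With two such numbers, the comparison described above becomes a simple greater-than check.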

If the website operator does not know of a competing product, there is a second option for a benchmark: he can test his product in fixed cycles and compare the results. He can use the time between the test runs to change the website.

We want to pursue the first approach. Before I can go into that further, there is one more important and interesting question: how do we even measure?

Choice of questionnaires

There is a wide range of questionnaires that are established and meet various quality criteria. They can be divided into three categories; I would like to introduce one questionnaire from each category and briefly name others in it.

Usability questionnaires

System Usability Scale (SUS)

“Quick and dirty”: despite this suspicious-sounding label given to the SUS at its introduction in 1986, it is probably the most widely used questionnaire. With just ten statements, it can be completed quickly.

A statement is: “I think that I would like to use this system frequently.”

Essentially, the learnability and transparency of a product are queried. The result is a single score from 0 to 100; rankings are available online that show how the number should be classified. The questionnaire can be used free of charge.
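The SUS score is computed from the ten items, each answered on a five-point scale from 1 to 5: for the odd-numbered, positively worded items the contribution is the answer minus 1, for the even-numbered, negatively worded items it is 5 minus the answer, and the sum is multiplied by 2.5 to land on the 0-100 range. A minimal sketch with made-up answers:

```python
def sus_score(answers):
    """Compute the System Usability Scale score (0-100).

    `answers` are the ten item ratings on a 1-5 scale, in order.
    Odd items (1st, 3rd, ...) are positively worded, even items
    negatively worded.
    """
    if len(answers) != 10:
        raise ValueError("SUS has exactly ten items")
    total = 0
    for i, a in enumerate(answers):
        total += (a - 1) if i % 2 == 0 else (5 - a)
    return total * 2.5

# Made-up answers from one participant
print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # → 85.0
```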

Other well-known usability questionnaires are:

  • Software Usability Measurement Inventory (SUMI)
    Use is subject to a fee and some statements are a bit dated. It tests the dimensions of efficiency, affect, support, controllability and learnability.
  • ISONORM 9241/110
    As the name suggests, it tests the dialogue principles of ISO 9241. The wording is sometimes difficult for UX laypeople, so it is better to have UX experts fill it out.
  • Questionnaire for User Interaction Satisfaction (QUIS)
    Five different scales are queried: learnability, terminology and system status, system function, screen and overall assessment. Use is subject to a fee.

User experience questionnaires

meCUE 2.0

The meCUE is a further development of the CUE model and was published in 2013. The model is based on the assumption that a user's overall judgment of the user experience arises from the perception of task-related (pragmatic) qualities and the perception of non-task-related (hedonic) qualities. The meCUE consists of 34 questions, which are divided into five modules:

  1. Perception of task-related quality: usefulness & usability
  2. Perception of non-task-related quality: visual aesthetics, status, attachment
  3. Emotions: Emotional Response
  4. Consequences: Product loyalty and intention to use
  5. Global: overall judgment


Other well-known user experience questionnaires are:

  • AttrakDiff 2
    Became known as the first UX questionnaire. Use is subject to a fee. It is not intended for products that serve to achieve work goals.
  • User Experience Questionnaire (UEQ)
    Tests the scales of attractiveness, efficiency, perspicuity, dependability, stimulation and novelty. The questionnaire is also available in simplified German, which is necessary when, for example, children are being tested. In principle, it is also very suitable for products that serve to achieve work goals.
  • User Experience Questionnaire Plus (UEQ+)
    This is a further development of the UEQ. Twenty scales are available (including the six from the UEQ), from which any number can be selected.
  • Standardized User Experience Percentile Rank Questionnaire (SUPR-Q)
    Until recently, this was the only questionnaire that covers the dimension of trust. With its eight questions, it is very short.

Special Questionnaires

Positive and Negative Affect Schedule (PANAS)

The PANAS, created in 1988, is a questionnaire for recording emotional states. It provides 20 adjectives, ten of which are negative and ten positive, and the user rates the intensity with which each adjective applies.

A mean value is then calculated for each of the two dimensions (positive affect and negative affect). The PANAS can be used whenever human states of mind and emotions are to be measured.
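The two mean values can be sketched like this (the adjective ratings are made up; PANAS items are rated on a 1-5 intensity scale):

```python
def panas_means(positive_ratings, negative_ratings):
    """Return the mean positive affect and mean negative affect.

    Each list holds the ten intensity ratings (1-5) for the
    positive and negative adjectives respectively.
    """
    pa = sum(positive_ratings) / len(positive_ratings)
    na = sum(negative_ratings) / len(negative_ratings)
    return pa, na

# Made-up ratings for the ten positive and ten negative adjectives
positive = [4, 3, 5, 4, 3, 4, 5, 3, 4, 4]  # e.g. "interested", "enthusiastic"
negative = [2, 1, 1, 2, 1, 1, 2, 1, 1, 2]  # e.g. "distressed", "nervous"
pa, na = panas_means(positive, negative)
print(pa, na)  # → 3.9 1.4
```

A high positive mean combined with a low negative mean indicates a predominantly pleasant emotional state.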

Other well-known special questionnaires are:

  • Visual Aesthetics of Website Inventory (VisAWI)
    VisAWI allows the measurement of the visual aesthetics of web pages via the facets of diversity, simplicity, colourfulness and craftsmanship.
  • Microsoft Product Reaction Cards
    There are 118 attributes on cards. From these, the user chooses those that describe the product well. This test helps to get an impression of the perceived UX.
  • Game Experience Questionnaire (GEQ)
    With three modules, the user experience of a game can be measured. The first module covers the user's experience during the game; the second relates to playing with other users, i.e. social presence; the third asks about the experience immediately after the end of a game session.

How do we continue?

Due to the high-quality impression the UEQ makes and the fact that it is available in 37 languages, it was immediately shortlisted. Its modular further development, the UEQ+, is a wonderful step, as it allows the questionnaire to be varied depending on the industry. It is also available in a large number of languages, which is important because users should always fill out a questionnaire in their native language. The UEQ+ is available free of charge and comes with an Excel file for the evaluation. This is very helpful, because with paid questionnaires the calculation of the result is often not visible and therefore harder to analyze. Last but not least, the UEQ+ meets the quality criteria that have been established for UX questionnaires. So we will start testing with the UEQ+. How that goes will be covered in an upcoming blog article.
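To sketch roughly how such a modular evaluation can work (the scale names, ratings and the weighting scheme here are illustrative; the official UEQ+ Excel sheet is the authoritative calculation): each selected scale gets a mean over its items, and an overall score can be formed by weighting each scale mean with the importance rating the participant assigned to that scale.

```python
def ueq_plus_kpi(scales):
    """Importance-weighted overall score across selected UEQ+ scales.

    `scales` maps a scale name to (item_ratings, importance);
    items are coded on a 7-point scale from -3 to +3, importance
    on its own rating scale.
    """
    weighted, total_importance = 0.0, 0.0
    for item_ratings, importance in scales.values():
        scale_mean = sum(item_ratings) / len(item_ratings)
        weighted += importance * scale_mean
        total_importance += importance
    return weighted / total_importance

# Illustrative ratings for three of the twenty available scales
scales = {
    "Attractiveness": ([2, 3, 2, 2], 6),
    "Efficiency":     ([1, 2, 1, 2], 5),
    "Trust":          ([0, 1, 1, 0], 4),
}
print(round(ueq_plus_kpi(scales), 2))  # → 1.53
```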


Schrepp, M. (2018). User Experience mit Fragebögen messen.

Breyer, B., & Bluemke, M. (2016). Deutsche Version der Positive and Negative Affect Schedule PANAS (GESIS Panel). doi:10.6102/zis242.

Lewis, J., & Sauro, J. (2018). Item Benchmarks for the System Usability Scale. Journal of Usability Studies, 13(3), 158-167.

IJsselsteijn, W. A., de Kort, Y. A. W., & Poels, K. (2013). The Game Experience Questionnaire. Technische Universiteit Eindhoven. Last accessed on November 1st, 2020.

Angelika Zerbe

UX Concept Designer & NN/g UX certified


