After Anja explained the dialogue principles in her last post, I would now like to discuss how we apply them in our daily work. On the one hand, it is something of an occupational hazard that we repeatedly notice violations of the dialogue principles in the programs we work with, and that these violations actively get in our way. On the other hand, there is the so-called expert evaluation: a method of testing products (in our case interactive applications such as apps and websites) for their usability.
The goal of such an expert evaluation is to identify potential usability problems in the application. To do this plausibly and objectively, you need defined usability criteria against which the product is evaluated. For us, the dialogue principles are the benchmark: because they are clearly defined, they allow us to evaluate the application comprehensively in its context of use.
When do you need an expert evaluation?
An expert evaluation is a good choice if:
- no usability measures have yet been implemented for an application
- a prototype exists that is not yet interactive
- wireframes are available and the concept needs to be validated
In all of these cases it is cheaper and faster to have a usability expert review the current state and identify obvious shortcomings. Participants in a usability test find it difficult to imagine how an application will react when no interactions are possible yet. Moreover, fixing obvious problems first means later test participants are not confronted with issues that would only cause frustration.
What are the procedures?
An expert evaluation should always be carried out by at least two experts (the four-eyes principle) to ensure an objective assessment. Several options are then available in terms of procedure and scope:
- The experts assess the application together, vs. each expert makes an individual assessment that is then merged
- All functions of the product are examined, vs. only the main functions are specifically scrutinized
- The experts explore the site freely, vs. they work through pre-defined tasks that correspond to those of the potential users
Regardless of which approach you decide on, you need a sound basis for your assessment. As already mentioned, we work according to ISO 9241 and use its dialogue principles (defined in Part 110) as our criteria. Alternatives include heuristic evaluation or self-defined usability checklists.
What is the result?
The result is a list of all potentially critical usage situations, based on the defined usability criteria. For each finding we record:
- View (screen, state): on which view the problem occurs
- Potentially critical usage situation: a description of the situation and where the problem lies
- Violated dialogue principle: which of the dialogue principles is not adhered to
- Optimization suggestion: which conceptual or visual adjustment can solve the problem
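To make the structure of such a findings list concrete, here is a minimal sketch in Python. The class and field names are our own illustration (not part of any standard or tool); the enum lists the seven dialogue principles as defined in the 2006 edition of ISO 9241-110.

```python
from dataclasses import dataclass
from enum import Enum

# The seven dialogue principles of ISO 9241-110 (2006) as usability criteria.
class DialoguePrinciple(Enum):
    SUITABILITY_FOR_THE_TASK = "suitability for the task"
    SELF_DESCRIPTIVENESS = "self-descriptiveness"
    CONFORMITY_WITH_USER_EXPECTATIONS = "conformity with user expectations"
    SUITABILITY_FOR_LEARNING = "suitability for learning"
    CONTROLLABILITY = "controllability"
    ERROR_TOLERANCE = "error tolerance"
    SUITABILITY_FOR_INDIVIDUALIZATION = "suitability for individualization"

# One entry in the evaluation's findings list (field names are illustrative).
@dataclass
class Finding:
    view: str                              # screen or state where the problem occurs
    situation: str                         # potentially critical usage situation
    violated_principle: DialoguePrinciple  # which dialogue principle is violated
    suggestion: str                        # conceptual or visual adjustment

# Example finding, as it might appear in an evaluation report.
finding = Finding(
    view="Checkout, step 2",
    situation="The form gives no hint which fields are mandatory; "
              "users only learn this after submitting.",
    violated_principle=DialoguePrinciple.SELF_DESCRIPTIVENESS,
    suggestion="Mark mandatory fields with an asterisk and explain it above the form.",
)
print(f"{finding.view}: violates {finding.violated_principle.value}")
```

Keeping findings in a uniform structure like this makes it straightforward to merge the individual assessments of several experts and to sort the list by view or by violated principle.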
The optimization suggestions are then incorporated, as far as possible, into a new version of the application. However, to truly ensure the usability of the application, it is important to validate it after implementation with real users in a usability test.
Why do we speak of potentially critical usage situations?
When conducting an expert evaluation, one should never forget that it is carried out by usability experts. We are experts in the field of usability and can identify usability violations independent of context and task. Only for a few very general applications can we also consider ourselves task experts. Most of the applications we assess, however, are very specific and require domain expertise. In these cases we cannot fully put ourselves in the position of the individual user, with their experience, characteristics, and specialist knowledge. That is why we speak of potentially critical usage situations: we cannot say for certain that a finding will really be a problem for users.
Since all of this was rather theoretical, Maxi will walk you through a practical example in the next article. For this we have chosen a tool that we use daily, one for which we are not only usability experts but also task experts: Axure.