Heuristic Evaluation

Description

A usability evaluation method in which one or more reviewers, preferably experts, compare a software, documentation, or hardware product to a list of design principles (commonly referred to as heuristics) and identify where the product does not follow those principles.

Required Skills

It is recommended that someone experienced with the method lead a Heuristic Evaluation. However, with training, non-experts can also identify usability problems. A domain expert is needed to assess technical applications or products.

Recommended Uses

Heuristic reviews can be used as part of requirements gathering (to evaluate the usability of the current/early versions of the interface), competitive analysis (to evaluate your competitors to find their strengths and weaknesses) and prototyping (to evaluate versions of the interface as the design evolves).

Outcomes

  • List of heuristic violations that represent potential usability issues.

Limitations

A Heuristic Evaluation is not a substitute for a usability test, as the two methods often uncover different types of usability issues.

Benefits

Heuristic evaluation falls within the category of usability engineering methods known as Discount Usability Engineering (Nielsen, 1989). The primary benefits of these methods are that they are less expensive than other types of usability engineering methods and they require fewer resources (Nielsen, 1989). The beneficiaries are the stakeholders responsible for producing the product: a heuristic evaluation costs less than other forms of usability evaluation, which reduces the cost of the project. The users, of course, benefit from a more usable product.

Advantages

  • Inexpensive relative to other evaluation methods.
  • Intuitive, and easy to motivate potential evaluators to use the method.
  • Advance planning is not required.
  • Evaluators do not have to have formal usability training. In their study, Nielsen and Molich used professional computer programmers and computer science students.
  • Can be used early in the development process.
  • Faster turnaround time than laboratory testing.

Disadvantages

  • As originally proposed by Nielsen and Molich, the evaluators would have knowledge of usability design principles, but were not usability experts (Nielsen & Molich, 1990). However, Nielsen subsequently showed that usability experts would identify more issues than non-experts, and “double experts” – usability experts who also had expertise with the type of interface (or the domain) being evaluated – identified the most issues (Nielsen, 1992). Such double experts may be hard to come by, especially for small companies.
  • Individual evaluators identify a relatively small number of usability issues (Nielsen & Molich, 1990). Multiple evaluators are recommended since a single expert is likely to find only a small percentage of problems. The results from multiple evaluators must be aggregated.
  • Heuristic evaluations and other discount methods may not identify as many usability issues as other usability engineering methods, for example, usability testing.
  • Heuristic evaluation may identify more minor issues and fewer major issues than would be identified in a think-aloud usability test.
  • Heuristic reviews may not scale well for complex interfaces: a small number of evaluators may not find a majority of the problems in the interface and may miss some serious ones.
  • Does not always readily suggest solutions for usability issues that are identified.
  • Biased by the preconceptions of the evaluators.
  • As a rule, the method will not create “eureka moments” in the design process.
  • In heuristic evaluations, the evaluators only emulate the users – they are not the users themselves. Actual user feedback can only be obtained from laboratory testing or by involving users in the heuristic evaluation.
  • Heuristic evaluations may be prone to reporting false alarms – reported problems that turn out not to be actual usability problems in practice.
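The point about individual evaluators finding only a small share of the problems can be made concrete with Nielsen and Landauer's model, in which the proportion of problems found by n independent evaluators is 1 − (1 − λ)^n, where λ is the probability that a single evaluator finds a given problem (roughly 0.31 in Nielsen's data). A minimal sketch, assuming that model:

```python
def proportion_found(n_evaluators: int, lam: float = 0.31) -> float:
    """Expected proportion of usability problems found by n independent
    evaluators under the Nielsen-Landauer model: 1 - (1 - lam)^n.
    lam is the per-evaluator detection probability (~0.31 in Nielsen's data).
    """
    return 1.0 - (1.0 - lam) ** n_evaluators

# A single evaluator finds only ~31% of problems; five find ~84%.
for n in (1, 3, 5, 10):
    print(f"{n} evaluator(s): {proportion_found(n):.0%}")
```

This is why three to five evaluators are usually recommended: the curve flattens quickly, so each additional evaluator beyond that contributes relatively little.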

1. Usability Body of Knowledge: Heuristic Evaluation. http://www.usabilitybok.org/heuristic-evaluation. UPA International. Accessed: June 18, 2018.

Appropriate Uses

Heuristic evaluation can be used throughout the design life cycle – at any point where it is desirable to evaluate the usability of a product or product component. The closer the evaluation is to the end of the life cycle, however, the more it resembles traditional quality assurance and the less it resembles usability evaluation. As a practical matter, if the method is to have an impact on the design of the interface (i.e., if the usability issues are to be resolved before release), the earlier in the life cycle the review takes place, the better.

Heuristic evaluation is not limited to one of the published lists of heuristics. The list of heuristics can be as long as the evaluators deem appropriate for the task at hand. For example, you can develop a specialized list of heuristics for specific audiences, like senior citizens, children, or disabled users, based on a review of the literature.
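Because the heuristics list is just data, it can be represented in a form that evaluation tooling can reuse across checklists. A minimal sketch using Nielsen's ten heuristics as the default list; the specialized entries for senior citizens are illustrative, not from a published list:

```python
# Nielsen's ten usability heuristics, keyed by a short ID.
NIELSEN_HEURISTICS = {
    "H1": "Visibility of system status",
    "H2": "Match between system and the real world",
    "H3": "User control and freedom",
    "H4": "Consistency and standards",
    "H5": "Error prevention",
    "H6": "Recognition rather than recall",
    "H7": "Flexibility and efficiency of use",
    "H8": "Aesthetic and minimalist design",
    "H9": "Help users recognize, diagnose, and recover from errors",
    "H10": "Help and documentation",
}

# A specialized list can extend or replace the defaults, e.g. for senior
# citizens (these two entries are hypothetical examples):
SENIOR_HEURISTICS = dict(NIELSEN_HEURISTICS)
SENIOR_HEURISTICS.update({
    "S1": "Text is resizable and legible at large sizes",
    "S2": "Targets are large enough for reduced motor precision",
})
```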

Procedure

  1. Select the evaluators. Three to five are typically recommended; a single evaluator is likely to find only a small percentage of the problems.
  2. Choose the list of heuristics and the tasks and/or components of the product to be inspected.
  3. Brief the evaluators on the heuristics, the product, and the recording form.
  4. Have each evaluator inspect the interface independently, recording each violation along with the heuristic violated and its location. Nielsen recommends at least two passes through the interface: one to get a feel for the flow, and one to focus on individual elements.
  5. Aggregate the results from all evaluators into a single list of problems, removing duplicates.
  6. Have the evaluators rate the severity of each problem.
  7. Debrief with the design team, discussing the findings and, where possible, recommending solutions.
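Aggregating the results from multiple evaluators, as noted above, is mostly a merge-and-deduplicate task: the same violation is often reported by more than one evaluator, and the number of independent reports is itself useful evidence of a real problem. A minimal sketch, assuming each finding records a heuristic ID and a location (field names are illustrative):

```python
from collections import defaultdict

def aggregate(findings_per_evaluator):
    """Merge findings from all evaluators, deduplicating on
    (heuristic, location) and counting how many evaluators
    reported each problem."""
    merged = defaultdict(lambda: {"reported_by": 0, "notes": []})
    for evaluator, findings in findings_per_evaluator.items():
        for f in findings:
            key = (f["heuristic"], f["location"])
            merged[key]["reported_by"] += 1
            merged[key]["notes"].append(f"{evaluator}: {f['note']}")
    # Problems reported by more evaluators sort first.
    return sorted(merged.items(), key=lambda kv: -kv[1]["reported_by"])

findings = {
    "eval_a": [{"heuristic": "H9", "location": "checkout", "note": "error msg unclear"}],
    "eval_b": [{"heuristic": "H9", "location": "checkout", "note": "no recovery hint"},
               {"heuristic": "H4", "location": "nav bar", "note": "inconsistent labels"}],
}
for (heuristic, location), info in aggregate(findings):
    print(heuristic, location, info["reported_by"])
```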

Participants and Other Stakeholders

The basic heuristic inspection does not involve users of the product under consideration. As originally proposed by Nielsen and Molich (1990), the heuristic review method was intended for use by people with no formal training or expertise in usability. However, Nielsen (1992) and Desurvire, Kondziela, and Atwood (1992) found that usability experts would find more issues than non-experts. For some products a combination of usability practitioners and domain experts would be recommended.

The stakeholders are those who will benefit from the cost savings that may be realized from using a "discount" (i.e., low-cost) usability method. These stakeholders may include the ownership and management of the company producing the product and the users who will purchase it.

Materials Needed

  • A list of heuristics with a brief description of each heuristic.
  • A list of tasks and/or the components of the product that you want inspected (for example, for a major Web site, you might designate 10 tasks, plus 10 pages that you want reviewed).
  • Access to the specification, screen shots, prototypes, or working product.
  • A standard form for recording violations of the heuristics.
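The recording form can be as simple as a fixed set of fields per violation. A minimal sketch; the field names are illustrative, not part of any standard form:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class HeuristicViolation:
    """One row on the recording form: where the problem is, which
    heuristic it violates, and what the evaluator observed."""
    evaluator: str
    location: str               # screen, page, or task step
    heuristic: str              # e.g. "Error prevention"
    description: str
    severity: Optional[int] = None  # filled in during analysis (0-4)

v = HeuristicViolation(
    evaluator="eval_a",
    location="Checkout, step 2",
    heuristic="Help users recognize, diagnose, and recover from errors",
    description="Error message does not say how to resolve the problem",
)
```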

Who Can Facilitate

Heuristic evaluations are generally organized by a usability practitioner who introduces the method and the principles, though with some training, other members of a product team could facilitate.

Data Analysis Approach

The data are collected in a list of usability problems and issues. Analysis can include assignment of severity codes and recommendations for resolving the usability issues. The problems should be organized in a way that is efficient for the people who will be fixing the problems.
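Severity assignment and ordering can be sketched as follows, using Nielsen's 0–4 severity scale (0 = not a problem, 4 = usability catastrophe); the issue records themselves are illustrative:

```python
# Nielsen's 0-4 severity ratings.
SEVERITY = {
    0: "Not a usability problem",
    1: "Cosmetic problem only",
    2: "Minor usability problem",
    3: "Major usability problem",
    4: "Usability catastrophe",
}

issues = [
    {"id": 27, "desc": "Error message 27 does not say how to recover", "severity": 3},
    {"id": 3,  "desc": "Inconsistent button labels on settings page",  "severity": 2},
    {"id": 14, "desc": "Data loss possible when session times out",    "severity": 4},
]

# Order the report so the people fixing the problems see the worst first.
report = sorted(issues, key=lambda i: i["severity"], reverse=True)
for issue in report:
    print(f"[{SEVERITY[issue['severity']]}] #{issue['id']}: {issue['desc']}")
```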

Common Problems

  • Insufficient resources (too few evaluators) are committed to the evaluation. As a result, major usability issues may be overlooked.
  • Evaluators do not fully understand the heuristics.
  • Evaluators may report problems at different levels of granularity (for example, “The error messages are bad” versus “Error message 27 does not state how to resolve this problem”).
  • Some organizations find heuristic evaluation so attractive that they become reluctant to use other methods, such as usability testing or participatory design.
