How Do You Evaluate A DAM System’s User Interface?

Users will steer clear of an interface that is confusing or inefficient to use (Thurow, 2009). Conducting usability tests is a great way to uncover areas that may stand in the way of user adoption. A heuristic evaluation measures an interface’s compliance with widely recognized principles (Kalbach, 2007). Currently, the two most popular sets of standards are the 10 Usability Heuristics for User Interface Design (Nielsen, 1995) and the First Principles of Interaction Design (Tognazzini, 2003). I often turn to this inexpensive and expeditious method when I am in the design phase of a new DAM system interface.

The areas I focus on are derived from user-centered studies by Kalbach (2007), Saffer (2010), and Shiri (2012) and include several measurable criteria. For example:

  • Efficiency: Is important functionality buried?
  • Ease of learning: Are features sufficiently explained? Are icons or function names consistently displayed?
  • System response: Is the feedback appropriate?
  • Clear labels: How visible are the features? Are the functions of the interactive elements obvious?
  • Orientation & Navigation: Can the user find their way around?
  • Exploration: Does the interface allow the user to freely explore the system?
  • Retrieval: How are the results displayed? Can the user sort, filter, or reformulate their search without having to start over or navigate to a new screen?
  • Information use: Can settings, searches, and results be saved and shared?
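To keep reviewers consistent, I find it helps to treat the criteria above as a reusable worksheet. Here is a minimal sketch (the structure and function name are my own, not from any of the cited sources) that captures the eight areas and their questions so each evaluator records an answer per question:

```python
# The eight evaluation areas and their questions, as a reusable checklist.
CRITERIA = {
    "Efficiency": ["Is important functionality buried?"],
    "Ease of learning": [
        "Are features sufficiently explained?",
        "Are icons or function names consistently displayed?",
    ],
    "System response": ["Is the feedback appropriate?"],
    "Clear labels": [
        "How visible are the features?",
        "Are the functions of the interactive elements obvious?",
    ],
    "Orientation & Navigation": ["Can the user find their way around?"],
    "Exploration": ["Does the interface allow the user to freely explore the system?"],
    "Retrieval": [
        "How are the results displayed?",
        "Can the user sort, filter, or reformulate a search without starting over?",
    ],
    "Information use": ["Can settings, searches, and results be saved and shared?"],
}

def blank_worksheet():
    """Return an empty worksheet: one notes slot per question."""
    return {area: {q: "" for q in qs} for area, qs in CRITERIA.items()}

sheet = blank_worksheet()
```

Each reviewer fills in a fresh worksheet, which makes it easy to compare notes across evaluators afterward.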

Usability tests can produce qualitative and quantitative data. For example, we can measure the number of clicks and time on task during search and navigation processes. However, heuristic evaluations are qualitative in nature and rely on inferences made by the evaluator. Once I have found a problem, I use a scale to gauge its severity:

0—No problem at all
1—Cosmetic issues only
2—Minor problems present for some users
3—Major problems are present
4—Catastrophe; unusable for nearly all users

(Kalbach, 2007)
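With several reviewers rating each finding on this scale, a quick way to decide what to fix first is to average the ratings and sort the problems worst-first. A minimal sketch (the function name and sample findings are illustrative, not from Kalbach):

```python
# Kalbach's 0-4 severity scale, for reference when reviewers assign scores.
SEVERITY = {
    0: "No problem at all",
    1: "Cosmetic issues only",
    2: "Minor problems present for some users",
    3: "Major problems are present",
    4: "Catastrophe; unusable for nearly all users",
}

def rank_findings(ratings):
    """ratings maps each finding to a list of per-reviewer scores;
    returns (finding, average score) pairs, worst problems first."""
    averaged = {f: sum(s) / len(s) for f, s in ratings.items()}
    return sorted(averaged.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical findings from three reviewers:
ratings = {
    "Search filters hidden behind an unlabeled icon": [3, 4, 3],
    "Inconsistent icon used for 'download'": [1, 2, 1],
}
for finding, score in rank_findings(ratings):
    print(f"{score:.1f}  {finding}")
```

Averaging across impartial reviewers is also the simplest guard against the single-evaluator bias discussed below.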

This type of evaluation is subjective and may lead to biased results. Engaging several professional and impartial reviewers to conduct the assessment will offer you the most objective data to analyze. Although a heuristic evaluation is meant to get quick feedback about how well an interface stands up, other tests should be used to evaluate and validate your work.

What sort of usability tests do you conduct? How well have they performed for you?

REFERENCES

Kalbach, J. (2007). Evaluation Methods. In Designing Web navigation. Beijing; Sebastopol: O’Reilly.

Nielsen, J. (1995, January 1). 10 Usability Heuristics for User Interface Design. Retrieved September 18, 2013, from http://www.nngroup.com/articles/ten-usability-heuristics/

Saffer, D. (2010). Designing for interaction: creating innovative applications and devices (2nd ed.). Berkeley, CA: New Riders.

Shiri, A. (2012). Powering search: the role of thesauri in new information environments. Medford, New Jersey: Published on behalf of the American Society for Information Science and Technology by Information Today, Inc.

Thurow, S. (2009). When search meets web usability. Berkeley, CA: New Riders.

Tognazzini, B. (2003). First Principles of Interaction Design. Retrieved September 18, 2013, from http://www.asktog.com/basics/firstPrinciples.html


I’m Ian Matzen

Welcome to Tame Your Assets: a blog about digital asset management. I am a Senior Manager (Automation Programs) with a Master of Library and Information Science degree and experience working in higher education, marketing, and publishing. Before working in DAM, I post-produced commercials, episodic television, and corporate videos. Recently I wrapped up an automation project for Coca-Cola.
