Some time ago I wrote about the similarities and relationships between testing and UX/usability techniques. But there’s another question: how do we integrate them in the software development process? Here’s an approach:
Listen to Nassim Taleb in this interview (minute 4) claim that humans are bad designers:
In his most popular book, “The Black Swan” (nothing to do with the movie), Taleb explains that many allegedly scientific disciplines, such as sociology, meteorology, politics and especially economics, are so complex and so hugely affected by single, impossible-to-foresee events (“black swans”) that making valid predictions is useless in most cases. Worse still, we are unaware of how bad we are at making predictions.
Obviously, Taleb is not talking specifically about interface design, but the same conclusion is inevitable: we are not good at designing interfaces either. That’s why every approach to User-Centered Design is iterative: we know we are not going to find a suitable solution on the first try, so we keep trying and refining until we reach a valid design (because “we are good at discovering things”).
And what are our black swans? Users: it’s impossible to foresee how users will react to an interface (as anyone who has run or watched a usability test has realized).
A lot has been written about the new GMail interface; most of it is a matter of opinion, but I’m afraid Google has made an obvious mistake: using icons (in buttons) that don’t have a single, clear meaning.
Let’s take a look at two of the buttons; isn’t this your first guess when you see them?
Wrong. The real functions of those buttons are:
Using fancy, original, self-designed icons is a common mistake among novice interface designers: icons are hard to memorize, and users usually recognize only a handful of the most common ones. Often, the best way to describe a function is simply a text label.
I’m surprised Google has made that mistake; maybe they have been paying too much attention to people complaining about their ugly interfaces. Anyway, Google, please, give me back my text labels for actions!
User Centered Design (UCD) seems to be growing in popularity, and no wonder: who could be against the user being the center of the design process? But looking beyond this popularity, it turns out there is no consensus about what UCD actually is.
Formal definitions, like those in Wikipedia or ISO 13407:1999, describe it vaguely in terms of “design philosophies”, “models”, “general guidelines”, “recommendations”, … All of those are positive intentions, but they aren’t really useful when you face a real project developing a real interface.
According to ISO 13407, User Centered Design is something as generic as this figure.
What do we actually have?
In practice, UCD almost always refers to a set of techniques that may be applied throughout the life-cycle of a software application; the only thing those techniques have in common is that users play the main role in them (at least in theory). The number of techniques included varies from six (as in this Webcredible article) to several tens (as in this interactive table at UsabilityNet). The techniques may be as different from one another as focus groups, user testing and interface prototyping.
Oddly, some UCD techniques don’t involve real users at all: for example, heuristic usability evaluations.
Heuristic usability evaluation is a discount usability engineering method for quick, cheap and easy evaluation of interfaces; but if you can’t or don’t dare to apply the usual heuristics, here’s an alternative: ‘top lists’.
Heuristic evaluation is one of the most popular usability techniques; it basically consists of reviewing an interface and checking whether it fulfills some well-known guidelines and principles (the “heuristics”).
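The mechanics of such a review can be sketched as a checklist walk-through: an evaluator records each violation against a heuristic, rates its severity, and the findings are then prioritized. Here is a minimal illustration in Python; the heuristic names are a few items from Nielsen’s well-known list, and the `Finding` structure and 0–4 severity scale are just assumptions for the sketch, not part of any standard tool:

```python
from dataclasses import dataclass

# A few of Nielsen's heuristics, used here as checklist items.
HEURISTICS = [
    "Visibility of system status",
    "Match between system and the real world",
    "Consistency and standards",
    "Error prevention",
]

@dataclass
class Finding:
    heuristic: str   # which guideline the interface violates
    issue: str       # what the evaluator observed
    severity: int    # 0 (not a problem) .. 4 (usability catastrophe)

def prioritize(findings):
    """Sort findings so the most severe issues come first."""
    return sorted(findings, key=lambda f: f.severity, reverse=True)

# Hypothetical findings from walking an interface against the checklist.
findings = [
    Finding("Consistency and standards", "Icon differs from platform norm", 2),
    Finding("Visibility of system status", "No feedback while saving", 3),
]

for f in prioritize(findings):
    print(f"[{f.severity}] {f.heuristic}: {f.issue}")
```

The point of the structure is simply that each problem is tied to the guideline it violates, which makes the evaluation repeatable across evaluators and interfaces.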
Once you overcome the fear of performing a task with such a fancy name, the next step is obvious: choosing the heuristics (guidelines) to use. There are some popular heuristics lists, but using them for a usability evaluation carries some risks:
If the heuristics are too generic, they don’t help you identify real issues.
Conversely, if the heuristics include detailed checkpoints, you may concentrate on small or very specific issues while overlooking the important ones.
Consequently, I suggest using alternative heuristics: ‘top lists’.
By ‘top lists’ I mean lists like these by Jakob Nielsen: