From Research To Design

(If you’ve come to this from Twitter, I’m just testing my new Twitter WP plugin with this article)

Shortly after I wrote up some thoughts on test-driven UX, I happened to notice “Bridging User Research into Design” over on UX Matters.

In the article, 11 of the great and the good offer their thoughts on essentially the same question I was exploring in my post: how to use research to create something better than what you'd have produced without it.

The opinions offered are mostly about qualitative research, whereas I was focussing on quantitative research in the form of multi-variate testing. Even so, I was surprised at how little consideration was given to the role of the hypothesis. In fact, almost all of the contributors seemed to ignore it altogether.

Take, for example, Tobias Komischke’s statement, which sums up the problem to avoid: “There’s nothing worse for a UX professional than standing in front of an empty, white wall”. But rather than concluding that the designer can avoid this scenario by forming a design hypothesis (preferably based on research) before standing in front of the wall in the first place, he says:

“I just start with the first design alternative that comes to mind: think brainstorming with a pencil. Don’t constrain yourself, just sketch out as many ideas as you have. You can eliminate those that don’t make sense later. If you can show many design alternatives to your stakeholders, they can identify the good and not-so-good characteristics of each of them.”

Tobias isn’t talking about a situation in which no research has been done yet, so how can he be sure that his unconstrained (and why unconstrained?) ideas aren’t just his own opinion? What are they going to be based on? Worse, how can stakeholders then “identify the good and not-so-good characteristics of each of them”? Isn’t that exactly what a UX designer is supposed to help them do? On the strength of his opinions, I won’t be hiring Tobias any time soon!

Most of the other prescriptions in the article talk about the benefits of different types of research, and what good research practice must do in general. This is all fine, but sadly rather irrelevant to Tobias’s “white wall” problem. Doing research is easy – it’s what you do with the results that’s hard.

When it comes to offering advice on how to avoid the “white wall”, most have nothing to say, or descend into meaningless hand-waving: Adrian Howard’s “feedback loops”, Leo Frishberg’s “mapping a taxonomy”. Whitney Quesenbery starts with the laziest answer possible: telling readers to go and read some books on the subject! But to her credit, she later hand-waves with the best of them, like a true consultant.

Somewhere in the middle is Leo Frishberg, who makes a promising start only to fizzle out at the end. He says he uses research to construct what is essentially a scenario-cum-persona. The “bridge” then emerges from this as a “goal” (here taken from a persona-like construction describing somebody who needs to produce financial reports):

“Produce ad hoc or customer-specific reports—from scratch and by combining elements of existing reports—without having to go through Engineering.”

But given such a goal, isn’t the designer still staring at Tobias’s “white wall”? What hypothetical things do we hold true about the user’s motivations, desires, or problems in using the interface? These are what should dictate whether users see a sliding panel, a long scrolling page, voice input, a command line, etc. What is the direction we need to follow, then subsequently test? To this end, it would be better in this case to have a hypothesis like:

“Users hate creating reports by navigating through lists of elements. Constructing reports using natural language allows them to concentrate on the output of the report rather than its construction.”

Such a hypothesis may well be wrong. The resulting UI may be too unfamiliar, technically impractical, or have other problems. But it provides a design direction that can then be followed and tested (preferably according to the outline in my earlier blog post).

The only person with anything tangible to say about actually bridging the gap into the activity of design itself is Dana Chisnell, at the end of the article. In step d of her “process for bridging research into design”, she says that the team should, after discussing what was observed during a research session,

“… develop theories about what to do to solve the observed issues, creating a design direction that the team can then prototype and test. And the cycle starts again.”

It would have been nice if she had given an actual example of a “design direction”, as I did above, but I’ll give her the benefit of the doubt on that.

So: 11 luminaries, 1 decent piece of practical advice for those staring at the blank wall while their heads spin with research findings.

If that’s a measure of the vitality of practice in the field of user experience design today, then I might consider coming out like Ryan Carson!