Perhaps the Only Way is Up

Lately I’ve been rather depressed about the state of user experience design. Both my own (management overheads, inability to sweat the details, lack of self-belief…) and that of the wider community. So it didn’t help that one Cameron Chapman delivered a further kick in the teeth the other day with 10 Usability Tips Based on Research Studies.

This is a truly awful article and a good example of some of the things I feel are eroding the field of UX design into a shapeless idiocracy of self-congratulating muppets. It’s a prime example – sadly among many – of a near-total disregard for the limitations of research, even while the arguments are presented as rigorous. Ignorance of the principles of statistical graphics also does her no favours. All this is topped off by what now seems to be the obligatory blizzard of ridiculously unconditional praise. God I’m depressed.

As a final flourish, she also chose not to publish my (surprise!) negative comment about all this on her article. I submitted the comment that follows on the 19th, and I see several other comments have been published since then. No sign of mine though. Of course, it’s her site after all and she can publish what she wants.

But information wants to be free, so here’s what I said:

While all research is good research to the extent that it yields information, it is simply not the case that because a single study comes up with a finding, that finding can be taken to be the truth. Nor is it the case that because a study exists, its findings are statistically reliable.

I’m sure the authors of many of these studies would be the first to point out that in order for even a moderately safe conclusion to be drawn about something as complex as whether the whitespace around text affects readability, or the effect of response times on user behaviour, you need to be able to replicate the finding with either the same, or preferably a different, test for the same thing. Until then, it’s just one data point, and as such is theoretically next to useless. I would say that for every study cited here, you could conduct another study that found something different. I don’t mean that these studies are invalid (although see below!), but it is highly likely there are other factors that need to be investigated and understood. Correlation versus causation, hidden biases and much else need to be untangled before anyone can reliably say that people scroll below the fold, or that content is better placed on the left.
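To make that concrete, here’s a rough sketch in Python – the 60% baseline score rate and the pass/fail scoring are my own simplifying assumptions, not figures from any of the cited studies – showing how often two groups of 20 drawn from the same population differ by as much as the whitespace study’s roughly 8-point gap:

```python
import random

# A rough sketch, not from any cited study: both groups are drawn
# from the SAME population (an assumed true score rate of 60%), with
# each participant's result simplified to pass/fail. How often does
# chance alone produce a gap of 8 percentage points or more?
random.seed(1)
trials, n, true_rate, observed_gap = 10_000, 20, 0.60, 0.08

big_gaps = 0
for _ in range(trials):
    a = sum(random.random() < true_rate for _ in range(n)) / n
    b = sum(random.random() < true_rate for _ in range(n)) / n
    if abs(a - b) >= observed_gap:
        big_gaps += 1

print(f"{big_gaps / trials:.0%} of same-population pairs differ "
      f"by {observed_gap:.0%} or more")
```

On my assumptions this prints around 60%: at this sample size, a gap of that magnitude is the rule rather than the exception, which is exactly why a single unreplicated finding tells you so little.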

I’d also like to make a couple of points on a specific study mentioned here: “7. Whitespace of Text Affects Readability”. The statistical graphics immediately look suspicious because their scales do not start at zero. This exaggerates the visual comparison, making the winner look much better than it really is. The chart for “Effect of margins on reading speed” looks like it’s showing about a 50% difference, when in fact it’s only 7.82%.

However, on further investigation of the data given about “The effects of margins on comprehension”, a far more serious issue comes to light. With a sample size of 20, the “margin” condition shows a score of 63.749% (5.1 out of 8), and the “no margin” condition a score of 55.625% (4.45 out of 8). That is not by any means a statistically significant difference – it’s well within the expected random fluctuation, and therefore the conclusion drawn in the study is unsafe. Try it for yourself at http://www.prconline.com/education/tools/statsignificance/index.asp. Again, the study may in fact be accurate, but we have no way of knowing that until it’s supported by another study into the same thing (ideally one with a bigger sample size).
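For anyone who’d rather check the arithmetic than trust a web calculator, here’s a quick sketch of the two-proportion z-test that calculators like the one linked above typically perform. One assumption to flag: the study reports “a sample size of 20”, which I’ve read as 20 participants per condition; if it means 20 in total, the result is even less significant.

```python
import math

# Two-proportion z-test on the comprehension scores. Assumption:
# n = 20 per condition (the study only says "a sample size of 20").
n1, n2 = 20, 20
p1 = 0.63749   # "margin" condition: 5.1 out of 8
p2 = 0.55625   # "no margin" condition: 4.45 out of 8

pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se

# Two-tailed p-value via the normal CDF (math.erf is in the stdlib).
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

print(f"z = {z:.2f}, p = {p_value:.2f}")  # roughly z = 0.52, p = 0.60
```

A p-value of around 0.6 means a gap at least this large would turn up in roughly six out of ten such experiments even if margins made no difference whatsoever.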

With all due respect to the author, anyone can cite a research study to support a hypothesis. The trouble is, we’re not supposed to be just anyone – we’re asking to be paid for our services. Please stop making the UX community look naive.