In Praise of Assumptions
Whether or not you think that “user-centred design” is generally a good way of designing a web site, most would agree that before doing any real design work, you first need to listen. Ideally, you should listen to the people who will be using your site. At the very least, you should listen to some form of research that can give you ideas about suitable design directions to follow. When it comes to design, selflessness is the goal. Alan Cooper has based a large part of his career on this idea. Love you, Alan.
The trouble is, it’s practically impossible to keep your own opinion out of the picture when coming up with solutions to design problems. No matter how much research you do, how many personas you create, or how many lab sessions you run, research alone cannot tell you exactly what to do when it comes to the detail of the design itself. So the practical effect of research is to lead you to make assumptions. Of course, the hope is that these assumptions are correct. On the other hand, some people make a virtue of not trying to listen too much, relying mainly on their personal opinions to produce good designs. Apple, 37Signals and, I’m sure, various others are among these. What they do is simply bring assumptions out into the open.
I have often regarded these two positions as opposites. User-centred, research-led design versus the “usage-centred”, benevolent design dictator. Both clearly have their merits and have produced successes. Interestingly, however, it appears rather tricky to come up with a clear example of success for the user-centred approach, and when it comes to “great design”, it is Apple that is invariably cited. The assumption is that Apple don’t do UCD, and instead champion the benevolent dictator approach: Steve Jobs as God, Jonathan Ive as his representative on earth.
Recently, however, I have been thinking about a “third way” on this. Can we combine the strengths of research with the pixie dust of opinionated design, and the clear design triumphs this is said to produce? I think the answer is that we can. The key to this is to bring out your assumptions first, then use research to validate those assumptions over time.
The basic technique is simple. Write down all the assumptions you have about your customers, the way they act, what they want, what they don’t want, what things work and what things don’t. Try to catalogue every assumption you have. Do this regardless of whether you have anything to actually prove these things or not. Feel free to throw in a few opinions you think others may have but that you think are wrong. Anything and everything that has been lurking in your head about the design overall. Remember, these are just assumptions – at least at first.
Quite quickly you will have a list of many things like this:
- Showing lots of discounted products and sale items on the home page raises conversion.
- People don’t scroll beyond the fold.
- Too many third-party banner ads put people off.
- Price is not the only determinant of conversion.
- Most people come to the site from Google.
Now, using whatever means necessary, seek to prove or disprove these assumptions over time. Each time you do any research, or discover anything that relates to an assumption, you give that assumption a plus or a minus point. If you think people don’t scroll below the fold, but see them doing just that without comment in a user test, note that fact against the assumption.
Of course, simply observing one thing that validates or invalidates an assumption isn’t enough. You have to corroborate, triangulate and pile on the data. Eventually, you will decide that you have enough supporting evidence to say confidently that something is a valid or invalid assumption. In doing so you will be given ammunition for design directions, ideas for further research, and of course stimuli for further (or just more granular) assumptions to validate.
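To make the tallying concrete, here’s a minimal Python sketch of how you might record plus and minus points against an assumption as evidence comes in. The class and method names are entirely my own invention, just one way of modelling it:

```python
from dataclasses import dataclass, field

@dataclass
class Assumption:
    statement: str
    # Each entry is (+1 or -1, a short note about the observation)
    evidence: list = field(default_factory=list)

    def add_evidence(self, supports: bool, note: str) -> None:
        """Record one observation for (+1) or against (-1) the assumption."""
        self.evidence.append((1 if supports else -1, note))

    def score(self) -> int:
        """Running tally: positive means the evidence so far supports it."""
        return sum(sign for sign, _ in self.evidence)

fold = Assumption("People don't scroll beyond the fold")
fold.add_evidence(False, "User test: participants scrolled without comment")
fold.add_evidence(True, "Heatmap shows most clicks near the top of the page")
print(fold.score())  # 0 – the evidence is balanced so far
```

A single observation only nudges the tally; it’s the accumulation over many rounds of research that matters, as the next paragraph says.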
The primary advantage of this approach is that it aids continuity. No longer will you commission research which you then pick over to see if anything seems interesting. Instead, that research can be directed to further the validation process around the assumptions you have about that home page. Assumptions, you will remember, that were the reason the home page looks like it does in the first place.
So – I’m going to assume that the above is a great way of doing things until I get some feedback that convinces me it’s not. I’m lucky that very few, if any, UX professionals read this blog – so I may be blissfully ignorant for some time.
If you’re working with an existing site, multivariate testing (and even univariate testing) can be a valuable way of testing out assumptions and gaining insight.
It isn’t a magic bullet, however. Although MVT is a powerful way to find out that X is better than Y or Z in the context of C, it doesn’t tell you why it’s better – or whether it’s better in context D also.
Yes. To an extent, I wouldn’t care too much about the method by which the assumptions were verified, but multivariate (or “MVT”) tests are usually pretty conclusive. Conclusive if (and only if) you can confidently rule out technical bugs in one or other design that is, which is something that’s plagued me in the past with such tests.
Incidentally, I’ve recently modified my ideas on this such that the assumptions are either part of, or replaced by, “scenarios.” Might write a bit more about that later.
Jonathan, you would be surprised who’s snooping on your site :)
The approach you describe is how I have approached travel for the last 5 or so years. But, every now and then some quirk upsets my philosophical applecart and I have to re-assess.
Nevertheless, the assumptions I made early on in this process through user research I still find very hard to mentally challenge, even when faced with evidence to the contrary (say, from analytics, with its cold, hard approach to “the facts”).
In travel (and I am sure this is true elsewhere)… as you get into the evidence it becomes more and more difficult to hold on to anything particularly strongly due to:
– seasonal dynamic
– economic context
– the interface and experience presented modifying users’ desired behaviour (i.e. satisficing)
– competitors’ interfaces and experiences influencing users’ expected behaviour
– sampling and segmentation of the users you’re analysing
Basically, I’ve been coming down to Steve Jobs style lowest common denominator JFDIs as a result!
(No change there)
Cheers
DJ
Wow. Not only a UX professional, but a UX pro in my own industry sector!
Good points about the “variables” that make assumptions hard to pin down. Coincidentally, today I was having a quasi-philosophical argument (the best kind!) about just this topic with a colleague who has been helping me execute my idea about assumptions. It boiled down to the effect of contexts. If an assumption is “People don’t scroll beneath the fold”, you may well find that for every example of that being true, you find another where it isn’t true. This might indicate the presence of a complicating factor. So, you’d review the evidence and “branch” the assumption in some way (eg in a parent/child relationship) so that in effect it was qualified by an “unless” statement, and attach evidence to that too.
All this was in relation to a database we’re building to house the assumptions and their evidence.
Incidentally, I didn’t mention the concept of “strengths” in my post. I’ve decided an important aspect of the idea overall is to attach a strength to the evidence you find. The man on the Clapham omnibus’s anecdote is a 1, while Jakob Nielsen’s massive research project is a 5. The system would then take the average of the evidence for and against and produce an overall “strength” for the assumption (which might well be negative if there’s more evidence against it than for it, for example).
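For what it’s worth, here’s a rough Python sketch of how that strength scoring might work. The names are hypothetical, and I’ve interpreted “the average of the evidence for and against” as a signed average (evidence against contributes negatively), since that’s what makes the overall figure come out negative when the case against is stronger:

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    note: str
    strength: int   # 1 (a lone anecdote) up to 5 (a massive research project)
    supports: bool  # does this evidence support the assumption?

@dataclass
class Assumption:
    statement: str
    evidence: list = field(default_factory=list)

    def overall_strength(self) -> float:
        """Signed average of the evidence: positive means net support,
        negative means the assumption is probably invalid."""
        if not self.evidence:
            return 0.0
        signed = [e.strength if e.supports else -e.strength
                  for e in self.evidence]
        return sum(signed) / len(signed)

a = Assumption("People don't scroll beyond the fold")
a.evidence.append(Evidence("Man on the Clapham omnibus says so", 1, True))
a.evidence.append(Evidence("Large scrolling study says otherwise", 5, False))
print(a.overall_strength())  # (1 - 5) / 2 = -2.0: net evidence against
```

One design choice worth noting: averaging (rather than summing) stops an assumption looking strong just because it has accumulated lots of weak anecdotes.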
Oh, and I’ve thrown out the idea of scenarios – too difficult to maintain. This has got to be dead easy or it won’t fly.
Sounds like you are trying to codify an answer for everything somehow. That may be the wrong approach. People still have to do some legwork themselves imho.
E.g. Jakob’s research project will have a context (or lack of in many cases!) It’s probably entirely irrelevant to the much more detailed and nuanced problem you’re trying to solve. You wouldn’t do user testing on an About Us page now would you?!?!?
I come back to one of my old boss’s truisms: “experience counts”.
Still no shortcut to forming your own assumptions, and bringing your team along with you on the journey.
I like the idea of documenting failures though, in order to not make the same mistakes twice.
Cheers
DJ
Ah no – fundamental point this – it’s not about answers. It’s not even about being right. It’s only about assumptions. There will be lots and lots of things that we will be unable to provide strong evidence either for or against, or that we would even want to test anyway – even assuming such tests would be sensible (and many will not, as you point out).
My idea is to simply expose the assumptions that we have. The bottom line is of course “experience counts” – but the trick is convincing others of that fact. So I’m hoping that listing assumptions will allow me to speak a language that everyone in the business will understand, because every day the business says things to itself like “The more banner ads, the better”, “Showing prices exclusive of tax is the right thing to do”, “Lots of small photos are better than a few big ones” and so on. I also say to myself that people don’t read error messages if they’re not expecting them, or that they’ll fill in all the fields in a query form given half a chance and get no results. These are just as valid or invalid.
So – stack ’em up and hose ’em down with research and evidence of all and any kinds. Then see what comes out in the wash. Not rocket surgery by any means, but I concede some brain science might be needed.
Oh, just thought of a good analogy for what I’m trying to do.
In 1999, most people assumed that the market for portable music players was not very exciting. In 2001, Apple launched the iPod. The rest is history.
Now, I don’t know anything about how Apple actually made that leap, but I think that if you have an assumption that says “Nobody wants to listen to music when they are walking around”, then the simple fact that you have articulated that assumption allows you to challenge it.