On Design as Prediction

May 5, 2022

On my LinkedIn profile, I say the following:

I predict the future. Not flying cars or robot pets, but whether any given design intervention will raise, lower or have no effect on your KPI. I do this through researched hypotheses and experimentation to become progressively less wrong. By understanding people’s behaviour and what motivates them to do things in a given domain, my design ideas have the power of prediction. Everything I create, across all projects and platforms, aims to strengthen that power.

The reason I emphasise prediction in UX, rather than the more traditional “voice of the customer” or some other quality like consistency or usability, is that I think being predictive of customer behaviour (and thereby product outcomes) is the best way of demonstrating the value of design.

Take an example from my current domain at an online retailer:

Suppose we think (by researched evidence, or maybe just gut feeling) that the majority of our customers see shopping as a form of entertainment. Call this the “entertainment hypothesis”.

We might be wrong though – perhaps most of them just want to get in and out as fast as possible. Call this the “speed hypothesis”.

We see evidence of both motivations, but we don’t know which is stronger from a business perspective. Both hypotheses have nuances and variations. And of course there may be some different motivators that we haven’t detected yet (trust, anger, family, etc.). But let’s take these two for now.

Now let’s say we’re given a project to do called “Fast Checkout”. The product manager wants to do this because Amazon has it and it’s gaining market share, or something – PMs have many reasons for doing things. They neither know nor really care whether customers are motivated by speed. All that matters is the outcome. And when this project is done, they will have another one to do according to business needs and metrics that need fixing.

In common with most projects passed down by product management, Fast Checkout is a great opportunity for us to test a hypothesis we have about what motivates customers to do things. This time, it clearly speaks to our speed hypothesis.

So, we do our magic, create a fantastic UX for Fast Checkout, and it ships. Sales go up. Great! But can we say speed will increase sales more than a fun or entertaining experience?

This is the “local maximum” question. Remember, we have observed in our research that both speed and entertainment are possibly strong factors in motivating people to shop with us. The only way we can really know whether speed is better than entertainment (or something else) is to experiment with both of those ideas.

Remember also that Fast Checkout could have brought conversion down. Or it might not have done anything at all. There are always three possible futures for a design intervention.

But we don’t have to experiment with Fast Checkout to test our entertainment and speed hypotheses (and anyway, we can’t – Fast Checkout is now Done). The beauty of having high-level “motivational” hypotheses is that you can test them in various ways across projects and teams.

A digression on hypotheses: Sadly, many “hypotheses” you see written down are not very useful beyond the context of the immediate experiment. For example, say your hypothesis is that if you show the delivery date before the checkout page, your customers will be more inclined to order. There is no “motivational” (“why?”) aspect to that, though. So once you’ve run the test, you can’t say you’ve learned anything about customers (other than that they like to see delivery dates before they check out). That isn’t very useful for designing, say, the user’s account page.

Rinse, repeat

So now, another project comes along called “New Account Page”. This is another opportunity to test whether fun things work better than fast things. So we think up some entertaining ways of doing that in the account as well as some versions that speak to speed. We are, after all, fans of the “double diamond” method of design practice.

This time, we test both approaches, and entertainment seems to do better (in whatever KPIs we are being asked to use) than speed.
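When a project lets us test both approaches head to head, the comparison is usually a straightforward split test. As a sketch – with entirely made-up numbers and a hypothetical `two_proportion_z` helper, not a real experiment – a two-proportion z-test tells us whether the observed difference in conversion between a “speed” variant and an “entertainment” variant is likely to be more than noise:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates.

    conv_a / n_a: conversions and visitors for variant A,
    conv_b / n_b: the same for variant B.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference)
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical numbers: "speed" variant A vs "entertainment" variant B
# of the account page, 10,000 visitors each
z, p = two_proportion_z(conv_a=480, n_a=10_000, conv_b=540, n_b=10_000)
```

With numbers like these the difference hovers around the conventional significance threshold, which is itself a useful reminder: one experiment rarely settles a motivational hypothesis on its own.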

A digression on KPIs: In business, there are all sorts of “measures that matter”. For the purposes of this discussion, I assume that “matters” means whatever the PM says matters. It’s not a designer’s place to name the KPIs, because in design all you really want to know is whether people are happy. Or at least whether they use your stuff without feeling sad. Everything else the business is interested in (churn, conversion, ARPU, LTV, NPS, whatever) will eventually flow from that.

So now we might say the score is 1:1 for speed vs entertainment.

Then we get another project, and test that in a similar way. And another and another until it looks like entertainment usually wins out as a general approach. Or we find it doesn’t, or has no effect against other hypotheses.
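Kept up over many projects, this scorekeeping is nothing more exotic than a tally per hypothesis. A toy sketch – the project names and winners below are invented purely for illustration:

```python
from collections import Counter

# Hypothetical log of experiments: (project, hypothesis that won)
results = [
    ("Fast Checkout", "speed"),
    ("New Account Page", "entertainment"),
    ("Search Filters", "entertainment"),
    ("Wishlist Sharing", "entertainment"),
    ("One-Page Basket", "speed"),
]

# Count wins per hypothesis and pick the current front-runner
score = Counter(winner for _, winner in results)
leader, wins = score.most_common(1)[0]
```

Here the running score would read entertainment 3, speed 2 – suggestive, but hardly conclusive, which is exactly why this takes years rather than quarters.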

Of course, this will take a long time. Years, at least. But over that time we can become predictive of any given project. We can come to say that if a proposed intervention is not in some way entertaining, we don’t predict it will do as well as it could. Until we think of another hypothesis that might break out of the local maximum. And so the cycle starts again on that one…

So if business stakeholders see designers as predictive – if our design approaches demonstrably speak to the things that motivate customers to do what the business wants – then they will see the value of design. Otherwise, if their guess is as good as yours, they will just want us to do UI. And who can blame them?

But there’s more

For a designer, several other things also flow from the idea of prediction:

You don’t have to believe in your designs, only in the hypotheses. This means you can spend time designing something you think will do badly, simply because if it bombs that will strengthen your hypothesis. You can still have fun doing the best design you can for it though – why not?

So no need to complain about how product managers give you badly briefed or worthless design work. How do you know it’s worthless? And if it’s badly briefed – even better – you can supply the brief based on what you want to test. All work is good work because it’s there to test your prediction: will things go up, down, or exhibit no change?

There is no failure. It follows that if you work for 6 months on a project that did nothing for (or tanked) conversion – and you predicted it would – then that’s a win (for you) because you now have a stronger hypothesis.

So no more carping about “wasted effort”. All work is valuable. In fact even if it doesn’t ship and never gets tested live, you can still get valuable design practice on your hypotheses. But you MUST state your hypothesis early and often to anyone who will listen (and in practice you should probably not be the only person advocating it).

You don’t really need to do prior research. If you (and your team) feel something is true, then why not test that with a design intervention if the PM will let you? It’s fine to execute on a hunch that you research later. Risky perhaps, but fine.


But perhaps the most important aspect of having a high-level, strategic design hypothesis like “shopping is a form of entertainment” is that it’s for the very long term. Literally everything you do as a designer needs to be aligned to supporting or weakening your overall hypotheses about people’s motivations. And it will take years.

There are exceptions of course – some businesses (well, Apple and Facebook mainly) just happen to have hit upon a strong, non-researched hypothesis about what motivates people from day one. They simply take that as their design direction all the time because they know it works. For them, the local maximum is their business. Moving away from it would mean they’d become a different company.

But for everyone else, unless your designs are predictive you may as well be a product manager. They have very different reasons for being.

 
