On Design and Research

What is design? How do we make the most of research? These are two questions that seem at first unrelated, but are in fact strongly connected.

The following thoughts came from trying to make sure research activity is used, appreciated, and understood. Along the way, that effort revealed an approach to design that may help answer a recurring question in the digital design world: what is design, exactly?

The proposed method may also help turn product management towards more user-centric thinking when building features and functions and informing roadmaps. This is particularly useful in businesses with a tendency to create “feature factories” that treat UX as essentially a production role for creating interfaces. But of course, mileage may vary when things need to be framed in ways that are initially uncomfortable to some people.

Firstly, some assumptions

  1. An ever-growing collection of detailed research findings, no matter how searchable or how efficiently tagged and indexed, will always demand too much work from those who wish to benefit from it. So it will be mostly ignored.
  2. We will, for the foreseeable future, never have enough research resources.
  3. We mostly trust our own expertise and experience in the problem domain (in this case, services for employers and job seekers).
  4. Thinking about why something happened is more important for everyone in the long term than knowing what happened.

The main problems the approach seeks to solve

  1. Becoming data rich and information poor.
  2. Fixating on the last piece of research performed in a given area. Research findings are single data points that need to accrete before they provide defensible evidence.
  3. Fixating on validating granular design ideas (AKA “weaponising research” to prove a point).
  4. Product roadmaps directing research, not the other way around.
  5. Confusion about what we’re trying to do, and why.

The method in summary

  1. Start with high-level assumptions about what motivates our audiences to do things. Over time, hang all research findings of any kind (exploratory, validatory, primary, secondary, quant, qual) on these assumptions.
  2. Get Product, UX and Engineering (if possible) to use those assumptions to come up with hypotheses to test.
  3. Do those tests and feed the results back into the assumptions, modifying, removing or enhancing them over time. The goal is that the assumptions (and not the research data that supports them) become both the store and the engine of our product and UX knowledge, learning and design activity.
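
To make the shape of that loop concrete, here is a minimal sketch of the data it implies, written in Python. The class names, fields and example strings are mine, chosen only to illustrate the three steps; nothing here prescribes a particular tool or format.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Finding:
    # Any research result hung on an assumption: exploratory or validatory,
    # primary or secondary, quant or qual.
    summary: str

@dataclass
class Assumption:
    # A high-level belief about what motivates our audiences.
    statement: str
    findings: List[Finding] = field(default_factory=list)

@dataclass
class Hypothesis:
    # A testable statement that Product, UX and Engineering derive from an assumption.
    assumption: Assumption
    statement: str

def feed_back(assumption: Assumption, result: Finding) -> None:
    # Step 3: a test result becomes one more finding hung on the assumption,
    # which is then modified, removed or enhanced over time in its light.
    assumption.findings.append(result)

# Step 1: an assumption stated before any research exists.
scary = Assumption("Job seeking is scary")

# Step 2: a hypothesis derived from it, ready to test (wording is illustrative).
h = Hypothesis(scary, "Reassurance at the application step increases completions")

# Step 3: whatever the test finds is fed back onto the assumption.
feed_back(scary, Finding("Result of the test, recorded as a new finding"))
```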

Some design theory

Let’s imagine we know nothing about job seekers or employers. If an individual or a team then makes a statement about what motivates audiences to do things (“Job seeking is scary”, “Employers derive quality from quantity”, etc.), then that statement is neutral despite the initial strength of their belief in it. The function of research is then to “de-neutralise” that belief and point product and design towards exploiting the belief to business advantage, or showing that we need not pursue that belief anymore.

I suspect that is what happened at Apple. Steve Jobs started with a deeply held (but unsupported) belief that people respected high technology to the extent that they found it viscerally desirable. Applied to computers, that meant people would pay large amounts for machines designed as luxury items. At the time, the idea of a computer as a luxury item was unheard of. Computers were seen simply as tools, like hammers or tractors. But Jobs's belief was validated by market successes. In that sense, Apple only ever had one hypothesis (and so did Facebook: people are obsessed with what their friends are up to). We might have a number of beliefs that could help us do something similar to Apple or Facebook. But power is nothing without control. We need to use research to “control” our beliefs.

So at first you don't need any research. You just need to act. And as long as we collectively hold our beliefs (and repeat and discuss them with each other), then everything we do is automatically correct. That is, until we agree that the evidence we have gathered modifies a belief and points us towards doing something else. This means there cannot be failure, only learning. And the worst thing you can do (at first) is delay action until you have enough evidence to proceed. This is good, because we're unlikely to have enough research resources, and the business currently values validation over exploratory research anyway.

In practice

Brainstorm some assumptions based on what you believe to be true about the audience (“Price is the most important thing”, “Job seekers are bad at searching for jobs”, etc.). Over time, cite research findings that support or weaken each assumption.

Here’s a mockup of a possible UI with which to do that (it’s a bit brittle – interactive bits are highlighted).

Outside of actually doing research, the work of researchers might be to “groom” the assumptions. From time to time they might split assumptions apart (for example by demographic or usage context), re-word them, or deprecate them in favour of new ones. All of this would then need to be communicated to the product and design teams (who might also join in the grooming work by helping to interpret research).

Note that this approach doesn't care about the type of research or the artefacts used, but it does care about the “strength” of the evidence (which can be either supporting or detracting). This means, for example, that if we want to link to persona documents, we can, but do we think they're strong evidence? If we want to link to a Forrester report, we can, but it's not research we ourselves did. If somebody overhears something on a bus that strengthens or weakens an assumption, put that in too. But it's evidence from just one person.

This approach implies that a score is applied. I propose a six-point scale from minus 3 to plus 3, with no zero, so that every piece of evidence counts either for or against. This is of course subjective, but it's better than nothing, and can be revised later if need be. The strength of each assumption is therefore derived from the aggregate scores of the research linked to it. Strength of evidence is a perennially contentious point, so let's embrace that in the way we collect it.
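
As a sketch of the arithmetic this implies, assuming the simplest possible aggregation (a straight sum over the linked evidence, with zero excluded so that every score counts for or against):

```python
# Proposed six-point scale: -3, -2, -1, +1, +2, +3 (no zero, so every piece
# of linked evidence counts either for or against the assumption).
VALID_SCORES = {-3, -2, -1, 1, 2, 3}

def assumption_strength(evidence_scores: list) -> int:
    # An assumption's strength is derived from the aggregate of the scores
    # given to the research linked to it.
    for score in evidence_scores:
        if score not in VALID_SCORES:
            raise ValueError(f"{score} is not on the six-point scale")
    return sum(evidence_scores)

# Example: strong primary research we ran ourselves (+3), a Forrester report
# we didn't (+1), and one remark overheard on a bus that cuts against it (-1).
print(assumption_strength([3, 1, -1]))  # 3
```

A straight sum is deliberately naive; once a lot of evidence has accumulated, an average or a capped total might be fairer, but as noted above, the scheme can be revised later.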

Note that people don't have to read the research if they don't want to, just as you don't have to read the citations in a book; their presence is enough. The interesting part for stakeholders is what all that research means for the assumptions they are going to use to create tests and product ideas.

A worked example

Any examples I can give are of course self-serving, but let's suppose you are tasked with redesigning an appointment booking form.

If the above method has been effective, this work will have been guided by one or more high-level assumptions (in effect forming the design brief). Let's say they are “People don't mind signing up, as long as they see the value in doing so” (currently +2 on the scale, aggregated from multiple research projects across various products and contexts) and “People are bad at managing their diaries” (a +1). These are both rather naive assumptions at this stage. They will probably be refined as more evidence accumulates, but part of the purpose of all of your design work is to test and feed back into those assumptions. You may also have some “KPIs”, such as increased usage or, in this case, fewer cancellations.
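
Purely to make that brief concrete, here is how it might be recorded in the terms of the earlier sketches; the hypothesis wording is my own, stitched together from the feature and the KPI in this example.

```python
# The design brief: the assumptions guiding the booking-form work, with their
# current aggregate strengths from the worked example.
brief = {
    "People don't mind signing up, as long as they see the value in doing so": +2,
    "People are bad at managing their diaries": +1,
}

# One testable hypothesis the design work could feed back into those
# assumptions (wording is illustrative only).
hypothesis = "Showing existing appointments after login reduces cancellations"
```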

So you come up with a feature that shows the user’s current diary appointments once they log in. If anyone asks you why you have made those changes, you can say the stated assumptions made you do it (and point them to the evidence behind those if need be).

So it's not about your own beliefs; it's about a higher authority. You don't even have to agree with the assumptions to do the design. But that's what design should be: guided by researched assumptions, because those are all you should have to go on.

It’s not a huge leap to realise that what I’m describing is in fact the scientific method, but for design. I make no apology for that, other than to say that I don’t wish to over-dignify the approach.

Other thoughts relating to all this

  1. On personas: the primary purpose of personas is to present research findings in a format that tries to avoid the problem in my first assumption, but they aren't good at this because they have a poor signal-to-noise ratio and tend to get “fossilised”. We also aren't using them as their inventor intended (what are we to make of multiple personas displaying contradictory attributes, for example?). They also contain no indication of the strength of the evidence behind them. We can still continue to use them, of course; I just don't think they deliver much value.
  2. We don't do much exploratory research because roadmaps are currently constructed by executive fiat, not customer need. So the research we do is mostly validation research. This has far less ROI potential than exploratory research.
  3. Validation research also tends to be hard to design well enough to produce defensible findings.
  4. Bad (that is, biased) research is worse than none at all because it has the potential to fool us into wasting money. I don’t think we can avoid doing bad research (in the short to medium term, at least), but I hope the above proposal will lessen the impact of poor research design by putting it into a context of continuous learning.

Lastly, here are some details on test-driven design that also informed some of the above thinking, and some thoughts on creating testable hypotheses from high-level assumptions.