Every project is unique, so I don't believe in silver-bullet processes. However, over the years I've arrived at a high-level approach inspired by various sources and experiences. I try to use it as a guide, particularly for solving product problems, often within the context of a sprint.

The following is just an overview of some of the techniques I've used in the past. It's always evolving, and its application depends entirely on the nature of the problem, and the size and stage of the project or feature.



1.1 Goal

The goal of the research phase is to try and understand:

  • What is the problem you're trying to solve?
  • Who are you solving it for?
  • How will you measure success?

1.2 Diagnosis

Analyse and diagnose the problem. Is it even a real problem? What are the underlying mechanics at play? Who is the end-user? What are the business or product goals?

1.3 Audience

To solve any design problem, understanding who you're solving the problem for is critical. What are their motivations and typical behaviours? What questions might be running through their mind at each stage in the journey? In what context are they using the product? How have they solved their problem historically?
Audience research can involve everything from user interviews and digging into existing data, to consumer insights, quick experiments, data-informed personas, testing the original product, and competitor analyses.

1.4 Constraints

What are the technical and business constraints surrounding the project? I like to work very closely with stakeholders, developers, and business and product people to understand the parameters. It's important to triage the problem in the wider context of the roadmap: is this the most important thing to work on, and can we avoid duplicating effort or creating extra work?

1.5 Articulation

Sometimes it's nice to wrap everything up in a clear, concise problem statement. While this might evolve organically, it can serve as a helpful north star. This is often the point at which a testable hypothesis can be formed, and the key measurable metrics can be defined.



2.1 Architecture

Map out the information architecture and sort the information into logical categories. Understand the user journeys and what questions a user might have at each point in the journey.

2.2 Sketching

Brainstorming with pen and paper, whiteboarding, Post-it® notes. This is a continuous part of the process.

2.3 Wireframing

I don't tend to wireframe unless the project is particularly complex or expansive, but there is a time and a place for wires. If a design system is in place, it can often be just as quick (and even more useful) to create the beginnings of a rough-and-ready set of visual designs using a symbol library; these can be refined later down the line and form the building blocks for the UI.

2.4 Visuals

I use Sketch (and Photoshop for bitmap images). I have an unhealthy obsession with file organisation, and try to create a symbol library wherever possible.

2.5 Prototyping

Prototypes are usually critical. I use Marvel, Proto.io, InVision, Principle, or code. These can then be tested carefully with the target audience.

2.6 Iteration

Iterating based on the feedback received is critical. The prototype can be updated and tested again with the audience in a loop, until it feels right.



3.1 Phasing

I often try and design beyond the bounds of the MVP to future-proof the work. I then dial everything back into phased releases, starting with an MVP, to harmonise with the product roadmap and technical constraints.

3.2 Specification

Prototypes often answer much of this. But particularly for complex projects, it can be well worth writing a clear specification for how the product, its interactions, and their nuances should work.

3.3 Build & QA

I like to work collaboratively with developers and help out wherever possible. Design, after all, is a conversation and an ongoing process of balancing iteration and trade-offs.

3.4 Experiment

Provided there is sufficient traffic, I like to try and ship new features in an experimental format, usually an A/B test. Where appropriate, experiment-led product development is the only way to understand scientifically whether a particular feature or product is performing.
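To make the mechanics concrete, here is a minimal sketch of how users might be deterministically assigned to A/B test variants by hashing a user ID together with an experiment name. The function name, experiment name, and variant labels are all hypothetical, not part of any particular tool mentioned above:

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants=("control", "treatment")) -> str:
    """Deterministically bucket a user into a variant for an experiment.

    Hashing (experiment, user_id) means the same user always sees the
    same variant, without storing any assignment state.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# The same user always lands in the same bucket for a given experiment.
print(assign_variant("user-42", "new-checkout-flow"))
```

Because the assignment is a pure function of the inputs, it stays stable across sessions and servers, and different experiments hash independently so their audiences don't correlate.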

3.5 Learn

Once the experiment has run its course, the null hypothesis can either be rejected or retained, and the new variant shipped or discarded accordingly. It's always good to monitor experiments closely to watch for any major issues, but equally important to know when to let them run.
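As a sketch of what "reaching significance" means for a conversion-rate experiment, the following applies a standard two-proportion z-test using only the Python standard library. The conversion and sample counts are invented for illustration:

```python
from math import erf, sqrt

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test comparing conversion rates of two variants.

    Null hypothesis: both variants have the same underlying rate.
    Returns (z, p_value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical numbers: 5.0% vs 6.5% conversion over 2,400 users each.
z, p = two_proportion_z_test(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
# Reject the null at the 5% level only if p < 0.05
print(round(z, 2), round(p, 4))
```

In practice the sample size should be decided before the experiment starts, and the test run once at the end; repeatedly checking the p-value as data trickles in ("peeking") inflates the false-positive rate, which is why knowing when to let an experiment run matters.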

3.6 Rinse & Repeat

The whole process, or indeed parts of it, can be repeated based on the learnings.