Agile Prototyping and the Scientific Method

Blog / October 8, 2013

[Image: Punchkick's office]

Bringing an idea to life takes a lot more than just having the idea. Getting a digital product from a high-level concept to shipped and ready for use is a process that requires careful planning, testing, and validation. Without proper testing, software can suffer from familiarity-induced blindness: that nasty problem that occurs when a designer misses usability issues because they know the product inside and out.

You are not your users. Don’t forget that.

The best way to validate design ideas is to get them in front of users. If they work, hooray! If they fail, try again. Keep trying until it works; it eventually will, I promise.

We, as lovers of science, like to think back to the scientific method when designing our products. Whether we’re redesigning a website, adding features to an existing app, or working on a brand new creation, our process ensures that the products we ship crush the goals they were designed to meet. Crush. Like, goal dust is all that’s left (we’ve tried harvesting the dust, but have found no practical application). So what’s a high-level glimpse of our science-driven process?

Question

Are we working on a brand spanking new project? Great, because the bulk of our questions will surface while conducting stakeholder interviews. Are we iterating on an existing product? Great, because we already know what to improve.

This is the ‘what are we trying to do’ step. Do we want to increase engagement on social media? Do we need more prospective hires to submit an application? Maybe we just want to sell more products. At the end of the day, this becomes our key performance indicator (KPI), and it’s the question we’re trying to answer.

Hypothesis

We eat, breathe, and sleep mobile, so we have a lot of knowledge about what works and what doesn’t. Our hypothesis is where we get to flex that knowledge. What features do we include, and how do we present them? Unvalidated IA and UX ideas live in the hypothesis, waiting to prove themselves as the answer.

Prediction

These are the metrics we’ll measure to test the hypothesis. If we predict an increase in traffic and don’t see it, we’ve failed. That makes this step doubly important: if we’re not measuring the right metric, we might not even see our own success.
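To make that concrete, here’s a minimal (entirely hypothetical) sketch of turning one of the questions above, “more prospective hires submit an application,” into a measurable metric. The event log, field names, and numbers are all made up for illustration:

```python
# Hypothetical event log: one record per visitor session
events = [
    {"session": 1, "viewed_job_page": True,  "submitted_application": True},
    {"session": 2, "viewed_job_page": True,  "submitted_application": False},
    {"session": 3, "viewed_job_page": False, "submitted_application": False},
    {"session": 4, "viewed_job_page": True,  "submitted_application": True},
]

# Only sessions that saw the job page count toward the funnel
viewers = [e for e in events if e["viewed_job_page"]]
applied = [e for e in viewers if e["submitted_application"]]

# The metric we predicted would move: application conversion rate
conversion_rate = len(applied) / len(viewers)
print(f"Application conversion rate: {conversion_rate:.0%}")
```

The point isn’t the arithmetic; it’s that the prediction is pinned to a number we can actually pull out of our data before and after the change.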

Experiment

Launch! Whether we’re testing with paper or pushing to stage, this is where the magic happens. With users in front of our designs, we can start collecting data about our product’s performance. Once we have enough data (and often while experimentation is still running), it’s time for…

Analysis

Let’s look at the data. Did our KPIs move? If so, terrific! But can we improve them even further? Or maybe they didn’t, and we have to go back to the drawing board. There’s nothing wrong with failing as long as you fail fast and try again. That’s the beauty of iteration: you only lose if you stop trying. Assuming we moved the needle where it needed to be moved, we can move on to the next step.
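One hypothetical sketch of what “did the KPI move?” can look like in practice: a two-proportion z-test comparing conversion rates before and after a change. The function and the sample numbers below are illustrative assumptions, not a prescription:

```python
from math import erf, sqrt

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Compare conversion counts for control A vs. variant B.

    Returns the z statistic and a one-sided p-value for "B beats A".
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))  # upper tail of the normal CDF
    return z, p_value

# Made-up numbers: 120/2400 applications before, 160/2400 after a redesign
z, p = two_proportion_z_test(120, 2400, 160, 2400)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A small p-value suggests the needle really moved rather than wiggled by chance; a large one means it’s back to the drawing board, and that’s fine.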

Post-Process

With the testing and iteration process complete, it’s time to do it all again. And again. And again. This process ensures that products stay on the cutting edge. Where stagnation lurks in the waterfall process, innovation thrives in agile. It helps us keep products fresh, and users happy.


Nothing Worked?

So nothing worked, eh? There’s always this.

