
Based on [quantitative/qualitative insight].

We predict that [product change] will cause [impact].

We will test this by assuming the change has no effect (the null hypothesis) and running an experiment for [X week(s)].

If we measure a statistically significant change of Y% in [metric], then we reject the null hypothesis and conclude there is an effect.
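The accept/reject decision above can be sketched as a two-proportion z-test on conversion counts. This is an illustrative sketch only: `two_proportion_z_test` and the visitor numbers are hypothetical, and the Kit itself does not prescribe a specific test.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates.

    Returns (z, p_value). Hypothetical helper for illustration.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under the null hypothesis
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))      # two-sided p-value
    return z, p_value

# Hypothetical experiment: 500/10,000 conversions in A vs 560/10,000 in B.
z, p = two_proportion_z_test(conv_a=500, n_a=10_000, conv_b=560, n_b=10_000)
if p < 0.05:
    print("Reject the null hypothesis: the change had an effect.")
else:
    print("Cannot reject the null hypothesis.")
```

With these made-up numbers the lift looks promising but does not reach significance at the 5% level, which is exactly the situation the power analysis below helps you avoid by sizing the experiment in advance.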

The number of people visiting your site that *could* see the change on a given day.

This is the percentage of your eligible visitors that currently complete your goal (e.g. purchasing or signing up).

You need to run your experiment for at least 1 full business period (e.g. a whole week), so that cyclical variation in traffic and behaviour is captured.

This is the relative percentage by which you predict your change will move the base conversion rate. Use the minimum lift that would be worth the effort, or play with different values.

If you are 95% confident in a result, this means that if the change actually had no effect, there would only be a 5% probability of observing a difference this large purely by chance.
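Taken together, these inputs drive the power analysis. Below is a minimal sketch of the standard normal-approximation sample-size formula for comparing two proportions, assuming a two-sided test at the chosen confidence level and 80% power; `sample_size_per_variant` is a hypothetical helper, and the calculator's actual implementation may differ.

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variant(base_rate, relative_lift, alpha=0.05, power=0.8):
    """Visitors needed in each variant to detect a relative lift in the
    base conversion rate (normal approximation; illustrative sketch)."""
    p1 = base_rate
    p2 = base_rate * (1 + relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # e.g. 1.96 for 95% confidence
    z_power = NormalDist().inv_cdf(power)           # e.g. 0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_power) ** 2 * variance / (p2 - p1) ** 2)

# 5% base conversion rate, hoping to detect at least a 10% relative lift:
n = sample_size_per_variant(base_rate=0.05, relative_lift=0.10)
print(n)  # roughly 31,000 visitors per variant
```

Note how sensitive the result is to the minimum detectable effect: halving the lift you want to detect roughly quadruples the required sample size, which is why it pays to play with different values.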

All eligible traffic (10,000 visits per day):

- A (10%)
- B (10%)
- Unallocated (80%)
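Given an allocation like the one above, the required sample size translates directly into a run time. A hypothetical sketch (the ~31,000-visitor requirement below is an assumed power-analysis result, not a figure from the calculator):

```python
from math import ceil

def experiment_days(required_per_variant, daily_traffic, variant_share):
    """Days needed for each variant to reach its required sample size.
    Illustrative only; in practice round up to whole business periods."""
    visitors_per_day = daily_traffic * variant_share
    return ceil(required_per_variant / visitors_per_day)

# 10,000 eligible visits/day with 10% allocated to each variant:
days = experiment_days(required_per_variant=31_231,
                       daily_traffic=10_000,
                       variant_share=0.10)
print(days)  # 32 days -> round up to 5 full weeks
```

Since the result is rarely a whole number of business periods, you would round up (here, 32 days becomes 5 full weeks) rather than stop mid-period.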

Experimentation Hub was created by Rik Higham, who is a Senior Product Manager at Skyscanner.

Read Rik's Medium posts on experimentation and Product Management.

Copyright © Rik Higham 2016 - 2017

The Hypothesis Kit was developed by Rik Higham and Colin McFarland, with contributions from David Pier, Lukas Vermeer, Ya Xu and Ronny Kohavi. “Design like you’re right, test like you’re wrong” props to Jane Murison and Karl Weick. Original Hypothesis Kit from Craig Sullivan.

Power analysis calculation based on Experiment Calculator by Dan McKinley, adapted by Rik Higham.