Experimentation is a vital component of product development. The ability to hypothesize, test, and report on product changes of varying size and complexity helps ensure that you are building the right product. But to get to this point, you need to foster a culture in which experimentation is valued and resourced at a company level. At a recent Product Club event, we heard from Ashit Kumar, Web Analytics & Experimentation Lead at Spotify, and Philip Knape, Senior Product Manager at Einride, which provides digital, electric, and autonomous shipping technology. Both speakers shared their thoughts and experiences on creating such a culture: demonstrating the value of experimentation within the immediate product team, and then communicating that value to the wider business. Here is a roundup of some top learnings.
How to prove value
Product teams are well acquainted with the value of experimentation, but proving this value to the wider business can be a challenge. Addressing this matters: without that buy-in, it can be difficult to secure the resources needed to expand your testing and ultimately reach the KPIs you are chasing. For Ashit, the key to proving this value is to create a feedback loop. His team runs A/B testing programs to optimize the conversion funnel across multiple markets with different permutations. While the team currently uses an internal product to run experiments, they plan to move to an external platform. By running tests and continually feeding back clear metrics, you gain more trust to expand your programme, creating a loop of results and greater expansion.
For Philip at Einride, it is important to be able to share his results with leadership in a way that makes his interpretation of the data clear. To do this he uses Amplitude's Notebook feature, where he can easily share key metrics along with in-depth analysis.
When to use experimentation?
There will always be costs involved in choosing to experiment, whether that is your immediate team's time or the extra resources you need. For Ashit, if a change is obviously right it should be implemented straight away, as waiting also carries a cost. But in many, if not most, cases there will be debate about what needs to change and what those changes should be. This is where experimentation is key.
For Philip, user interviews can explain why someone does something, but self-reporting can contain inaccuracies that only become evident when measured against the data. Qualitative methods, such as user interviews, can add a layer of granularity that data alone may not provide, and these methods certainly have their place, especially when trying to understand the more visual aspects of a user's experience. Person A might believe they are compelled to shop by certain colors or pricing, but the data may tell another story.
In many cases there is a balance to be struck between user feedback and data when making decisions. Ashit suggests a 75% science, 25% art approach, where everything you do is data-backed with statistical evidence. Visual aspects, such as design changes, however, must be evaluated based on what's best for the customer.
What to prioritize?
While it can be tempting to make many changes at once and place big bets, this strategy should be used with caution. It has its place when product-market fit needs to be re-established, but to really understand your impact, incremental change is often best.
For Philip, the starting point should be linked to the overall goals and vision for your company. Where are you trying to get to? How does that align with the experiment you want to run? This way you can be sure that you are moving your product in the right direction.
Ashit emphasizes the importance of choosing one experiment to run at any given time. This can be challenging, so to choose what to focus on first, he uses the ICE method: impact, confidence, effort. He points out that if too many changes are launched at once, it becomes impossible to isolate the change that actually had an impact.
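To make the ICE method concrete, here is a minimal sketch of scoring a backlog of candidate experiments. The formula and the example experiments are assumptions for illustration, not Ashit's actual scoring: one common formulation weights impact and confidence up and effort down, then runs the top-scoring experiment first.

```python
from dataclasses import dataclass

@dataclass
class Experiment:
    name: str
    impact: int      # expected impact if it works, 1-10
    confidence: int  # confidence it will work, 1-10
    effort: int      # effort to build and run, 1-10

def ice_score(e: Experiment) -> float:
    # Assumed formulation: higher impact and confidence, lower effort
    # -> higher priority. Teams vary (some use impact * confidence * ease).
    return e.impact * e.confidence / e.effort

# Hypothetical backlog, purely illustrative
backlog = [
    Experiment("Checkout copy change", impact=4, confidence=7, effort=2),
    Experiment("New pricing page", impact=8, confidence=4, effort=6),
    Experiment("Signup flow redesign", impact=9, confidence=5, effort=9),
]

# Pick the single highest-scoring experiment to run next
prioritized = sorted(backlog, key=ice_score, reverse=True)
for e in prioritized:
    print(f"{e.name}: {ice_score(e):.1f}")
```

The point of scoring, rather than debating, is that it forces the team to commit to one experiment at a time, which keeps the eventual result attributable to a single change.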
Reporting on failure
As part of a healthy experimentation culture, it is important to get comfortable with failure and inconclusive results. On average, you might expect only around a third of your experiments to be conclusive. So what can you learn, if anything, from the rest?
One powerful example of learning from failure came when Ashit tested adding explanatory text at checkout to explain why customers' credit card details were needed. The hypothesis was that this would increase conversion rates, but the test was unsuccessful: the text did not lead to any significant increase in trial conversions. While the text did provide assurance to customers, it wasn't the primary factor influencing their decision to convert. This failure taught the team that rolling out the change across all 180 of their markets would not lead to any revenue gains.
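A verdict like "no significant increase" typically comes from a standard statistical test on the two conversion rates. The sketch below uses a two-proportion z-test with invented numbers (not Spotify's data) to show how a small observed lift can still fail to clear the significance bar.

```python
from math import sqrt, erf

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-proportion z-test for a difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis of no difference
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical numbers: control vs. variant with the explanatory text
z, p = two_proportion_z(conv_a=480, n_a=10_000, conv_b=495, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.3f}")  # p is well above 0.05: no significant lift
```

Even though the variant converted slightly better in this made-up sample, the p-value stays far from the usual 0.05 threshold, which is exactly the kind of result that saves a team from shipping a change that would not move revenue.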
Fostering a culture where experimentation can flourish will enable product teams to make calculated and incremental changes that show results. To do this you need to know when to use experimentation, how to prioritize what to experiment on, and how to share your results in a way that makes the value to the rest of the business clear.