Using Product Data to Usher In a Period of High Growth

Get expert data insights from a Senior Product Manager at the mobile payments platform Satispay.

December 18, 2024
Shelley Yoo is a Senior Product Manager at Satispay

There’s a difference between having a great idea and great execution. You need both to succeed, and at Satispay, where our team has doubled over the past year, we’ve figured out how to strike that balance. We couldn’t have achieved the major growth we did (we now serve over 5 million consumers in Europe!) without knowing how to use our data.

I’m Shelley Yoo, Senior Product Manager at Satispay. I recently sat down with Amplitude’s Regional Marketing Manager, Alexandra Szynarski, to discuss the role of analytics in company growth.

Watch our full conversation on demand, or read on for some of the lessons and rules that guide my approach to using data at work.

4 ways to build your data program

I was the first product manager hired at Satispay, so I came into a company with very little in the way of data-driven development. We had data, but it needed a lot of work before it was useful. Here’s what I learned from my initial efforts to make data a driving force in our workflow.

1. Give yourself time (and support) to set things up

We didn’t implement Amplitude right away, and when we did, I didn’t expect it to be a one-day task. Our front-end team came in to help, and I relied heavily on their technical knowledge.

Once they’d done their part, I gave myself time to learn Amplitude and set it up so we could use it at full capacity. There was definitely a learning curve, but I was patient and stuck with it. I focused on setting up dashboards, and we still use those today to track our KPIs.

2. Get creative to gather more data

When it comes to gathering user insights, many PMs and other data workers see focus groups as the gold standard. However, focus groups have their problems: they’re expensive, they’re not very scalable, and because they draw on a small sample, they can be biased.

There are other ways to get data about users’ interactions with your product. I like a tool that collects open-source information from places like app stores and ticketing systems. I also implemented a program where our support team would take notes on why people were calling them. We coded that data into rough themes, which helped us find a big problem with our funnel. We saw a roughly 70% improvement in that funnel using the data we got from these exercises.
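
If you want to code qualitative notes like we did, even a lightweight script can get you started. Here’s a minimal Python sketch that tags support notes with rough themes via keyword matching; the theme names and keywords are invented for illustration, not our actual taxonomy.

```python
# Minimal sketch: tagging raw support-call notes with rough themes
# using keyword matching. Themes and keywords are illustrative.
from collections import Counter

THEMES = {
    "onboarding": ["sign up", "register", "verification"],
    "payments": ["declined", "refund", "charge"],
    "app_issues": ["crash", "freeze", "update"],
}

def tag_note(note: str) -> list[str]:
    """Return every theme whose keywords appear in the note."""
    text = note.lower()
    return [theme for theme, words in THEMES.items()
            if any(w in text for w in words)] or ["other"]

notes = [
    "Customer could not finish sign up, verification code never arrived",
    "App crashes after the latest update",
]
counts = Counter(theme for note in notes for theme in tag_note(note))
print(counts.most_common())  # surfaces the biggest themes first
```

Even something this rough is enough to see which themes dominate before investing in a proper research pipeline.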

3. Choose your KPIs and check them regularly

Set solid goals by choosing KPIs that relate to your desired business outcomes and OKRs. When you have access to a lot of data, it’s tempting to look at everything just in case you find an important insight. I try to limit myself to three main KPIs to prevent confusion.

Once you’ve chosen your focus KPIs, check them regularly. I still do it almost every day. There are a few reasons for this: First, you want to know your baseline because that will be an essential data point when you start testing. Second, you never know what could change post-launch that might affect your conversion rate or other important metrics. And third, you might find a bug in your tracking system and want to catch it early. If you want to stay on the pulse of your product and company, your focus KPIs will help you do that.
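
As an illustration of that daily check, here’s a minimal Python sketch that compares today’s conversion rate against a trailing baseline and flags big swings, which can signal a real post-launch change or a broken tracking event. The numbers and threshold are hypothetical.

```python
# Minimal sketch: a daily KPI check against a trailing baseline.
# A large deviation can mean a real change or a tracking bug.
from statistics import mean, stdev

daily_conversion = [0.41, 0.43, 0.40, 0.42, 0.44, 0.41, 0.39]  # last 7 days
today = 0.29

baseline = mean(daily_conversion)
spread = stdev(daily_conversion)

if abs(today - baseline) > 3 * spread:  # illustrative threshold
    print(f"ALERT: today's rate {today:.2f} is far from baseline {baseline:.2f}")
else:
    print(f"OK: today's rate {today:.2f} is within the normal range")
```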

4. Use data (and intuition) strategically to find wins

Proving the worth of your data program to stakeholders is important if you want it to grow. Your job is to understand the strategy from a business perspective and use that to guide your day-to-day.

After choosing my main KPIs, I built a funnel to start understanding the problematic areas. However, you must remember that data won’t tell you the whole story. You have to fill in some of the details yourself, which means you have to know how customers interact with every step of your funnel.
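
To make the funnel idea concrete, here’s a minimal Python sketch that computes step-by-step conversion from raw (user, event) pairs. The step names are hypothetical, and a production funnel would also respect event ordering and time windows.

```python
# Minimal sketch: step-by-step conversion through a funnel from raw
# (user_id, event) pairs. Step names are hypothetical.
FUNNEL = ["open_app", "start_registration", "verify_phone", "first_payment"]

events = [
    ("u1", "open_app"), ("u1", "start_registration"), ("u1", "verify_phone"),
    ("u2", "open_app"), ("u2", "start_registration"),
    ("u3", "open_app"),
]

users_per_step = {step: {u for u, e in events if e == step} for step in FUNNEL}

reached = users_per_step[FUNNEL[0]]
for step in FUNNEL[1:]:
    converted = reached & users_per_step[step]
    rate = len(converted) / max(len(reached), 1)
    print(f"-> {step}: {len(converted)}/{len(reached)} ({rate:.0%})")
    reached = converted
```

The step with the sharpest drop is where you start asking why, which is exactly where the next point comes in.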

When you can’t pinpoint why your data is telling you something, that’s where product intuition comes in. Mine helped me realize that the issue wasn’t with the registration flow itself but with how the mobile viewport displayed it. On mobile, the two pages weren’t clearly differentiated because of the viewport size, which led to confusion. By taking a more mobile-first approach and addressing this, we saw a significant improvement at that registration step. Intuition is a muscle product managers need to exercise and trust.

Lessons on running A/B tests

A/B testing is a great tool for product managers, but the concept sounds simpler than it is. Many small mistakes can reduce the usefulness of your data. Here’s how I design tests and act on what I learn from them.

3 essential rules of A/B testing

No matter our needs or the situation, I always start with these three rules when planning an A/B test.

1. Test one variable at a time

Keep it simple! During an A/B test, it’s very tempting to pick the variable you’re going to change and then make a couple of other tweaks you think won’t have an impact. The more you do this, the less conclusive your data will be. You never know whether one small change could have an effect somewhere else in the funnel.

If you’re changing more than one thing, you must keep them separate. Do an A/B/C/D test if that’s what you need. Or, run two A/B tests that are completely split, meaning they use different audiences, and the two streams don’t cross. The only exception here is if you’re sure you’re testing variables that are completely independent in terms of the customer journey.
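
One way to keep two concurrent tests on audiences that never cross is deterministic hashing on a stable user or device ID. Here’s a minimal Python sketch under that assumption; the experiment names are invented for illustration.

```python
# Minimal sketch: deterministic hashing on a stable user/device ID so
# two concurrent A/B tests run on completely separate audiences.
# Experiment names are invented for illustration.
import hashlib

def bucket(user_id: str, salt: str, n_buckets: int) -> int:
    # Salting the hash makes each split independent of the others.
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % n_buckets

def assign(user_id: str) -> tuple[str, str]:
    # First split the whole audience in half, one half per experiment,
    # so the two test streams never cross...
    stream = ["checkout_test", "onboarding_test"][bucket(user_id, "stream-split", 2)]
    # ...then split each stream into its own control and variant.
    variant = "AB"[bucket(user_id, stream, 2)]
    return stream, variant

print(assign("device-12345"))  # e.g. ('onboarding_test', 'A')
```

Because the assignment is a pure function of the ID, the same user always sees the same experience, with no assignment table to maintain.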

2. Test toward improvement

This sounds obvious, but all of your tests should be run with the goal of improving your product. Two things go into that. One is being thoughtful about what you test rather than throwing the proverbial spaghetti at the wall. There’s an opportunity cost associated with A/B testing, so don’t use your time on something you don’t believe will help.

Second, believe in what you’re doing. If you’re testing something new, remove the old feature! You should already know your baseline performance from your KPI tracking. Don’t use your test to measure that baseline; put something new in and see if you can improve upon it.

3. Use the right type of test

There are different types of A/B tests, and the best one for any scenario depends on your situation and needs. If you’re working within an app, you can randomly split your test groups beforehand using the device ID to balance how many people you want in the test group versus in the control. If you’re in a situation where you can’t select your audience beforehand, like you’re depending on inbound traffic, you’ll need to learn how to do that balancing post-campaign.
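
For the in-app case, here’s a minimal Python sketch of that pre-split: hashing the device ID to a uniform value and comparing it to the test share, so assignment is random but stable per device. The 20% test share is illustrative.

```python
# Minimal sketch: pre-splitting an in-app audience by device ID with
# an unequal test/control balance. The 20% test share is illustrative.
import hashlib

def assign_group(device_id: str, test_share: float = 0.2) -> str:
    # Hash the device ID to a stable, uniform value in [0, 1).
    digest = hashlib.sha256(device_id.encode()).hexdigest()
    position = int(digest[:8], 16) / 0x100000000
    return "test" if position < test_share else "control"

print(assign_group("device-12345"))  # same ID always gets the same group
```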

Another example is regional testing: It’s not a bad idea, but there are risks. Things like local politics can affect ad campaigns, for example. Or, if you’re running tests to see if you can increase foot traffic to a physical store, a big snowstorm might affect your results. That’s completely out of your control.

There’s no type of A/B test you should never do, just like there’s not one you should always do. It’s about understanding all your tools and choosing the one that will return the best results.

How long to run A/B tests for

My rule of thumb for A/B testing is to keep it under four weeks; ideally, my maximum is three. Why? Because the users who complete the actions soonest are your most engaged and loyal ones. The longer a test stays active, the more the conversion rates of the two groups can converge, making it harder to identify meaningful differences.

I work with my data analyst to determine the necessary sample size and timeline for a test. If the estimate runs past four weeks, I find ways to modify the test so it can be shorter.
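
For readers who want the arithmetic, here’s a minimal Python sketch of a standard two-proportion sample-size estimate (roughly 95% confidence and 80% power) converted into a duration. The baseline, lift, and traffic figures are made up.

```python
# Minimal sketch: sample size per variant for detecting a lift in
# conversion rate, converted into a test duration from daily traffic.
# Baseline, lift, and traffic numbers are illustrative.
from math import ceil

def sample_size(p_base: float, p_variant: float,
                z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """Per-variant sample size at ~95% confidence, ~80% power."""
    variance = p_base * (1 - p_base) + p_variant * (1 - p_variant)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p_base - p_variant) ** 2)

n = sample_size(p_base=0.40, p_variant=0.44)  # detect a 4-point lift
daily_users_per_variant = 500
days = ceil(n / daily_users_per_variant)
print(f"{n} users per variant -> about {days} days")
# If the estimate lands past ~4 weeks, test a bigger change or route
# more traffic into the test so it can finish sooner.
```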

What to do with inconclusive results

When you think about A/B testing like a scientist, inconclusive results mean your hypothesis wasn’t confirmed. It’s tempting to think it’s okay to make the change anyway if you don’t see a difference one way or the other, but that’s not the smart move.

Take a closer look at your data instead. Are there any areas of opportunity within those results? What do you think went wrong? This goes back to exercising that intuition again; it may feel unscientific, but this is how all hypotheses are built. Your next step is to take what you learn from that reflection and attempt another test.
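
If it helps to make “conclusive” concrete, here’s a minimal Python sketch of a two-proportion z-test on hypothetical counts; a p-value above your significance threshold is exactly the inconclusive case described above.

```python
# Minimal sketch: a two-proportion z-test to judge whether an A/B
# result is conclusive. The counts are illustrative.
from math import sqrt, erf

def two_proportion_p(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

p = two_proportion_p(conv_a=480, n_a=1200, conv_b=510, n_b=1200)
print(f"p-value: {p:.3f}")  # above 0.05 here, so the test is inconclusive
```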

Making data matter to everyone

The ins and outs of my job don’t matter to everyone else in my pod as long as I can guide our work and answer questions about product use. But I’m a big champion of data and how it can be used to keep teams connected to the work they’re doing. After every new feature launch, I’ll do a big merchant data presentation to show my team the impact of their work. It gives them more ownership and a sense that they’re contributing to the business.

It’s always a challenge to make stakeholders and people in non-data roles care about data, but if you’re in a product manager role, you know why it matters. Hopefully, you can take some of these tips back to your day-to-day to help build up the role of data at your company and on your team.

Learn more about how Shelley uses data to support her company by streaming our full conversation on demand.

About the Author
Senior Product Manager at Satispay
Shelley Yoo is a Senior Product Manager at Satispay, where she leverages her expertise in A/B Testing and agile product management to drive impactful results. With 8+ years of experience in product management and data-driven strategy, Shelley has a proven track record of launching innovative products, optimizing user experiences, and leading cross-functional teams to deliver measurable results.
