One of the most pervasive myths in product management is that products spring fully formed from the brains of PMs. They identify a customer pain point, think about it, then ship something incredible that fixes it.
The reality is very different. In product, we get better through iteration. We have an idea, we start to build a digital product, and then we hone it. But to make the right adjustments, it’s important to learn what your customers need and how they interact with the features you build.
In the workshop, Wil Pong, Head of Product for Amplitude Experiment, covered:

- The benefits of continuous learning
- Diversifying your experimentation toolkit with more tests
- Catering to different types of users
Read on for a recap of the workshop, or watch the event—Learning Loops: Your Secret Weapon to Drive Growth.
Why it’s hard to build, measure, and learn
The build-measure-learn loop sounds straightforward:

- Build a feature.
- Measure how users interact with it.
- Based on the response from users, decide whether to double down on something or move on.
It’s a simple concept that sounds ideal. Wil explains that the build-measure-learn framework would work—if PMs were only responsible for one feature in their entire careers. But most PMs are responsible for a combination of features at any given time.
In practice, when you try to build, measure, then learn, the learning step gets deferred.
“You're like: ‘Okay, I'm going to track everything. I'm going to put measurements in everything,’” Wil says. “But users need to use your features. So you don't really learn right away. Maybe you'll come back after a month or…maybe even a couple of days, depending on what your traffic is as a company. But learning comes later.”
Wil explains that as you grow in your career as a PM, those deferred learnings merge with quarterly planning. “You don't really end up looking at your charts and your dashboards until probably a month out of the next piece because you want to use learning to plan what to do next,” says Wil.
At this point, learning isn’t about refinement. It’s a way to decide what to work on next and where to allocate engineering resources. Different factors come into play, such as asks from executives or high-revenue customers that might help bump revenue or close sales deals.
Eventually, you reach the point where your roadmap (plans for the future) and your backlog (tasks to do) are synonymous. Instead of planning how to refine your product based on user behavior, you respond to demands for different features.
How to learn as you ship
Instead of hoping you’ll learn how users interact with your product in the future, aim to learn in real time. Here are some ways to do that.
Experiment beyond A/B tests
Sometimes, A/B tests are the right choice. But they’re not the only way to learn. By expanding your experimentation repertoire, you learn from customers throughout the development process.
A/B tests are great when you already have an experience you want to enhance. But what about when it’s a new feature?
Feature flags are like having a “choose your own adventure” inside your app. Wil explains that feature flags enable engineers to dictate user experiences based on specific conditions.
Let’s say you want to introduce a new feature. With feature flags, start by rolling out that feature to a small percentage of users. If that feature gets a positive response, roll it out to more users. From there, you keep increasing the percentage of users who access the feature as long as the response is positive. If something goes wrong with the new feature or if the feedback isn’t good, you can roll it back.
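The percentage rollout described above is usually implemented by deterministically bucketing each user, so the same user keeps the same experience as the rollout percentage grows. Here is a minimal sketch of that idea (the function names and flag name are illustrative, not a specific vendor's API):

```python
import hashlib

def rollout_bucket(user_id: str, flag_name: str) -> float:
    """Deterministically map a user to a bucket in [0, 100)."""
    digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest()
    return (int(digest, 16) % 10000) / 100  # 0.00 to 99.99

def is_enabled(user_id: str, flag_name: str, rollout_percent: float) -> bool:
    """The flag is on for a user if their bucket falls under the rollout percentage."""
    return rollout_bucket(user_id, flag_name) < rollout_percent
```

Because bucketing is deterministic, a user who got the feature at a 10% rollout still has it at 50%, and dialing the percentage back to zero rolls the feature back for everyone.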
Feature flagging also enables you to do targeted rollouts based on specific user segments. Let’s say you have a new feature that’s perfect for power users but might overwhelm new users. You can ensure each segment gets the experience that’s right for them.
Some products have a group of early adopters who are happy to use more experimental builds so they can access the newest features. Start your rollout with these users first, gather feedback, and then perfect the feature for everyone else.
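Targeted rollouts like these often boil down to a small routing function: look up the user's segment, return the right variant. A minimal sketch, with illustrative segment and variant names:

```python
def experience_for(user: dict) -> str:
    """Pick a feature-flag variant per segment. Segment and variant
    names here are hypothetical examples, not a real product's config."""
    segment = user.get("segment", "general")
    if segment == "early_adopter":
        return "experimental_build"   # newest features, rougher edges
    if segment == "power_user":
        return "streamlined"          # no popups, checklists, or guided tours
    if segment == "new_user":
        return "guided_onboarding"    # full guidance
    return "stable_default"
```

In a real system the segment-to-variant mapping would live in the flagging tool's configuration so product teams can change it without a deploy, which is exactly the "without bugging engineering" benefit Wil describes.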
Feature flags enable product teams to have “more control over who sees what experience without bugging engineering to ship lots of different features at different times,” says Wil.
Learn more about feature flags in the DevOps Transformation Handbook.
With A/B tests, you’re usually looking to test a hypothesis about improving a core metric. Guardrail, or “do no harm,” testing works in reverse. With guardrail testing, you say, “All I want to make sure is that the new feature that I'm shipping is not meaningfully worse than what I have today,” explains Wil.
Say you want to introduce a new technology that’ll enable your engineering teams to work faster. At the same time, you don’t want to do something that’ll make your app buggy and worsen the user experience.
With other tests, you often have to wait to get statistical significance (having enough data to trust the results). That means maintaining two versions of an app for several weeks or months.
Guardrail testing helps you learn faster. If the new technology or feature dips your guardrail metric by the threshold you set, you know it’s time to roll it back. “You're comparing two groups of people in real time, and they're telling you what's worse or better,” says Wil.
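The rollback decision above can be sketched as a simple threshold check on the guardrail metric. In this hedged example, the metric (say, crash-free session rate) and the 5% tolerance are illustrative; a production system would also check statistical confidence before acting:

```python
def guardrail_breached(control_values: list[float],
                       treatment_values: list[float],
                       max_relative_drop: float = 0.05) -> bool:
    """Return True if the treatment group's mean guardrail metric is more
    than `max_relative_drop` worse than the control group's mean."""
    control_mean = sum(control_values) / len(control_values)
    treatment_mean = sum(treatment_values) / len(treatment_values)
    return treatment_mean < control_mean * (1 - max_relative_drop)
```

If `guardrail_breached` returns `True`, the new technology dipped your guardrail metric past the threshold you set, and it's time to roll it back.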
“[Guardrails are] an example of how you can use experimentation to avoid bad outcomes as well as maximize good ones.”—Wil Pong, Head of Product, Amplitude Experiment
Painted door tests are a way to gauge demand for a feature before you invest resources in building it. You create a version of the feature that looks real but isn’t operational to see if people want it.
Wil points to his experiences at Netflix. The first product he managed was in customer service, when Netflix was deciding whether to support chat in addition to phone calls.
“These call centers were really high quality. We had great people working for us. But it was really expensive. And we had a lot of volume because we were growing,” says Wil.
Chat was a more cost-effective alternative, but Netflix didn't know if customers would prefer chat over calls. Contracting a new support center specialized in chat and training the agents was a big upfront investment. So they created a painted door test.
Wil and another engineer quickly built a chat button. The button didn’t direct customers to chat with an agent. It triggered a message asking customers to share why they wanted to chat and letting them know the chat feature was coming soon. Based on the clicks it garnered, the team gauged demand for the chat feature.
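The mechanics of a painted door test are deliberately simple: the button is real, but clicking it only logs the demand signal and shows a "coming soon" message. A hypothetical sketch of that handler (function names and message are made up for illustration):

```python
clicks: list[dict] = []

def on_chat_button_click(user_id: str, reason: str = "") -> str:
    """Painted-door handler: no chat backend exists. Record the user's
    intent (and optional reason) and show a placeholder message."""
    clicks.append({"user_id": user_id, "reason": reason})
    return "Chat support is coming soon! Thanks for letting us know."

def demand_rate(total_visitors: int) -> float:
    """Share of visitors who clicked the painted-door button."""
    return len(clicks) / total_visitors
```

The click-through rate is the output of the test: enough clicks relative to traffic, and the upfront investment in a real chat team is justified.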
Know there's more than one right answer
Your user base is not all the same. Take insights you learn from your experiments and apply them to create different experiences for different users.
When you craft capabilities for different customer segments, “Your product no longer has to be one thing. It could be many things,” says Wil.
For example, new users are different from power users. “Your power users hate your guided workflows, by the way. They hate your popups, they hate your checklists, they close them all,” says Wil. Power users would rather reach out to you only when they encounter an issue, whereas new users appreciate more guidance.
When you understand your users through data, you can offer different product variants with feature flags. Wil holds up Amplitude as an example.
“We work with a lot of gambling companies, and online gambling is not legal in all 50 states in [the United States],” he explains. Amplitude helps the companies offer two distinct experiences: In states where gambling is legal, users can directly place bets. The rest have a “fantasy” experience where they can play without using real money.
The fantasy experience helps the companies build brand awareness. When those customers travel to states where gambling is allowed, the companies can bring in more revenue.
Continuously learn what your users crave by getting started with Amplitude. Or tune into the full webinar—Learning Loops: Your Secret Weapon to Drive Growth.