Using Experimentation to Drive Product-led Growth Featuring Forrester & Elena Verna

Use experimentation to determine which products or features are worth further investment and will drive sustainable growth for your business.

September 6, 2022
Noorisingh Saini, Global Content Marketing Manager, Amplitude

Elena Verna, Head of Growth at Amplitude, and guest speaker Chris, Principal Analyst at Forrester, recently discussed product-led growth and the role experimentation plays in helping businesses determine which products or features are worth further focus and investment.

In their webinar, Elena and Chris touched on several topics, including:

  • Worthy products have quietly perished for lack of a distribution model, while subpar products with well-thought-out growth models have conquered markets.
  • As experimentation moves deeper into the developer space, powerful analytics are essential to success.
  • If product-led growth was more widely adopted, entire industries could be improved—and customers would receive greater value.

Product as the revenue center

Elena opened the webinar by highlighting how the ultimate goal of any business is growth. “Growth levers” come in the form of acquisition, retention (or activation), and monetization. While sales- and marketing-led growth are still useful and can be complementary efforts, product-led growth has distinct advantages—largely because it’s predictable, sustainable, and competitively defensible.

  • In product-led growth, users acquire other users, effectively acting as marketers. They’re led to purchases or upgrades without a human touch, and their usage can trigger more usage. This is often because the product is attuned to their interests or needs, which leads to increased personalization or value.
  • In marketing-led growth, lead generation originates on third-party platforms or properties where in-market prospects are already active. Marketing then works to build awareness of purchase options or pipelines. After conversion, marketing educates users to trigger their continued or expanded usage.
  • In sales-led growth, outbound communications pull in new buyers, and contracts are often manually closed. Periodic check-ins reinforce value for customers, either timed with quarterly business reviews or as part of customer success outreach.

It’s important to note that all of these forms of growth are usually necessary. But the prioritization or sequencing of these areas depends on unique business needs and varies from case to case.

In B2B, marketing- and sales-led growth often rely on customer interviews. B2B companies also tend to have fewer customers and a smaller addressable market, so optimizing each customer relationship becomes a business imperative.

In product-led growth, acquisition, retention, and monetization pressures apply to the product itself. As part of this shift, intuition can only take you so far, and experimentation is non-negotiable.

While customer interviews can bridge the knowledge gap in marketing- and sales-led growth, you need the right data, methodology, and experimental framework when making changes to the product itself.

Keeping users engaged

In a sense, product-led growth is about investing in your users rather than in marketing to them. It’s about keeping the people who interact with your product engaged.

The focus in B2B is finally shifting as companies move toward products that can “sell themselves.” B2B is perhaps “10 years late to the party,” Elena estimated, but now experimentation is a must. Companies that don’t experiment are more vulnerable to disruption because the gap between their perception and reality keeps widening.

Elena explained that qualitative feedback from customer interviews tends to overly emphasize feedback from the loudest voice in the room. “We focus on the most vocal people; we focus often on power users because they are most vocal,” she elaborated. This can happen at the expense of the core use case that comprises most of the market.

Chris noted that product-led growth involves changing the product and determining the usefulness of features or advanced settings. It allows companies to get feedback from real end-users to test hypotheses, compare different versions, and ultimately determine “what’s important to the customer?” Sometimes, the answer is counterintuitive, which is why guesswork isn’t a sufficient method when trying to assess market fit.

PLG reduces costs & meets core business needs

Making the right investments is especially important during an economic downturn. Fortunately, product-led growth can cut down on costs in a multitude of ways and address your core business needs.

Fewer dollars go into digital ad vacuums

Product-led growth is becoming popular partly because it’s cost-efficient. Investing in PLG allows you to reallocate expensive and often fluctuating Google and social ad spend to growth-driving research and development.

It speeds up the release cadence and reprioritizes the pipeline

Change is also necessary because, for many companies, the product release cadence has stalled. “Too many wasted technical-debt features are being pushed through the pipeline that have no value,” Chris observed. Incremental testing, by contrast, surfaces the winning features and orients the organization around product-led growth.

The product-led model eliminates those low-value backlogged items that don’t align with user needs. Moreover, by changing what’s put into the pipeline, individual and organizational productivity increases.

The burden on customer support gets reduced

An improved user experience can cut backend costs for service inquiries and customer support. The product becomes easier to use, and users can extract value quickly as they reach the “aha” moment.

Customers spend more

Enhancing the customer experience tends to result in customers spending more across industries. In the webinar, Chris shared a table from Forrester showing that even marginal gains can have a significant revenue impact when multiplied across large customer bases.

It’s easier to retain customers

There’s also the risk of losing customers by not undertaking product-led growth. Ironically, a fear of failure can inhibit experimentation and lead to making the same mistakes repeatedly until the customer gives up on the product and churns.

Who can experiment & why should they?

Some organizations, such as banks, may think experimentation is off-limits. But experimentation can be a more controlled approach to testing features or variations internally or with smaller subsets, followed by progressive rollouts to your user base later on. According to Chris, this “gives you control over the deployment versus control over the release, rather than holding the rest of the product team hostage to the release cycle.”

Also, the organization can maintain its momentum through incremental changes and lower deployment costs. This approach is far more sustainable than betting heavily on a single release that might not have market alignment, then abandoning the investment six months later.
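To make the progressive-rollout idea concrete, here’s a minimal sketch of deterministic, percentage-based bucketing—the general technique a feature-flagging tool uses under the hood. The function and flag names are illustrative, not Amplitude’s actual API:

```python
import hashlib

def in_rollout(user_id: str, feature: str, rollout_pct: float) -> bool:
    """Deterministically bucket a user into a feature rollout.

    Hashing the user ID together with the feature name gives each user
    a stable bucket, so the same user keeps seeing the same variant as
    the rollout ramps up (e.g., 1% -> 10% -> 50% -> 100%).
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # uniform value in [0, 1]
    return bucket < rollout_pct

# Ship the code dark, then dial exposure up without redeploying.
if in_rollout("user-123", "new-checkout", rollout_pct=0.10):
    print("serve the new checkout experience")  # ~10% of users
else:
    print("serve the existing experience")
```

This is what separates deployment from release: the code is live for everyone, but only the flagged percentage of users ever sees it.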

Elena listed some of the top benefits of experimentation for product teams:

  • Close the growing perception vs. reality gap.
  • Find the best solution at scale.
  • Derisk big bets.
  • Break down black-box decision-making.

Seeking complete and uniform agreement, or deferring to the highest-ranking person in the room, can have detrimental effects—creating an unoriginal or late-to-market product in the former case, or lowering team morale in the latter. After all, consensus on a proposed product isn’t going to spur innovation, and no one likes to hear that their viewpoint doesn’t matter.

It’s much more accountable and inclusive to use objective customer data to refine intuition, guide experimentation, and empower product-led growth. The result is a winning customer experience centered around a helpful product, not arbitrary project requirements.

How to run experiments & measure what’s working (or not)

Multi-functional teams are now starting to run experiments on a deeper level. Product-led growth represents a move away from complex architectures with ever-expanding microservices, and away from end-user guessing games where teams speculate about what users want instead of getting it right the first time (or, at the very least, earlier). Internal testing or tightly scoped minimum viable product experiments can optimize the customer experience without affecting the entire user base, and companies can introduce telemetry to establish baselines for key metrics.

As Chris stated, to break away from organizational rigidity, it’s at least worth experimenting to determine whether a new version, feature, or product delivers the same level of service, usability, and functionality as what it’s meant to replace. It’s possible that the new and supposedly superior version removed some of the functionality users appreciated. With any experiment, failure can be highly instructive.

Sometimes, companies expect a big launch or release to lift their metrics, but it doesn’t. Identifying that perception vs. reality gap can be valuable for driving organizational change and consensus. Companies need to learn how to quantify failures and determine their root causes—essentially, “learning how to learn.” This ultimately leads to more predictable wins and repeatable successes.

Experiments can strive to simplify, enhance, reorder, restructure, add, or reinvent; optimizations aim to reduce friction; innovations potentially expand value.

During the webinar, Elena and Chris provided the following guidance on experimentation:

Do NOT experiment if:

  • The experiment cost is too high.
  • There’s not enough data to gather learning.
  • There’s no clear hypothesis.
  • The test is not strategically aligned with business outcomes.
  • There are no learnings to be had.
  • There’s no possible downside.

After deciding to experiment, remember to:

  • Start with a strong hypothesis (teams may have hunches, but they need to be put to the test).
  • Be careful when creating experiments with too many variables.
  • Have adequate technological resources for the task, and be aware that homegrown tools may break data governance.
  • Avoid confirmation bias. In science, the goal is to figure out why something worked; in business, there’s often a hesitation to touch something that is working or go deeper into the explanation despite the value of doing so.
  • Recognize the importance of change management.
  • Build an experimentation-first culture.

Design one experiment per assumption; this makes each test more tangible and breaks larger problems into smaller ones. A company in its early stages might have a low data volume, making statistical significance seem out of reach. But even this doesn’t preclude some form of experimentation: organizations can run beta tests or a pilot, or adjust their testing methods accordingly. Arguably, the early stage is the most opportune time to assess ideas and see what sticks.

Getting started with experimentation

Experimentation ensures that customers are seeing the right features and receiving value that creates loyalty—ultimately helping expand your business. It gets executives and teams engaged and coordinated around an effective release process that improves time-to-value and embraces agile methodology.

If you have the resources, it’s worth laying the groundwork for an experiment-based culture in your company. You’ll be able to confidently remove the guesswork in rolling out a product while simultaneously fostering an internal environment of collaboration and accountability.

If you enjoyed this recap, learn more about product-led growth and experimentation by watching the full webinar.


Webinar Q&A

The webinar was followed by a Q&A session where we further explored the role of experimentation in product-led growth. Read on to see our answers:

1. How do you apply product-led growth beyond your core users? Some companies have core target audiences who buy their products. How do you apply product-led growth to other personas who may be just viewers or consumers?

When we talked about product-led growth in the webinar, we looked at three core goals: retention, acquisition, and monetization. When other people are consumers of your product (like executives who look at reports or dashboards that your product puts out) but don’t use it daily, that’s still helping with your overall retention.

Your core users get a lot of value from your product because it helps them do their job better, making your product stickier. But it’s important to consider not just those core users but the entire ecosystem around them and how your product can incrementally drive improvements to influence adoption from within.

2. What performance metrics should our company orient itself around to understand whether we’re product-led?

As discussed in the webinar, Elena looks at product-led growth as a culmination of retention, acquisition, and monetization. She also presented a growth matrix across product-led, marketing-led, and sales-led motions.

For example, if your acquisition is product-led, you’ll see metrics like how many users invite other users to the product. But if it’s sales-led, you’ll see larger volumes of outbound activity to new buyers and users.
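As a concrete (and simplified) illustration, here’s what computing an invite-rate signal from raw event data might look like; the event names and data are hypothetical:

```python
import pandas as pd

# Hypothetical event log: one row per tracked user event.
events = pd.DataFrame({
    "user_id": ["a", "a", "b", "c", "c", "d"],
    "event":   ["login", "invite_sent", "login", "login", "invite_sent", "login"],
})

active_users = events["user_id"].nunique()
inviters = events.loc[events["event"] == "invite_sent", "user_id"].nunique()
print(f"Invite rate: {inviters / active_users:.0%} of active users sent an invite")
```

A rising invite rate is one signal that acquisition is becoming product-led rather than sales-led.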

3. The value of experimentation is quite clear, but convincing an entire organization to adopt experimentation is challenging. Any recommendations on how to tackle this?

We covered this more in the webinar, but it’s worth noting the general theme. If you’re having difficulty with buy-in across the organization and up to leadership, start much smaller. Start with a single team and have them run experiments.

The trick is to then share your findings, both good and bad. Highlight the results that showed positive lifts and outcomes, as well as those that showed releasing a feature would have been harmful (risk mitigated).

Eventually, other teams will be interested in how they can get such valuable data to show that what they shipped matters. Social proof is a powerful tool!

4. How do you approach experimentation in the context of B2B products where traffic is much lower?

This is a tricky area for a lot of companies. Typically, this will require different statistical analyses where you can reach statistical significance on an experiment without requiring a large sample size. Sometimes, these may be more directional, but you still see a great lift. Maybe you’re not quite hitting true stat sig, but from a business-decisions context, it’s directionally good enough to still release the feature. Stay tuned as we have some exciting updates on the way that will help solve this.
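One common option for low-traffic products is Bayesian analysis, which produces a “probability the variant is better” rather than a binary significance verdict. Here’s a minimal sketch with made-up numbers—one possible approach, not necessarily the specific updates referenced above:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative low-traffic results: conversions out of exposed users.
control = {"conversions": 18, "users": 120}
variant = {"conversions": 27, "users": 115}

# Beta(1, 1) prior updated with observed successes and failures.
post_c = rng.beta(1 + control["conversions"],
                  1 + control["users"] - control["conversions"], 100_000)
post_v = rng.beta(1 + variant["conversions"],
                  1 + variant["users"] - variant["conversions"], 100_000)

print(f"P(variant beats control) = {(post_v > post_c).mean():.1%}")
# A team might act on, say, >90% even when a frequentist test isn't significant.
```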

5. What minimum number of users is required to consider something a true or valid experiment?

There isn’t a straightforward answer for this where we can give you an exact number to hit. It comes down to a few factors, such as how big of an impact you expect your experiment to make, how confident you want to be in your results, what false positive rate you’re willing to accept, and what type of statistical analysis you’re conducting.

What you’re likely looking for is a sample-size calculator. We have one built into our product to help with this question, but there are many manual and automated calculators online to help get to a recommended sample size.
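For intuition, here’s roughly what such a calculator computes under the hood, using the standard two-proportion approximation. A minimal sketch—your tool’s exact method may differ:

```python
from statistics import NormalDist

def sample_size_per_group(p_base: float, mde_rel: float,
                          alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate users needed per variant for a two-proportion test.

    p_base  : baseline conversion rate (e.g., 0.10)
    mde_rel : minimum relative lift worth detecting (e.g., 0.10 = +10%)
    alpha   : acceptable false-positive rate; power = 1 - false-negative rate
    """
    p_var = p_base * (1 + mde_rel)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_power = NormalDist().inv_cdf(power)
    variance = p_base * (1 - p_base) + p_var * (1 - p_var)
    return int((z_alpha + z_power) ** 2 * variance / (p_var - p_base) ** 2) + 1

# 10% baseline, +10% relative lift, 95% confidence, 80% power
print(sample_size_per_group(0.10, 0.10))  # ~14,700 users per variant
```

Notice how quickly the required sample grows as the expected lift shrinks—which is exactly why the factors above matter.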

6. How do you get customer feedback for each of the A/B tests? Interviews? Feedback forms?

This is the hallmark role of a product manager at most companies, and feedback on what to build comes from several places. It can be 1:1 customer interviews or calls, or it can come from sales calls, analyst feedback, market surveys, or better yet, real customer data.

We’re partial to the last one because it’s a big reason people use Amplitude. With our analytics product, you can see how users move throughout your product, what features they’re engaging with, their conversion rates at various stages of product usage, and more. You can find places where users are disengaging, dropping off, or getting confused. You’ll know which users this is happening to, so you can send targeted surveys or trigger in-app feedback when they hit that friction point.

You can rapidly identify a few ways to make that part of the experience more seamless and implement them. Throw the new experience behind a feature flag and experiment to see if it gives you the lift in engagement you’re hoping for. If it does, roll it out to everyone. If it doesn’t, scrap it with no harm done and try something new.

7. Does running experimentation with a pool of current users affect the reality of the data?

It absolutely can, both positively and negatively. In some cases, you could release a change that feels jarring to your users, and they’ll refuse to engage with it simply because it’s something new. In other cases, you’ll release a change that will feel like a “shiny new object.” More people will engage with it, but the effect may be short-lived.

In both cases, you’re still going to get valuable information. If something is jarring because it’s new, it may be good to look at how to softly introduce new features to users so they don’t feel as surprised. If you suspect the “shiny new object” problem, you may consider running your experiments for longer to see how the effect size normalizes as people get used to the new feature.

With a tool like Amplitude, you can also explore the cohort of users in the experiment more closely through cohort analysis. Are they repeatedly using the new feature, or did they engage with it only once because it was a “shiny new object?”
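For illustration, the repeat-usage check might look like this on raw feature events (hypothetical data):

```python
import pandas as pd

# Hypothetical usage events for users exposed to the new feature.
usage = pd.DataFrame({
    "user_id": ["a", "a", "a", "b", "c", "c"],
    "event":   ["feature_used"] * 6,
})

uses_per_user = usage.groupby("user_id").size()
repeat_share = (uses_per_user >= 2).mean()
print(f"{repeat_share:.0%} of exposed users came back to the feature")
# A low repeat share after an initial spike suggests a novelty effect.
```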

8. Experimentation has been around for a while in the marketing world, with things like changing landing page images, button colors, or ad copy. What is holding back product teams from embracing experimentation?

The simplest answer is that they’re just different, primarily in how changes get made to the experience (landing page copy vs. functionality in a product). A change to the product carries a greater threat, real or perceived. If you make a wrong decision about a change in the product, even if it’s an experiment, you risk losing a user or customer altogether. If you make a bad decision about ad copy, you just won’t have as many inbound leads as you hoped.

But in a high-risk and high-reward environment, like a product in production, it’s better to lose a couple of users to bad feature decisions rolled out through an experiment than to lose most of your users to a bad feature rolled out to everyone. It’s common to overestimate our ability while also being highly risk-averse. Experimentation inherently has a hidden meaning of “we’re going to test something we’re unsure of”—and to a product team, being unsure is scary.

9. How do you deal with issues when experiments take a long time to run because you’re waiting for users to give you feedback?

This is where an actual experimentation platform can help because you’re not relying on users to say whether they liked it. Instead, you monitor and measure once they’ve interacted with the experiment.

For example, let’s say you run an experiment tracking a new checkout flow for your ecommerce store. If users in the new checkout flow are checking out faster and completing more transactions, you don’t need verbal or written feedback to know the new checkout flow is better. That’s the power of experimentation and analytics combined.
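To make “monitor and measure” concrete, here’s a minimal sketch of a two-proportion z-test on tracked checkout completions, with illustrative numbers:

```python
from statistics import NormalDist

def two_proportion_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for a difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Control checkout flow vs. new flow (illustrative event counts).
p = two_proportion_p_value(conv_a=420, n_a=5000, conv_b=480, n_b=5000)
print(f"p-value: {p:.3f}")  # below 0.05 here, so the lift is unlikely to be noise
```

No survey required: the behavioral data itself tells you whether the new flow wins.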

10. How do you recommend phrasing and forming a hypothesis? Do we start with a baseline? What should be our criteria for an “improvement?”

We structure our hypothesis in our product in a relatively common way, where you set a goal for the experiment expressed as a product metric you want to impact and how much you hope to impact it. For example, “we believe that variant A will increase checkout completion rates by 8% over the control.” This is where a powerful analytics tool like Amplitude comes into play to ensure you have that historical baseline on how the metric has been performing. If you don’t know how you were doing before, you won’t be able to know if you’re doing better or worse with the new experience.
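As a small illustration of that structure, a hypothesis can be pinned down as a baseline plus an expected lift before the experiment launches (all values here are made up):

```python
# Make the hypothesis concrete and testable up front.
hypothesis = {
    "metric": "checkout_completion_rate",
    "baseline": 0.084,               # historical rate from analytics
    "expected_relative_lift": 0.08,  # "increase ... by 8% over the control"
}

target = hypothesis["baseline"] * (1 + hypothesis["expected_relative_lift"])
print(f"Success threshold: {target:.1%} completion rate (up from "
      f"{hypothesis['baseline']:.1%})")
```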

In terms of what improvement should look like, that largely depends on the context of your business. Some companies will need to see a 10-15% lift in a metric to get a meaningful business outcome, such as higher revenue, engagement, or retention. For other companies—typically larger ones with highly optimized products and many users—a 1% lift is enough for them to have large business outcomes.

11. Can you recommend some best-in-class experimentation tools companies can start with today?

This will be a bit self-serving because we sell a best-in-class experimentation platform, Amplitude Experiment, that is arguably the best on the market for product experimentation. Our product provides everything you need in an all-in-one platform, where you can plan, configure, monitor, and analyze product experiments in one place. We also have a fully mature set of SDKs to deliver those experiments and automatically track them for you.

Tie that together with our analytics platform for robust metric tracking and deep insights into customer behavior outside of experiments, and you’ll know how to turn those insights into actions that affect real business outcomes.

12. Can you give examples of non-UI-based experiments?

A lot of people are familiar with experiments focused on conversion optimization in the marketing world. These look like changing text, images, or colors to see which ones users prefer. You can also do all of that with Amplitude, but we primarily focus on making changes to the product experience—think checkout flows, user onboarding, navigation changes, application page structures, configuration options, and so much more. Anything that has to do with how your users interact with your product to get their job done, you can impact with experimentation.

13. As product managers, sometimes we build a sixth sense or a gut feeling about a change in the product. Should we allow ourselves to go with our gut feeling even in the face of data that says otherwise?

Data is just that—data. There are things data doesn’t know about your business that may skew how much weight you put into what it says. There may be things about the direction your company strategy is going or how the market is shifting as a whole that experimentation data in the moment can’t tell you.

Typically, though, our gut feelings as product managers come from something, but we may not be able to verbalize what that is. Use experimentation and data for decision-making in a way that makes the most sense for you at that time. Sometimes, that may mean leaning fully into the results and deciding based on that, or it may mean going in a slightly different direction because you have compelling evidence elsewhere that the data can’t tell you.

14. What is a common pitfall after starting your first experiment that could send the whole program in a downward spiral?

When experimentation becomes an embedded part of a company’s culture, many experiments will fail. Put differently, the outcomes will often tell you that a product change will not deliver the lift you were hoping to see—or, in some cases, would hurt the product if fully rolled out.

If the first experiment you run happens to be one of those experiments and you don’t get a big win to celebrate, that can feel very demotivating for teams who did the work, put the code out, then see data that tells them not to move forward with the release. It’s one of the reasons Elena pointed out that a critical step to building this culture is “learning how to learn.” This means you’re gaining valuable knowledge and insight into your users even with “failed” experiments.

If you had to choose between knowing which of your features would hurt your product and which ones your customers would love, it’s better to know about the former. That’s a huge mindset shift and is uncomfortable for many. Acknowledging and even striving for a “failed” experiment can be healthy and helpful in not having that downward spiral.

15. Who should own the product-led growth strategy in a company? Should it be the product team? Or the growth marketing team? Or both?

Growth is everyone’s job in a company, and multiple factors can make product-led growth successful. Some of that is conversion optimization and getting more users to move from free to paid or awareness to trial. Other parts are ensuring you have a rockstar product that people can’t live without. Ultimately, it’s about aligning on the goals and understanding how, as a combined force, you’re affecting retention, acquisition, and monetization for stronger growth.

About the Author
Global Content Marketing Manager, Amplitude
Noorisingh Saini is a data-driven content marketing manager and Amplitude power user. Previously, she managed all customer identity content at Okta. Noorisingh graduated from Yale University with a degree in Cognitive Science, specializing in Emotions, Consumer Behavior, and Behavioral Decision Making.
