This article helps you:
Understand the default statistical preferences in your experiment's results
Understand when to modify the default settings
The final step in creating your experiment is to specify any advanced settings you want. These settings include exposure settings, statistical preferences, and bucketing behavior.
Exposure settings are the configuration rules that define when and how a user is marked as exposed to an experiment or feature. These settings determine the logic that triggers an exposure event, such as whether a user is considered exposed the first time they qualify for an experiment, the first time they actively interact with a feature, or under more custom criteria.
In your Experiment Design options, click Advanced and then click Exposure Settings to specify the settings you want. You can modify any of the following:
An exposure event is the moment when a user becomes eligible for a particular experiment variant or feature. They're shown the experiment variant regardless of whether they actively interact with it. This event serves as the anchor point for experiment analysis and ensures that all downstream behaviors and outcomes are accurately attributed to the correct variant. By logging exposure events, Experiment prevents biases such as double counting or misattribution. It also establishes a consistent link between user actions and the experiment they were exposed to.
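To make the mechanics concrete, here is a minimal, hypothetical sketch in Python. This is not the Amplitude SDK; the `$exposure` event name, the property names, and the in-memory client are all assumptions for illustration. The point is that the exposure event is logged at the moment a variant is served, independent of whether the user interacts with it:

```python
import time

class InMemoryAnalytics:
    """Stand-in for an analytics client, used here only for illustration."""
    def __init__(self):
        self.events = []

    def track(self, event):
        self.events.append(event)

def track_exposure(client, user_id, flag_key, variant):
    """Log an exposure event when a user is served a variant, so
    downstream behaviors can be attributed to the correct variant."""
    client.track({
        "user_id": user_id,
        "event_type": "$exposure",  # assumed event name for illustration
        "event_properties": {"flag_key": flag_key, "variant": variant},
        "time": int(time.time() * 1000),  # epoch milliseconds
    })

client = InMemoryAnalytics()
track_exposure(client, "user-42", "onboarding-v2", "treatment")
```

Logging exposure once per user, at serve time, is what lets the analysis avoid double counting and misattribution.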
You can specify:
Statistical preferences are the configurable settings that determine how experiment results are analyzed and displayed. These preferences let teams choose parameters such as the statistical method, the confidence level, CUPED, and the Bonferroni correction.
You can modify the Stats Preferences at any step of an experiment; however, they're most beneficial for the final analysis after the experiment has ended.
Controlled-experiment Using Pre-Existing Data, also known as CUPED, is an optional statistical technique that reduces variance in Amplitude Experiment. Toggling CUPED on means that Amplitude Experiment accounts for possible varying treatment effects across different user segments. There are situations where CUPED isn't the best choice for your experiment, such as targeting only new users in your test, because new users have no pre-existing data for CUPED to draw on.
The random bucketing process can sometimes deliver unbalanced groups of users to each variant. This is known as pre-exposure bias, and it's one of the issues CUPED is meant to address. If you don't use CUPED for your experiment, this bias persists, which is why you may notice differences in the mean-per-variant when running the same experiment with and without CUPED.
For a more technical explanation of CUPED and how it can affect your experiment results, read this detailed blog post.
Amplitude Experiment uses the Bonferroni correction to address potential problems with multiple hypothesis testing. Although the correction is a trusted statistical method, there are situations where you may not want to use it when analyzing your experiment results. One is when you want to compare results with those generated by an internal system that doesn't support the Bonferroni method. In that case, and if you're willing to accept a higher false positive rate, toggle the Bonferroni Correction off.
Select which statistical method you want to use:
The confidence level measures how confident Experiment is that it would generate the same results if you were to run the experiment again and again. The default confidence level of 95% means that 5% of the time, you might interpret the results as statistically significant when they're not. Lowering your experiment's confidence level makes it more likely that your experiment reaches statistical significance, but it also raises the likelihood of a false positive. Don't go below 80%, because the experiment's results may no longer be reliable at that point.
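The trade-off can be seen numerically. In this illustrative Python sketch (using a plain two-sided normal approximation, not Amplitude's exact test), the same hypothetical test statistic fails to reach significance at 95% confidence but passes at 90%:

```python
import math

def two_sided_p(z: float) -> float:
    """Two-sided p-value for a z statistic under a standard normal."""
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

def significant(z: float, confidence: float) -> bool:
    """A result is significant when its p-value is below 1 - confidence."""
    return two_sided_p(z) < (1 - confidence)

z = 1.8  # hypothetical observed z statistic (p ~= 0.072)
print(significant(z, 0.95))  # False: not significant at 95% confidence
print(significant(z, 0.90))  # True: significant at 90% confidence
```

The result flips purely because the threshold moved, not because the evidence changed; that is exactly the extra false positive risk you accept by lowering the confidence level.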
Specify how you want bucketing to work in your experiment. You can configure:
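As background, randomized bucketing is commonly implemented as deterministic hashing. The following Python sketch is illustrative only and is not Amplitude's actual algorithm: hashing the user ID with a per-experiment salt keeps a user's assignment stable across sessions while keeping assignments independent between experiments:

```python
import hashlib

def bucket(user_id: str, salt: str, num_buckets: int = 100) -> int:
    """Deterministically map a user to one of num_buckets slots by
    hashing the user ID together with a per-experiment salt."""
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % num_buckets

# A 50/50 split: buckets 0-49 -> control, buckets 50-99 -> treatment.
variant = "control" if bucket("user-42", "my-experiment") < 50 else "treatment"
```

Because the mapping is a pure function of the user ID and salt, the same user always lands in the same variant for a given experiment; changing the salt reshuffles everyone.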
April 30th, 2024