The Duration Estimator helps you figure out which experiment ideas are viable before you build anything. Use it to avoid running tests that may never reach statistical significance, and to prioritize experiments that can deliver results in a reasonable timeframe.
In your experiment setup, select Estimate Duration to open the Duration Estimator.
When you first open it, you see an empty state. After you add your traffic event and success metric, the tool automatically calculates how long your test needs to run.
Select + Add Event, and choose the event that represents traffic where you run your experiment.
For example, if you test your homepage, select Page Viewed, and add a filter for your homepage URL.
The Duration Estimator automatically pulls the last 29-30 days of traffic data from Analytics, and shows Users per day in the results panel.
If you don't have the right event, select Enter Manually to input your own total daily traffic estimate. Enter total traffic across all variants, not traffic per variant.
Select + Add Metric, and choose the conversion metric you want to improve with this experiment.
A success metric is the visitor action you're trying to change with your experiment. Think about what you want more visitors to do because of your changes.
How to choose: ask yourself what action you want more visitors to take, then pick a metric that matches that action. Common success metrics are conversion metrics (labeled Conversion of...) or metrics with an official blue badge.
For example, if you test your homepage hero banner and want more visitors to enroll in a course, select Conversion of registration: course enrolled.
The Duration Estimator automatically calculates your current conversion rate from the last 29-30 days of Analytics data and shows it in the results panel (for example, 78.8% -> 82.8%).
If you don't see the metric you need, search for metrics related to your goal, or select Enter Manually to input your own baseline.
The relative MDE is the smallest improvement you want to detect. The default is 5%, which means you test whether you can improve your baseline by 5%.
For example, if your baseline conversion is 78.8% and you set a 5% MDE, you test whether you can reach 82.8%.
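The relative-MDE arithmetic above can be sketched in a few lines. This is a minimal illustration of how a relative MDE maps a baseline to a target rate, not Amplitude's implementation:

```python
def target_rate(baseline: float, relative_mde: float) -> float:
    """Target conversion rate implied by a relative MDE.

    baseline: current conversion rate (e.g. 0.788 for 78.8%)
    relative_mde: relative improvement to detect (e.g. 0.05 for 5%)
    """
    return baseline * (1 + relative_mde)

# A 78.8% baseline with a 5% relative MDE targets roughly 82.7-82.8%.
print(round(target_rate(0.788, 0.05), 4))  # 0.8274
```

Note that a relative MDE multiplies the baseline; it is not 5 percentage points added on top.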
How to think about MDE: match it to the impact you realistically expect your change to drive. Bold changes justify a larger MDE; subtle tweaks call for a smaller one.
If you don't have historical data, select Enter Manually to input your own baseline conversion rate.
After you add your traffic and success metric, the Estimated Duration panel shows the estimated number of days your test needs to run (for example, ~130 days). If your duration is very long, you see a Long Duration warning badge. Use the Duration Scenarios table to explore different scenarios.
The Duration Scenarios table is the most important part of the Duration Estimator. It shows exactly how your choices affect test duration, so you can make smarter decisions about what to test and when.
Rows (confidence level): 95%, 90%, and 85%.
Columns (lift size/MDE): a range of lift sizes, from small (1-2%) to large (8%+).
The table highlights your selected combination and shows durations for all other scenarios.
Your confidence level is the risk you're willing to take with your results. Choose based on what's at stake.
95% confidence: Use when the cost of being wrong is high.
90% confidence: Use when you want balance between speed and reliability (default).
85% confidence: Use when you need a directional signal.
MDE reflects the expected impact of your experiment idea. Ask: How much lift do I realistically expect this change to drive?
Large MDE (8%+): Use for bold changes with dramatic impact.
Medium MDE (3-5%): Use for meaningful but not dramatic improvements.
Small MDE (1-2%): Use for subtle tweaks, or when tiny gains are valuable.
If your estimate shows ~130 days at 5% MDE and 90% confidence, review the table:
Raising the MDE to 8% cuts the duration to ~51 days; lowering confidence to 85% cuts it to ~102 days.
Decision framework:
You have three test ideas in your backlog:
A bold hero redesign: ~51 days at 90% confidence.
A CTA change: ~130 days at 90% confidence.
A subtle footer change: ~632 days at 90% confidence.
Decision: The hero redesign is viable and can deliver results quickly. The CTA change may be worth running if you lower to 85% confidence (~102 days). The footer change takes over a year, so it isn't worth testing now.
Use the Duration Scenarios table to create a balanced mix:
If you test a low-traffic page and durations are very long across all scenarios, you may need to:
The Duration Scenarios table makes these trade-offs visible, so you can prioritize experiments that fit your traffic and timeline constraints.
Select Advanced Settings to access additional controls.
Most teams don't need to adjust these settings. The defaults work well for standard A/B tests.
If your estimated duration is longer than your timeline allows, use these options.
The biggest factor in test duration is the size of the change you're trying to detect. Bold changes produce larger lifts and resolve faster.
For example, moving from 5% MDE to 8% MDE can reduce duration from ~130 days to ~51 days.
Ask: How large a lift can this change realistically drive? Is it bold enough to move the metric noticeably?
Large-impact ideas resolve faster. Small-impact ideas take longer, but can add up in mature, high-volume products.
Dropping from 90% to 85% confidence reduces duration, but increases false-positive risk (calling a winner when there isn't one).
For example, at 85% confidence, the same 5% MDE test takes ~102 days instead of ~130 days.
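The way MDE and confidence trade off against duration can be sketched with the standard two-proportion normal approximation. Amplitude doesn't publish its exact formula, so treat this as a textbook sketch; the daily traffic and 80% power values are illustrative assumptions:

```python
from statistics import NormalDist

def estimated_days(baseline: float, relative_mde: float, daily_users: float,
                   confidence: float = 0.90, power: float = 0.80,
                   variants: int = 2) -> float:
    """Rough A/B test duration via the two-proportion normal approximation.

    A textbook sketch, not Amplitude's exact calculation.
    """
    target = baseline * (1 + relative_mde)
    z_alpha = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    variance = baseline * (1 - baseline) + target * (1 - target)
    n_per_variant = (z_alpha + z_beta) ** 2 * variance / (target - baseline) ** 2
    return variants * n_per_variant / daily_users

# Illustrative only: lowering confidence or raising the MDE shortens the test.
base = estimated_days(0.788, 0.05, daily_users=20, confidence=0.90)
faster = estimated_days(0.788, 0.05, daily_users=20, confidence=0.85)
bolder = estimated_days(0.788, 0.08, daily_users=20, confidence=0.90)
print(f"{base:.0f} vs {faster:.0f} vs {bolder:.0f} days")
```

The key takeaway is the inverse-square relationship: halving the detectable lift roughly quadruples the required sample, which is why small-MDE tests on low-traffic pages stretch into hundreds of days.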
Ask: What happens if you ship a false winner? Can you tolerate that risk in exchange for a faster result?
Don't lower confidence for high-stakes decisions where the cost of being wrong is high.
Low traffic is a common reason tests take too long.
Ask: Can you run this test on a higher-traffic page, or choose a more frequent conversion event?
If you only expose 50% of visitors to the experiment, increasing to 100% can reduce duration by about half.
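The exposure effect is simple arithmetic: the required sample is fixed, so duration scales inversely with the share of visitors who enter the experiment. A quick sketch with hypothetical numbers:

```python
def duration_at_exposure(total_users_needed: float, daily_traffic: float,
                         exposure: float) -> float:
    """Days to collect a fixed required sample when only a fraction
    of daily visitors (exposure) enters the experiment. Illustrative."""
    return total_users_needed / (daily_traffic * exposure)

# Doubling exposure from 50% to 100% halves the duration.
half = duration_at_exposure(2600, daily_traffic=20, exposure=0.5)  # 260.0 days
full = duration_at_exposure(2600, daily_traffic=20, exposure=1.0)  # 130.0 days
```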
Testing four variations takes much longer than testing two. Consider multiple sequential tests instead of one large multi-variant test.
Sometimes a test isn't feasible. If the Duration Scenarios table shows hundreds of days across all scenarios, it probably isn't worth building.
The Duration Estimator helps you make this call before you spend time and resources on a test that may never reach significance.
If your test takes a long time to reach statistical significance, the cause is usually low traffic or a small MDE. Use the Duration Scenarios table to explore faster alternatives.
If the pulled data doesn't look right, update the timeframe, or select Enter Manually to input your own traffic and conversion estimates. Results are most accurate with at least a few weeks of stable data.
Yes, you can change the MDE: adjust the MDE percentage in the success metric section, and the estimate updates automatically. Use this to explore different scenarios before committing to your test design.
Last 29 days offset by 1 refers to the data timeframe the tool uses for calculations. Offset by 1 means the calculation excludes today, because today's data is incomplete, and looks at the previous 29 complete days.
No, you don't always need the highest confidence level. Many experiments run at medium (90%) confidence, which balances speed and accuracy. Use high confidence when stakes are high, or when you need maximum certainty before a decision.
Start by asking What action do you want more visitors to take? Then choose a metric that matches that action. Conversion metrics (which Amplitude marks as Conversion of...) are often the best choice.
If you're still unsure, search for metrics related to your goal, or select Enter Manually to input your own baseline.
Review Experiment duration estimates to understand the duration estimate that Experiment shows while an experiment runs.
July 11th, 2024