When you notice unexpected spikes or dips in experiment metrics, identify what changed and determine whether the data is trustworthy. This guide provides systematic approaches to debugging these anomalies.
Start by examining the cumulative exposures chart in your experiment dashboard. This chart shows exposure patterns over time and highlights data quality issues.
Orange dots on the cumulative exposures chart indicate detected anomalies in exposure traffic. For example, a data quality check failure appears when traffic to the experiment has decreased significantly. Review these warnings to understand whether the spike or dip relates to an exposure problem rather than a genuine metric change.
For more details, see Interpret the cumulative exposures graph.
Check if the experiment's percentage rollout or traffic allocation increased during the period when you noticed the metric spike or dip.
To review recent changes:
You can use the Experiment Management API or Slack Notifications to retrieve versions of flag configurations during a specific time period. Compare configurations to identify changes that might explain the metric spike or dip.
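Once you have two configuration snapshots from different points in time, a simple diff makes the change that coincided with the spike obvious. This is a minimal sketch: the field names (`rolloutPercentage`, `enabled`) are illustrative, not the Experiment Management API's exact schema.

```python
# Sketch: diff two flag configuration snapshots to spot what changed
# around the time of a metric spike. The snapshots would come from the
# Experiment Management API; field names here are illustrative only.

def diff_configs(old: dict, new: dict) -> dict:
    """Return {key: (old_value, new_value)} for every changed field."""
    changed = {}
    for key in old.keys() | new.keys():
        if old.get(key) != new.get(key):
            changed[key] = (old.get(key), new.get(key))
    return changed

before = {"rolloutPercentage": 10, "enabled": True}
after = {"rolloutPercentage": 50, "enabled": True}
print(diff_configs(before, after))  # rollout jumped from 10 to 50
```

A rollout jump like the one above is a common, benign explanation for a sudden exposure increase.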
Review whether any code releases, app version updates, or feature deployments occurred during the time window of the metric change.
Releases and annotations in your dashboard can help identify temporal correlations between deployments and metric changes. If you haven't added annotations yet, consider adding them after you identify the cause to document the incident.
Metric spikes or dips often affect specific platforms or app versions rather than all users. Group your analysis by default user properties such as Platform and Version.
Amplitude SDKs track these properties automatically, making them reliable dimensions for debugging. See user properties for more information.
To analyze by these dimensions, group your chart by the relevant user property.
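The same grouping can be done locally on exported events. A minimal sketch, assuming a simplified event shape (the `platform`/`version` dicts below are illustrative, not Amplitude's export schema):

```python
# Sketch: count exposures per (platform, app version) to see whether a
# spike is isolated to one segment. Event shape is illustrative only.
from collections import Counter

def exposures_by_segment(events):
    """Return a Counter keyed by (platform, version)."""
    return Counter((e["platform"], e["version"]) for e in events)

events = [
    {"platform": "iOS", "version": "2.1.0"},
    {"platform": "iOS", "version": "2.1.0"},
    {"platform": "Android", "version": "2.0.4"},
]
print(exposures_by_segment(events).most_common(1))
# [(('iOS', '2.1.0'), 2)]
```

If one segment dominates the change, the next step is usually checking what shipped to that platform or version.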
Outlier users or data quality issues cause some metric spikes:
Automated bot traffic can skew metrics significantly, and Amplitude provides several ways to handle it.
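As a quick post-hoc check on exported data, you can screen events for common bot markers in the user agent. The patterns and event shape below are illustrative; this is not how Amplitude's built-in bot filtering works, just a local sanity check.

```python
# Sketch: drop events whose user agent matches common bot markers before
# recomputing a metric locally. Patterns and event shape are illustrative.
import re

BOT_PATTERN = re.compile(r"bot|crawler|spider|headless", re.IGNORECASE)

def drop_bot_events(events):
    """Keep only events whose user agent doesn't look like a bot."""
    return [e for e in events if not BOT_PATTERN.search(e.get("user_agent", ""))]

events = [
    {"user_id": "u1", "user_agent": "Mozilla/5.0"},
    {"user_id": "u2", "user_agent": "Googlebot/2.1"},
]
print(len(drop_bot_events(events)))  # 1
```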
Rapid-fire events from a single user can indicate an instrumentation bug. Use the frequency chart to identify users with unusually high event counts; these outliers can distort your metrics and warrant investigation.
To filter outliers from your analysis, exclude the affected users with a segment or cohort filter before recomputing the metric.
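The same exclusion can be sketched on exported per-user counts: drop users above a high quantile of event counts and recompute. The p99 cutoff here is illustrative, not a recommendation.

```python
# Sketch: exclude users whose event counts fall above a high quantile
# before recomputing a metric. The cutoff quantile is illustrative.

def filter_outliers(counts_by_user, quantile=0.99):
    """Keep users at or below the given quantile of event counts."""
    counts = sorted(counts_by_user.values())
    cutoff = counts[int(quantile * (len(counts) - 1))]
    return {u: c for u, c in counts_by_user.items() if c <= cutoff}

counts = {f"u{i}": 10 for i in range(99)}
counts["whale"] = 10_000  # one user with a wildly inflated count
kept = filter_outliers(counts)
print("whale" in kept, len(kept))  # False 99
```

Comparing the metric with and without the outliers tells you how much of the spike a handful of users explain.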
Create an event segmentation chart to identify spikes in exposure events for specific flag keys: chart the [Experiment] Exposure event grouped by its flag_key property. This analysis can reveal whether a specific experiment suddenly received more traffic, which might explain the metric change.
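The chart above amounts to a count of exposure events per flag key per day. A minimal local sketch, assuming a simplified exported-event shape (not Amplitude's exact schema):

```python
# Sketch: count [Experiment] Exposure events per (flag_key, day) to see
# which flag's traffic jumped. Event shape is illustrative only.
from collections import Counter

def exposures_per_flag_day(events):
    """Return a Counter keyed by (flag_key, day)."""
    return Counter((e["flag_key"], e["day"]) for e in events)

events = [
    {"flag_key": "new-checkout", "day": "2025-01-21"},
    {"flag_key": "new-checkout", "day": "2025-01-22"},
    {"flag_key": "new-checkout", "day": "2025-01-22"},
    {"flag_key": "dark-mode", "day": "2025-01-22"},
]
counts = exposures_per_flag_day(events)
print(counts[("new-checkout", "2025-01-22")])  # 2
```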
The Root Cause Analysis feature can help identify which user segments contributed most to a metric change. While you can't filter exclusively by experiment properties, you can still narrow the analysis by time range and by the user segments the experiment targets.
After identifying the cause of a metric spike or dip, document it: add a release annotation so future investigations have context for the change.
January 22nd, 2025