Experiment Android SDK
Official documentation for Amplitude Experiment's Client-side Android SDK implementation.
Install
Add to the dependencies in your Android project's build.gradle file.
dependencies {
implementation 'com.amplitude:experiment-android-client:<VERSION>'
}
Quick start
The right way to initialize the Experiment SDK depends on whether you use an Amplitude SDK for analytics or a third party (for example, Segment).
class MyApplication : Application() {
override fun onCreate() {
super.onCreate()
// (1) Initialize the experiment client
val client = Experiment.initializeWithAmplitudeAnalytics(
this, "DEPLOYMENT_KEY", ExperimentConfig()
)
// (2) Fetch variants
try {
// NOTE: The future returned resolves after a network call. Do not
// wait for this future on the main application thread in
// production applications to avoid ANR if the user has a poor
// network connection.
client.fetch().get()
} catch (e: Exception) {
e.printStackTrace()
}
// (3) Lookup a flag's variant
val variant = client.variant("<FLAG_KEY>")
if (variant.value == "on") {
// Flag is on
} else {
// Flag is off
}
}
}
Initialize
Initialize the SDK client in your application on startup. The deployment key argument you pass to the apiKey parameter must live within the same project that you are sending analytics events to.
fun initializeWithAmplitudeAnalytics(
application: Application, apiKey: String, config: ExperimentConfig
)
| Parameter | Requirement | Description |
|---|---|---|
application | required | The Android Application context. Used to persist variants across sessions. |
apiKey | required | The deployment key which authorizes fetch requests and determines which flags the SDK evaluates for the user. |
config | optional | The client configuration used to customize SDK client behavior. |
The initializer returns a singleton instance, so subsequent initializations for the same instance name return the initial instance. To create multiple instances, use the instanceName configuration.
val experiment = Experiment.initializeWithAmplitudeAnalytics(
context,
"DEPLOYMENT_KEY",
ExperimentConfig().apply {
// must match the name you used for your Amplitude Analytics instance
instanceName = "myCustomInstance"
}
)
Configuration
SDK client configuration occurs during initialization.
| Name | Description | Default Value |
|---|---|---|
debug | Deprecated. When true, sets logLevel to Debug. Use logLevel instead. | false |
logLevel | The minimum log level to output. The SDK ignores messages below this level. Options: LogLevel.DISABLE, LogLevel.ERROR, LogLevel.WARN, LogLevel.INFO, LogLevel.DEBUG, LogLevel.VERBOSE. Go to Custom logging. | LogLevel.ERROR |
loggerProvider | Custom logger implementation. Must implement the LoggerProvider interface. Go to Custom logging. | AndroidLoggerProvider() |
fallbackVariant | The default variant to fall back to if a variant for the provided key doesn't exist. | {} |
initialVariants | An initial set of variants to access. This field helps bootstrap the client SDK with values rendered by the server using server-side rendering (SSR). | {} |
source | The primary source of variants. Set the value to Source.INITIAL_VARIANTS and configure initialVariants to bootstrap the SDK for SSR or testing purposes. | Source.LOCAL_STORAGE |
serverZone | Select the Amplitude data center to get flags and variants from. | ServerZone.US |
serverUrl | The host to fetch remote evaluation variants from. For hitting the EU data center, use serverZone. | https://api.lab.amplitude.com |
flagsServerUrl | The host to fetch local evaluation flags from. For hitting the EU data center, use serverZone. | https://flag.lab.amplitude.com |
fetchTimeoutMillis | The timeout for fetching variants in milliseconds. | 10000 |
retryFetchOnFailure | Whether to retry variant fetches in the background if the request doesn't succeed. | true |
automaticExposureTracking | If true, calling variant() tracks an exposure event through the configured exposureTrackingProvider. If no exposure tracking provider is set, this configuration option does nothing. | true |
fetchOnStart | If true or null, always fetch remote evaluation variants on start. If false, never fetch on start. | true |
pollOnStart | Poll for local evaluation flag configuration updates every minute on start. | true |
automaticFetchOnAmplitudeIdentityChange | Only relevant if you use the initializeWithAmplitudeAnalytics initialization function to integrate with the Amplitude Analytics SDK. If true, any change to the user ID, device ID, or user properties from analytics triggers the Experiment SDK to fetch variants and update its cache. | false |
userProvider | An interface used to provide the user object to fetch() when called. | null |
exposureTrackingProvider | Implement and configure this interface to track exposure events through the experiment SDK, either automatically or explicitly. | null |
instanceName | Custom instance name for experiment SDK instance. The value of this field is case-sensitive. | null |
initialFlags | A JSON string representing an initial set of flag configurations to use for local evaluation. | undefined |
EU Data Center
If you use Amplitude's EU data center, configure the serverZone option on initialization to ServerZone.EU.
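For example, a minimal sketch of an EU initialization, assuming the builder exposes a serverZone setter matching the configuration option above:

```kotlin
// Sketch: route flag and variant requests to the EU data center.
// Assumes initialization inside an Application subclass, as in the quick start.
val client = Experiment.initializeWithAmplitudeAnalytics(
    this, // Application context
    "DEPLOYMENT_KEY",
    ExperimentConfig.builder()
        .serverZone(ServerZone.EU) // use EU fetch and flag endpoints
        .build()
)
```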
Integrations
If you use either the Amplitude or Segment Analytics SDK to track events into Amplitude, set up an integration on initialization. Integrations automatically implement the provider interfaces, enabling a more streamlined developer experience by making it easier to manage user identity and track exposure events.
Fetch
Fetches variants for a user and stores the results in the client for fast access. The function remotely evaluates the user for the flags associated with the deployment used to initialize the SDK client.
fun fetch(user: ExperimentUser? = null, options: FetchOptions? = null): Future<ExperimentClient>
| Parameter | Requirement | Description |
|---|---|---|
user | optional | Explicit user information to pass with the request to fetch. The SDK merges this user information with user information provided from integrations through the user provider, preferring properties passed explicitly to fetch() over provided properties. |
options | optional | Explicit flag keys to fetch. |
Amplitude Experiment recommends calling fetch() during application start up so that the user gets the most up-to-date variants for the application session. Furthermore, wait for the fetch request to return a result before rendering the user experience to avoid the interface "flickering".
try {
ExperimentUser user = ExperimentUser.builder()
.userId("user@company.com")
.userProperty("premium", true)
.build();
experiment.fetch(user).get();
} catch (Exception e) {
e.printStackTrace();
}
If you use an integration or a custom user provider, you can fetch without passing the user explicitly.
experiment.fetch(null);
Fetch when user identity changes
If you want the most up-to-date variants for the user, it's recommended that you call fetch() whenever the user state changes in a meaningful way. For example, if the user logs in and receives a user ID, or has a user property set which may affect flag or experiment targeting rules.
For user properties, Amplitude recommends passing new user properties explicitly to fetch() instead of relying on user enrichment before remote evaluation. Remote user-property sync through a separate system has no timing guarantees for fetch(), which can create a race condition.
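For instance, after a user logs in you might refetch with the new identity and any targeting-relevant properties passed explicitly. A sketch (the user ID and "plan" property are illustrative placeholders):

```kotlin
// Sketch: refetch variants after login so targeting reflects the new identity.
val user = ExperimentUser.builder()
    .userId("user@company.com")        // illustrative user ID
    .userProperty("plan", "premium")   // illustrative targeting property
    .build()
// Call on a background thread; avoid blocking the main thread on the future.
experiment.fetch(user)
```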
If fetch() times out (default 10 seconds) or fails for any reason, the SDK client returns and retries in the background with back-off. You may configure the timeout or disable retries in the configuration options during SDK client initialization.
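A sketch of tuning these options at initialization, assuming builder setters named after the configuration options (the values and the application variable are illustrative):

```kotlin
// Sketch: shorten the fetch timeout and disable background retries.
val experiment = Experiment.initializeWithAmplitudeAnalytics(
    application, // Application context
    "DEPLOYMENT_KEY",
    ExperimentConfig.builder()
        .fetchTimeoutMillis(5_000)    // fail fetch() after 5 seconds instead of 10
        .retryFetchOnFailure(false)   // don't retry in the background on failure
        .build()
)
```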
Start
Fetch vs start
Use start if you're using client-side local evaluation. If you're only using remote evaluation, call fetch instead of start.
Start the Experiment SDK to get flag configurations from the server and fetch remote evaluation variants for the user. The SDK is ready when the returned future resolves.
fun start(user: ExperimentUser? = null): Future<ExperimentClient>
| Parameter | Requirement | Description |
|---|---|---|
user | optional | Explicit user information to pass with the request to fetch variants. The SDK merges this user information with user information provided from integrations through the user provider, preferring properties passed explicitly to fetch() over provided properties. Also sets the user in the SDK for reuse. Defaults to null. |
Call start() when your application is initializing, after user information is available to fetch variants. The future resolves after loading local evaluation flag configurations and fetching remote evaluation variants.
Configure the behavior of start() by setting fetchOnStart in the SDK configuration on initialization to improve performance based on the needs of your application.
- If your application never relies on remote evaluation, set fetchOnStart to false to avoid increased startup latency caused by remote evaluation.
- If your application relies on remote evaluation, but not right at startup, you may set fetchOnStart to false and call fetch() and await the future separately.
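The second pattern might look like this sketch, assuming a fetchOnStart builder setter matching the configuration option above:

```kotlin
// Sketch: skip the remote fetch at startup, then fetch on demand later.
val experiment = Experiment.initializeWithAmplitudeAnalytics(
    application, // Application context
    "DEPLOYMENT_KEY",
    ExperimentConfig.builder()
        .fetchOnStart(false) // start() resolves without a remote fetch
        .build()
)
experiment.start().get() // loads local evaluation flag configurations only
// ... later, when remote variants are needed, off the main thread:
experiment.fetch().get()
```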
try {
    experiment.start().get();
} catch (Exception e) {
    e.printStackTrace();
}
Variant
Access a variant for a flag or experiment from the SDK client's local store.
Automatic exposure tracking
When you use an integration or set a custom exposure tracking provider, variant() tracks an exposure event through the tracking provider. To disable this functionality, configure automaticExposureTracking to be false, and track exposures manually using exposure().
fun variant(key: String, fallback: Variant? = null): Variant
| Parameter | Requirement | Description |
|---|---|---|
key | required | The flag key to identify the flag or experiment to access the variant for. |
fallback | optional | The value to return if the SDK found no variant for the given key. |
When determining which variant a user has been bucketed into, compare the variant value to a well-known string.
Variant variant = client.variant("<FLAG_KEY>");
if (variant.is("on")) {
// Flag is on
} else {
// Flag is off
}
Access the variant's payload
A variant may also include a dynamic payload of arbitrary data. Access the payload field from the variant object after checking the variant's value.
The payload on Android is of type Object (Any?) meaning you must cast the payload to the expected type. Cast JSON object and array types as org.json.JSONObject and org.json.JSONArray respectively.
For example, if the payload is {"key":"value"}:
Variant variant = experiment.variant("<FLAG_KEY>");
if (variant.is("on") && variant.payload != null) {
try {
String value = ((JSONObject) variant.payload).getString("key");
} catch (Exception e) {
e.printStackTrace();
}
}
A null variant value means that the user hasn't been bucketed into a variant. You may use the built-in fallback parameter to provide a variant to return if the store doesn't contain a variant for the given flag key.
Variant variant = experiment.variant("<FLAG_KEY>", new Variant("control"));
if (variant.is("control")) {
// Control
} else if (variant.is("treatment")) {
// Treatment
}
All
Access all variants stored by the SDK client.
fun all(): Map<String, Variant>
experiment.all();
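For example, a sketch that logs every stored variant, useful when debugging which flags the client currently holds:

```kotlin
// Sketch: inspect every variant currently stored by the client.
for ((flagKey, variant) in experiment.all()) {
    println("$flagKey -> ${variant.value}")
}
```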
Clear
Clear all variants in the cache and storage.
fun clear()
You can call clear after user logout to clear the variants in cache and storage.
experiment.clear();
Exposure
Manually track an exposure event for the current variant of the given flag key through the configured integration or custom exposure tracking provider. Generally used in conjunction with setting the automaticExposureTracking configuration option to false.
fun exposure(key: String)
| Parameter | Requirement | Description |
|---|---|---|
key | required | The flag key to identify the flag or experiment variant to track an exposure event for. |
Variant variant = experiment.variant("<FLAG_KEY>");
// Do other things...
experiment.exposure("<FLAG_KEY>");
if (variant.is("control")) {
// Control
} else if (variant.is("treatment")) {
// Treatment
}
Providers
Integrations
If you use the Amplitude or Segment analytics SDK alongside the Experiment Client SDK, Amplitude recommends you use an integration instead of implementing custom providers.
Provider implementations enable a more streamlined developer experience by making it easier to manage user identity and track exposure events.
User provider
The SDK client uses the user provider to access the most up-to-date user information only when needed (for example, when the SDK calls fetch()). The user provider is optional, but helps if you already have a user information store set up in your application. With a user provider, you don't need to manage two separate user info stores in parallel; separate stores can create divergent user state if the application updates one store but not the other.
interface ExperimentUserProvider {
fun getUser(): ExperimentUser
}
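A minimal sketch of an implementation backed by an application-level user store (UserStore and its fields are hypothetical stand-ins for your own state):

```kotlin
// Sketch: adapt a hypothetical in-app user store to the SDK's provider interface.
class CustomUserProvider(private val store: UserStore) : ExperimentUserProvider {
    override fun getUser(): ExperimentUser {
        return ExperimentUser.builder()
            .userId(store.userId)               // hypothetical field on your store
            .userProperty("plan", store.plan)   // hypothetical targeting property
            .build()
    }
}
```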
To use your custom user provider, set the userProvider configuration option with an instance of your custom implementation on SDK initialization.
ExperimentConfig config = ExperimentConfig.builder()
.userProvider(new CustomUserProvider())
.build();
ExperimentClient experiment = Experiment.initialize(
context, "<DEPLOYMENT_KEY>", config);
Exposure tracking provider
Amplitude highly recommends implementing an exposure tracking provider. Exposure tracking increases the accuracy and reliability of experiment results and improves visibility into which flags and experiments a user is exposed to.
interface ExposureTrackingProvider {
fun track(exposure: Exposure)
}
The implementation of track() should track an event with the event type $exposure and two event properties, flag_key and variant, corresponding to the two fields on the Exposure object argument. Finally, the tracked event must eventually end up in Amplitude Analytics in the same project as the deployment used to initialize the SDK client, for the same user that the SDK fetched variants for.
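A sketch of an implementation that forwards exposures to an analytics client (AnalyticsClient and its track signature are hypothetical stand-ins for whatever tracker your app uses):

```kotlin
// Sketch: forward $exposure events to a hypothetical analytics client.
class CustomExposureTrackingProvider(
    private val analytics: AnalyticsClient // hypothetical analytics wrapper
) : ExposureTrackingProvider {
    override fun track(exposure: Exposure) {
        analytics.track(
            "\$exposure", // event type expected by Amplitude Experiment
            mapOf(
                "flag_key" to exposure.flagKey,
                "variant" to exposure.variant
            )
        )
    }
}
```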
To use your custom exposure tracking provider, set the exposureTrackingProvider configuration option with an instance of your custom implementation on SDK initialization.
ExperimentConfig config = ExperimentConfig.builder()
.exposureTrackingProvider(new CustomExposureTrackingProvider())
.build();
ExperimentClient experiment = Experiment.initialize(
context, "<DEPLOYMENT_KEY>", config);
Bootstrapping
You may want to bootstrap the experiment client with an initial set of flags or variants when you get variants from an external source (for example, not from calling fetch() on the SDK client). Use cases include local evaluation or integration testing on specific variants.
Bootstrapping variants
To bootstrap the client with a predefined set of variants, set the flags and variants in the initialVariants configuration object, then set the source to Source.INITIAL_VARIANTS so that the SDK client prefers the bootstrapped variants over any previously fetched and stored variants for the same flags.
ExperimentConfig config = ExperimentConfig.builder()
.initialVariants(Map.of("<FLAG_KEY>", new Variant("<VARIANT>")))
.source(Source.INITIAL_VARIANTS)
.build();
ExperimentClient experiment = Experiment.initialize(
context, "<DEPLOYMENT_KEY>", config);
Bootstrapping flag configurations
You may choose to bootstrap the SDK with an initial set of local evaluation flag configurations using the initialFlags configuration. Experiment evaluates these when you call variant, unless you load an updated flag config or variant with start or fetch.
To download initial flags, use the evaluation flags API.
ExperimentConfig config = ExperimentConfig.builder()
.initialFlags("<FLAGS_JSON>")
.build();
ExperimentClient experiment = Experiment.initialize(
context, "<DEPLOYMENT_KEY>", config);
Custom logging
Control log verbosity with the logLevel configuration, or implement the LoggerProvider interface to integrate your own logger.
Log levels
- LogLevel.DISABLE - No logs.
- LogLevel.ERROR - Errors only (default).
- LogLevel.WARN - Errors and warnings.
- LogLevel.INFO - Errors, warnings, and info.
- LogLevel.DEBUG - Errors, warnings, info, and debug.
- LogLevel.VERBOSE - All messages including verbose details.
// Set log level to debug
val experiment = Experiment.initialize(
context,
"<DEPLOYMENT_KEY>",
ExperimentConfig.builder()
.logLevel(LogLevel.DEBUG)
.build()
)
Custom logger
Implement the LoggerProvider interface to use your own logging solution.
// Implement the LoggerProvider interface
class CustomLoggerProvider : LoggerProvider {
override fun verbose(msg: String) {
// Send verbose logs to your logging service
myLoggingService.verbose(msg)
}
override fun debug(msg: String) {
myLoggingService.debug(msg)
}
override fun info(msg: String) {
myLoggingService.info(msg)
}
override fun warn(msg: String) {
myLoggingService.warn(msg)
}
override fun error(msg: String) {
myLoggingService.error(msg)
}
}
// Initialize with custom logger
val experiment = Experiment.initialize(
context,
"<DEPLOYMENT_KEY>",
ExperimentConfig.builder()
.loggerProvider(CustomLoggerProvider())
.logLevel(LogLevel.WARN)
.build()
)
Debug flag (deprecated)
The debug configuration flag is deprecated. Use logLevel instead.
// Deprecated: Sets logLevel to Debug
val experiment = Experiment.initialize(
context,
"<DEPLOYMENT_KEY>",
ExperimentConfig.builder()
.debug(true)
.build()
)
// Preferred: Use logLevel instead
val experiment = Experiment.initialize(
context,
"<DEPLOYMENT_KEY>",
ExperimentConfig.builder()
.logLevel(LogLevel.DEBUG)
.build()
)