Experiment Evaluation Proxy
The Evaluation Proxy is a service that enables, enhances, and optimizes local evaluation running within your infrastructure.
- Enable local evaluation on unsupported platforms: Use remote Evaluation APIs and SDKs to run local evaluation in your infrastructure.
- Automatically track exposure events for local evaluations: Identical exposure events are deduplicated for 24 hours.
- Enhance local evaluation with large cohort targeting: Targeted cohorts are synced hourly to the Evaluation Proxy and added to the user before evaluation.
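To make the deduplication behavior concrete, here is a minimal sketch of 24-hour exposure deduplication keyed on user, flag, and variant. This is illustrative only: the proxy's actual implementation persists its state (for example in Redis), and the function and variable names here are made up for the example.

```python
import time
from typing import Dict, Optional, Tuple

# 24-hour deduplication window, matching the proxy's documented behavior.
DEDUPE_TTL_SECONDS = 24 * 60 * 60

# In-memory record of when each exposure was first tracked.
_seen: Dict[Tuple[str, str, str], float] = {}

def should_track_exposure(user_id: str, flag_key: str, variant: str,
                          now: Optional[float] = None) -> bool:
    """Return True if this exact exposure hasn't been tracked in the last 24 hours."""
    if now is None:
        now = time.time()
    key = (user_id, flag_key, variant)
    first_seen = _seen.get(key)
    if first_seen is not None and now - first_seen < DEDUPE_TTL_SECONDS:
        return False  # identical exposure within the window: deduplicate
    _seen[key] = now  # record (or refresh) the exposure
    return True
```

Any change to the user's variant produces a new key, so variant changes are still tracked immediately.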
Configuration
The evaluation proxy is configured either with a YAML file (recommended; it offers more configuration options) or with environment variables.
The default location for the configuration yaml file is /etc/evaluation-proxy-config.yaml. You may also configure the file location using the PROXY_CONFIG_FILE_PATH environment variable.
The file contains two top-level fields: projects (required) and configuration (optional).
Recommended configuration
Replace the fields in the configuration with values specific to your account/infrastructure.
projects:
  - apiKey: "YOUR API KEY"
    secretKey: "YOUR SECRET KEY"
    managementKey: "YOUR MANAGEMENT API KEY"

configuration:
  redis:
    uri: "YOUR REDIS URI" # e.g. "redis://localhost:6379"
Environment variable configuration supports only a single project, and is considered only if the configuration file is not found.
- `AMPLITUDE_API_KEY`: The project's API key.
- `AMPLITUDE_SECRET_KEY`: The project's secret key.
- `AMPLITUDE_EXPERIMENT_MANAGEMENT_API_KEY`: The Experiment management API key. Must be created for the same project as the configured API and secret key. Used to automatically access and update deployments used for the project.
- `AMPLITUDE_REDIS_URI`: Optional. The full URI to connect to Redis. Include the protocol, host, port, and optional username, password, and path (for example `redis://localhost:6379`).
- `AMPLITUDE_REDIS_PREFIX`: Optional. A prefix for all keys saved by the evaluation proxy. Defaults to `amplitude`.
- `AMPLITUDE_REDIS_USE_CLUSTER`: Optional. If `AMPLITUDE_REDIS_URI` is a cluster URL, set this to `true`. Defaults to `false`.
- `AMPLITUDE_REDIS_READ_FROM`: Optional. Read routing strategy for Redis Cluster: `REPLICA_PREFERRED` (default, prefer replicas) or `ANY` (read from any node in the cluster).
- `AMPLITUDE_SERVER_URL`: Optional. The server URL, including protocol and host, to fetch flags from.
- `AMPLITUDE_COHORT_SERVER_URL`: Optional. The server URL, including protocol and host, to download cohorts from.
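For example, the proxy could be started with environment variables alone via Docker. This is a sketch: the keys are placeholders, and a standalone Redis reachable at `redis://localhost:6379` is assumed.

```shell
docker run \
  -e AMPLITUDE_API_KEY="YOUR API KEY" \
  -e AMPLITUDE_SECRET_KEY="YOUR SECRET KEY" \
  -e AMPLITUDE_EXPERIMENT_MANAGEMENT_API_KEY="YOUR MANAGEMENT API KEY" \
  -e AMPLITUDE_REDIS_URI="redis://localhost:6379" \
  -p 3546:3546 \
  amplitudeinc/evaluation-proxy
```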
| Field | Type | Description |
|---|---|---|
| projects | array | Required. See projects. |
| configuration | object | Optional. See configuration. |
projects
A required array of objects with the following fields, all of which are required.
- `id`: The project's ID. Found in the project settings.
- `apiKey`: The project's API key.
- `secretKey`: The project's secret key.
- `managementKey`: The Experiment management API key. Must be created for the same project as the configured API and secret key. Used to automatically access and update deployments used for the project.
configuration
An optional object of extra configuration.
- `redis`: Optional (recommended). See redis. Configures the proxy to use Redis as persistent storage.
- `flagSyncIntervalMillis`: Optional. The polling interval, in milliseconds, to update flag configurations (default `10000`).
- `maxCohortSize`: Optional. The maximum size of targeted cohorts that the proxy can download (default `2147483647`).
- `serverUrl`: Optional. The server URL, including protocol and host, to fetch flags from (default `https://api.lab.amplitude.com`).
- `cohortServerUrl`: Optional. The server URL, including protocol and host, to download cohorts from (default `https://cohort.lab.amplitude.com`).
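Putting these fields together, a configuration file with every optional field set explicitly to its default might look like this. The key values are placeholders.

```yaml
projects:
  - id: "YOUR PROJECT ID"
    apiKey: "YOUR API KEY"
    secretKey: "YOUR SECRET KEY"
    managementKey: "YOUR MANAGEMENT API KEY"

configuration:
  redis:
    uri: "redis://localhost:6379"
  flagSyncIntervalMillis: 10000
  maxCohortSize: 2147483647
  serverUrl: "https://api.lab.amplitude.com"
  cohortServerUrl: "https://cohort.lab.amplitude.com"
```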
EU data residency
To use the evaluation proxy with the EU data center, set the serverUrl and cohortServerUrl configurations to hit the EU data center endpoints:
configuration:
  # Other configurations...
  serverUrl: "https://api.lab.eu.amplitude.com"
  cohortServerUrl: "https://cohort.lab.eu.amplitude.com"
redis
Configure the evaluation proxy to use Redis as persistent storage. Using Redis is highly recommended, and enables the evaluation proxy to run efficiently.
- `uri`: Required. The full URI to connect to Redis. Include the protocol, host, port, and optional username, password, and path.
- `readOnlyUri`: Optional. A separate URI for read-only replicas, used to scale high-volume reads across Redis read replicas.
- `useCluster`: Optional. If `uri` is a cluster URL, set this to `true`. Defaults to `false`.
- `readFrom`: Optional. Read routing strategy, cluster mode only: `REPLICA_PREFERRED` (default, prefer replicas) or `ANY` (read from any node in the cluster).
- `prefix`: Optional. A prefix for all keys saved by the evaluation proxy. Defaults to `amplitude`.
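For instance, a standalone Redis configuration using the fields above, with an explicit key prefix and the default read routing, might look like this (the URI is a placeholder):

```yaml
configuration:
  redis:
    uri: "redis://localhost:6379"
    useCluster: false
    readFrom: "REPLICA_PREFERRED"
    prefix: "amplitude"
```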
Deployment
The evaluation proxy is stateless, and should be deployed with multiple instances behind a load balancer for high availability and scalability.
For example, a Kubernetes deployment with more than one replica.
Kubernetes
Use the evaluation proxy Helm chart to install the proxy service on Kubernetes, or to generate the files needed to deploy the service manually. The repository also contains an example of running the evaluation proxy locally on Kubernetes using minikube.
Helm
Add helm repo
helm repo add \
evaluation-proxy-helm https://amplitude.github.io/evaluation-proxy-helm
Configure values.yaml
Configure the chart values. The recommended approach to configuring and installing the Helm chart is a values.yaml configuration file.
The chart's evaluationProxy value contents exactly match the evaluation proxy's configuration file fields.
evaluationProxy:
  # At least one project is required.
  projects:
    - apiKey: "YOUR API KEY"
      secretKey: "YOUR SECRET KEY"
      managementKey: "YOUR MANAGEMENT API KEY"
  configuration: {}
  # redis:
  #   uri: "redis://redis-master.default.svc.cluster.local:6379"
Install helm chart
helm install -f values.yaml \
evaluation-proxy evaluation-proxy-helm/evaluation-proxy
Docker
You may run the Docker image directly. First create a configuration file, then run the image, mounting the file as a volume at the expected path in the container.
docker run \
-v CONFIG_FILE_PATH:/etc/evaluation-proxy-config.yaml \
amplitudeinc/evaluation-proxy
Docker compose example
The evaluation-proxy GitHub repository also contains an example that uses Docker Compose to run the proxy alongside a local Redis image.
Evaluation
The Evaluation Proxy exposes remote Evaluation API and SDK endpoints to run local evaluation within your cluster. This is useful for platforms and languages that local evaluation SDKs don't support. As an added benefit, fetch requests made to the evaluation proxy can target cohorts of users, and assignment events are tracked to Amplitude automatically.
Send requests to the service over HTTP on port 3546.
Kubernetes
An Evaluation Proxy service deployed on Kubernetes (named evaluation-proxy) in the main namespace is reachable from within the cluster at: http://evaluation-proxy.main.svc.cluster.local:3546
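For example, a variant fetch could be made from another pod with curl. This is a sketch: the /sdk/v2/vardata path and Api-Key authorization follow the convention of Amplitude's remote Evaluation API, and the deployment key and user ID are placeholders.

```shell
curl --request GET \
  --url 'http://evaluation-proxy.main.svc.cluster.local:3546/sdk/v2/vardata?user_id=test-user' \
  --header 'Authorization: Api-Key YOUR_DEPLOYMENT_KEY'
```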
Best practices
Production deployment
Resource requirements
Configure each pod with 4 CPU cores and 9 GiB of RAM for a capacity of approximately 5,000 requests per second.
- Minimum replicas: Deploy at least two replicas for high availability.
- Horizontal scaling: Add pods to increase capacity. For example, four pods provide approximately 20,000 requests per second.
JVM heap for large cohorts
If you use local evaluation and download large cohorts (more than 5 million users), set a 6 GiB maximum JVM heap size.
- name: JAVA_TOOL_OPTIONS
  value: "-Xms128m -Xmx6144m"
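Combining the resource guidance above with the heap setting, the container spec might look like the following sketch. The container name is illustrative, and whether you set requests equal to limits is a deployment choice.

```yaml
containers:
  - name: evaluation-proxy
    image: amplitudeinc/evaluation-proxy
    resources:
      requests:
        cpu: "4"
        memory: 9Gi
      limits:
        cpu: "4"
        memory: 9Gi
    env:
      - name: JAVA_TOOL_OPTIONS
        value: "-Xms128m -Xmx6144m"
```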
Redis configuration
Option 1: Standalone with replicas
Use for small deployments.
configuration:
  redis:
    uri: "rediss://primary:6379"
    readOnlyUri: "rediss://replica:6379" # Optional, for high-volume read scaling.
Recommended specs: 12+ GiB memory, cache.m7g.xlarge or equivalent.
Option 2: Redis Cluster (recommended for high scale)
Use Redis Cluster when you have 10M+ users or many large cohorts.
configuration:
  redis:
    uri: "rediss://cluster:6379"
    useCluster: true
Recommended specs: 2-3 shards, 1-2 replicas per shard, 12+ GiB per node.
Cluster-based approach
Prefer a cluster-based approach as cohort size and count increase. Test with your data to finalize the configuration.
Performance characteristics
Latency
- Normal: 1-5ms.
- During background cohort refresh: p95 latency can reach up to 50ms. This typically occurs when you add or remove a significant number of users from a cohort, or attach new large cohorts to a flag.
Cold start
Initial startup time scales with the number and size of targeted cohorts.
Set the readiness probe initialDelaySeconds: 600 to prevent pod restarts during startup.
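A readiness probe with that delay might look like the following sketch. The httpGet path is a hypothetical health endpoint; substitute whichever endpoint your deployment actually exposes.

```yaml
readinessProbe:
  httpGet:
    path: /status # hypothetical health path; use the endpoint the proxy exposes
    port: 3546
  initialDelaySeconds: 600
  periodSeconds: 10
```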
Monitoring and alerts
The proxy exposes metrics at http://proxy:9090/metrics.
Configure these critical alerts:
# Error rate > 1%
rate(http_requests_total{status=~"5.."}[5m]) > 0.01
# P95 latency > 100ms sustained (ignore during cohort refresh spikes)
histogram_quantile(0.95, rate(http_request_duration_seconds_bucket[5m])) > 0.1
# Memory > 85%
container_memory_usage_bytes / container_spec_memory_limit_bytes > 0.85
# Redis errors
rate(redis_errors_total[5m]) > 0
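As a sketch, the error-rate expression above could be wrapped in a Prometheus alerting rule like this. The group name, alert name, and labels are illustrative.

```yaml
groups:
  - name: evaluation-proxy
    rules:
      - alert: EvaluationProxyErrorRate
        expr: rate(http_requests_total{status=~"5.."}[5m]) > 0.01
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "Evaluation Proxy 5xx error rate above 1%"
```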
Troubleshooting
| Issue | Solution |
|---|---|
| High latency (sustained >100ms) | Check Redis latency: redis-cli --latency -h <host> |
| Cohorts not loading | Verify managementKey, check logs for sync errors. |
| Proxy won't start | Verify Redis connectivity, check all API keys. |
| Cold start taking too long | Normal for large cohorts (5-10 min), increase readiness initialDelaySeconds. |
Capacity planning
| Load | Pods | Redis |
|---|---|---|
| <10k req/s | 2 | Standalone + replica |
| 10-20k req/s | 3-4 | Standalone or 2-shard cluster |
| >20k req/s | 5+ | 2-3 shard cluster |