Data Subject Access Request API

Considerations

  • Expect about one file per month per app for which the user has data.
  • Each download URL requires the same auth credentials to access.
  • Because the API is asynchronous, you must poll to check the status of the request. Refer to the Rate limits section to select the appropriate polling rate.

This API only returns the behavioral data associated with the user. If you connected Support and Service sources (such as Zendesk or Intercom) under the AI Feedback sources and enabled email mapping, find the data associated with the user in the User Profile on the AI Feedback tab. To open the User Profile, search for the user on the Users -> User Profile page.

Asynchronous operation

To handle large data volumes, this API works asynchronously. Getting user data happens in three steps:

  1. Make a POST request, which returns a requestId.
  2. Make a GET request using the requestId to check the status of the job.
  3. After the job finishes, make a GET request to fetch a list of URLs to get the data files from.

Output

Each file is gzipped, and the contents adhere to the following rules:

  • One line per event
  • Each line is a JSON object
  • No order guarantee

Example Output

plaintext
{"amplitude_id":123456789,"app":12345,"event_time":"2020-02-15 01:00:00.123456","event_type":"first_event","server_upload_time":"2020-02-18 01:00:00.234567"}
{"amplitude_id":123456789,"app":12345,"event_time":"2020-02-15 01:00:11.345678","event_type":"second_event","server_upload_time":"2020-02-18 01:00:11.456789"}
{"amplitude_id":123456789,"app":12345,"event_time":"2020-02-15 01:02:00.123456","event_type":"third_event","server_upload_time":"2020-02-18 01:02:00.234567"}
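Because each output file is gzipped JSON Lines, it can be parsed one event at a time. A minimal sketch (the helper name and path are illustrative, not part of the API):

```python
import gzip
import json

def read_dsar_file(path):
    """Parse one DSAR output file: gzipped, one JSON event per line."""
    events = []
    with gzip.open(path, mode="rt", encoding="utf-8") as f:
        for line in f:
            events.append(json.loads(line))
    return events
```

Because there is no order guarantee, sort by event_time yourself if you need chronological order.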

Rate limits

All DSAR endpoints share a budget of 14,400 "cost" per hour. POST requests cost 8 and GET requests cost 1. Requests beyond this budget receive a 429 response code.

For each POST, expect one output file per month per project the user has events for.

For example, if you fetch 13 months of data for a user with data in two projects, expect about 26 files.

To get data for 40 users per hour, you can spend 14400 / 40 = 360 cost per request. Conservatively allocating 52 GETs for output files (twice the computed amount) and 8 for the initial POST, you can poll for the status of the request 360 - 8 - 52 = 300 times.

Given the 3 day SLA for results (4,320 minutes), this allows for checking the status every 4320 / 300 ~= 15 minutes over 3 days. A practical use might be a service that runs every 20 minutes, posting 20 new requests and checking the status of all outstanding requests.
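The budgeting arithmetic above can be expressed as a small helper (the constant and function names are illustrative):

```python
HOURLY_BUDGET = 14_400  # shared DSAR cost budget per hour
POST_COST = 8           # cost of the initial POST
GET_COST = 1            # cost of each GET

def polls_available(users_per_hour, expected_files):
    """Status polls left per request after paying for the POST and a
    conservative 2x allocation of GETs for the output files."""
    per_request = HOURLY_BUDGET // users_per_hour
    return per_request - POST_COST - 2 * expected_files * GET_COST

# 40 users/hour with ~26 expected files each leaves 300 status polls
# per request, i.e. one poll roughly every 15 minutes over the 3-day SLA.
```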

SLAs

  • Request jobs complete within 3 days.
  • Request results expire in 2 days.
  • Users with more than 100k events per month aren't supported.

Example client implementation

python
import sys
import time

import requests
from requests.auth import HTTPBasicAuth

base_url = 'https://amplitude.com/api/2/dsar/requests'
payload = {
    "amplitudeId": AMPLITUDE_ID,
    "startDate": "2019-03-01",
    "endDate": "2020-04-01"
}
headers = {
    'Accept': 'application/json',
    'Content-Type': 'application/json'
}
auth = HTTPBasicAuth(API_KEY, SECRET_KEY)

# Step 1: create the request and capture its requestId.
r = requests.post(base_url, headers=headers, auth=auth, json=payload)
request_id = r.json().get('requestId')

# Step 2: poll until the job finishes (or fails).
time.sleep(POLL_DELAY)
while True:
    r = requests.get(f'{base_url}/{request_id}', auth=auth, headers=headers)
    response = r.json()
    if response.get('status') == 'failed':
        sys.exit(1)
    if response.get('status') == 'done':
        break
    time.sleep(POLL_INTERVAL)

# Step 3: download each output file.
for url in response.get('urls'):
    r = requests.get(url, headers=headers, auth=auth, allow_redirects=True)
    index = url.split('/')[-1]
    filename = f'{AMPLITUDE_ID}-{index}.gz'
    with open(f'{OUTPUT_DIR}/{filename}', 'wb') as f:
        f.write(r.content)

Create a request for data

curl
curl --location --request POST 'https://amplitude.com/api/2/dsar/requests' \
--header 'Accept: application/json' \
--header 'Content-Type: application/json' \
-u '{org-api-key}:{org-secret-key}' \
--data-raw '{
"userId": "12345",
"startDate": "2020-04-24",
"endDate": "2022-02-20"
}'

Request body

| Name | Description |
| --- | --- |
| userId | Required if amplitudeId isn't set. The user ID of the user to request data for. |
| amplitudeId | Required if userId isn't set. Integer. The Amplitude ID of the user to request data for. |
| startDate | Required. Date. The start date for the data request. |
| endDate | Required. Date. The end date for the data request. |

Response

When successful, the call returns a 202 Accepted response and requestId. Use the requestId to poll the job status.

json
{
    "requestId": 53367
}

Get request status

Poll the data request job to get its status.

bash
curl --location --request GET 'https://amplitude.com/api/2/dsar/requests/{request-id}' \
--header 'Accept: application/json' \
-u '{org-api-key}:{org-secret-key}'

Path variables

| Name | Description |
| --- | --- |
| requestId | Required. The request ID retrieved with the create data request call. |

Response

| Name | Type | Description |
| --- | --- | --- |
| requestId | Integer | The ID of the request. |
| userId | String | The user ID of the user to request data for. |
| amplitudeId | Integer | The Amplitude ID of the user to request data for. |
| startDate | Date | The start date for the data request. |
| endDate | Date | The end date for the data request. |
| status | String | One of: staging (not started), submitted (in progress), done (job completed and download URLs populated), or failed (job failed, may need to retry). |
| failReason | String | If the job failed, contains information about the failure. |
| urls | Array of strings | A list of download URLs for the data. |
| expires | Date | The date that the output download links expire. |

Get output files

Download a returned output file.

The download link is valid for two days. Most API clients automatically follow the link and download the data from S3. If your client doesn't download the file automatically, access the link manually using your org API key as the username and your org secret key as the password.

curl
curl --location --request GET 'https://analytics.amplitude.com/api/2/dsar/requests/:request_id/outputs/:output_id' \
-u '{org-api-key}:{org-secret-key}'

Path variables

| Name | Description |
| --- | --- |
| request_id | Required. Integer. The ID of the request. Returned by the create data request call. |
| output_id | Required. Integer. The ID of the output to download. An integer at the end of the URL returned in the status response after the job finishes. |
