With Amplitude’s Amazon S3 Import, you can import and mutate event data and sync user or group properties into your Amplitude projects from an AWS S3 bucket. Use Amazon S3 Import to backfill large amounts of existing data, connect existing data pipelines to Amplitude, and ingest large volumes of data when you need high throughput and latency is less of a concern.
During setup, you configure conversion rules to control how events are instrumented. After Amazon S3 Import is set up and enabled, Amplitude's ingestion service continuously discovers data files in your S3 buckets, then converts and ingests events.
Depending on your company's network policy, you may need to add these IP addresses to your allowlist so that Amplitude's servers can access your S3 buckets:
Region | IP Addresses |
---|---|
US | 52.33.3.219, 35.162.216.242, 52.27.10.221 |
EU | 3.124.22.25, 18.157.59.125, 18.192.47.195 |
Before you start, make sure you meet the following prerequisites:

- Set a unique `insert_id` for each row. This helps prevent data duplication if unexpected issues arise. For more information, see Deduplication with `insert_id`.
- The files you want to send to Amplitude must follow some basic requirements.
- For mutation data, use one of the supported mutation types: `INSERT`, `UPDATE`, or `DELETE`. If you don't provide a mutation type, the process defaults to `UPDATE`.
Amplitude processes files exactly once. You can’t edit files after you upload them to the S3 bucket. If you edit a file after you upload it, there’s no guarantee that Amplitude processes the most recent version of the file.
After an S3 import source ingests a file, the same source doesn't process the file again, even if the file receives an update.
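Because each file is ingested exactly once, publish corrections as new, uniquely named objects rather than editing existing ones. A minimal sketch of building collision-free keys (the key pattern is an illustrative assumption, not an Amplitude requirement):

```python
import gzip
import json
import uuid
from datetime import datetime, timezone

def build_object_key(prefix: str = "events") -> str:
    """Build a unique S3 key so re-uploads never overwrite an already-ingested file."""
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d-%H%M%S")
    return f"{prefix}_{stamp}_{uuid.uuid4().hex[:8]}.json.gz"

def serialize_events(events: list[dict]) -> bytes:
    """Gzip newline-delimited JSON, one event per line."""
    body = "\n".join(json.dumps(e) for e in events)
    return gzip.compress(body.encode("utf-8"))

# Upload with your S3 client of choice, for example boto3:
# s3.put_object(Bucket="my-bucket", Key=build_object_key(), Body=serialize_events(events))
```

Writing a corrected batch to a fresh key lets the source pick it up as a new file instead of silently skipping an edited one.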
For each Amplitude project, AWS S3 import can ingest:
insert_id

For ingestion syncs only, Amplitude uses a unique identifier, `insert_id`, to match against incoming events and prevent duplicates. If, within the same project, Amplitude receives an event whose `insert_id` and `device_id` values match the `insert_id` and `device_id` of a different event received within the last 7 days, Amplitude drops the most recent event.

Amplitude recommends that you set a custom `insert_id` for each event to prevent duplication. To set a custom `insert_id`, create a field that holds unique values, like random alphanumeric strings, in your dataset. Map the field as an extra property named `insert_id` in the guided converter configuration.
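One way to populate such a field is to derive a deterministic ID from the row's contents, so a re-exported row keeps the same `insert_id` and deduplicates cleanly. A sketch (the field names in the example row are illustrative):

```python
import hashlib
import json

def make_insert_id(row: dict) -> str:
    """Derive a stable insert_id by hashing the canonical JSON form of the row."""
    canonical = json.dumps(row, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:32]

row = {"user_id": "u123", "event_type": "sign_up", "time": 1713744000000}
row["insert_id"] = make_insert_id(row)
```

Hashing sorted keys means the ID doesn't depend on column order in your export, while any change to the row's values produces a different ID.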
Follow these steps to give Amplitude read access to your AWS S3 bucket.
Create a new IAM role, for example `AmplitudeReadRole`.
Go to Trust Relationships for the role and add Amplitude’s account to the trust relationship policy, using the following example, so that Amplitude can assume the role.

- `amplitude_account`: `358203115967` for the Amplitude US data center, or `202493300829` for the Amplitude EU data center.
- `external_id`: a unique identifier used when Amplitude assumes the role. You can generate it with help from third-party tools. An example external ID is `vzup2dfp-5gj9-8gxh-5294-sd9wsncks7dc`.
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<amplitude_account>:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "sts:ExternalId": "<external_id>"
        }
      }
    }
  ]
}
```
Create a new IAM policy, for example `AmplitudeS3ReadOnlyAccess`. Use the entire example code that follows, and be sure to replace the `<>` placeholders with your own values.

If your files live under a `filePrefix`, scope the policy to that prefix. For folders, make sure the prefix ends with `/`, for example `folder/`. For the root folder, keep the prefix empty.

Example 1: IAM policy without a prefix:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowListingOfDataFolder",
      "Action": ["s3:ListBucket"],
      "Effect": "Allow",
      "Resource": ["arn:aws:s3:::<bucket_name>"],
      "Condition": {
        "StringLike": {
          "s3:prefix": ["*"]
        }
      }
    },
    {
      "Sid": "AllowAllS3ReadActionsInDataFolder",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": ["arn:aws:s3:::<bucket_name>/*"]
    },
    {
      "Sid": "AllowUpdateS3EventNotification",
      "Effect": "Allow",
      "Action": ["s3:PutBucketNotification", "s3:GetBucketNotification"],
      "Resource": ["arn:aws:s3:::<bucket_name>"]
    }
  ]
}
```
Example 2: IAM policy with a prefix. For a folder, make sure the prefix ends with `/`, for example `folder/`:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowListingOfDataFolder",
      "Action": ["s3:ListBucket"],
      "Effect": "Allow",
      "Resource": ["arn:aws:s3:::<bucket_name>"],
      "Condition": {
        "StringLike": {
          "s3:prefix": ["<prefix>*"]
        }
      }
    },
    {
      "Sid": "AllowAllS3ReadActionsInDataFolder",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": ["arn:aws:s3:::<bucket_name>/<prefix>*"]
    },
    {
      "Sid": "AllowUpdateS3EventNotification",
      "Effect": "Allow",
      "Action": ["s3:PutBucketNotification", "s3:GetBucketNotification"],
      "Resource": ["arn:aws:s3:::<bucket_name>"]
    }
  ]
}
```
Go to Permissions for the role and attach the policy created in step 3 to the role.
Complete the following steps to configure the Amazon S3 source:
In Amplitude, create the S3 Import source.
Amplitude recommends that you create a test project or development environment for each production project to test your instrumentation.
To create the data source in Amplitude, gather information about your S3 bucket:
When you have your bucket details, create the Amazon S3 Import source.
In Amplitude Data, click Catalog and select the Sources tab.
In the Warehouse Sources section, select Amazon S3, then click Next. If this source doesn’t appear in the list, contact your Amplitude Solutions Architect.
Complete the Configure S3 location section on the Set up S3 Bucket page:

- Enter the name of your bucket, for example `com-amplitude-vacuum-<customername>`. This tells Amplitude where to look for your files.
- Optional: enable S3 Event Notification.
Next, create your converter configuration.

Amplitude continuously scans buckets to discover new files as they're added. If you add new fields or change the source data format, update your converter configuration.

The converter configuration gives the S3 vacuum this information:

- Which files to process, identified by a file name pattern such as `\w+\_\d{4}-\d{2}-\d{2}.json.gz`.
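You can check which object keys a pattern like the one above would match before enabling the source. A sketch in Python's regex dialect (assuming the pattern must match the whole file name, and that folder prefixes are stripped first):

```python
import re

# File-name filter similar to the pattern above: name_YYYY-MM-DD.json.gz
FILE_PATTERN = re.compile(r"\w+_\d{4}-\d{2}-\d{2}\.json\.gz")

def should_ingest(key: str) -> bool:
    """Return True when the S3 object key's file name matches the filter."""
    file_name = key.rsplit("/", 1)[-1]  # drop any folder prefix
    return FILE_PATTERN.fullmatch(file_name) is not None

print(should_ingest("exports/events_2024-04-22.json.gz"))  # matches the date pattern
print(should_ingest("exports/events.json.gz"))             # no date component, rejected
```

Validating the filter locally avoids a source that silently skips files because of a naming mismatch.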
You can import event, user property, and group property data.
Data Type | Description |
---|---|
Event | Includes user actions associated with either a user ID or a device ID and may also include event properties. |
User Properties | Includes dictionaries of user attributes you can use to segment users. Each property is associated with a user ID. |
Group Properties | Includes dictionaries of group attributes that apply to a group of users. Each property is associated with a group name. |
Profiles | Includes dictionaries of properties that relate to a user profile. Profiles display the most current data synced from your warehouse, and are associated with a user ID. |
Select from the following strategies, depending on your data type selection.
Strategy | Description |
---|---|
Mirror Sync | Directly mirrors the data in S3 with INSERT , UPDATE , and DELETE operations. To keep the data in sync with your source of truth, this strategy deactivates Amplitude's enrichment services like user property syncing, group property syncing, and taxonomy validation. |
Append Only Sync | Imports new rows with Amplitude's standard enrichment services. |
See the following table to understand which data types are compatible with which import strategies.
Data type | Supported import strategies |
---|---|
Event | Mirror and Append Only |
User properties | Append Only |
Group properties | Append Only |
Profiles | Mirror |
When you use mutations, Amplitude doesn't merge `INSERT`, `UPDATE`, or `DELETE` operations into per-row mutations based on your sync frequency. This means that when more than one operation is made to an event during the sync window, the operations may apply out of order. Each operation also counts toward your event volume, so you may use your existing event volume more quickly than you otherwise would. Contact sales to purchase additional event volume.
Find a list of supported fields for events in the HTTP V2 API documentation and for user properties in the Identify API documentation. Add any columns not in those lists to either `event_properties` or `user_properties`; otherwise, they're ignored.
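A pre-processing step can nest any columns that aren't standard Amplitude fields under `event_properties` before upload. A sketch (the standard-field set below is abbreviated and illustrative; consult the HTTP V2 API documentation for the full list):

```python
# Subset of standard event fields; anything else gets nested as an event property.
STANDARD_FIELDS = {"user_id", "device_id", "event_type", "time", "insert_id"}

def nest_extra_columns(row: dict) -> dict:
    """Move non-standard columns under event_properties so they aren't ignored."""
    event = {k: v for k, v in row.items() if k in STANDARD_FIELDS}
    extras = {k: v for k, v in row.items() if k not in STANDARD_FIELDS}
    if extras:
        event["event_properties"] = extras
    return event

row = {"user_id": "u123", "event_type": "purchase", "plan": "pro", "seats": 4}
event = nest_extra_columns(row)
# event["event_properties"] == {"plan": "pro", "seats": 4}
```

You can achieve the same result inside Amplitude by mapping each extra column as a property in the guided converter configuration instead of reshaping files yourself.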
After you add all the fields you wish to import, view samples of this configuration in the Data Preview section. Data Preview automatically updates as you include or remove fields and properties. In Data Preview, you can look at a few sample records based on the source records along with how that data is imported into Amplitude. This ensures that you are bringing in all the data points you need into Amplitude. You can look at 10 different sample source records and their corresponding Amplitude events.
The group properties import feature requires that groups are set in the HTTP API event format. The converter expects a `groups` object and a `group_properties` object.
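A hypothetical sketch of an event carrying both objects follows; the group type `company`, its value, and the exact nesting of `group_properties` are assumptions for illustration only, so confirm the precise shape against the HTTP API event format documentation:

```json
{
  "user_id": "u123",
  "event_type": "seat_added",
  "groups": {
    "company": "Acme Corp"
  },
  "group_properties": {
    "plan": "enterprise",
    "seats": 50
  }
}
```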
The converter file tells Amplitude how to process the ingested files. Create it in two steps: first, configure the compression type, file name, and escape characters for your files.
Then use JSON to describe the rules your converter follows.
The converter language describes extraction of a value given a JSON element. You specify this with a SOURCE_DESCRIPTION, which includes:
See the Converter Configuration reference for more help.
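As a purely hypothetical sketch of the idea, not the actual converter schema, a converter configuration pairs file-level settings with per-field extraction rules; the `fields` key and path syntax below are invented for illustration, so take the real field names from the Converter Configuration reference:

```json
{
  "fileType": "json",
  "compressionType": "gzip",
  "fields": {
    "event_type": { "path": ["action"] },
    "user_id": { "path": ["user", "id"] }
  }
}
```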
When your converter is configured, click Save and Enable to enable the source.
April 22nd, 2024