Privacy and Security

Amplitude's Agents prioritize privacy and security by design. Agents operate inside authenticated Amplitude accounts, isolate each customer's data and agent context, and ensure customer data is never used to train third‑party models. When features use an external Large Language Model (LLM), Amplitude enforces contractual no‑training controls, and the broader platform remains covered by Amplitude's Data Processing Addendum (DPA) and independent audits. Customers who aren't comfortable with LLMs can opt out of Amplitude's AI features; opting out disables all Amplitude AI features in your account, including Agents. For more information about security and privacy for Amplitude AI features, go to Amplitude's Trust, Security, and Privacy page.

How Agents use data

Amplitude Agents pull context from assets that already live in your Amplitude account (for example, analytics charts, experiments, session replays, or guides/surveys) to answer questions and recommend actions. Agents use only the minimum context needed for each task and keep that context scoped to your organization’s tenant.
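
Amplitude doesn't publish its internal context-selection logic, but a minimal sketch of the idea, with entirely hypothetical names (`AgentContextRequest`, `selectContext`), might look like this:

```ts
// Hypothetical sketch: context lookups are keyed to a single org (tenant)
// and name only the asset types the current task needs.
type AssetType = "chart" | "experiment" | "session_replay" | "guide" | "survey";

interface AgentContextRequest {
  orgId: string;             // tenant boundary: context never crosses orgs
  neededAssets: AssetType[]; // the minimum context for this task, nothing more
}

// Illustrative only: return asset IDs scoped to the requesting org.
function selectContext(req: AgentContextRequest, catalog: Map<string, AssetType>): string[] {
  return [...catalog.entries()]
    .filter(([id, type]) => id.startsWith(`${req.orgId}:`) && req.neededAssets.includes(type))
    .map(([id]) => id);
}
```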

Tenant isolation and access boundaries

Amplitude isolates each customer’s agent context and keeps it separate from other customers. Agents operate within your authenticated Amplitude organization and honor your existing access controls and data governance, including which projects, dashboards, and datasets users can access.
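
Amplitude doesn't expose this enforcement layer, but conceptually each agent tool call is gated on the requesting user's existing permissions. A hypothetical sketch (`assertCanRead` is not a real API):

```ts
// Hypothetical sketch: an agent inherits the caller's access. A tool call
// checks the user's existing project permissions before reading any asset.
interface AgentUser {
  id: string;
  accessibleProjects: Set<string>; // populated from your org's access controls
}

function assertCanRead(user: AgentUser, projectId: string): void {
  if (!user.accessibleProjects.has(projectId)) {
    // No permission, no data: the agent fails the same way the user would.
    throw new Error(`User ${user.id} cannot access project ${projectId}`);
  }
}
```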

Third‑party LLM usage

Third‑party LLMs power Agents, including OpenAI models, Anthropic's Claude (through AWS Bedrock and Google Vertex AI), and Google Gemini. Amplitude's contracts with these providers prohibit them from using your data to train or improve their models. Amplitude instructs the models with Amplitude‑authored prompts and best practices, and Agents build customer‑specific context that stays within your tenant. Amplitude keeps that context separate and prevents any cross‑customer use.
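
As a rough illustration of that separation (all names here are hypothetical, not Amplitude's implementation), the Amplitude‑authored prompt is static while customer context is attached per request and scoped to one tenant:

```ts
// Hypothetical sketch: the system prompt is Amplitude-authored and shared;
// customer-specific context is assembled per request for exactly one tenant.
const AMPLITUDE_SYSTEM_PROMPT = "You are an Amplitude agent..."; // authored by Amplitude

function buildModelRequest(orgId: string, question: string, tenantContext: string[]) {
  return {
    system: AMPLITUDE_SYSTEM_PROMPT,
    messages: [{
      role: "user",
      // Only this org's context is ever included in the request.
      content: [`Org: ${orgId}`, ...tenantContext, question].join("\n"),
    }],
  };
}
```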

How agent data flows

Agent interactions start in the Agents UI and route through Amplitude's internal services. These services invoke Amplitude tools, such as analytics query endpoints, and, when applicable, call an LLM endpoint. Outputs return to the authenticated user and can trigger Slack or email notifications, depending on your configuration.
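
A hypothetical sketch of that flow (these function names are illustrative; Amplitude's internal services aren't public):

```ts
// Illustrative stubs for the stages described above.
declare function queryAnalytics(userId: string, q: string): Promise<string[]>; // Amplitude tool
declare function needsModel(q: string): boolean;
declare function callLlmEndpoint(q: string, ctx: string[]): Promise<string>;
declare function summarize(ctx: string[]): string;
declare function notifyIfConfigured(userId: string, msg: string): Promise<void>; // Slack/email

// UI -> internal services -> Amplitude tools -> optional LLM call -> user.
async function handleAgentRequest(userId: string, question: string): Promise<string> {
  const context = await queryAnalytics(userId, question);
  const answer = needsModel(question)
    ? await callLlmEndpoint(question, context) // only when applicable
    : summarize(context);
  await notifyIfConfigured(userId, answer);    // per your configuration
  return answer;                               // returned to the authenticated user
}
```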

Security and Privacy

Amplitude applies the same enterprise-grade security, privacy, and governance controls to Agents that it uses across the rest of the platform. The LLM providers behind Agents, including OpenAI, Anthropic (through AWS Bedrock and Google Vertex AI), and Google, sign enterprise-level agreements with Amplitude and must meet security measures at least as protective as Amplitude's.

Amplitude’s DPA covers Agents, ensuring that any processing of your data by an agent complies with applicable privacy laws. Amplitude maintains SOC 2 Type 2, ISO 27001, and ISO 27018 certifications that cover the platform services Agents use.

Data retention and deletion

Amplitude has implemented zero data retention protocols with the LLM providers that power Agents. These protocols prevent third-party LLMs from retaining your data outside of Amplitude’s secure AWS environment.
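
Zero retention is a contractual term rather than a per‑request flag. As an illustration of the provider boundary, here's a minimal call to Claude through AWS Bedrock (one of the providers named above), which by default doesn't store or train on prompts and completions; the model ID and prompt are placeholders:

```ts
import { BedrockRuntimeClient, InvokeModelCommand } from "@aws-sdk/client-bedrock-runtime";

// Bedrock doesn't store or train on prompts/completions by default;
// zero-retention terms like Amplitude's are contractual, not a request flag.
const client = new BedrockRuntimeClient({ region: "us-east-1" });

const response = await client.send(new InvokeModelCommand({
  modelId: "anthropic.claude-3-5-sonnet-20240620-v1:0", // placeholder model ID
  contentType: "application/json",
  accept: "application/json",
  body: JSON.stringify({
    anthropic_version: "bedrock-2023-05-31",
    max_tokens: 512,
    messages: [{ role: "user", content: "Summarize last week's signups." }],
  }),
}));

console.log(JSON.parse(new TextDecoder().decode(response.body)));
```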

Customers control data retention within their Amplitude account and can delete data from their Amplitude account at any time using Amplitude’s deletion tools. These platform controls extend to Agents so customers can align their Agent usage with their internal data governance.
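
For example, Amplitude's User Privacy API accepts deletion requests for end users. A minimal sketch (field names follow Amplitude's deletion docs, but verify against the current API reference; the credentials and user ID are placeholders):

```ts
// Minimal sketch of a deletion request via Amplitude's User Privacy API.
// Authentication uses the project's API key and secret key (basic auth).
const apiKey = process.env.AMPLITUDE_API_KEY!;
const secretKey = process.env.AMPLITUDE_SECRET_KEY!;

const res = await fetch("https://amplitude.com/api/2/deletions/users", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: "Basic " + Buffer.from(`${apiKey}:${secretKey}`).toString("base64"),
  },
  body: JSON.stringify({
    user_ids: ["user-123"],           // end users to delete
    requester: "privacy@example.com", // recorded for the audit trail
  }),
});
console.log(res.status, await res.json());
```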

EU data residency

If Amplitude provisions your organization in Amplitude’s EU data center, your data never leaves the EU when you use Amplitude’s AI features, including Agents. When you use Agents, Amplitude processes the data entirely within the EU and doesn't transfer it to LLM endpoints in the United States.
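
On the ingestion side, for example, Amplitude's SDKs expose a server zone option that routes data to the EU data center. A minimal sketch with the Browser SDK (check the current SDK reference for your version):

```ts
import * as amplitude from "@amplitude/analytics-browser";

// Route ingestion to Amplitude's EU data center so event data is
// sent to and processed in the EU residency environment.
amplitude.init("YOUR_API_KEY", { serverZone: "EU" });
```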
