Should You Adjust Your Core Metrics Over Time?

Should the core metric you defined when you first started building your product still be the same one, three, or ten years into its lifetime? On one hand, your product might change in its early phases, but its core purpose will probably remain consistent, and your users are still the same people attracted to that core value.

Or are you asking for trouble when you cling to your early metrics for too long?

Spoiler: yes, you are.

Continue reading

Analytics that Doesn’t Compromise on Data Integrity

At Amplitude, we believe first and foremost in providing the best product analytics. We find the right solution for our users and then figure out how to make it happen on the engineering side. This is in contrast to other analytics services or in-house analytics teams that make compromises on data integrity because it’s easier from a technical perspective. But one of the top reasons that people don’t use analytics to make decisions is that they don’t trust the data. And for good reason — those of us building analytics have historically chosen to sacrifice accuracy when it makes systems easier to build. However, we believe that the role of analytics is changing, and that analytics can and needs to be better than that.

Read the full post on our Engineering Blog to learn about 4 technical problems we solved to ensure data integrity >>

How Hackathons Can Drive Velocity and Disrupt Your Product Roadmap

Hackathons are a time-honored tradition at many tech companies. They're a chance for everyone to break free from their day-to-day work and innovate. Here at Amplitude, hackathons have been a great way to bypass the traditional product development process and disrupt our own roadmap, as well as an opportunity to foster cross-functional teamwork and relationships. We've taken to holding a hackathon at the start of every quarter, and we're fresh off our third with new ideas and ambitious projects.

Check out the highlights from our July Hackathon on the Engineering Blog >>

July Product Release: In-Line Behavioral Cohorts, New Group By Visualization, & More!

Happy July! Our Product Development team has been continuously shipping these past few weeks and we’re excited to share with you the new features and improvements we’ve made to Amplitude. In this product update, you will find information on:

  • New Features
    • Chart Transitions
    • In-Line Behavioral Cohorts
    • Percentiles in Formulas
    • Propose Charts to Dashboards
  • Feature Improvements
    • Auto-updated top values
    • Custom Events: View definitions in charts
    • Behavioral Cohorts: View definitions in charts
    • Event Segmentation: New group by visualization
    • Event Segmentation: Tabs in bottom module
    • Retention Analysis: Multiple return events
  • Resources
    • User Retention Bootcamp Webinar Series
  • SDK Updates

Continue reading

Want to Grow Your Business? Focus on User Retention

Do you care about growth? Growth of your business, growth of your customer base and growth of your revenue? Then you should care about user retention.

Without retention, your product is a leaky bucket: you can pour as much as you like into user acquisition and still end up with no long-term users, which means no sustainable growth and no means of generating revenue.
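The leaky-bucket effect is easy to see with a little arithmetic. In this toy simulation (the acquisition and retention numbers are illustrative assumptions, not real benchmarks), a product acquires 1,000 new users a month while only 40% of existing users stick around each month, so the active-user count stalls near acquisition / (1 − retention) ≈ 1,667 instead of compounding:

```python
# Toy "leaky bucket" model. The numbers are purely illustrative:
# 1,000 new users per month, 40% month-over-month retention.
def simulate(months, new_users_per_month=1000, monthly_retention=0.40):
    active = 0
    for _ in range(months):
        # Survivors from last month plus this month's new acquisitions.
        active = int(active * monthly_retention) + new_users_per_month
    return active

print(simulate(3))   # 1,560 -- the first few months still look like growth
print(simulate(24))  # 1,666 -- two years in, the total has flatlined
```

However much longer you run it, the count never escapes that plateau; only improving retention (not acquisition) raises the ceiling.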

Continue reading

New Engineering Post: Reducing Kafka costs with Zstandard

One of the major challenges technology startups face is scaling up effectively and efficiently. As your user base doubles or triples, how do you ensure that your services still run smoothly and deliver the same user experience? How do you maintain performance while staying cost-efficient? Here at Amplitude, our customers have tracked more events in the past year than in the first three years of our company combined. As we and our customers grow, we need to keep providing the same, if not better, service across our platform. Previously, we explained how Nova, our distributed query engine, searches through billions of events and answers 95% of queries in less than 3 seconds. In this post, we focus on the data processing pipeline that ingests and prepares event data for Nova, and explain how we stay cost-effective while our event volume multiplies.
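For context on the technique in the title: Zstandard (zstd) is one of the compression codecs Kafka supports natively (since Kafka 2.1), and producers opt into it with a single setting. The sketch below is a minimal illustration using confluent-kafka / librdkafka-style config keys, not Amplitude's actual pipeline configuration, and the broker address is a placeholder:

```python
# Illustrative Kafka producer settings (confluent-kafka / librdkafka key
# names). This is a sketch of enabling Zstandard, not a production config.
producer_config = {
    "bootstrap.servers": "localhost:9092",  # placeholder broker address
    "compression.type": "zstd",  # Zstandard: typically a better compression
                                 # ratio than gzip/snappy/lz4 at comparable
                                 # or lower CPU cost
    "linger.ms": 50,             # wait briefly to build larger batches;
                                 # compression is more effective on
                                 # bigger payloads
}
```

Because Kafka compresses whole record batches, a modest `linger.ms` often pays for itself: larger batches give the codec more redundancy to exploit, which is where the storage and network savings come from.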

Check out the full post on our Engineering Blog >>