One of the major challenges technology startups face is scaling up effectively and efficiently. As your user base doubles or triples, how do you ensure your services still run smoothly and deliver the same user experience? How do you maintain performance while staying cost-efficient? Here at Amplitude, our customers have tracked more events in the past year than in the first three years of our company combined. As we and our customers grow, we need to keep providing the same, if not better, service across our platform. Previously, we explained how Nova, our distributed query engine, searches through billions of events and answers 95% of queries in less than 3 seconds. In this blog post, we will focus on the data processing pipeline that ingests and prepares event data for Nova, and explain how we stay cost-effective while our event volume multiplies.
Reducing Kafka Costs with Zstandard
See how Amplitude's engineering team used Zstandard to scale operations efficiently and effectively.
July 12, 2017
Daniel Jih
Software Engineer II
About the Author
Daniel is on Amplitude's back-end engineering team, where he manages Amplitude's client SDKs and works on ingestion processes. He graduated from Stanford University with a master's degree in computer science.