New Engineering Post: Reducing Kafka costs with Zstandard

Daniel Jih

Software Engineer II

< 1 minute read

Posted on July 12, 2017

One of the major challenges technology startups face is scaling up effectively and efficiently. As your user base doubles or triples, how do you ensure your services still run smoothly and deliver the same user experience? How do you maintain performance while staying cost-efficient? Here at Amplitude, our customers have tracked more events in the past year than in the first three years of our company combined. As we and our customers grow, we need to keep providing the same, if not better, service across our platform. Previously, we explained how Nova, our distributed query engine, searches through billions of events and answers 95% of queries in under 3 seconds. In this post, we focus on the data processing pipeline that ingests and prepares event data for Nova, and explain how we stay cost-effective while our event volume multiplies.
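The full post covers the details of applying Zstandard compression in our Kafka pipeline. As a rough illustration of the general idea of compressing event batches before they hit the message broker, here is a minimal sketch using Python's standard-library `zlib` as a stand-in (Zstandard itself typically compresses both faster and tighter than zlib; the event shape below is purely hypothetical):

```python
import json
import zlib

# Hypothetical batch of analytics events: repetitive JSON of the kind an
# ingestion pipeline produces, which compresses very well.
events = [
    {"user_id": i % 100, "event_type": "page_view", "platform": "web"}
    for i in range(1000)
]
payload = json.dumps(events).encode("utf-8")

# Compress the serialized batch before producing it to Kafka. (Sketch only:
# the real pipeline uses Zstandard, not zlib.)
compressed = zlib.compress(payload, 6)

ratio = len(payload) / len(compressed)
print(f"raw={len(payload)}B compressed={len(compressed)}B ratio={ratio:.1f}x")

# Consumer side: decompress and parse back into events.
restored = json.loads(zlib.decompress(compressed).decode("utf-8"))
assert restored == events
```

Shrinking payloads this way reduces broker disk usage and network transfer, which is where the Kafka cost savings described in the full post come from.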

Check out the full post on our Engineering Blog >>

Daniel Jih

Daniel is on Amplitude's back-end engineering team, where he manages Amplitude's client SDKs and works on ingestion processes. He graduated from Stanford University with a master's degree in computer science.
