Solving Lead Flow Dropoff with Data
There’s a big difference between noticing something strange in the data and actually understanding what’s causing it. For this How I Amplitude episode, we’re joined by Michael Dewey, Lead Analyst at Cars Commerce (formerly Cars.com), who went beyond surface-level metrics to uncover—and solve—the real reason users weren't completing lead forms.
“I love array operators because they allow us to slice and dice the data to look at cases where someone only had an email error, or where they had both email and name errors, or any other combination you can think of.”
Key Highlights
The problem: what’s causing the drop-off in completed lead forms?
At Cars Commerce, we define a lead as anyone who submits information to a dealership and says, "I'm interested in purchasing this vehicle."
We have a variety of lead types - you can call dealerships, chat on our site cars.com, and even pre-submit financing applications.
But one of our most crucial sources is the “Contact seller” form that shows up on each individual listing.
This form, which you can see below, asks the individual to provide some basic details about themselves, leave a comment, and submit that information directly to our dealership customer.
To understand the flow on this page, we use a simple funnel chart. This is how we quickly spotted that a significant portion of users who clicked the button to submit our email lead form weren't actually making it through to completion. Uh-oh.
What made this particularly puzzling was we couldn't immediately tell if users were selectively filling out certain fields and not others, if they were running into technical issues, or if something else entirely was causing them to drop off.
The gap between attempted submissions and successful completions was clear, but the "why" behind it remained a mystery.
Our data stack: switching to RudderStack and Amplitude
Before we dive in, it’ll probably help to give you a quick lay of the land of our tech stack and how we got here.
Back in November 2023, we bit the bullet and switched our data infrastructure from Adobe Analytics to Amplitude and RudderStack.
This wasn't just a platform change - it was an opportunity to finally reset how we thought about analytics.
Our old Adobe instance was weighed down by years of cumulative changes and replatforming. It had become too messy for product leaders to use effectively. We wanted something simpler that empowered them to tell the user journey story.
You can see in the image above that we use RudderStack to help us clearly delineate the use of our downstream tools. In our case, this means we almost exclusively track user-initiated, frontend activity (or "Active" events, in Amplitude parlance).
With this higher signal-to-noise ratio, our product leaders can confidently go into Amplitude and answer all of their questions about what users are doing in our product.
Houston, we have a (data) problem
We started off including two main events in our tracking plan:
- when someone clicks "Check Availability"
- when an email lead submission attempt is successful.
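To make that concrete, here's a minimal sketch of what those two track calls might look like with the RudderStack JavaScript SDK. The event names and the listing_id property are illustrative assumptions; the post doesn't show the exact taxonomy.

```typescript
// Minimal sketch using the RudderStack JavaScript SDK (@rudderstack/analytics-js).
// Event names and the listing_id property are illustrative assumptions.
import { RudderAnalytics } from "@rudderstack/analytics-js";

const analytics = new RudderAnalytics();
analytics.load("<WRITE_KEY>", "<DATA_PLANE_URL>");

// Event 1: user clicks the "Check Availability" CTA on a listing.
export function trackCheckAvailabilityClick(listingId: string): void {
  analytics.track("Check Availability Clicked", { listing_id: listingId });
}

// Event 2: an email lead submission attempt succeeds.
export function trackEmailLeadSubmitted(listingId: string): void {
  analytics.track("Email Lead Submitted", { listing_id: listingId });
}
```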
As I mentioned at the start, our mystery began when we saw in the funnel chart that our rate of non-submission was significant. But from there, our insights were throttled by two things.
First, we couldn't say anything about total event counts - only visitors and visits.
Second, we had no way to distinguish between different types of errors. Was it our internal filtering process? Actual form errors? We just couldn't tell.
Most importantly, we couldn't identify what specifically was causing users to fail to convert.
The solution: making invisible events visible
We introduced a new event that fired when a user attempted to submit an email lead with an invalid form.
The event contained an event property called "action_detail_level_1" to provide context about each error event. Rather than simply knowing that the form wasn’t filled out, we’d collect data on what specific form details weren't filled out correctly, e.g. email, first name, and/or last name.
What we found was surprising: the error patterns for email, first name, and last name were heavily collinear. The line chart below shows identical spikes across these three form fields.
But why?
Our hunch was that the most common reason for submission failure was users failing to fill out any of the required fields at all. Not just one field, not just two - but all of them.
Enter the unsung hero: array operators
We created “action_detail_level_1” to accept an array of error values.
With this approach, we could pass dense, contextual data in a single event property while maintaining the ability to filter precisely for specific error patterns. Instead of just seeing that errors happened, we could now understand exactly which combination of fields was causing problems.
For example:
- If the only field I left blank was email, “action_detail_level_1” = [email].
- If I didn’t fill out anything in the form, “action_detail_level_1” = [email, first_name, last_name].
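As a sketch of how that error event might be assembled on the frontend, here's a hypothetical validation helper (reusing the `analytics` instance from the earlier sketch; the event name and helper are assumptions, while the property and field names follow the post):

```typescript
// Hypothetical validation helper: collect every blank required field into an
// array, then fire the error event with "action_detail_level_1" as that array.
interface LeadForm {
  email: string;
  first_name: string;
  last_name: string;
}

const REQUIRED_FIELDS = ["email", "first_name", "last_name"] as const;

export function trackInvalidSubmission(form: LeadForm): void {
  const errors = REQUIRED_FIELDS.filter((field) => form[field].trim() === "");
  if (errors.length > 0) {
    // e.g. ["email"] if only email was blank, or
    // ["email", "first_name", "last_name"] if nothing was filled out.
    analytics.track("Email Lead Submission Failed", {
      action_detail_level_1: errors,
    });
  }
}
```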
I love array operators because they allow us to slice and dice the data to look at cases where someone only had an email error, or where they had both email and name errors, or any other combination you can think of.
For example, the expression ‘where action_detail_level_1 {=} email’ will give me only cases where the error was email alone - not ‘email and first_name’ or ‘email and first_name and last_name.’
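To see why that matters, here's the set-equality semantics expressed locally in TypeScript. Amplitude evaluates this for you in the chart UI; the snippet just mirrors the behavior described above, showing why {=} excludes supersets while a plain "contains" check would not:

```typescript
// "Set equals" matches only when the array holds exactly the target values
// (order-insensitive). A plain "contains" check would also match supersets.
function setEquals(actual: string[], target: string[]): boolean {
  const a = new Set(actual);
  const t = new Set(target);
  return a.size === t.size && [...t].every((v) => a.has(v));
}

console.log(setEquals(["email"], ["email"]));               // true:  matched by {=} email
console.log(setEquals(["email", "first_name"], ["email"])); // false: excluded by {=} email
console.log(["email", "first_name"].includes("email"));     // true:  a "contains" check would still match
```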
By using Amplitude's “Set equals” array operator, we split out the error types into all possible categories:
Running this analysis, we see that by far the biggest error is, indeed, when all three required fields - email, first_name, and last_name - are not filled out.
A modest theory of why: a misleading CTA button
It’s one thing to have a hunch as to why something is happening, and another thing entirely to be able to prove that hypothesis with the data.
Now it was time to understand why the error was happening - to develop and test a hypothesis about what was causing it.
Our modest theory was that the CTA button to complete the form - "Check Availability" - wasn't clear.
If you're checking availability, you might expect to simply verify if a car is in stock. And you might expect that you’d get an answer right away, so why would you need to provide all your contact info?
The button text wasn't aligned with the action we were asking users to take.
So we ran an A/B test with a different CTA, "Send Message," to see if there was any significant difference. We theorized that this change would improve the conversion rates we were seeing: fewer initial submission errors, and greater clarity for the user about what they were doing.
This small adjustment, we believed, better reflected what users were actually doing - sending their information to a dealership.
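As a rough sketch of what the frontend side of such a test could look like (Cars Commerce's actual experimentation tooling isn't named in the post, so the bucketing helper and exposure event here are assumptions; `analytics` is the instance from the first sketch):

```typescript
// Hypothetical experiment wiring: deterministically bucket the user, render
// the matching CTA label, and track an exposure event so the two variants can
// be compared downstream in Amplitude.
type CtaVariant = "control" | "treatment";

function bucketUser(userId: string): CtaVariant {
  // Simple deterministic 50/50 split on a user-id hash (illustrative only).
  let hash = 0;
  for (const ch of userId) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  return hash % 2 === 0 ? "control" : "treatment";
}

export function getCtaLabel(userId: string): string {
  const variant = bucketUser(userId);
  analytics.track("CTA Experiment Exposure", { variant });
  return variant === "control" ? "Check Availability" : "Send Message";
}
```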
The results? We saw improved submission rates without affecting overall conversion, reduced form error rates (which saved us money on email validation), and most importantly, a better user experience through clearer communication.
About Michael Dewey
Michael Dewey is Lead Analyst at Cars Commerce, where he manages the company's Amplitude instance and leads analytics for conversion metrics and authentication experience. Since joining in 2022, he's focused on helping product teams make data-driven decisions through intentional, focused analytics implementation.
Join the community!
Connect with Michael and other practitioners in our community. The Cohort Community exists to help people become better analysts. We focus on actionable programs, sharing best practices, and connecting our members with peers and mentors who share the same struggles.