Increasing Signups for a Pet Insurance Aggregator

🎨 Page Design

🧪 User Testing

📊 Research Report

🔍 Heuristic Evaluation

Overview

Pet Insurance Quotes is a marketplace where pet owners compare insurance plans from leading providers like Healthy Paws, Embrace, and Nationwide. When our team took over the product from its previous owners, we inherited a funnel that had received minimal investment: there was no existing analytics, no prior user research, and no documentation of what was or wasn't working.

I led the end-to-end redesign of the conversion funnel as the sole Product Designer and UX Researcher. Starting from a near-zero knowledge baseline, I built the research foundation, identified the core UX problems through competitive analysis and usability testing, and redesigned the experience from form entry through quote results. The goal was to reduce friction, build user trust, and help people move from "I'm curious about pet insurance" to "I want this plan."

Impact

6

moderated usability tests conducted

3

critical UX barriers identified

Role

Product Designer & UX Researcher

Year

2024

Understanding the Problem

When we took over Pet Insurance Quotes, there was no analytics baseline, no prior research, and no record of design decisions. The site had been running with minimal updates, and the only thing we knew for certain was that the conversion funnel wasn't performing the way the business needed it to.

Without existing data to guide priorities, I started by auditing the experience myself. I walked through the full funnel as a user, documented every friction point I encountered, and mapped the issues into three categories.

The form experience created confusion instead of confidence. Pet-related and personal information fields were mixed together on a single page with no clear grouping or sense of progression. Field labels were vague ("Pet Type," "Zipcode") without context for why the information was needed. There were no interactive states on form fields or buttons, and the overall layout felt dated and inconsistent.

The visual design undermined trust. Beyond layout issues, the funnel failed WCAG color contrast standards on several elements, had no keyboard navigation support, and lacked the visual polish users expect from a site asking for personal information. For a product that's ultimately asking people to make a financial decision, this mattered.
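The contrast failures called out above come from the WCAG 2.1 formula, which compares the relative luminance of foreground and background colors. As a minimal sketch of the check used in audits like this one (the hex values are hypothetical examples, not the funnel's actual palette):

```python
def relative_luminance(hex_color: str) -> float:
    """WCAG 2.1 relative luminance for an sRGB hex color like '#777777'."""
    r, g, b = (int(hex_color.lstrip("#")[i:i + 2], 16) / 255 for i in (0, 2, 4))

    def linearize(c: float) -> float:
        # Undo the sRGB gamma curve before weighting the channels.
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

    return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b)


def contrast_ratio(fg: str, bg: str) -> float:
    """Contrast ratio (lighter + 0.05) / (darker + 0.05).

    WCAG AA requires at least 4.5:1 for body text, 3:1 for large text.
    """
    lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)


# Black on white is the maximum possible ratio, 21:1.
print(round(contrast_ratio("#000000", "#ffffff"), 1))  # 21.0
# Light gray text on a white background fails AA for body copy:
print(contrast_ratio("#999999", "#ffffff") >= 4.5)  # False
```

In practice a browser extension or design-tool plugin runs this same math; the snippet just shows why a light-gray-on-white label can look acceptable yet still fail the 4.5:1 AA threshold.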

The copy wasn't working hard enough. Labels and CTAs were generic rather than guiding. Working with our copywriter and SEO team, I flagged opportunities to clarify what each form step was asking for, strengthen the value proposition language, and integrate keywords the page was missing.

AUDIT OF THE OLD FUNNEL

Audit of the inherited funnel experience. No analytics or prior research existed, so I started with a hands-on evaluation.


1. Lack of Design Structure

2. Unclear Labeling

3. No Interactive States

4. Failed WCAG Color Contrast

Learning From the Market

With no internal data to reference, I turned to the market. I conducted a heuristic evaluation of how competing providers and aggregators structured their quote funnels, analyzing the flows, content strategies, and design patterns of Lemonade, Pawlicy Advisor, Spot Pet Insurance, and Nationwide.

Three patterns stood out across the strongest competitors. First, top-performing funnels used progressive disclosure, breaking forms into clear, labeled steps rather than presenting everything on a single page. This reduced perceived effort and gave users a sense of momentum. Second, providers like Lemonade placed social proof (reviews, ratings, trust badges) directly adjacent to CTAs, reinforcing confidence at the moment users were deciding whether to proceed. Third, the strongest results pages didn't treat all quotes equally: they highlighted a recommended option and used visual hierarchy to surface the most decision-relevant details first.

These patterns gave us a clear direction: restructure the form into logical groupings, embed trust signals throughout the flow, and design the results page to guide decision-making rather than just display options.

Building a Research Prototype

Our team wanted to validate design changes with real users before committing to a full production build. But rather than testing the old funnel and surfacing problems we already knew about, we took a more strategic approach: we designed a research prototype that incorporated our initial improvements, so testing would surface new, actionable insights rather than confirm obvious issues.

This prototype made targeted changes: reorganizing the form field structure, updating copy to align with our tone and SEO requirements, and applying foundational visual improvements from the design system I was simultaneously helping to build. It wasn't intended to be the final design. It was intended to be good enough to test against.

FUNNEL PROTOTYPE

Research prototype used for moderated usability testing, incorporating initial structural and content improvements.

What Users Told Us

I conducted 6 moderated usability tests over Zoom with participants who had either recently purchased pet insurance or were actively considering it for a pet under 2 years old, our target demographic. Each participant used a mobile staging link to complete the task of getting a quote for their pet, starting from the homepage. Sessions included pre-task questions about participants' familiarity with pet insurance, a think-aloud walkthrough, and post-task reflection.

Three findings shaped everything that came next.

The form felt like data collection, not a helpful tool. Participants were confused about why certain information was being requested. Several hesitated at the personal information fields, unsure whether they were committing to something or just exploring options. The lack of context around each field created friction at the exact moment users were deciding whether to trust the site.

People wanted to talk about their pet first, not themselves. Multiple participants reacted negatively to being asked for their name and email before entering any information about their pet. This ordering felt transactional, signaling that lead capture was the priority rather than helping them find the right coverage.

The results page was a wall of sameness. Once participants reached the quote results, they struggled to differentiate between providers. Cards were visually identical regardless of insurer. Without a recommended option, clear comparison points, or any hierarchy, most users felt overwhelmed and unsure how to choose. That said, participants consistently valued the ability to compare multiple quotes in one place, viewing it as the core promise of the product.

"I'm not sure what quote to look at first."

"I don't know what a waiting period is."

WHAT WE HEARD & CHANGED

Mapping research findings to design changes, showing how each user pain point informed a specific funnel improvement.

What We Heard → What We Changed

Users confused by mixed form fields → Clarified copy so users know whether a form is asking for pet or owner details

Users wanted pet-first, not personal-first → Ensured the form starts with pet details

Results felt identical and overwhelming → Added a recommended badge, dynamic provider details, and progressive disclosure

Turning Insights into Design Decisions

Armed with clear research findings, I redesigned the funnel to directly address each point of friction. Every major design change maps back to something users told us.

EVOLUTION OF THE FUNNEL

Progression from inherited funnel to research prototype to refined production design.

Restructured the form around how pet owners actually think. We separated pet information from owner information into clearly labeled sections, with pet details coming first. This matched the mental model users showed us in testing: "tell me about your pet, then tell me about you." Field labels were rewritten to explain what each piece of information is used for, reducing the "why are they asking me this?" hesitation that stalled users in the original flow.

Redesigned the results page to guide decisions, not just display options. We introduced a "Recommended provider" badge on the top result to give users a clear starting point. Each quote card was redesigned to surface the most decision-relevant details upfront: provider rating, max annual payout, and a coverage overview. Additional details are accessible through expandable sections, reducing initial overwhelm without hiding information users need to make a confident choice.

Made comparisons meaningful with dynamic, provider-specific details. The previous results cards displayed identical layouts regardless of insurer. The redesigned cards pull in data unique to each provider, including specific coverage terms, waiting periods, and policy options, so users can make real comparisons without leaving the page. API limitations prevented us from displaying exact monthly or yearly costs from each provider, which we documented as a known gap for future iterations.

Final Design: Helping Users Compare With Confidence

Confirm what the user submitted

Pet cards reassure users that quotes match the pets they entered.

Give users a clear starting point

The recommended provider treatment helps reduce decision paralysis.

Make comparison details easier to scan

Provider-specific payout, coverage, waiting period, and policy details help users evaluate tradeoffs.

Product constraints

Exact monthly/yearly pricing was not available through provider APIs, so the design emphasized the comparison details the system could reliably provide.

Reflections

This project reinforced how lightweight, focused research can de-risk product decisions when teams are working from limited data. Six moderated tests, conducted early enough to influence the build, helped us move from assumptions to specific design priorities: a pet-first form structure, clearer comparison hierarchy, and a more guided quote results experience.

Working within real constraints also shaped the outcome in productive ways. A tight timeline pushed us to be ruthless about scope. Limited development resources meant we had to prioritize the changes with the highest expected impact on conversion, like the form restructure and the recommended provider treatment, while deferring enhancements like tooltips on coverage terms for future iterations.

If I could do it again with more time and budget, I'd want to run a follow-up round of testing on the final shipped design to validate our post-launch metrics with qualitative data, and I'd push to expand the comparison experience with richer filtering and sorting tools on the results page.