
From Clicks to Clarity

My role:

Product Designer

Main collaborators:

This is a brief look—curious about the full story? Reach out to me.

Background & Context

Mrbilit is an online travel agency (OTA) that offers flights, trains, buses, and hotels. When this project began, there had already been three failed attempts at “doing analytics”:

  • Events had been added to almost every UI element with no design intent.
  • Four different analytics tools slowed page load times and contradicted each other.
  • Teams had a basic click chain (search → results → form → payment → success) and a global conversion rate, but no one could say what decisions this data actually supported.

We chose to fix this first in a high-impact area: the domestic flights booking funnel, serving roughly 500k customers at an average of 2.15 tickets per customer, with only one month to go before the main seasonal peak.

Problem

In a post-COVID market with thin margins and high operational costs, we were effectively designing blind.

1. We saw interactions, not intent or outcomes. We couldn't tell whether using filters or the price calendar actually led to more successful bookings, or what happened after a "no results" state. We knew what was clicked; we didn't know what helped people reach a confirmed reservation.

2. Business metrics were disconnected from experience. Bookings, revenue, route trends, CAC, repeat bookings, and call-center load lived in separate places, not tied back to the funnel. This meant we couldn't answer essential questions:

  • Is a drop on a key route caused by results, checkout, payments, or supply?
  • Do SEO users on the results page behave differently from users coming from the homepage?

3. The mental model was wrong. The implicit logic was "more events is better" and "feature usage = success." In reality, nobody owned most metrics, and some "popular" features might have been hurting completion. Any change to the funnel could quietly damage revenue or overload the call center, and we'd only see the symptoms later.

What I Did

I took ownership as a designer, redesigning the system that supports decisions, not the UI.

  • Framed the problem and aligned stakeholders. I mapped customer journeys, analyzed competitors, and ran workshops with product, data, tech, supply, support, and marketing to create shared clarity around risks, gaps, and priorities.
  • Defined a shared, minimal metric framework. We killed low-value events and tools and agreed on a small set of high-value metrics:
      ◦ Outcome metrics: bookings, revenue, and (where possible) margin, per product, route, device, and channel.
      ◦ Funnel metrics at key steps:
        – Session → search
        – Search → results view
        – Results view → ticket click
        – Ticket click → checkout started
        – Checkout started → payment attempted
        – Payment attempted → payment success
        – Payment success → reservation issued
      ◦ Feature-to-outcome questions:
        – Filters: do they actually improve completion?
        – Calendar: do date-changers, especially to cheaper days, complete more bookings?
        – “No results”: who switches to other dates or services, and with what conversion?
      ◦ Guardrails:
        – Calls per 100 bookings, by reason (results confusion, rules/refunds, payment, airport issues).
        – Payment failure rates by gateway and method.
        – Provider error rates (no capacity, price mismatch, technical failures).
  • Introduced three practical funnels. I defined three shared perspectives for diagnosing behaviour:
      ◦ End-to-end booking funnel: session → search → results → ticket click → checkout → payment → reservation
      ◦ Results effectiveness funnel: results viewed → ticket seen → ticket clicked → checkout started
      ◦ Checkout & reliability funnel: checkout started → form completed → payment attempted → payment success → reservation issued
    These became the backbone of simple dashboards and a weekly cross-team review.
  • Enabled hypothesis-driven work. Using this structure, we could finally frame experiments such as:
      ◦ “If we improve how price, times, capacity, and basic conditions are presented on result cards, results → ticket click will increase for mobile SEO traffic, without more calls about ticket rules.”
      ◦ “If we improve ‘no results’ with better alternatives, fewer users should exit and more should book on other dates or services.”
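To make the funnel metrics concrete, the step-to-step conversions can be computed from aggregated event counts in a few lines. This is only an illustrative sketch: the step names mirror the list above, but the counts and the segment are hypothetical, not real Mrbilit data.

```python
# Illustrative sketch: step-to-step conversion for the booking funnel,
# computed from aggregated event counts. All numbers are made up; in
# practice these would come from the analytics warehouse.

FUNNEL_STEPS = [
    "session", "search", "results_view", "ticket_click",
    "checkout_started", "payment_attempted", "payment_success",
    "reservation_issued",
]

def step_conversions(counts: dict[str, int]) -> list[tuple[str, float]]:
    """Return (step pair, conversion rate) for each adjacent pair of steps."""
    rates = []
    for prev, curr in zip(FUNNEL_STEPS, FUNNEL_STEPS[1:]):
        rate = counts[curr] / counts[prev] if counts[prev] else 0.0
        rates.append((f"{prev} → {curr}", rate))
    return rates

def biggest_leak(counts: dict[str, int]) -> str:
    """The step pair with the lowest conversion, i.e. where the funnel leaks most."""
    return min(step_conversions(counts), key=lambda pair: pair[1])[0]

# Hypothetical weekly counts for one route/device segment:
counts = {
    "session": 10_000, "search": 8_200, "results_view": 7_900,
    "ticket_click": 3_100, "checkout_started": 2_400,
    "payment_attempted": 2_100, "payment_success": 1_700,
    "reservation_issued": 1_680,
}
print(biggest_leak(counts))  # results_view → ticket_click
```

Breaking conversion out per adjacent pair, instead of tracking one global rate, is what lets a review point at a single leaking step rather than at "the funnel".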

This turned analytics from a passive reporting layer into a toolbox for hypothesis-driven design and product work.
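The "calls per 100 bookings" guardrail in the framework above can be sketched the same way. The reason labels come from the list; the call and booking figures are invented for illustration.

```python
# Sketch of the "calls per 100 bookings" guardrail, broken down by call
# reason so a spike can be traced to results confusion, rules/refunds,
# payment, or airport issues. All figures here are hypothetical.
from collections import Counter

def calls_per_100_bookings(call_reasons: list[str], bookings: int) -> dict[str, float]:
    """Normalize call volume by booking volume so different weeks are comparable."""
    return {reason: 100 * n / bookings
            for reason, n in Counter(call_reasons).items()}

# One hypothetical week of call-center logs against 1,500 bookings:
week = ["rules_refunds"] * 30 + ["results_confusion"] * 18 + ["payment"] * 12
print(calls_per_100_bookings(week, bookings=1_500))
# {'rules_refunds': 2.0, 'results_confusion': 1.2, 'payment': 0.8}
```

Normalizing by bookings matters: raw call counts rise in peak season simply because volume does, while calls per 100 bookings only moves when the experience itself changes.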

Journey mapping for the flights funnel
Customer journey map used to align teams on how travellers experience the booking funnel, from first search to post-booking support.
System map for the analytics overhaul
System map of the booking funnel in our OTA, mapping user behaviour to signals, metrics, diagnosis, and the product and design decisions built on top of them.

Impact

This project didn't change a single screen directly, but it changed what we could design, prioritize, and defend.

  • We moved from “it seems there’s a problem at step 4 on mobile” to a precise diagnosis of where and for whom the funnel was leaking.
  • We could see how behaviour and conversion differed across the funnel for each entry path, and could design SEO landings and internal searches as distinct journeys.
  • When we connected feature usage to booking success, we discovered that one frequently used feature was actually harming conversion, not helping. That changed how we judge which features to keep, fix, or remove.
  • Most importantly, it positioned design as the driver of a cross-team, data-informed way of working: we didn’t just decorate the funnel; we built the analytics foundation that later SRP, checkout, and experimentation projects rely on.
From Clicks to Clarity — Shayan Khalilian