
Memphis Taxi 

We used eye-tracking and A/B-testing data to determine the best design for a taxi listing page. We then performed a series of metric and statistical calculations to compare the usability and viability of the two designs.

This project was done in collaboration with Daniel Smith, Hu Yun Yi, and Rory Hernandez-Romero over the course of two weeks. You can see our group project here.

You can find our site deployed on Heroku here.

EYE TRACKING

Hypothesis: 

We expect Version A to be read with a more circular eye motion that covers a larger portion of the screen, whereas Version B will be read with sweeping eye movements left and right across the top portion of the screen. This is because Version A has the information arranged in a large, centered grid, while Version B has it set up in a single row near the top.

[Eye-tracking gaze recording for Version A]

As we expected, the visitor to this site started off by looking through all the options in a circular motion before spending some time looking at the details given for Ride Charge and clicking on the button to reserve it.

[Eye-tracking gaze recording for Version B]

The user started by looking through all the options from right to left. Then, they spent some time comparing the details given for Memphis Taxi and Uber before finally clicking the button to reserve with Uber. We were surprised that our user parsed the options from right to left; this is likely because the rightmost image is much brighter than the others, making it stand out.

A/B TESTING

Hypotheses:

Click-Through Rate:

● Null Hypothesis: Versions A and B have the same click-through rate.

● Alternative Hypothesis: Version A has a higher click-through rate than Version B because the simpler layout of A encourages clicking through every link for information before making a choice.

Time To Click:

● Null Hypothesis: Versions A and B have the same time to click.

● Alternative Hypothesis: Version B takes a longer time to click than Version A because there is more information to process on Version B, meaning the user would take more time to read before choosing.

Dwell Time:

● Null Hypothesis: Versions A and B have the same dwell time.

● Alternative Hypothesis: Version A has a longer dwell time than Version B because the user most likely needs to stay on Version A's reserve pages longer to read the information about each option before returning, whereas Version B provides more information up front, meaning there is less to read on its reserve page.

Return Rate:

● Null Hypothesis: Both versions have the same return rate.

● Alternative Hypothesis: Version B has a higher return rate than A because it provides more information and has a more learnable design, meaning that, as long as the user has a satisfactory experience reserving, they would be more likely to trust it and come back again to reserve.

Click-Through Rate:

To calculate the click-through rate, we counted the total number of unique clicks and the total number of unique sessions for each version, then divided the unique clicks by the unique sessions.
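For clarity, here is a minimal Python sketch of that calculation, assuming the logs are a list of events with session IDs and event types (these field names are illustrative, not our actual schema, and "unique clicks" is read here as unique sessions that registered a click):

```python
# Sketch of the click-through-rate calculation described above.
# Assumes each logged event is a dict with hypothetical "session_id"
# and "event_type" fields; our real logs may be shaped differently.

def click_through_rate(events):
    """Unique sessions that clicked, divided by all unique sessions."""
    sessions = {e["session_id"] for e in events}
    clicked = {e["session_id"] for e in events if e["event_type"] == "click"}
    return len(clicked) / len(sessions) if sessions else 0.0

# Made-up events for illustration only.
events = [
    {"session_id": 1, "event_type": "pageview"},
    {"session_id": 1, "event_type": "click"},
    {"session_id": 2, "event_type": "pageview"},
]
print(f"CTR: {click_through_rate(events):.2%}")  # CTR: 50.00%
```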

           A: 73.17%

           B: 66.67%

We used the chi-squared test because it compares counts across categories of information; here we have two categories: whether or not the user clicked. Our resulting chi-squared statistic, 0.4300, falls well below the 95% critical value of 3.84, meaning that our results were not statistically significant. Therefore, we cannot reject the null hypothesis that Versions A and B have the same click-through rate.
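A chi-squared test of this kind can be run with scipy on a 2×2 table of clicked versus not-clicked counts per version; the counts below are placeholders, not our actual data:

```python
from scipy.stats import chi2_contingency

# 2x2 contingency table: rows = Version A / Version B,
# columns = clicked / did not click. Placeholder counts only.
table = [
    [30, 10],  # Version A
    [25, 15],  # Version B
]

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.4f}, p = {p:.4f}, dof = {dof}")
# The difference is significant at the 95% level only if chi2 exceeds the
# critical value of 3.84 (1 degree of freedom), i.e. if p < 0.05.
```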

Return Rate:

We calculated the return rate by counting the number of unique sessions that returned to the landing page, then summing the number of unique sessions that left the page after their first click and the number of unique sessions that visited the page without clicking anything at all. We then divided the first count by that sum.
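As a sketch, and reading the description above literally, the arithmetic looks like this (the counts and names are illustrative only):

```python
# Sketch of the return-rate calculation as described above.
# The three session counts are assumed inputs; values are made up.

def return_rate(returned, left_after_click, never_clicked):
    """Sessions that returned to the landing page, divided by the sum of
    sessions that left after their first click and sessions that never clicked."""
    denominator = left_after_click + never_clicked
    return returned / denominator if denominator else 0.0

print(f"{return_rate(returned=4, left_after_click=15, never_clicked=10):.2%}")  # 16.00%
```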

           A: 13.89%

           B: 18.92%

We used the chi-squared test for our return rate as we are again comparing categories, in this case the number of users who did versus did not return. Our calculated statistic, 0.6779, is again lower than the 95% critical value of 3.84, meaning that our results were not statistically significant. We again cannot reject the null hypothesis that Versions A and B have the same return rate.

Time to Click:

To calculate the time-to-click, we took the unique sessions that clicked a link on the version they visited and calculated the difference between each session's page-load time and click time. We then averaged all the results.
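A small sketch of that averaging step, assuming each session has a recorded page-load timestamp and, if it clicked, a first-click timestamp in milliseconds (field names are hypothetical):

```python
# Sketch of the time-to-click metric: average (first click - page load)
# over sessions that clicked. Timestamps in milliseconds; field names
# are hypothetical, not our actual log schema.

def mean_time_to_click(sessions):
    deltas = [
        s["first_click_ms"] - s["page_load_ms"]
        for s in sessions
        if "first_click_ms" in s  # skip sessions that never clicked
    ]
    return sum(deltas) / len(deltas) if deltas else None

sessions = [
    {"page_load_ms": 0, "first_click_ms": 12000},
    {"page_load_ms": 0, "first_click_ms": 30000},
    {"page_load_ms": 0},  # never clicked; excluded from the average
]
print(mean_time_to_click(sessions))  # 21000.0
```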

           A: 26565.129 ms

           B: 14006.8065 ms

We used a t-test for our time-to-click because it compares the means of two different samples. Our test statistic, 0.3849, falls well below the 95% critical value of 6.313752, so our results were not statistically significant, and we cannot conclude that our null hypothesis is false.
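A two-sample t-test along these lines can be run with scipy on the per-session time-to-click values; the samples below are made up for illustration, and this sketch uses Welch's two-tailed variant rather than necessarily the exact test we ran:

```python
from scipy.stats import ttest_ind

# Per-session time-to-click samples in milliseconds (made-up values).
times_a = [21000, 34000, 18000, 42000, 15000]
times_b = [9000, 16000, 12000, 20000, 11000]

# Welch's t-test, which does not assume equal variances.
t_stat, p_value = ttest_ind(times_a, times_b, equal_var=False)
print(f"t = {t_stat:.4f}, p = {p_value:.4f}")
# The difference in means is significant at the 95% level only if p < 0.05.
```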

CONCLUSION

If the company’s priority is the customer’s time, we recommend using Version A, as we found our user made a decision faster on that page. However, if the priority is serving an audience of people new to the area, we recommend Version B, as it provides more information, and its more condensed layout is more accommodating to future additions of companies and information to the page.
