Image of the new design for the reporting component

Improving the reporting of ratings and reviews

UX & Product designer, 2023
Responsibilities
User research, user experience design, interface design, concept development, product development, accessibility, prototyping, user experiments, A/B testing

Client

FINN.no AS, Norway's largest online marketplace, was founded in March 2000. Now owned by Schibsted AS, it specializes in classified ads and services for both private and corporate users buying and selling goods.

FINN.no enjoys widespread popularity and an extensive user base among Norwegians, and it has firmly established itself as the preferred platform for buying and selling used items, thriving as a marketplace for second-hand goods. Notably, each user spends an average of more than 40 hours per year on the platform, which hosts over 12.6 million ads across various categories, collectively accumulating over 460 million views each year.

Image of a grey cat holding up the FINN logo
Image of the FINN logo illustration with the FINN shapes
Illustration of the FINN mascot: PuseFINN
FINN branding with the FINN shapes
Did you know?
Complaining about received ratings or reviews after trades on FINN is one of the top 3 reasons why users contact Customer Service

Experiment scope

Presently, there is an exceedingly low threshold for reporting ratings and reviews on FINN. Users can effortlessly click on the "report" button and express their disagreement in a textbox if they are dissatisfied with what they have received. Unfortunately, this ease of reporting has led to a significant influx of tickets to the customer service department.

Image of the old reporting functionality
The reporting functionality: where a user can report a review with a simple click

Out of the numerous complaints received by the customer service team, only a small fraction have valid grounds for getting the rating or review deleted. This leaves a considerable number of dissatisfied customers who feel their concerns are being overlooked, and it places an unnecessary burden on the customer service team, which has to handle a significant load of complaints that do not warrant action.

For a rating or review to be deleted, the report has to meet at least one of these criteria:
- The trade never took place
- The review contains threats, harassment or sensitive information
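
The case study doesn't show any code, but the gate itself is simple; here is a minimal sketch of the rule (Python, hypothetical names, not FINN's actual implementation):

```python
from enum import Enum, auto

class DeletionReason(Enum):
    """The only two grounds FINN accepts for deleting a rating or review."""
    TRADE_NEVER_TOOK_PLACE = auto()
    THREATS_HARASSMENT_OR_SENSITIVE_INFO = auto()

def qualifies_for_deletion(claimed_reasons: set[DeletionReason]) -> bool:
    """A report is actionable only if at least one accepted reason applies."""
    return bool(claimed_reasons)

# Mere disagreement with a rating claims no accepted reason, so it does
# not qualify -- which is what most incoming reports turned out to be.
assert not qualifies_for_deletion(set())
assert qualifies_for_deletion({DeletionReason.TRADE_NEVER_TOOK_PLACE})
```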
Problem to be solved
How can we help users report ratings & reviews correctly and reduce irrelevant traffic to customer service?

Project opportunities

- Can expectation management about the rules of deletion, and a more informative display of information, reduce the irrelevant traffic sent to customer service?
- Can we help customers regain a sense of dignity after receiving a rating or review they do not feel aligned with?
- Can we utilize the power of nudging and choice architecture to both inform users and help them make the right choice?

Design process

With these ideas in mind, we initiated a brainstorming session and sought insights from other services to align our approach with best practices. Our primary objective was to test the hypotheses concerning expectation management, informative display, and user nudging. To achieve this, we began creating wireframes and a prototype of the new flow. Embracing an iterative design process, we conducted four rounds of remote user testing.

During these tests, users were provided with a context of the situation and then given the freedom to perform the report as they deemed fit. This approach allowed us to observe how users interacted with the new flow, what they read, what they skipped, and how well they understood the various report functionalities.

Following each user testing session, we meticulously transcribed, compiled, and sorted the feedback we received. This valuable input was then leveraged to enhance and refine the design in the subsequent iterations. By repeating this iterative cycle, we could fine-tune the user experience, ensuring that our solution was intuitive, effective, and aligned with the users' needs and expectations.

Image of four iterations of user testing
Four iterations of user tests, each adapted based on the results of the previous test to improve the new flow
Final flow specifications
- We aimed to inform users about the various options available for handling a negative rating or review. By presenting these alternatives, users could decide on the most suitable course of action for their situation.
- To reduce unnecessary reports to customer service, we clearly communicated the rules and criteria for having a rating or review deleted.
- Customers who did not qualify for deletion of a negative rating or review were given the opportunity to express their frustration by sending feedback to us. This allowed them to vent their emotions and regain a sense of dignity, even if the review couldn't be removed.
- Depending on the selections made by the user while reporting, reports were categorized and sorted within the customer service team's back-office tools. This enabled a more efficient process for assisting those who needed it the most.
- We ensured that tracking mechanisms were in place for all the options presented to the user, so we could analyze how users interacted with the new flow and which options they chose if they decided not to report a review (a rough sketch of this follows after the flow diagram below).
Image of the final version of the reporting feature showcasing the different user flows
The flow of the final version used in the A/B test
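
The case study doesn't include the implementation, but a rough sketch of the categorization and tracking described above could look like the following (Python, hypothetical names throughout; the real flow would call FINN's back-office and analytics SDKs, e.g. Amplitude, instead of these print stubs):

```python
from dataclasses import dataclass
from enum import Enum, auto

class ReportOutcome(Enum):
    """The paths a user can take in the new reporting flow."""
    RULE_VIOLATION_REPORT = auto()       # forwarded to customer service
    FEEDBACK_WITHOUT_VIOLATION = auto()  # venting message, not a ticket
    MESSAGE_OTHER_PARTY = auto()         # link: contact the counterpart
    REPLY_TO_REVIEW = auto()             # link: answer the review publicly
    NO_REPORT = auto()                   # user left without reporting

@dataclass
class ReportEvent:
    user_id: str
    review_id: str
    outcome: ReportOutcome

def track(event_name: str, properties: dict) -> None:
    """Stand-in for the analytics SDK call (e.g. Amplitude)."""
    print(f"track {event_name}: {properties}")

def create_ticket(event: ReportEvent, category: str) -> None:
    """Stand-in for the back-office ticket API."""
    print(f"ticket for review {event.review_id}, category {category}")

def handle_report(event: ReportEvent) -> None:
    # Track every path so the experiment can compare behavior across flows.
    track("review_report_flow", {"outcome": event.outcome.name})
    # Only rule violations become tickets, pre-categorized so the customer
    # service team's back-office tools can sort and prioritize them.
    if event.outcome is ReportOutcome.RULE_VIOLATION_REPORT:
        create_ticket(event, category=event.outcome.name)

# Example: a user chooses to message the other party instead of reporting.
handle_report(ReportEvent("user-1", "review-42", ReportOutcome.MESSAGE_OTHER_PARTY))
```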

The results

In the end, the experiment ran over a period of 7 weeks in order to accumulate the total number of exposures (1,191). Here are some of the results:

- Control group (old flow)
  - 133 out of 584 sent a report to customer service

- Challenger group (new flow)
  - 42 out of 609 reported a rule violation to customer service
  - 27 out of 609 sent feedback without a rule violation
  - 77 out of 609 followed the link to send a message to the other party
  - 37 out of 609 followed the link to reply to the received feedback
  - 426 out of 609 did not report
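
The write-up doesn't state how the 68% figure was computed or whether the difference is statistically significant; both can be checked from the published counts with simple proportions and a standard two-proportion z-test. The snippet below is an illustration of that check, not the analysis the team actually ran:

```python
from math import sqrt

# Published counts from the experiment
control_reports, control_n = 133, 584        # old flow: any report goes to CS
challenger_reports, challenger_n = 42, 609   # new flow: only rule violations go to CS

p_control = control_reports / control_n           # ~22.8% of exposed users reported
p_challenger = challenger_reports / challenger_n  # ~6.9% of exposed users reported

# The published 68% matches a comparison of raw report counts:
count_reduction = 1 - challenger_reports / control_reports  # ~0.684
# Comparing report rates (the groups differ slightly in size) gives ~0.70:
rate_reduction = 1 - p_challenger / p_control

# Two-proportion z-test on the report rates
p_pooled = (control_reports + challenger_reports) / (control_n + challenger_n)
se = sqrt(p_pooled * (1 - p_pooled) * (1 / control_n + 1 / challenger_n))
z = (p_control - p_challenger) / se  # ~7.7, far beyond the usual 1.96 cutoff

print(f"rates: {p_control:.1%} -> {p_challenger:.1%}")
print(f"reduction: {count_reduction:.0%} by counts, {rate_reduction:.0%} by rates, z = {z:.1f}")
```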

Image of the statistics from Amplitude at the end of the experiment
Data from Amplitude as of May 2023, at the end of the experiment
Image showing the new design reduced the amount of reports by 68%
Image showing a pie chart of what users clicked on in the new reporting flow
Comparison between the number of reports during the experiment
What the user clicked on in the new challenger flow

Based on the comparison between the Control and Challenger approaches, the Challenger implementation resulted in a significant reduction of 68% in reports sent to customer service. In the Control scenario, both rule violations and non-rule violations were forwarded to customer service for analysis and handling. However, in the Challenger approach, only rule violations led to a report being sent to customer service, streamlining the process and reducing unnecessary reports.

For the Challenger, in cases with no rule violation, users were given the option to write a venting message and send it to us as feedback. This empowered customers to express their frustration and provide feedback directly, without involving customer service in non-rule-violation issues.

Additionally, the Challenger offered clearer options that empowered users to resolve issues themselves. As a result, 77% of users with no rule violation opted for one of the alternative solutions (141 of the 183 users who took any action, i.e. 27 + 77 + 37 alternative actions versus 42 rule-violation reports), effectively solving their problems without contacting customer service. The results indicate that the Challenger approach successfully reduced the burden on customer service, improved user autonomy in resolving issues, and enhanced the overall user experience on the platform.

Estimated impact

Image of the estimated impact of reducing the amount of tickets to around 300k NOK savings per year
*The experiment was rolled out on web only. The cost reduction estimate per year is based on a full roll-out on all clients, that is web, iOS & Android. Costs are estimated from today's traffic and are expected to increase in pace with increased transaction volume.
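
The ticket volume and per-ticket handling cost behind the estimate aren't published here, but the arithmetic of such an estimate is straightforward. A back-of-envelope sketch, where every input except the measured reduction is a placeholder rather than a FINN figure:

```python
# Hypothetical inputs -- placeholders, NOT FINN's internal figures
tickets_per_year_all_clients = 6_000  # review-report tickets: web + iOS + Android
cost_per_ticket_nok = 75.0            # fully loaded handling cost per ticket

reduction = 0.68  # measured in the experiment: reports cut by ~68%

avoided_tickets = tickets_per_year_all_clients * reduction
estimated_savings_nok = avoided_tickets * cost_per_ticket_nok
print(f"~{estimated_savings_nok:,.0f} NOK saved per year")  # ~306,000 with these inputs
```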

Conclusion

The Challenger approach not only reduced reports to Customer Service by 68% but also had a positive impact on customer satisfaction. By giving users the ability to self-serve when they received ratings and reviews they disagreed with, the Challenger approach empowered customers to take control of their own experience. This was evident in the increased usage of self-service features such as replying to reviews or sending messages to the review sender. Additionally, customers with no-rule-violation cases provided positive feedback, indicating their satisfaction with the available options.

Throughout the experiment, close collaboration with customer service was vital in understanding how the change affected users. It was crucial to ensure that the reduced reports to Customer Service were not simply being shifted to other channels, as this would not reflect the desired change. However, the collaboration with Customer Service confirmed that there was no significant increase in traffic to other channels, indicating that the reduction in reports was genuine and effective.

By working closely with Customer Service, the Challenger approach demonstrated its success in both reducing the burden on the support team and improving the overall user experience. It effectively empowered users to handle certain issues on their own, leading to greater satisfaction and a more streamlined reporting process for rating and review concerns.

Feedback from customer service & users:
- Users in the challenger group are more satisfied because they get the opportunity to solve the case themselves
- Users in the challenger group who end up contacting Customer Service because of a rule violation get correct help that is aligned with their expectations
- Users in the control group who do not get their rating deleted (because there is no rule violation) are often disappointed, as they did not get the help they expected

In conclusion, the Challenger approach yielded notable results both quantitatively and qualitatively. The reduction in load on customer service, as well as the positive impact on customer service interactions, validated the effectiveness of the new flow. Based on these findings, the decision was made to implement the Challenger approach at 100% capacity after the conclusion of the experiment.

By rolling the Challenger flow out to full deployment, the platform could capitalize on the benefits of decreased reports to customer service and enhanced user satisfaction. The successful outcome of the experiment validated the value of the changes made, leading to a more efficient and user-centric system for handling ratings and reviews.