ReviewMeta Analysis Test: Overlapping Review History

April 27th, 2016

At ReviewMeta, we do a comprehensive examination of the reviewers behind the reviews. One test we use to help identify unnatural reviews compares the review history of each reviewer and looks for substantial overlap.

Here is how it works: for each product we analyze, we make a list of all the other products its reviewers have also reviewed. Any product that shows up at least twice in this list is considered overlapping. We then narrow the list down to the top nine products with the most overlap. Any reviewer who has also reviewed at least one product on this top overlapping products list is considered to have substantial overlapping history.
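The steps above can be sketched in a few lines of Python. This is a rough illustration, not ReviewMeta's actual implementation; the function name and the input shape (a mapping from reviewer ID to the set of other products that reviewer has reviewed) are assumptions for the example.

```python
from collections import Counter

def substantial_overlap(reviewer_histories, top_n=9):
    """Find the top overlapping products and flag reviewers who
    reviewed at least one of them.

    reviewer_histories: dict mapping reviewer id -> set of other
    product ids that reviewer has also reviewed (assumed shape).
    """
    # Count how often each other product appears across all reviewers.
    counts = Counter()
    for history in reviewer_histories.values():
        counts.update(history)

    # Products reviewed by at least two of these reviewers are
    # "overlapping"; keep the top_n with the most overlap.
    top_overlapping = {p for p, c in counts.most_common()
                       if c >= 2}
    top_overlapping = set(sorted(top_overlapping,
                                 key=lambda p: -counts[p])[:top_n])

    # A reviewer with at least one product on the top list has
    # "substantial overlapping history".
    flagged = {r for r, hist in reviewer_histories.items()
               if hist & top_overlapping}
    return top_overlapping, flagged
```

For example, if reviewers "a" and "b" have both reviewed product "p1", that product overlaps and both reviewers are flagged, while a reviewer whose history shares nothing with the top list is not.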

There are a number of benign reasons we'd see a high percentage of reviewers with substantial overlapping history. Certain products naturally attract reviewers with shared histories. If a product is very niche, or is used in conjunction with other products, it isn't strange for its reviewers to have reviewed the same items.

However, a high percentage of reviewers with substantial overlapping history is more commonly caused by factors that can introduce bias. If a single brand dominates the top overlapping products, many of these factors mirror those behind Brand Repeats, such as:

  • Die-hard fans: A brand with a loyal following may have fans who review every single one of their products. These fans may not be reviewing the product with a neutral mindset; their loyalty and past relationship with the brand may positively influence their reviews.
  • Enticing Reviews: Brands will offer discounts or freebies to customers who write reviews for them. The customers chosen to write these reviews are deliberately uncritical ones; a single negative review could get a customer dropped from the “free stuff” mailing list. This list acts as an incubator for positive reviews that a brand and its subsidiaries can dip into whenever they want to boost the average rating of a product.
  • Sockpuppet Accounts: Brands will create fake accounts to post positive reviews on all of their (and any of their sister companies’) products. These reviews are not just biased; they are fraudulent.

But where our Overlapping Review History test really shines is in its ability to find connections between reviewers who aren’t reviewing a common brand. This is essential in spotting the possible use of third party review manipulation services. These companies range from facilitating incentivized reviews to manufacturing fraudulent reviews. These third party services have no brand loyalty; they will review products for whichever brand pays them. They may boost a lawn mower’s average rating in the morning and then boost a laptop cover’s rating later that day. Since our Overlapping Review History test looks past the brand, we can still find the common connection between these third party service reviewers.

We like to empower our readers with tools to determine the trustworthiness of a product’s reviews. That is why we always show the list of top nine overlapping products for our readers to examine on their own. While we can never determine with complete certainty what’s causing the overlap, our readers can gain insight by examining the top products list.  Here are a few common scenarios and what they might indicate:

  • Product’s brand dominates: This may be an indication of a brand, or die-hard fans of a brand, influencing reviews. Checking the product’s Brand Repeats test results may help support this.
  • One or two brands dominate: If the commonly reviewed items only consist of one or two brands, it is possible that these brands are sister brands of the product being reviewed.
  • Products are seemingly unrelated: Seemingly random products with a high level of overlap may be an indication of third party services influencing the reviews. These services can range from helping facilitate incentivized reviews to manufacturing fake reviews. Our Incentivized Review test results may help corroborate whether a third party incentivized review service is being used.

Even if we aren’t certain of the cause of reviewers with substantial overlapping history, we can check to see whether these reviewers are influencing the average rating of a product. We first look at the raw percentage of these reviewers. While it’s perfectly normal to see some reviewers with substantial overlapping history, an excessive amount can trigger a warning or failure. We next look to see if the average rating from reviewers with substantial overlapping history is higher than the average rating from reviewers without one. If there is a discrepancy, we’ll check to see if it is statistically significant. This means that we run the data through an equation that takes into account the total number of reviews along with the variance of the individual ratings and tells us whether the discrepancy is more than just the result of random chance. (You can read more about our statistical significance tests here.) If reviewers with substantial overlapping history have a significantly higher average rating than reviewers without one, it’s a strong indicator that these reviewers aren’t evaluating the product from a neutral mindset and are unfairly inflating the overall product rating.
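ReviewMeta hasn’t published the exact equation it uses, but a standard test that fits the description (it uses the sample sizes and the variance of the individual ratings) is a Welch-style two-sample comparison of means. The sketch below is a stand-in for illustration only; the function name and the 1.96 threshold (roughly 95% confidence) are assumptions, not ReviewMeta’s actual parameters.

```python
import math
from statistics import mean, variance

def rating_discrepancy_significant(overlap_ratings, other_ratings,
                                   z_crit=1.96):
    """Welch-style test statistic comparing the mean rating of
    overlapping-history reviewers against everyone else.

    Returns True only when the overlapping group's average is
    significantly *higher*, i.e. the discrepancy is unlikely to be
    random chance at the chosen threshold (assumed 1.96 ~ 95%).
    """
    n1, n2 = len(overlap_ratings), len(other_ratings)
    m1, m2 = mean(overlap_ratings), mean(other_ratings)
    v1, v2 = variance(overlap_ratings), variance(other_ratings)

    # Standard error of the difference in means (Welch form:
    # accounts for each group's own variance and sample size).
    se = math.sqrt(v1 / n1 + v2 / n2)
    z = (m1 - m2) / se
    return z > z_crit
```

With a large, consistent gap (say, overlapping reviewers averaging 4.75 stars against 3.25 for everyone else over dozens of reviews) the statistic clears the threshold, while two small groups with identical means does not.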