ReviewMeta Analysis Test: Reviewer Participation

April 27th, 2016

At ReviewMeta, we look for patterns beyond the reviews themselves by examining the data we gather about the reviewers who wrote them. Our Reviewer Participation test examines the history of all the reviewers of a given product and can help identify unnatural patterns. A reviewer’s participation is simply the number of reviews they have written. While we can’t conclude much from an individual participation number, we can start to identify patterns when looking at the participation of all reviewers of a specific product.

Here is how it works:

First, we place every review of a given product into a participation group. For example, a review written by someone with 2 reviews will fall into the “2 Review” group, a review written by someone with 14 reviews would fall into the “11-20 Reviews” group, and a review by someone with 4,000 reviews (yes, these reviewers do exist) would fall into the “51+ Reviews” group. This allows us to see the distribution of participation groups for the product.
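To make that concrete, here’s a minimal sketch of what the bucketing can look like. Only a few of the group labels are named in the examples above, so the other cutoffs and labels below are purely illustrative and don’t necessarily match the groups we use in our reports:

```python
def participation_group(review_count):
    """Map a reviewer's total review count to a participation group label.

    The boundaries here are illustrative; only the "11-20 Reviews" and
    "51+ Reviews" groups are named in the examples above.
    """
    if review_count <= 5:
        return f"{review_count} Review" + ("" if review_count == 1 else "s")
    if review_count <= 10:
        return "6-10 Reviews"
    if review_count <= 20:
        return "11-20 Reviews"
    if review_count <= 50:
        return "21-50 Reviews"
    return "51+ Reviews"

print(participation_group(2))     # 2 Reviews
print(participation_group(14))    # 11-20 Reviews
print(participation_group(4000))  # 51+ Reviews
```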

A product’s participation group distribution doesn’t tell us a whole lot on its own. Without being able to compare the product’s distribution to an expected distribution, we can’t say which groups are suspicious; there is no participation group that is always suspicious or unnatural. To find our expected participation distribution, we pull data about the participation groups for every review in the category. We then compare the participation distribution of the product with our expected distribution and identify any groups that have a higher concentration than what we’d expect to see.
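As a simplified sketch of that comparison, the check below flags any group whose share of the product’s reviews is well above its share across the category. The 1.5x threshold here is just for illustration, not an actual cutoff from our reports:

```python
from collections import Counter

def overrepresented_groups(product_groups, category_groups, threshold=1.5):
    """Flag participation groups with a much higher concentration in the
    product's reviews than in the category as a whole.

    product_groups / category_groups: one group label per review.
    threshold: illustrative overrepresentation ratio, not a real cutoff.
    """
    product_dist = Counter(product_groups)
    category_dist = Counter(category_groups)
    n_product = sum(product_dist.values())
    n_category = sum(category_dist.values())

    flagged = {}
    for group, count in product_dist.items():
        observed = count / n_product                   # share of the product's reviews
        expected = category_dist[group] / n_category   # share across the category
        if expected > 0 and observed / expected >= threshold:
            flagged[group] = {"observed": observed, "expected": expected}
    return flagged
```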

There are several reasons we might see a participation distribution that differs from what we’d expect, but all of them indicate that there may be unnatural factors at play.

  • A brand might have offered an incentive for customers or fans to review their product. This might prompt people who don’t normally write reviews to write one, causing a spike in the lower participation levels.
  • A brand might be using a third party service to find reviewers they can send their product to for free in exchange for a review. This would result in a disproportionate number of reviewers having a high level of participation.
  • A brand might be creating sockpuppet accounts that all review the same products, creating a spike in that participation group.
  • A brand might be using a third party service to completely manufacture reviews, creating a spike in a specific participation group, depending on the strategy the third party service uses to create those reviews. Oftentimes you’ll see a spike in accounts at the 51+ review level, but other times they will be more cautious and only review up to a certain number of products per account to try to fly under the radar.

If we find any overrepresented participation groups, we’ll list each one in the report, and add them up to see what percent of the product’s reviews are in these participation groups. While it isn’t uncommon to see a small percentage of reviews in overrepresented participation groups, an excessively high number can trigger a warning or failure of this test. Furthermore, if the average rating from the reviews in overrepresented participation groups is higher than the average rating from all other reviews, we will check to see if this discrepancy is statistically significant.  This means that we run the data through an equation that takes into account the total number of reviews along with the variance of the individual ratings and tells us if the discrepancy is more than just the result of random chance. (You can read more about our statistical significance tests here).  If reviews in overrepresented participation groups have a statistically significantly higher average rating than all other reviews, it’s a strong indicator that these reviewers aren’t evaluating the product from a neutral mindset, and are unfairly inflating the overall product rating.
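For the curious, here’s roughly what a check like that can look like, using a Welch-style comparison of the two groups’ average ratings. This is a simplified stand-in for illustration, not the exact equation we run:

```python
from math import sqrt
from statistics import mean, variance

def discrepancy_is_significant(flagged_ratings, other_ratings, z_cutoff=1.96):
    """Return True if the flagged reviews' average rating is significantly
    higher than the rest, using a rough Welch-style z-test.

    Illustrative stand-in: the 1.96 cutoff (~95% confidence) and the test
    itself are simplifications, not our production equation.
    """
    n1, n2 = len(flagged_ratings), len(other_ratings)
    if n1 < 2 or n2 < 2:
        return False  # not enough data to say anything meaningful
    m1, m2 = mean(flagged_ratings), mean(other_ratings)
    # Standard error of the difference, accounting for each group's variance.
    se = sqrt(variance(flagged_ratings) / n1 + variance(other_ratings) / n2)
    if se == 0:
        return m1 > m2
    z = (m1 - m2) / se
    return z > z_cutoff  # one-sided: flagged ratings are significantly higher
```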

Keep in mind that an individual reviewer’s participation isn’t what we’re looking at here. A reviewer with 40 reviews isn’t necessarily any more trustworthy than a reviewer with 4 reviews. However, if every reviewer of a product has 40 reviews, that’s much more suspicious than a product with an even distribution of reviewer participation.