ReviewMeta Analysis Test: Sample Reviews

April 28th, 2016

At ReviewMeta we have developed a simple way to estimate which reviews are not actually reviewing the full product, but rather a sample of it. You might have noticed that we use this test in place of the Incentivized Review analysis we conduct for Amazon reviews. The reason we don’t check for incentivized reviews here is that incentivized reviewing is not nearly as big an industry on this site as it is on Amazon: only 0.05% of the reviews contained language that would indicate an incentivized review.

If a review contains the word “sample,” we flag it as a sample review. Although this can pick up false positives, such as “I sampled this product before I bought it,” in general it is an accurate way to determine which reviews are actually based on samples.
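The check described above amounts to a simple substring match. Here is a minimal sketch (the function name `is_sample_review` is hypothetical; ReviewMeta’s exact matching rules are not public):

```python
def is_sample_review(text: str) -> bool:
    """Flag a review that mentions the word "sample".

    A hypothetical sketch of the check described above. Because this is a
    plain substring match, it also catches variants like "sampled" -- which
    is why false positives such as "I sampled this product before I bought
    it" get picked up.
    """
    return "sample" in text.lower()
```

A real implementation would likely layer additional heuristics on top, but even this bare check illustrates why the test is both effective and prone to the occasional false positive.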

Sample reviews are problematic for a number of reasons. First, and most obvious, they are not reviewing the full product. This is a fundamental and important distinction. You wouldn’t trust a review of a movie from someone who only saw the trailer, so why would you trust a review of a supplement from someone who has only taken 1 of its 32 servings? The reviewer cannot possibly know how the product will perform in the long run. Maybe the first time they use that new pre-workout they feel a rush of clean energy, but by the tenth time that energy has turned into insomnia and less-than-pleasant digestive side effects.

Another reason sample reviews are untrustworthy is how the reviewers receive the samples. Sure, some reviewers will purchase a sample of a product on their own, or receive free samples with their order, but it is also very common for brands to send samples to reviewers with suggestions, or sometimes explicit instructions, to write a review for the product. This ends up being a quid pro quo scenario: you scratch my back (give me a free sample), and I’ll scratch yours (give you a good review). This creates an obvious bias. The reviewer isn’t giving an open and honest review out of interest in helping their fellow consumers; they are doing it because they got something for free and either want more free stuff or feel an obligation to write a review. The result is low-effort, low-quality, biased reviews that aren’t actually reviewing the product they claim to, but rather a sample of it.

While it is somewhat normal to see some reviews mentioning the word “sample”, an excessive amount can trigger a warning or failure for this test.  Furthermore, if the average rating from sample reviews is higher than the average rating from all other reviews, we’ll check to see if this discrepancy is statistically significant.  This means that we run the data through an equation that takes into account the total number of reviews along with the variance of the individual ratings and tells us if the discrepancy is more than just the result of random chance.  (You can read more about our statistical significance tests here).  If the sample reviews have a significantly higher average rating than all other reviews, it’s a strong indicator that reviewers who are reviewing a sample aren’t evaluating the product from a neutral mindset, and are unfairly inflating the overall product rating.
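The significance check described above can be sketched with Welch’s t-test, one standard way to compare two group means while accounting for group sizes and rating variances. This is an illustration only: the exact equation ReviewMeta uses isn’t specified here, and the function names and the 1.96 cutoff (a rough large-sample threshold for 95% confidence) are assumptions.

```python
from math import sqrt
from statistics import mean, variance

def welch_t(sample_ratings, other_ratings):
    """Welch's t statistic for the difference in mean ratings.

    Illustrative sketch: accounts for both group sizes and the
    variance of the individual ratings, as the article describes.
    """
    n1, n2 = len(sample_ratings), len(other_ratings)
    m1, m2 = mean(sample_ratings), mean(other_ratings)
    v1, v2 = variance(sample_ratings), variance(other_ratings)
    standard_error = sqrt(v1 / n1 + v2 / n2)
    return (m1 - m2) / standard_error

def significantly_inflated(sample_ratings, other_ratings, cutoff=1.96):
    """True when sample reviews rate significantly higher than the rest.

    The 1.96 cutoff is a rough large-sample threshold for 95%
    confidence, chosen here for illustration.
    """
    return welch_t(sample_ratings, other_ratings) > cutoff
```

With a cluster of 5-star sample reviews against mostly 3-star regular reviews, the statistic climbs well past the cutoff; when the two groups rate the product identically, it stays near zero and no warning is triggered.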

It’s understandable why brands send samples out to consumers to try to get reviews. They know it’s a great way to build up their reputation, and they get to build a list of reviewers who love their products so they can send them more samples in the future. However, they often don’t realize that they are abusing the review platform, which was designed to let customers share honest feedback. By pumping it full of low-quality, biased reviews, they are turning the review platform into their personal marketing platform.
