Using AI to Research Products for You

Reviewr Team

Product Research is a Mess

I'm sure you've been there. You're looking for a product. You start reading the product reviews and realize that you want to compare products, read other opinions, and check other sources. You go back to the search engine, search for “reviews for XYZ,” and you start scrolling and scrolling and scrolling.

You’re confused. You don’t want to rely solely on the reviews on Amazon, and you don’t want to take the word of a random blogger or an anonymous Reddit user at face value.

Why isn’t there something like Rotten Tomatoes but for everything else that offers an aggregate view of product reviews?

We asked ourselves that question, and we’re working to fix it.

To further understand the current problems with product research, we've spoken to hundreds of consumers and experts.

The biggest frustrations with online product research we’ve found are:

  • It's time-consuming
  • Sponsored content / Ads
  • SEO / YouTube spam
  • Fake reviews
  • Inconsistent metrics and info across sites

On average, people visit five websites for product research before purchasing and spend many hours or even days before they finally make the purchase.

90% of consumers check user reviews before making a purchase, making reviews the ultimate gatekeeper to a sale.


The Solution We’re Building

reviewr.ai aggregates online product reviews and provides AI-powered analysis. It's like having a team of people researching products for you.

Our mission is to make product research less complicated, more transparent, and less time-consuming for end-consumers.

Using technologies like GPT-3, BERT, and AutoML, we’ve built a service that collects product reviews from all over the web, filters and aggregates them, and presents results in simple and understandable ways.

Our core features are:

  • Automated review collection
    • Consumer (Amazon, Walmart, etc.)
    • Article (Wirecutter, etc.)
    • Video (YouTube, etc.)
    • Community (Reddit, etc.)
  • Review relevance/helpfulness rating
  • Fake detection
  • Sentiment analysis
  • Keyword extraction
  • Short summaries
Explore what we can do on our demo site: baqpa.com

Our Step-by-Step Process

reviewr.ai Software Architecture

Collecting Reviews

We collect reviews from across the internet to provide a democratic, unified voice for both consumers and experts.

We have custom scrapers for the big marketplaces like Amazon, Walmart, and Target, and for e-commerce sites that use the most popular integrated review systems, such as Yotpo, Stamped, PowerReviews, and Bazaarvoice.

In addition, we’re scraping search engine results to collect expert reviews from blog posts, articles, and YouTube.

We’re also working on collecting more unstructured review data from sources like Reddit. The challenge here is understanding the context of that unstructured content: is it actually a review of the product, or something else entirely (e.g., a post promoting a sale)?
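To make the collection step concrete, here is a minimal sketch of what a single source-specific collector can look like. The URL, CSS selectors, and field names below are illustrative assumptions and do not correspond to any particular marketplace or review widget.

```python
# Minimal sketch of a single-source review collector.
# The CSS selectors and field names are illustrative assumptions only;
# they do not correspond to any specific marketplace or review widget.
import requests
from bs4 import BeautifulSoup


def collect_reviews(product_url: str) -> list[dict]:
    """Fetch a product page and extract its embedded reviews into a common schema."""
    html = requests.get(product_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")

    reviews = []
    for node in soup.select("div.review"):  # hypothetical review container
        reviews.append({
            "source": product_url,
            "rating": node.select_one(".review-rating").get_text(strip=True),
            "title": node.select_one(".review-title").get_text(strip=True),
            "body": node.select_one(".review-body").get_text(strip=True),
        })
    return reviews
```

In a setup like this, each collector would normalize its source into the same schema, so the downstream rating, sentiment, and summarization steps don’t need to care where a review came from.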

Relevance/Helpfulness Rating

We trained an AutoML model on the helpful votes of over 200,000 Amazon reviews. It can now rate every new review for relevance and helpfulness.
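The AutoML pipeline itself isn’t shown here; the sketch below is a simplified stand-in using scikit-learn, assuming the training target is the share of helpful votes a review received, rescaled to a 0-10 score.

```python
# Simplified stand-in for the helpfulness model (the real system uses AutoML,
# which is not shown here). Assumes the training target is the share of
# helpful votes each historical review received.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Toy training data: (review text, helpful_votes / total_votes).
train_texts = [
    "Detailed comparison of straps, zippers, and laptop fit after six months of travel ...",
    "Great!",
    "Too small for my 16-inch laptop, returned it after a week.",
]
train_helpfulness = [0.92, 0.05, 0.61]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge())
model.fit(train_texts, train_helpfulness)


def helpfulness_score(review_text: str) -> float:
    """Rate a new review on a 0-10 helpfulness scale."""
    pred = float(model.predict([review_text])[0])
    return round(min(max(pred, 0.0), 1.0) * 10, 1)
```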

Here are two examples of the helpfulness rating in practice on baqpa.com. You can see the expert review from packhacker.com has a rating of very helpful, while the short consumer review on tortugabackpacks.com has a rating of very unhelpful.

Helpful review (10/10) on baqpa.com

Unhelpful review (1.5/10) on baqpa.com

Fake Detection

Fake reviews are a big challenge that even big platforms like Amazon don’t have under control.

We’re running statistical analysis to spot discrepancies and contradictions among reviews and filter out fake reviews. In addition, we’re adding weight to reviews from trustworthy sources like Reddit and Wirecutter.

By calculating the similarity between reviews or even sentences, we’re able to cross-reference reviews across multiple reviews and sources.
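As a rough illustration of this cross-referencing (the actual similarity model is not described here), the sketch below compares sentences from different sources with TF-IDF cosine similarity and flags claims that are corroborated by an independent source.

```python
# Illustration of cross-referencing review sentences across sources.
# TF-IDF cosine similarity is used here for simplicity; the production
# similarity model is not described in the post.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

sentences = [
    ("amazon",     "The laptop compartment fits a 15-inch MacBook easily."),
    ("reddit",     "Fits my 15 inch MacBook Pro with room to spare."),
    ("brand-site", "Best backpack ever, five stars!!!"),
]

sims = cosine_similarity(TfidfVectorizer().fit_transform([s for _, s in sentences]))

# A claim backed by a similar sentence from a *different* source gets more weight;
# the 0.3 threshold is purely illustrative.
for i, (src_i, text_i) in enumerate(sentences):
    corroborated = any(
        sims[i, j] > 0.3 and src_j != src_i
        for j, (src_j, _) in enumerate(sentences)
        if j != i
    )
    print(f"[{src_i}] corroborated={corroborated}: {text_i}")
```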

Demo of cross-referencing reviews for SaaS products

Sentiment Analysis

In our opinion, meaningless and subjective star ratings and scores should be avoided. How often do you see someone rave about a product and then give it three stars, or trash a product and then give it a perfect score? Instead, we look at what reviews actually say, score them via sentiment analysis, and divide them into three groups: positive, neutral, and negative.
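The post doesn’t name the sentiment model, so the sketch below assumes an off-the-shelf Hugging Face sentiment pipeline, with a confidence cutoff carving out the neutral bucket.

```python
# Sketch of bucketing reviews into positive / neutral / negative.
# Using an off-the-shelf Hugging Face pipeline is an assumption; the post
# does not name the sentiment model actually in production.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")


def sentiment_bucket(review_text: str) -> str:
    result = sentiment(review_text[:512])[0]  # e.g. {"label": "NEGATIVE", "score": 0.98}
    if result["score"] < 0.75:                # low confidence: treat as neutral
        return "neutral"
    return "positive" if result["label"] == "POSITIVE" else "negative"


print(sentiment_bucket("The straps dig into my shoulders after an hour."))
```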

Our sentiment-driven score on baqpa.com

On baqpa.com, we score each product with our proprietary InternetScore: our intelligent scoring system that accounts for multiple factors in every review, including sentiment, helpfulness, and more.
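The exact InternetScore formula is proprietary; a hypothetical weighted combination along these lines shows the general idea of letting helpful reviews count for more.

```python
# Hypothetical illustration of a sentiment-driven product score.
# The real InternetScore formula is proprietary and not published.
def internet_score(reviews: list[dict]) -> float:
    """reviews: [{"sentiment": -1 | 0 | 1, "helpfulness": 0..10}, ...]"""
    weighted_sum, total_weight = 0.0, 0.0
    for review in reviews:
        weight = 1.0 + review["helpfulness"] / 10  # helpful reviews count more
        weighted_sum += weight * review["sentiment"]
        total_weight += weight
    if not total_weight:
        return 0.0
    # Map the average sentiment from [-1, 1] onto a 0-10 scale.
    return round((weighted_sum / total_weight + 1) / 2 * 10, 1)
```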

Keyword Extraction

We’re using a BERT model to extract the keywords for each review and sentence. Then, in combination with the sentiment analysis, we can assign a score to each keyword.
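The post only says “a BERT model”; the sketch below uses KeyBERT as a stand-in for BERT-based keyword extraction, with the per-keyword sentiment assigned in a separate step.

```python
# Sketch of BERT-based keyword extraction. Using KeyBERT here is an
# assumption; the post only says "a BERT model".
from keybert import KeyBERT

kw_model = KeyBERT()

review = (
    "The laptop compartment is well padded, but the zippers feel flimsy "
    "and started to fray after two weeks."
)

# Returns (keyword, relevance) pairs, e.g. [("laptop compartment", 0.62), ...]
keywords = kw_model.extract_keywords(
    review,
    keyphrase_ngram_range=(1, 2),
    stop_words="english",
    top_n=5,
)
print(keywords)
```

Each extracted keyword can then inherit the sentiment of the sentence it was pulled from, which is what produces the per-keyword positive/neutral/negative breakdown shown below.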

Highlighted text in a review related to a keyword on baqpa.com

Keywords and respective sentiment (negative, neutral, and positive) on baqpa.com

Summarization

To summarize all the reviews, we’re using GPT-3, a new generation of natural language model. What makes it so powerful is that it can generate text that reads as if a human wrote it. We instruct GPT-3 to create concise summaries of the most helpful reviews for each product.
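As a sketch of this step, here is how a batch of helpful reviews could be sent to GPT-3 through the OpenAI completions API of that model generation; the prompt wording and model name are illustrative, not the actual production setup.

```python
# Sketch of summarizing the most helpful reviews with GPT-3 via the
# OpenAI completions API (as offered for GPT-3-generation models).
# The prompt wording and model name are illustrative examples only.
import openai

openai.api_key = "sk-..."  # placeholder


def summarize_reviews(helpful_reviews: list[str]) -> str:
    prompt = (
        "Summarize the following product reviews in three concise sentences, "
        "focusing on the points reviewers agree on:\n\n"
        + "\n---\n".join(helpful_reviews)
        + "\n\nSummary:"
    )
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=120,
        temperature=0.3,
    )
    return response["choices"][0]["text"].strip()
```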

AI-written review summary on baqpa.com

Some issues we’ve encountered so far include contradictory results when summarizing pros and cons. It’s not uncommon for some people to describe a feature as a positive while other people describe that same feature as a negative. For now, we are sticking to just the positive results.

We plan to go a step further to provide better GPT-3-powered analysis by summarizing text associated with each keyword, which will result in something like this:

Demo of keyword summaries - not yet released on baqpa.com

There are cases (like brand-owned e-commerce sites) where negative reviews are filtered out to paint a more favorable picture, which skews the results. We are therefore working on a process to fairly weigh reviews from e-commerce sites against those from marketplaces, blogs, and other sources.
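One simple way to express that kind of weighting is a per-source-type multiplier applied before aggregation; the numbers below are purely illustrative assumptions, not the weights being developed.

```python
# Illustrative per-source-type weighting applied before aggregation.
# These numbers are assumptions for demonstration, not the actual weights.
SOURCE_WEIGHTS = {
    "marketplace": 1.0,  # Amazon, Walmart, Target, ...
    "community":   1.2,  # Reddit and similar discussions
    "editorial":   1.2,  # Wirecutter-style expert articles
    "brand_site":  0.6,  # brand-owned shops that may filter out negatives
}


def weighted_sentiment(review: dict) -> float:
    """review: {"sentiment": -1 | 0 | 1, "source_type": "marketplace" | ...}"""
    return SOURCE_WEIGHTS.get(review["source_type"], 1.0) * review["sentiment"]
```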

What’s next?

Apart from improving our core features, we are working on a self-service portal where users can upload a list of products that automatically get processed within a day. Users can then conveniently access the data through an API or dashboard.

As we’re getting more and more customers, we’re working hard on the scalability and reliability of our data processing pipeline.

We will continue to work with the community to build reviewr.ai. We would love to hear your experiences, thoughts, and ideas!

Feel free to email us at hello@reviewr.ai or tweet us at @reviewr_ai.