
Predicting answers to product questions using similar products


Community product question answering (PQA) is a popular and important feature on e-commerce websites. Customers ask, for example, “Will these jeans shrink after a wash?” or “Can I put this dish in the microwave?”, and other customers answer based on their experiences.

Many e-commerce sites also offer automatic question-answering tools, which attempt to address questions immediately, while customers are waiting for responses from the community. These tools typically work by retrieving potential answers from archives of previously resolved questions. However, in many cases, relevant Q&A records cannot be found, because the product is new or rare, or it simply did not get enough attention from the community. 

In a paper accepted at this year’s meeting of the North American Chapter of the Association for Computational Linguistics (NAACL), Ohad Rozen, David Carmel, Vitaly Mirkis, Yftah Ziser, and I present a novel approach to predicting the answers to such unanswered questions by identifying similar questions asked about similar products. Ohad is a graduate student at Bar-Ilan University in Israel, and the research was done as part of his internship with our team during the summer of 2020.


In an experiment using yes-no questions from multiple product categories in Amazon’s PQA dataset, our method predicts the correct yes-no answer with an accuracy ranging from 75% to 80%, depending on the number of similar products available for comparison.

A distinctive property of our approach is that, in addition to the predicted answer, we also return the similar questions and answers on which it was based. These give customers some context and rationale for the model’s predictions, helping them form their own opinions even when the predictions are inaccurate. Hopefully, by consulting these resources, customers can infer the correct answers themselves.

Answer prediction method

Our method operates in four steps:

Step 1: Given a new question about a specific product, our algorithm retrieves a few hundred records of questions asked about other products. Each record includes both a question-answer pair and data about the relevant product. As the retrieval is performed over a large corpus, we estimate similarity using nearest-neighbor search over pretrained embeddings, a fast but approximate method.
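
For illustration, here is a minimal Python sketch of this retrieval step. The encoder name, the toy records, and the exact dot-product search are assumptions made for the sake of a runnable example; the system in the paper searches a far larger corpus with a fast approximate nearest-neighbor index.

import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder pretrained encoder

# Toy archive of resolved records: (question, yes/no answer, product id).
records = [
    ("Are these jeans stretchy?", "yes", "B001"),
    ("Do these pants shrink in the wash?", "no", "B002"),
    ("Is this dish microwave safe?", "yes", "B003"),
]

corpus_emb = encoder.encode([q for q, _, _ in records], normalize_embeddings=True)

def retrieve(new_question, k=2):
    # On unit-normalized embeddings, cosine similarity is a dot product.
    q_emb = encoder.encode([new_question], normalize_embeddings=True)[0]
    scores = corpus_emb @ q_emb
    top = np.argsort(-scores)[:k]
    return [(records[i], float(scores[i])) for i in top]

candidates = retrieve("Will these jeans stretch out?")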

Step 2: Using a question-to-question similarity model — a much more accurate but slower neural model, trained on similar-question pairs — we recompute the semantic similarity between the questions in the retrieved records and the new question. Only records with high semantic similarity are retained.
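
Continuing the sketch, an off-the-shelf cross-encoder can stand in for our question-to-question model (which, per the paper, was trained on similar-question pairs); the model name, the toy candidates, and the 0.7 cutoff below are illustrative assumptions.

from sentence_transformers import CrossEncoder

q2q_model = CrossEncoder("cross-encoder/stsb-roberta-base")  # placeholder model

# Candidates in the format produced by the step 1 sketch:
# ((question, answer, product_id), retrieval_score).
candidates = [
    (("Are these jeans stretchy?", "yes", "B001"), 0.83),
    (("Do these pants shrink in the wash?", "no", "B002"), 0.41),
]

def filter_candidates(new_question, candidates, threshold=0.7):
    pairs = [(new_question, rec[0]) for rec, _ in candidates]
    # One full transformer pass per pair: slower than embedding search,
    # but much more accurate.
    scores = q2q_model.predict(pairs)
    return [(rec, float(s))
            for (rec, _), s in zip(candidates, scores) if s >= threshold]

retained = filter_candidates("Will these jeans stretch out?", candidates)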

Step 3: Each of the retained records, together with the record for the current question, passes to a novel contextual-product-similarity (CPS) model that we developed. The CPS model estimates the similarity between two products in the context of a specific question. For example, two pairs of jeans may be considered similar in the context of the question “Can I put them in the dryer?” but not in the context of the question “Are these stretchy?” The CPS model estimates the similarity between the current product and each of the products retained in step 2, in the context of the current question.
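
The sketch below shows one plausible interface for such a model; the BERT-style pair encoding is an assumption rather than the paper's exact architecture, and the classification head is randomly initialized until it is trained on the CPS dataset described in the next section.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
cps_model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # labels: not similar / similar in context

def cps_score(question, product_a_text, product_b_text):
    # Encode the question together with both products' descriptions.
    inputs = tokenizer(question,
                       product_a_text + " [SEP] " + product_b_text,
                       return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = cps_model(**inputs).logits
    # Probability that the two products are similar in this question's context.
    return torch.softmax(logits, dim=-1)[0, 1].item()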

Step 4: Finally, we use the CPS model’s scores to weight the retained answers as voters in a mixture-of-experts model, predicting the answer to the customer’s question.
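
A minimal sketch of that weighted vote, reusing the hypothetical cps_score function from the step 3 sketch (the record layout and the tie-breaking rule are assumptions):

def predict_answer(question, product_text, retained, cps_score):
    # retained: ((rec_question, rec_answer, rec_product_text), q2q_score) tuples
    # cps_score: the contextual-product-similarity function from the step 3 sketch
    yes_weight = no_weight = 0.0
    for (_, rec_answer, rec_product_text), _ in retained:
        weight = cps_score(question, product_text, rec_product_text)  # expert weight
        if rec_answer == "yes":
            yes_weight += weight
        else:
            no_weight += weight
    return "yes" if yes_weight >= no_weight else "no"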

Contextual product similarity

The CPS model is based on a scalable and unsupervised training procedure that we developed, which is performed offline and leverages a large corpus of resolved product questions — in particular, yes-no questions, or questions that can be answered by a simple yes or no. 

The key element of the training procedure is locating pairs of products from the same product category — e.g., pairs of jeans — where both products also have highly similar yes-no questions — e.g., “Are these stretchy?” and “Do they stretch?”

To measure the similarity between questions, we use the same question-to-question model that filters candidate questions at inference time (step 2, above). Each product pair is labeled as similar or not similar, in the context of the question, according to the agreement or disagreement of the yes-no answers. Through this automatic and unsupervised procedure, we produce a large-scale labeled dataset, which we then use to train the CPS model.
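
The following sketch illustrates the labeling procedure, assuming records with product_id, category, question, and a yes/no answer; the all-pairs loop is purely illustrative, since a production pipeline over a large corpus would first narrow candidates by category and question retrieval.

from itertools import combinations

def build_cps_training_set(records, q2q_score, sim_threshold=0.9):
    # records: dicts with keys product_id, category, question, answer ("yes"/"no")
    # q2q_score: the question-to-question similarity model from step 2
    examples = []
    for a, b in combinations(records, 2):
        if a["category"] != b["category"] or a["product_id"] == b["product_id"]:
            continue
        if q2q_score(a["question"], b["question"]) < sim_threshold:
            continue  # keep only near-paraphrase question pairs
        # Agreeing answers suggest the products behave alike in this context.
        label = 1 if a["answer"] == b["answer"] else 0
        examples.append({"question": a["question"],
                         "product_a": a["product_id"],
                         "product_b": b["product_id"],
                         "label": label})
    return examples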


We continue to work on improving our model, but even without perfect accuracy, we believe customers will already find it beneficial. Returning the related Q&As that we use for answer prediction lets customers decide for themselves how reliable the predictions are. Customers’ interactions with these Q&As, in an AI-assisted approach, can feed continuous-learning methods that help improve our predictions further.


