Multilingual shopping systems

Amazon’s online shopping experience is available in many languages and many countries. But regardless of language or location, many customers are looking for the same products.

Recent AI research has shown that frequently, a machine learning model trained on multiple data sets to perform multiple tasks will, on any one of those tasks, outperform a single-data-set model dedicated to just that task. In a multitask model, functions that overlap across tasks tend to reinforce each other, without seeming to impair the performance of functions specialized to individual tasks.

My colleagues and I hypothesized that a multitask shopping model, trained on data from several different languages at once, would be able to deliver better results to customers using any of those languages. It might, for instance, reduce the likelihood that the Italian query “scarpe ragazzo” — boys’ shoes — would return a listing for a women’s heeled dress sandal.

We suspected that a data set in one language might be able to fill gaps or dispel ambiguities in a data set in another language. For instance, phrases that are easily confused in one language might look nothing alike in another, so multilingual training could help sharpen distinctions between queries. Similarly, while a monolingual model might struggle with queries that are rare in its training data, a multilingual model could benefit from related queries in other languages.

In a paper we’re presenting in February at the ACM Conference on Web Search and Data Mining (WSDM), we investigated the application of multitask training to the problem of multilingual product search. We found that multilingual models consistently outperformed monolingual models and that the more languages they incorporated, the greater their margin of improvement.

For instance, according to F1 score, a standard performance measure in machine learning that factors in both false-positive and false-negative rates, a multilingual model trained on both French and German outperformed a monolingual French model by 11% and a monolingual German model by 5%. But a model trained on five languages (including French and German) outperformed the French model by 24% and the German model by 19%.
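For reference, F1 is the harmonic mean of precision and recall, so it penalizes both false positives and false negatives. A minimal sketch of the computation follows; the match labels and predictions in the example are hypothetical placeholders, not data from our experiments.

```python
def f1_score(y_true, y_pred):
    """F1 = harmonic mean of precision and recall (binary labels)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Hypothetical match labels (1 = product matches query) and model predictions.
print(f1_score([1, 0, 1, 1, 0], [1, 0, 0, 1, 1]))  # 0.666...
```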

Sharing space

An essential feature of our model is that it maps queries relating to the same product into the same region of a representational space, regardless of language of origin, and it does the same with product descriptions. So, for instance, the queries “school shoes boys” and “scarpe ragazzo” end up near each other in one region of the space, and the product names “Kickers Kick Lo Vel Kids’ School Shoes – Black” and “Kickers Kick Lo Infants Bambino Scarpe Nero” end up near each other in a different region. Using a single representational space, regardless of language, helps the model generalize what it learns in one language to other languages.

These images depict embeddings — representations in a geometric space — of queries and product descriptions in Italian and English. At left are the embeddings that result from separate training of four monolingual models; queries (orange) and product descriptions (blue) in Italian and in English (green and yellow) cluster in four distinct regions of the space. At right are the embeddings that result from simultaneously training a multitask model on English and Italian data. Queries cluster together by topic, irrespective of language of origin, as do product descriptions.

Credit: Aman Ahuja

Our model takes two inputs, a query and a product title, and outputs a single bit, indicating whether the product matches the query or not. The inputs are encoded, that is, transformed into fixed-length vector representations, which serve as the inputs to a separate classifier module. The classifier outputs a decision about the query-product match. In our case, we have two encoders for each input language, one for products and one for queries, but a single shared classifier.

Each encoder uses the transformer neural-network architecture, which scales better than alternatives such as long short-term memory (LSTM) networks. The first layer of the classifier uses the Hadamard product to combine query and product encodings, and the joint encoding passes to a standard feed-forward neural network, whose output is the match assessment.
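To make the structure concrete, here is a rough PyTorch sketch. Everything below (class names, hyperparameters, the mean pooling used to produce a fixed-length vector) is an illustrative assumption rather than the implementation from the paper; it only mirrors what is described above: per-language transformer encoders for queries and products, and a single shared classifier that combines the two encodings with a Hadamard product followed by a feed-forward network.

```python
import torch
import torch.nn as nn

class FixedLengthTransformerEncoder(nn.Module):
    """Sketch of a per-language encoder: token embeddings run through a small
    transformer stack and are mean-pooled into one fixed-length vector."""

    def __init__(self, vocab_size, d_model=128, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model, padding_idx=0)
        layer = nn.TransformerEncoderLayer(d_model, nhead, dim_feedforward=256,
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers)

    def forward(self, tokens):                      # tokens: (batch, seq_len) ids
        mask = tokens.eq(0)                         # True where padding
        h = self.transformer(self.embed(tokens), src_key_padding_mask=mask)
        h = h.masked_fill(mask.unsqueeze(-1), 0.0)
        lengths = (~mask).sum(dim=1, keepdim=True).clamp(min=1)
        return h.sum(dim=1) / lengths               # (batch, d_model)

class MatchClassifier(nn.Module):
    """Shared classifier: Hadamard (element-wise) product of the two
    encodings, then a small feed-forward network."""

    def __init__(self, d_model=128):
        super().__init__()
        self.ffn = nn.Sequential(nn.Linear(d_model, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, query_vec, product_vec):
        joint = query_vec * product_vec             # Hadamard product
        return torch.sigmoid(self.ffn(joint)).squeeze(-1)  # P(match)

class MultilingualMatcher(nn.Module):
    """One query encoder and one product encoder per language; a single
    classifier shared across all languages."""

    def __init__(self, languages, vocab_size=10000, d_model=128):
        super().__init__()
        self.query_encoders = nn.ModuleDict(
            {lang: FixedLengthTransformerEncoder(vocab_size, d_model) for lang in languages})
        self.product_encoders = nn.ModuleDict(
            {lang: FixedLengthTransformerEncoder(vocab_size, d_model) for lang in languages})
        self.classifier = MatchClassifier(d_model)

    def forward(self, lang, query_tokens, product_tokens):
        q = self.query_encoders[lang](query_tokens)
        p = self.product_encoders[lang](product_tokens)
        return self.classifier(q, p)

# Toy usage with random token ids standing in for tokenized text.
model = MultilingualMatcher(["en", "it"])
queries = torch.randint(1, 10000, (2, 6))
products = torch.randint(1, 10000, (2, 12))
print(model("it", queries, products).shape)          # torch.Size([2])
```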

We begin training our model by picking one of its input languages at random and training it to classify query-product pairs in just that language. This initializes the classifier’s settings to values that should be useful for query-product matching.
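A minimal sketch of that warm start, assuming the MultilingualMatcher sketched above and hypothetical per-language data loaders:

```python
import random
import torch
import torch.nn as nn

def warm_start(model, per_language_loaders, epochs=1, lr=1e-3):
    """Sketch: pick one input language at random and train the full model on
    that language's annotated query-product pairs, so the shared classifier
    starts from values useful for query-product matching.
    `per_language_loaders` maps language code -> iterable of
    (query_tokens, product_tokens, label) batches (hypothetical format)."""
    lang = random.choice(list(per_language_loaders))
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    bce = nn.BCELoss()
    for _ in range(epochs):
        for query_tokens, product_tokens, labels in per_language_loaders[lang]:
            optimizer.zero_grad()
            loss = bce(model(lang, query_tokens, product_tokens), labels.float())
            loss.backward()
            optimizer.step()
    return lang
```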

Dual objectives

Training proceeds through a series of epochs. In each epoch, we train the model end to end — encoders and classifier — on annotated sample queries in each of its input languages. But each epoch also includes an output alignment phase, to ensure that the outputs of encoders tailored to different languages share a representational space.

For that phase, we construct two sets of cross-lingual mappings for every language pair in our training data. One mapping is for products, and the other is for queries. Product mappings simply correlate the titles of products cross-listed in both languages. Query mappings align queries in different languages that resulted in the purchase of the same cross-listed product.
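One way that mapping construction might look in code, assuming hypothetical tables of cross-listed products and purchase-attributed queries:

```python
from collections import defaultdict

def build_alignment_pairs(cross_listings, purchases, lang_a, lang_b):
    """Sketch of the two cross-lingual mappings for one language pair.
    `cross_listings` maps product id -> {language: title} for products listed
    in several marketplaces; `purchases` maps (language, query) -> set of
    purchased product ids. Both structures are hypothetical."""
    product_pairs = []   # (title in lang_a, title in lang_b)
    query_pairs = []     # (query in lang_a, query in lang_b)

    # Product mapping: titles of the same product cross-listed in both languages.
    for product_id, titles in cross_listings.items():
        if lang_a in titles and lang_b in titles:
            product_pairs.append((titles[lang_a], titles[lang_b]))

    # Query mapping: queries in different languages that led to the purchase
    # of the same cross-listed product.
    queries_by_product = defaultdict(lambda: defaultdict(set))
    for (lang, query), product_ids in purchases.items():
        for product_id in product_ids:
            queries_by_product[product_id][lang].add(query)
    for product_id, by_lang in queries_by_product.items():
        if product_id not in cross_listings:
            continue
        for qa in by_lang.get(lang_a, ()):
            for qb in by_lang.get(lang_b, ()):
                query_pairs.append((qa, qb))

    return product_pairs, query_pairs
```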

During the output alignment phase, we train the encoders to minimize the distance in the representational space between their respective encodings of titles and queries.
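A sketch of one alignment update, assuming the per-language encoders above and using mean squared distance between encodings as the alignment objective (the exact distance measure in the paper may differ):

```python
import torch

def alignment_step(encoder_a, encoder_b, tokens_a, tokens_b, optimizer):
    """Sketch: pull the two encoders' outputs for aligned titles (or aligned
    queries) toward each other by minimizing their mean squared distance.
    `tokens_a` / `tokens_b` are batches of aligned, tokenized text in the two
    languages (hypothetical preprocessing)."""
    optimizer.zero_grad()
    vec_a = encoder_a(tokens_a)                 # (batch, d_model)
    vec_b = encoder_b(tokens_b)                 # (batch, d_model)
    loss = ((vec_a - vec_b) ** 2).sum(dim=1).mean()
    loss.backward()
    optimizer.step()
    return loss.item()
```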

A bilingual example of our architecture. During training, query alignment and product alignment phases alternate with end-to-end training on the classification task.

Credit: Aman Ahuja

Alternating between end-to-end training and encoder alignment ensures that the network doesn’t prioritize one objective at the expense of the other. In fact, our hypothesis was that the objectives should reinforce each other, as they enable the network to better generalize its training results across languages.
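Putting the pieces together, the overall schedule might look roughly like the following, again assuming the components sketched above and hypothetical batch formats:

```python
import torch
import torch.nn as nn
from itertools import combinations

def train(model, per_language_loaders, alignment_loaders, num_epochs=20, lr=1e-3):
    """Sketch of the alternating schedule: each epoch runs end-to-end
    classification training in every language, then an output-alignment phase
    over every language pair. `alignment_loaders[(a, b)]` yields
    (query_tokens_a, query_tokens_b, product_tokens_a, product_tokens_b)
    batches of aligned data (hypothetical format)."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    bce = nn.BCELoss()
    languages = list(per_language_loaders)

    for epoch in range(num_epochs):
        # 1) End-to-end training on the classification task, per language.
        for lang in languages:
            for query_tokens, product_tokens, labels in per_language_loaders[lang]:
                optimizer.zero_grad()
                loss = bce(model(lang, query_tokens, product_tokens), labels.float())
                loss.backward()
                optimizer.step()

        # 2) Output-alignment phase over every language pair.
        for lang_a, lang_b in combinations(languages, 2):
            for q_a, q_b, p_a, p_b in alignment_loaders[(lang_a, lang_b)]:
                optimizer.zero_grad()
                q_loss = ((model.query_encoders[lang_a](q_a) -
                           model.query_encoders[lang_b](q_b)) ** 2).sum(dim=1).mean()
                p_loss = ((model.product_encoders[lang_a](p_a) -
                           model.product_encoders[lang_b](p_b)) ** 2).sum(dim=1).mean()
                (q_loss + p_loss).backward()
                optimizer.step()
```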

In principle, training could proceed for any number of epochs, but we found that we got strong results after as few as 15 or 20 epochs. In our experiments, we trained 10 different bilingual models (five languages, each of which was paired with the other four), 10 different trilingual models, and one pentalingual model.

There were a few exceptions, but for the most part, adding languages to the model improved its performance on any one language, and the pentalingual model outperformed all the monolingual models, sometimes quite dramatically. Our results suggest that multilingual models should deliver more consistently satisfying shopping results to our customers. In ongoing work, we are continuing to explore the power of multitask learning to improve our customers’ shopping experiences.


