Improving cross-lingual transfer learning by filtering training data

In the past year, we’ve published several papers demonstrating that a natural-language-understanding model trained in one language, then re-trained in another, can outperform a model trained from scratch in the second language.

This type of cross-lingual transfer learning can make it easier to bootstrap a model in a language for which training data is scarce, by taking advantage of more abundant data in a source language. But sometimes the data in the source language is so abundant that using all of it to train a transfer model would be impractically time consuming.

Moreover, linguistic differences between source and target languages mean that pruning the training data in the source language, so that its statistical patterns better match those of the target language, can actually improve the performance of the transferred model.

In a paper we’re presenting at this year’s Conference on Empirical Methods in Natural Language Processing, we describe experiments with a new data selection technique that let us halve the amount of training data required in the source language, while actually improving a transfer model’s performance in a target language.

For evaluation purposes, we used two techniques to cut the source-language data set in half: one was our data selection technique, and the other was random sampling. We then pre-trained separate models on the two halved data sets and on the full data set and fine-tuned the models on a small data set in the target language. The model trained using our data selection technique outperformed not only the random-sampling model but the model trained on the full data set as well.

All of our models were trained simultaneously to recognize intents, or the actions that a speaker wants performed, and to fill slots, or the variables on which the intent is to act. So, for instance, the utterance “play ‘Talk’ by Khalid” has the intent PlayMusic, and the names “Talk” and “Khalid” fill the SongName and ArtistName slots.
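For concreteness, the sketch below (in Python, with illustrative field names rather than our actual data format) shows how such an intent- and slot-annotated utterance might be represented:

```python
# Illustrative representation of an annotated utterance; the field names
# are hypothetical, not the format of the training data described here.
annotated_utterance = {
    "text": "play 'Talk' by Khalid",
    "intent": "PlayMusic",
    "slots": [
        {"value": "Talk", "slot": "SongName"},
        {"value": "Khalid", "slot": "ArtistName"},
    ],
}
```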

The architecture of our model, which was trained simultaneously on intent classification and slot filling.
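As a rough illustration of what joint training means (not the exact architecture shown in the figure), the sketch below uses PyTorch to define a shared encoder whose outputs feed two heads: an utterance-level intent classifier and a per-token slot tagger. The layer types and sizes are assumptions made for the example.

```python
import torch
import torch.nn as nn

class JointIntentSlotModel(nn.Module):
    """Illustrative joint model: one shared encoder, two output heads."""

    def __init__(self, embed_dim, hidden_dim, num_intents, num_slot_labels):
        super().__init__()
        # Shared bidirectional LSTM encoder over pre-computed input embeddings.
        self.encoder = nn.LSTM(embed_dim, hidden_dim,
                               bidirectional=True, batch_first=True)
        self.intent_head = nn.Linear(2 * hidden_dim, num_intents)    # one label per utterance
        self.slot_head = nn.Linear(2 * hidden_dim, num_slot_labels)  # one label per token

    def forward(self, embeddings):            # embeddings: (batch, seq_len, embed_dim)
        hidden, _ = self.encoder(embeddings)  # (batch, seq_len, 2 * hidden_dim)
        intent_logits = self.intent_head(hidden.mean(dim=1))  # pool over tokens
        slot_logits = self.slot_head(hidden)                   # per-token predictions
        return intent_logits, slot_logits
```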

To increase the accuracy of our transferred models, we train them using multilingual embeddings as inputs. An embedding maps a word (or sequence of words) to a single point in a multidimensional space, such that words with similar meanings tend to cluster together. (The notion of similarity is usually based on words’ co-occurrence with other words in large text corpora.)
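As a small illustration, with a hypothetical hand-made embedding table standing in for vectors learned from large corpora, words with similar meanings end up close together, which we can measure with cosine similarity:

```python
import numpy as np

# Hypothetical pre-trained embedding table: word -> vector.
# In practice these vectors come from training on large text corpora.
embeddings = {
    "song":    np.array([0.81, 0.10, 0.35]),
    "track":   np.array([0.78, 0.14, 0.31]),
    "weather": np.array([0.05, 0.92, 0.40]),
}

def cosine_similarity(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Words with similar meanings (and similar co-occurrence patterns) cluster together.
print(cosine_similarity(embeddings["song"], embeddings["track"]))    # high
print(cosine_similarity(embeddings["song"], embeddings["weather"]))  # low
```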

A multilingual embedding maps words from different languages into the same space. This should make cross-lingual transfer learning more efficient, as even before transfer, the model is in some sense tuned to the meanings of words in the target language.

We combine the multilingual embedding of each input word with a character-level embedding, which groups words in the embedding space according to character-level similarities, rather than word co-occurrence. Character embeddings are helpful in handling unfamiliar words, as they encode information about words’ prefixes, suffixes, and roots.
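A minimal sketch of this combination, assuming the word-level and character-level vectors for a token have already been computed, is simply to concatenate them; the dimensions below are illustrative:

```python
import numpy as np

def combine_embeddings(word_vec, char_vec):
    """Concatenate a word-level embedding with a character-level embedding.

    Both vectors are assumed to be pre-computed; the character-level vector
    could come, for example, from a character-level network run over the
    word's spelling.
    """
    return np.concatenate([word_vec, char_vec])

# Illustrative dimensions only: a 300-d multilingual word vector plus a
# 50-d character-derived vector gives a 350-d input per token.
word_vec = np.zeros(300)
char_vec = np.zeros(50)
print(combine_embeddings(word_vec, char_vec).shape)  # (350,)
```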

To select source-language data to train our transfer model, we rely on language models, which are trained on large text corpora and compute the probability of any given string of words. Most language models are n-gram models, meaning they calculate probabilities for n words at a time, where n is usually somewhere from two to five.
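For example, a bigram model scores a word sequence as the product of each word's probability given the preceding word. The sketch below estimates those probabilities from counts, with add-alpha smoothing for unseen pairs; the corpus and vocabulary size are placeholders:

```python
from collections import Counter, defaultdict
import math

def train_bigram_lm(sentences):
    """Estimate bigram counts for P(w_i | w_{i-1}) from tokenized sentences."""
    bigram_counts = defaultdict(Counter)
    for sentence in sentences:
        tokens = ["<s>"] + sentence + ["</s>"]
        for prev, curr in zip(tokens, tokens[1:]):
            bigram_counts[prev][curr] += 1
    return bigram_counts

def log_prob(sentence, bigram_counts, alpha=1.0, vocab_size=10000):
    """Log-probability of a sentence under the bigram model, with add-alpha smoothing."""
    tokens = ["<s>"] + sentence + ["</s>"]
    total = 0.0
    for prev, curr in zip(tokens, tokens[1:]):
        counts = bigram_counts[prev]
        total += math.log((counts[curr] + alpha) /
                          (sum(counts.values()) + alpha * vocab_size))
    return total

corpus = [["play", "some", "music"], ["play", "a", "song"]]
lm = train_bigram_lm(corpus)
print(log_prob(["play", "a", "song"], lm))
```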

Using a bilingual dictionary, our system first translates each utterance in the source data set into a string of words in the target language. Then we apply four language models to the resulting strings: a word-level bigram (2-gram) model, a character-level bigram model, a word-level trigram (3-gram) model, and a character-level trigram model.
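A rough sketch of this step, assuming the bilingual dictionary is a Python dict and the four already-trained language models expose a hypothetical score() method, might look like this:

```python
def translate(utterance_tokens, bilingual_dictionary):
    """Word-by-word translation into the target language via a bilingual dictionary.

    Words missing from the dictionary are kept as-is; a real system would need
    a more careful fallback, but this is enough to illustrate the idea.
    """
    return [bilingual_dictionary.get(token, token) for token in utterance_tokens]

def score_utterance(utterance_tokens, bilingual_dictionary, language_models):
    """Sum the (log-)probabilities assigned by the four language models:
    word-level and character-level bigram and trigram models.

    `language_models` is a list of objects with a hypothetical .score() method.
    """
    translated = translate(utterance_tokens, bilingual_dictionary)
    return sum(lm.score(translated) for lm in language_models)
```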

For each utterance in the training set, the sum of the probabilities computed by the four language models yields a score, which we normalize against the range of scores for the associated intent. Then we select only the utterances with the highest normalized scores.
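The sketch below illustrates that selection step, assuming each source utterance arrives with its intent label and its summed language-model score; the keep_fraction parameter is a stand-in for the halving described above:

```python
from collections import defaultdict

def select_utterances(scored_utterances, keep_fraction=0.5):
    """Normalize each raw score against the score range of its intent,
    then keep the utterances with the highest normalized scores.

    `scored_utterances` is a list of (utterance, intent, raw_score) tuples;
    the raw score is the sum of the four language-model scores for the utterance.
    """
    # Gather the score range for each intent.
    scores_by_intent = defaultdict(list)
    for _, intent, raw_score in scored_utterances:
        scores_by_intent[intent].append(raw_score)
    bounds = {intent: (min(s), max(s)) for intent, s in scores_by_intent.items()}

    # Normalize each score against its intent's range.
    normalized = []
    for utterance, intent, raw_score in scored_utterances:
        lo, hi = bounds[intent]
        span = (hi - lo) or 1.0  # avoid division by zero when all scores are equal
        normalized.append((utterance, (raw_score - lo) / span))

    # Keep only the highest-scoring utterances (e.g., half of them).
    normalized.sort(key=lambda pair: pair[1], reverse=True)
    keep = int(len(normalized) * keep_fraction)
    return [utterance for utterance, _ in normalized[:keep]]
```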

We evaluated this approach in four different sets of experiments. In each experiment, we did cross-lingual transfer learning using our three source-language data sets — the full set and the two halved sets. As a baseline, we also trained a model from scratch in the target language.

In two of the experiments, we transferred a model from English to German, with different amounts of training data in the target language (10,000 and 20,000 utterances, respectively — versus millions of utterances in the full source-language data set). In the other two, we trained the transfer model on three different languages — English, German, and Spanish — and transferred it to French, again with 10,000 and 20,000 utterances in the target language.

Across the board, all three transferred models outperformed the model trained only on data in the target language, and the transfer model trained using our data selection technique outperformed the other two. For the slot-filling task, we measured performance using F1 score, the harmonic mean of precision and recall, which penalizes both false positives and false negatives. For intent classification, we used classification accuracy.
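For reference, the F1 score can be computed from counts of correct, spurious, and missed slot predictions, as in this minimal sketch:

```python
def f1_score(true_positives, false_positives, false_negatives):
    """F1 is the harmonic mean of precision and recall over predicted slots."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return 2 * precision * recall / (precision + recall)

# Example: 90 correctly predicted slots, 10 spurious, 20 missed.
print(f1_score(90, 10, 20))  # about 0.857
```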

Relative to the model trained on the target language alone, the model trained using our data selection technique showed improvements of about 3% to 5% on the slot-filling task and about 1% to 2% on intent classification. Relative to the random-selection transfer model, our model’s gains were around 1% to 2% on slot-filling and 1% or less on intent classification.


