Cross-lingual transfer learning for bootstrapping AI systems reduces new-language data requirements

Transfer learning is the technique of adapting a machine learning model trained on abundant data to a new context in which training data is sparse.

On the Alexa team, we’ve explored transfer learning as a way to bootstrap new functions and to add new classification categories to existing machine learning systems. But in a paper we’re presenting at this year’s International Conference on Acoustics, Speech, and Signal Processing, we report using cross-lingual transfer learning (a sub-category of transfer learning) to bring existing functions to a new language. Alexa is currently available in English, German, Japanese, French, Spanish, Italian, and an additional six variants of those languages.

In our experiments, we found that cross-lingual transfer learning can reduce the data requirements for bootstrapping a spoken-language-understanding (SLU) system in a new language by 50%.

Playing a crucial role in spoken-dialogue systems, SLU typically involves two subtasks: intent classification and slot tagging. In SLU, an intent is the task that a user wants performed; slots indicate the data items on which the intent is supposed to act. For instance, if an Alexa customer says, “Alexa, play ‘High Hopes’ by Panic! at the Disco,” the intent is PlayMusic, and “High Hopes” and “Panic! at the Disco” fill the SongName and ArtistName slots, respectively.
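
To make the labeling scheme concrete, here is a minimal sketch of how such an utterance might be annotated. The BIO-style slot tags and label names are illustrative assumptions, not the exact inventory used in Alexa’s SLU systems.

```python
# Illustrative only: one way the example utterance might be annotated for SLU.
utterance = "play High Hopes by Panic! at the Disco"

annotation = {
    "intent": "PlayMusic",
    "tokens": ["play", "High", "Hopes", "by", "Panic!", "at", "the", "Disco"],
    "slots":  ["O", "B-SongName", "I-SongName", "O",
               "B-ArtistName", "I-ArtistName", "I-ArtistName", "I-ArtistName"],
}

# Each token gets exactly one slot tag, so slot tagging is a sequence-labeling
# task, while intent classification assigns a single label to the whole utterance.
assert len(annotation["tokens"]) == len(annotation["slots"])
```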

SLU systems frequently have separately trained intent and slot classifiers, but training a single network on both tasks should improve performance: learned knowledge that yields accurate slot filling is likely to aid intent classification, and vice versa.
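
As an illustration of the joint approach, here is a minimal PyTorch sketch of a shared encoder feeding two heads, one for intents and one for slots, trained with a summed loss. The layer sizes, label counts, and pooling choice are assumptions for the sketch, not the settings from the paper.

```python
import torch
import torch.nn as nn

class JointSLUModel(nn.Module):
    """Sketch of joint intent classification and slot tagging.

    A shared encoder feeds two heads: a per-token slot tagger and an
    utterance-level intent classifier. Sizes are illustrative.
    """
    def __init__(self, vocab_size, embed_dim=100, hidden_dim=128,
                 num_intents=20, num_slot_tags=80):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                               bidirectional=True)
        self.slot_head = nn.Linear(2 * hidden_dim, num_slot_tags)
        self.intent_head = nn.Linear(2 * hidden_dim, num_intents)

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)            # (batch, seq, embed)
        outputs, _ = self.encoder(embedded)             # (batch, seq, 2*hidden)
        slot_logits = self.slot_head(outputs)           # per-token slot scores
        intent_logits = self.intent_head(outputs.mean(dim=1))  # pooled utterance
        return intent_logits, slot_logits

# Training minimizes the sum of the two losses, so the shared encoder is
# pushed to learn features that are useful for both tasks.
def joint_loss(intent_logits, slot_logits, intent_labels, slot_labels):
    intent_loss = nn.functional.cross_entropy(intent_logits, intent_labels)
    slot_loss = nn.functional.cross_entropy(
        slot_logits.reshape(-1, slot_logits.size(-1)), slot_labels.reshape(-1))
    return intent_loss + slot_loss
```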

We explored six different machine learning architectures for doing joint intent and slot classification. To enable comparison with existing systems, we trained them using ATIS, a benchmark data set of English-language SLU examples. All of our models outperformed their predecessors on at least one of the two tasks, and three of them outperformed their predecessors on both.

In most of today’s SLU systems, inputs, whether words or strings of words, are represented using word embeddings. A word embedding is a vector — a series of coordinates — of fixed length. Each vector corresponds to a point in a multi-dimensional space, and embedding networks are trained so that words or groups of words with similar meanings will cluster near each other in the vector space.

With our models, we experimented with both word embeddings and character embeddings, which cluster words according not only to their meanings but also to the meanings of their component parts. So, for instance, character embeddings might group the words “asteroid” and “disaster” near each other, since they share the Greek root astēr, meaning star.
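
One common way to combine the two kinds of embedding is to run each word’s characters through a small recurrent encoder and concatenate the result with the word vector. The sketch below assumes that design; the paper’s exact character encoder may differ.

```python
import torch
import torch.nn as nn

class WordCharEmbedder(nn.Module):
    """Sketch of combining word-level and character-level embeddings.

    The character LSTM here is one common way to build subword-aware
    representations; dimensions are illustrative.
    """
    def __init__(self, word_vocab, char_vocab, word_dim=100, char_dim=25,
                 char_hidden=25):
        super().__init__()
        self.word_embedding = nn.Embedding(word_vocab, word_dim)
        self.char_embedding = nn.Embedding(char_vocab, char_dim)
        self.char_lstm = nn.LSTM(char_dim, char_hidden, batch_first=True,
                                 bidirectional=True)

    def forward(self, word_ids, char_ids):
        # word_ids: (batch, seq); char_ids: (batch, seq, max_word_len)
        word_vecs = self.word_embedding(word_ids)
        batch, seq, max_len = char_ids.shape
        chars = self.char_embedding(char_ids.view(batch * seq, max_len))
        _, (h_n, _) = self.char_lstm(chars)          # final states, one per word
        char_vecs = h_n.transpose(0, 1).reshape(batch, seq, -1)
        # Concatenate so each token carries both its meaning and subword cues.
        return torch.cat([word_vecs, char_vecs], dim=-1)
```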

We also explored different ways of representing the linguistic contexts of inputs to our model. A network trained to recognize objects can just take in an image and spit out the corresponding label. But a network trying to determine the meaning of a word needs to know what other words preceded it.

We considered four different types of context-modeling networks. Two were varieties of recurrent neural networks, called gated-recurrent-unit (GRU) networks and highway long-short-term-memory (LSTM) networks. Recurrent neural networks process sequenced inputs in order, and each output factors in those that preceded it.
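
The sketch below shows a bidirectional GRU encoder of this kind, followed by a generic highway layer, the gating mechanism that highway LSTMs combine with recurrence. It is an illustration of the idea under assumed dimensions, not the exact architecture from the paper.

```python
import torch
import torch.nn as nn

class HighwayLayer(nn.Module):
    """A standard highway layer: a learned gate mixes a transformed input with
    the input itself, which eases optimization of deeper stacks. Highway LSTMs
    pair connections of this kind with recurrent layers; this is a generic
    sketch, not the paper's exact formulation."""
    def __init__(self, dim):
        super().__init__()
        self.transform = nn.Linear(dim, dim)
        self.gate = nn.Linear(dim, dim)

    def forward(self, x):
        t = torch.sigmoid(self.gate(x))        # how much of x to transform
        h = torch.relu(self.transform(x))      # candidate transformation
        return t * h + (1.0 - t) * x           # carry the rest through unchanged

class GRUContextEncoder(nn.Module):
    """Bidirectional GRU over token embeddings: the forward pass folds in the
    tokens that precede each position, the backward pass those that follow."""
    def __init__(self, input_dim, hidden_dim=128):
        super().__init__()
        self.gru = nn.GRU(input_dim, hidden_dim, batch_first=True,
                          bidirectional=True)
        self.highway = HighwayLayer(2 * hidden_dim)

    def forward(self, token_embeddings):
        contextual, _ = self.gru(token_embeddings)   # (batch, seq, 2*hidden)
        return self.highway(contextual)
```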

The other networks used two different types of attention mechanisms, a multihead attention mechanism and a bidirectional attention mechanism. For each word in an input utterance, the attention mechanism determines which other words of the utterance are useful for interpreting it.
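
For the attention-based variants, PyTorch’s built-in multihead attention layer can serve as a self-attention context encoder: each token attends over all the tokens in the utterance, and the attention weights indicate which ones it relied on. The head count and dimensions below are illustrative, not the paper’s settings.

```python
import torch
import torch.nn as nn

class SelfAttentionContextEncoder(nn.Module):
    """Sketch of a multihead self-attention context encoder: every token
    attends to every other token in the utterance, and the attention weights
    decide which neighbors matter for interpreting it."""
    def __init__(self, embed_dim=128, num_heads=4):
        super().__init__()
        self.attention = nn.MultiheadAttention(embed_dim, num_heads,
                                               batch_first=True)

    def forward(self, token_embeddings, padding_mask=None):
        # Queries, keys, and values all come from the same sequence
        # (self-attention), so the output has the same shape as the input.
        contextual, weights = self.attention(
            token_embeddings, token_embeddings, token_embeddings,
            key_padding_mask=padding_mask)
        return contextual, weights   # weights show which tokens each token used
```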

Figure: Our modular architecture, with a range of possible input embeddings and context-modeling (sequential token modeling) networks. Here, we’re applying it to a flight-booking task.

With most of the networks, we represented inputs using both word embeddings and character embeddings, but with the highway LSTM network, we tested word and character embeddings separately and together. That gave us a total of six different network configurations.

Overall, the highway LSTM model using both embeddings was the top performer. It had the highest score on the slot-filling task, and it was within 0.2% of the accuracy of the top-performing intent classifier.

Next, we explored leveraging data from a source language to improve the performance of the SLU system in a target language. We first pre-trained the SLU model on the source data set, then fine-tuned it on the target data set. We evaluated this method in both small-scale and large-scale settings on the best-performing architecture (highway LSTM), using German and English as the target and source languages, respectively.
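
The recipe is simple in outline: train on the source-language data, then continue training the same weights on the target-language data, typically with a gentler learning rate. The sketch below assumes generic data loaders and a loss helper (`loss_fn` is a placeholder); the epoch counts and learning rates are illustrative, not the paper’s settings.

```python
import torch

def pretrain_then_finetune(model, english_loader, german_loader, loss_fn,
                           pretrain_epochs=5, finetune_epochs=10):
    """Sketch of cross-lingual transfer: pre-train on the high-resource source
    language, then fine-tune the same parameters on the low-resource target
    language. loss_fn(model, batch) is a placeholder helper."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    # Phase 1: learn general intent/slot structure from abundant English data.
    for _ in range(pretrain_epochs):
        for batch in english_loader:
            optimizer.zero_grad()
            loss = loss_fn(model, batch)
            loss.backward()
            optimizer.step()

    # Phase 2: adapt to German, with a smaller learning rate so the pre-trained
    # knowledge is refined rather than overwritten.
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    for _ in range(finetune_epochs):
        for batch in german_loader:
            optimizer.zero_grad()
            loss = loss_fn(model, batch)
            loss.backward()
            optimizer.step()
    return model
```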

Like most machine learning data sets, each of our experimental data sets is divided into three parts: a training set, which is used to train a machine learning model; a development set, which is used to fine-tune the “hyperparameters” of the model (such as the number of nodes in a network layer or the learning rate of the learning algorithm); and a test set, which is used to evaluate the fine-tuned model.
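
As a toy illustration of how the three splits divide a corpus (the real data sets, described below, are drawn from ATIS and from Alexa traffic and are far larger):

```python
import random

# Toy illustration only: split a pool of utterances into the three sets.
utterances = [f"utterance {i}" for i in range(1000)]
random.seed(0)
random.shuffle(utterances)

train_set = utterances[:800]      # used to fit model parameters
dev_set = utterances[800:900]     # used to choose hyperparameters
test_set = utterances[900:]       # held out for the final evaluation

# The splits are disjoint, so test performance reflects generalization.
assert len(set(train_set) | set(dev_set) | set(test_set)) == len(utterances)
```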

For the small-scale experiment, we created a bilingual version of ATIS by translating the test set, 463 random utterances from the training set, and 144 random utterances from the development set into German. For the large-scale experiment, we created a training set from one million training-data utterances from an English Alexa SLU system, plus random samples of 10,000 and 20,000 utterances from a German Alexa SLU system. The development set consisted of 2,000 utterances from the German system.

We believe that this is the first time that cross-lingual transfer learning has been used to carry a joint intent-slot classifier over to a new language.

In all of the transfer learning experiments, we used bilingual input embeddings, which are trained to group semantically similar words from both languages in the same region of the embedding space.
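
One standard way to build such bilingual embeddings is to learn an orthogonal map that rotates one language’s pre-trained vectors into the other’s space, using a small translation dictionary (Procrustes alignment). The sketch below shows that approach with random stand-in vectors; it is a common alignment technique, not necessarily the procedure used in the paper.

```python
import numpy as np

def align_embeddings(source_vecs, target_vecs):
    """Procrustes-style alignment: find the orthogonal matrix W that maps
    source-language vectors onto their translations in the target space.

    source_vecs, target_vecs: (n_pairs, dim) arrays of embeddings for word
    pairs in a bilingual dictionary (e.g., German "Musik" paired with "music").
    """
    # The SVD of the cross-covariance matrix gives the optimal rotation.
    u, _, vt = np.linalg.svd(target_vecs.T @ source_vecs)
    return u @ vt   # project a German vector v into English space as W @ v

# Toy usage with random stand-ins for real pre-trained embeddings.
rng = np.random.default_rng(0)
german = rng.normal(size=(500, 300))
english = rng.normal(size=(500, 300))
w = align_embeddings(german, english)
aligned_german = german @ w.T   # now directly comparable with English vectors
```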

In our experiments, we found that a transferred model whose source data was the million English utterances and whose target data was the 10,000 German utterances classified intents more accurately than a monolingual model trained on 20,000 German utterances. With both the 10,000- and 20,000-utterance German data sets, the transferred model exhibited a 4% improvement in slot classification score versus a monolingual model trained on only the German utterances.

Although the highway LSTM model was the top-performing model on the English-language test set, that doesn’t guarantee that it will yield the best transfer learning results. In ongoing work, we’re transferring the other models to the German-language context, too.

Acknowledgments: Jutta Romberg, Robin Linnemann


