Developing a new natural-language-understanding (NLU) system usually requires training it on thousands of sample utterances, which can be costly and time-consuming to collect and annotate. That’s particularly burdensome for small developers, like many who have contributed to the library of more than 70,000 third-party skills now available for Alexa.
One way to make training more efficient is transfer learning, in which a neural network trained on huge collections of previously annotated data is then retrained on the comparatively sparse data in a new area. Last year, my colleagues and I showed that, for low volumes of training data, transfer learning could reduce the error rate of NLU systems by an average of 14%.
This year, at the 33rd conference of the Association for the Advancement of Artificial Intelligence (AAAI), we will present a method for reducing the error rate by an additional 8% — again, for low volumes of training data — by leveraging millions of unannotated interactions with Alexa.
Using those interactions, we train a neural network to produce “embeddings”, which represent words as points in a high-dimensional space, such that words with similar functions are grouped together. In our earlier paper, we used off-the-shelf embeddings; replacing them with these new embeddings but following the same basic transfer-learning procedure reduced error rates on two NLU tasks.
Our embeddings are based on an embedding scheme called ELMo, or Embeddings from Language Models. But we simplify the network that produces the embeddings, speeding it up by 60%, which makes it efficient enough for deployment in a real-time system like Alexa. We call our embedding ELMoL, for ELMo Light.
Embeddings typically group words together on the basis of their co-occurrence with other words. The more co-occurring words two words have in common, the closer they are in the embedding space. Embeddings thus capture information about words’ semantic similarities without requiring human annotation of training data.
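As a toy illustration (the vectors below are invented for the example, not real embeddings), proximity in an embedding space is usually measured with cosine similarity: words that occur in similar contexts end up with vectors that point in similar directions.

```python
import numpy as np

# Made-up 4-dimensional "embeddings" for illustration only; real embeddings
# are learned from co-occurrence statistics and have hundreds of dimensions.
embeddings = {
    "play":   np.array([0.9, 0.1, 0.3, 0.0]),
    "listen": np.array([0.8, 0.2, 0.4, 0.1]),
    "cancel": np.array([0.1, 0.9, 0.0, 0.3]),
}

def cosine(u, v):
    """Cosine similarity: close to 1.0 for similar directions, near 0.0 for unrelated ones."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(embeddings["play"], embeddings["listen"]))  # high: similar contexts
print(cosine(embeddings["play"], embeddings["cancel"]))  # low: dissimilar contexts
```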
Most popular embedding networks are “pretrained” on huge bodies of textual data. The volume of data ensures that the measures of semantic similarity are fairly reliable, and many NLU systems simply use the pretrained embeddings. That’s what we did in our earlier paper, using an embedding scheme called FastText.
We reasoned, however, that requests to Alexa, even across a wide range of tasks, exhibit more linguistic regularities than the more varied texts used to pretrain embeddings. An embedding network trained on those requests might be better able to exploit their regularities. We also knew that we had enough training data to yield reliable embeddings.
ELMo differs from embeddings like FastText in that it is context sensitive: with ELMo, the word “bark”, for instance, should receive different embeddings in the sentences “the dog’s bark is loud” and “the tree’s bark is hard”.
To track context, the ELMo network uses bidirectional long short-term memories, or bi-LSTMs. An LSTM is a network that processes inputs in order, and the output corresponding to a given input factors in previous inputs and outputs. A bidirectional LSTM is one that runs through the data both forward and backward.
ELMo uses a stack of bi-LSTMs, each of which processes the output of the one beneath it. But again, because Alexa transactions display more linguistic uniformity than generic texts, we believed we could extract adequate performance from a single bi-LSTM layer, while gaining significant speedups.
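Below is a minimal PyTorch sketch of an encoder in this spirit: a single bidirectional LSTM layer running over static word vectors and returning a context-dependent vector for every token. It is our illustration of the idea, not the production ELMoL architecture, and the dimensions are arbitrary.

```python
import torch
import torch.nn as nn

class SingleLayerBiLSTMEmbedder(nn.Module):
    """Illustrative contextual embedder: one bi-LSTM layer over static word vectors.
    (A sketch in the spirit of ELMoL, not the actual ELMoL model.)"""

    def __init__(self, vocab_size, word_dim=300, hidden_dim=256):
        super().__init__()
        self.word_embeddings = nn.Embedding(vocab_size, word_dim)
        # A single bidirectional layer; full ELMo stacks several of these.
        self.bilstm = nn.LSTM(word_dim, hidden_dim,
                              num_layers=1, bidirectional=True, batch_first=True)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer word indices
        static = self.word_embeddings(token_ids)   # (batch, seq_len, word_dim)
        contextual, _ = self.bilstm(static)        # (batch, seq_len, 2 * hidden_dim)
        # Each position now mixes information from the words to its left and right,
        # so the same word id can map to different vectors in different sentences.
        return contextual

# Toy usage: two sequences of word ids, with the same word id (7) in different contexts.
embedder = SingleLayerBiLSTMEmbedder(vocab_size=1000)
batch = torch.tensor([[3, 7, 15, 2], [9, 7, 42, 8]])
print(embedder(batch).shape)  # torch.Size([2, 4, 512])
```

Stacking several such layers, as full ELMo does, captures richer context, but at the cost of the extra computation we were trying to avoid.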
In our experiments, we compared NLU networks that used three different embedding schemes: FastText (as in our previous paper), ELMo, and ELMoL. As a baseline, we also compared the networks’ performance with that of a network that used no embedding scheme at all.
With the Fasttext network, the embedding layers were pretrained; with the ELMo and ELMoL networks, we trained the embedding layers on 250 million unannotated requests to Alexa. Once all the embeddings were trained, we used another four million annotated requests to existing Alexa services to train each network on two standard NLU tasks. The first task was intent classification, or determining what action a customer wished Alexa to perform, and the second was slot tagging, or determining what entities the action should apply to.
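To make the two tasks concrete, here is one hypothetical annotated example; the utterance, the intent label, and the slot names are invented for illustration and are not drawn from Alexa data.

```python
# A hypothetical annotated training example (not actual Alexa data).
utterance = ["play", "the", "latest", "album", "by", "miles", "davis"]

# Intent classification: one label for the whole utterance.
intent = "PlayMusic"

# Slot tagging: one label per token, in BIO notation
# (B- begins a slot, I- continues it, O means "outside any slot").
slots = ["O", "O", "B-SortOrder", "B-MusicType", "O", "B-Artist", "I-Artist"]

assert len(utterance) == len(slots)
```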
Initially, we allowed training on the NLU tasks to adjust the settings of the ELMoL embedding layers, too. But we found that this degraded performance: the early stages of training, when the network’s internal settings were swinging wildly, undid much of the prior training that the embedding layers had undergone.
So we adopted a new training strategy, in which the embedding layers started out fixed. Only after the network as a whole began to converge toward a solution did we allow slow modification of the embedding layers’ internal settings.
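In PyTorch, one way to implement that schedule looks roughly like the sketch below. The module names, learning rates, and the simple two-phase split (rather than a real convergence check) are our own illustrative choices.

```python
import torch
import torch.nn as nn

# Illustrative two-part model: an "embedder" standing in for the pretrained
# embedding layers, plus a task-specific classifier head. Names are ours.
model = nn.ModuleDict({
    "embedder": nn.Embedding(1000, 64),
    "head": nn.Linear(64, 10),
})

# Phase 1: keep the pretrained embedding layers fixed; train only the head.
for p in model["embedder"].parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(model["head"].parameters(), lr=1e-3)

# ... train until the network starts to converge, then ...

# Phase 2: unfreeze the embedding layers and fine-tune them slowly, with a
# much smaller learning rate, so updates cannot wipe out what the layers
# learned from the unannotated data.
for p in model["embedder"].parameters():
    p.requires_grad = True
optimizer.add_param_group({"params": model["embedder"].parameters(), "lr": 1e-5})
```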
Once the networks had been trained as general-purpose intent classifiers and slot taggers, we retrained them on limited data to perform new tasks. This was the transfer learning step.
As we expected, the network that used ELMo embeddings performed best, but the ELMoL network was close behind, and both showed significant improvements over the network that used FastText. Those improvements were greatest when the volume of data for the final retraining — the transfer learning step — was small. But that is precisely the context in which transfer learning is most useful.
When the number of training examples ranged from 100 to 500, the error rate improvement over the FastText network hovered around 8%. At those levels, the ELMo network was usually good for another 1-2% reduction, but it was too slow to be practical in a real-time system.
Acknowledgments: Angeliki Metallinou, Aditya Siddhant