Alexa currently has more than 90,000 skills, or abilities contributed by third-party developers — the Uber ride-sharing skill, the Jeopardy! trivia game skill, the Starbucks drink-ordering skill, and so on.
To build a skill, a third-party developer needs to supply written examples of customer requests, such as “Order my usual” or “Get me a latte”, together with the actions those requests should map to. These examples are used to train the machine learning system that will process real requests when the skill goes live.
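As a rough illustration, a skill's sample data might be organized like the sketch below. This is a schematic, not Alexa's actual interaction-model format, and the intent names and slot markers are our own hypothetical choices:

```python
# Schematic sketch of the training data a skill developer supplies.
# Intent names and the {slot} notation here are illustrative only.
coffee_skill_samples = {
    "OrderDrink": [
        "order my usual",
        "get me a {size} latte",       # {size} marks a slot value to be filled in
        "I'd like a {size} {drink}",
    ],
    "CheckOrderStatus": [
        "where is my order",
        "is my drink ready",
    ],
}
```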
Constructing lists of sample requests, however, can be labor intensive, and smaller developers could benefit if, during training, their examples were pooled with those for similar skills. In machine learning, more training data usually leads to better performance, and the examples provided by one developer could plug holes in the list of examples provided by another.
In a paper we presented last week at the annual meeting of the Association for Computational Linguistics, my colleagues and I explore several different techniques for pooling sample requests from different skills when training a natural-language-understanding (NLU) system. We evaluated our techniques using two different public data sets and an internal data set and found that, across the board, training an NLU system simultaneously on multiple skills yielded better results than training it separately for each skill.
The advantage of multitask training is that learning the structure of, say, the request “Order me a cab” could also help an NLU system process the request “Order me a sandwich”. The risk is that too much training data about condiments could interfere with the system’s ability to, say, identify cab destinations.
To ensure that our system benefits from generalizations about common linguistic structures without losing focus on task-specific structures, we force the machine learning systems in our experiments to learn three different representations of all incoming data.
The first is a general representation, which encodes shared information across all tasks. The second is a group-level representation: Each skill’s category is known — for example, the Uber and Lyft skills are in the Travel category, while the CNN and ESPN skills are in the News category. The group-level representations capture commonalities among utterances in a given skill category. Finally, the third representation is task-specific.
The machine learning systems we used were encoder-decoder neural networks, which first learn fixed-size representations (encodings) of input data and then use those as the basis for predictions (decoding). We experimented with four different neural-network architectures. The first was a parallel architecture, meaning that each input utterance passed through the general encoder, a group-level encoder, and a task-specific encoder simultaneously, and the resulting representations were combined before passing to a task-specific decoder.
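A minimal PyTorch sketch of the parallel variant might look like the following. The module names, sizes, and the LSTM choice are our own illustrative assumptions, not the paper's published implementation; the point is the routing of one utterance through three encoders whose outputs are concatenated for a task-specific decoder:

```python
import torch
import torch.nn as nn

class ParallelMultiTaskNLU(nn.Module):
    """Illustrative parallel multitask encoder-decoder (a sketch, not the paper's code)."""

    def __init__(self, vocab_size, emb_dim, hid_dim, tasks, groups, task_to_group, n_intents):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.task_to_group = task_to_group  # e.g. {"uber": "travel", "cnn": "news"}
        self.universe_enc = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.group_enc = nn.ModuleDict(
            {g: nn.LSTM(emb_dim, hid_dim, batch_first=True) for g in groups})
        self.task_enc = nn.ModuleDict(
            {t: nn.LSTM(emb_dim, hid_dim, batch_first=True) for t in tasks})
        # One intent classifier per task, fed the concatenation of all three encodings.
        # (Slot tagging would use the per-token LSTM outputs; omitted here for brevity.)
        self.intent_dec = nn.ModuleDict(
            {t: nn.Linear(3 * hid_dim, n_intents[t]) for t in tasks})

    def forward(self, token_ids, task):
        x = self.embed(token_ids)                           # (batch, seq, emb_dim)
        group = self.task_to_group[task]
        # The "switch": only the encoders on this task's route process the utterance.
        _, (h_u, _) = self.universe_enc(x)
        _, (h_g, _) = self.group_enc[group](x)
        _, (h_t, _) = self.task_enc[task](x)
        h = torch.cat([h_u[-1], h_g[-1], h_t[-1]], dim=-1)  # (batch, 3 * hid_dim)
        return self.intent_dec[task](h)                     # task-specific intent logits
```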
The other three networks were serial, meaning that the outputs of one bank of encoders passed to a second bank before moving on to the decoders. The serial architectures differ in the order in which the shared and task-level encodings take place and in whether the outputs of the first encoder bank are directly available to the decoders.
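For contrast, here is a sketch of one serial arrangement, loosely in the spirit of the Serial+Highway variant described later: the shared encoders run first, the task-specific encoder runs on top of their output, and a highway-style skip connection also passes the shared encodings directly to the decoder. It builds on the parallel sketch above, and the details are again our illustrative assumptions:

```python
class SerialHighwayNLU(ParallelMultiTaskNLU):
    """Illustrative serial variant: shared encoders feed the task encoder, and a
    "highway" skip connection also exposes their output to the decoder directly."""

    def __init__(self, vocab_size, emb_dim, hid_dim, tasks, groups, task_to_group, n_intents):
        super().__init__(vocab_size, emb_dim, hid_dim, tasks, groups, task_to_group, n_intents)
        # Task encoders now read the concatenated shared encodings, not raw embeddings.
        self.task_enc = nn.ModuleDict(
            {t: nn.LSTM(2 * hid_dim, hid_dim, batch_first=True) for t in tasks})

    def forward(self, token_ids, task):
        x = self.embed(token_ids)
        group = self.task_to_group[task]
        out_u, (h_u, _) = self.universe_enc(x)          # first bank: shared encoders
        out_g, (h_g, _) = self.group_enc[group](x)
        shared_seq = torch.cat([out_u, out_g], dim=-1)  # (batch, seq, 2 * hid_dim)
        _, (h_t, _) = self.task_enc[task](shared_seq)   # second bank: task encoder
        # Highway-style skip: first-bank encodings reach the decoder directly.
        h = torch.cat([h_u[-1], h_g[-1], h_t[-1]], dim=-1)
        return self.intent_dec[task](h)
```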
All of these network architectures contain separate encoder modules for individual tasks, groups of tasks, and the “universe” of all tasks. On any given input utterance, a “switch” in the network controls which of the encoders gets to process the utterance. If the user hasn’t mentioned a skill by name, the system determines the intended skill using a predictive model. If, for instance, the utterance is “Get me an Uber to the hotel”, the task-specific Uber encoder, the group-specific Travel skills encoder, and the general universe encoder process it.
During the training phase, the group-specific encoders learn how to best encode utterances characteristic of their groups, and the skill-specific encoders learn how to best encode utterances characteristic of their skills. As a result, the decoders, which always make task-specific predictions, can take advantage of three different representations of the input, ranging from general to specific. If a particular skill does not have sufficient training examples, its task-specific representations may be poor, but the group- and universe-level representations can compensate.
All of the tasks on which we tested our architectures were joint intent classification and slot-filling tasks. “Intents” are the actions that a voice agent is supposed to take. If an Alexa customer says, “Play ‘Overjoyed’ by Stevie Wonder”, the NLU system should label the whole utterance with the intent PlayMusic. Slots are the data items on which the intent acts. Here, “Overjoyed” should receive the slot tag SongName and “Stevie Wonder” the slot tag ArtistName.
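Concretely, a single labeled example for this joint task might look like the following, using the common BIO tagging convention for slots (the exact label strings are illustrative):

```python
# One joint intent-classification and slot-filling training example.
# "B-" marks the first token of a slot, "I-" a continuation, "O" no slot.
example = {
    "tokens": ["play", "overjoyed", "by", "stevie", "wonder"],
    "slots":  ["O", "B-SongName", "O", "B-ArtistName", "I-ArtistName"],
    "intent": "PlayMusic",
}
```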
To ensure that the group-level and universe-level representations remain general — that the universe-level representations don’t get hung up on the mechanics of condiment requests, for instance — we impose two constraints during training. The first is adversarial: the network is rewarded when it accurately classifies slots and intents but penalized when its group- and universe-level encodings make it easy to predict which skill an utterance belongs to. This prevents task-specific features from creeping into the shared representation space.
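The paper does not dictate a single implementation, but one standard way to realize this kind of adversarial objective is a gradient reversal layer: a skill discriminator is trained on the shared encodings, and the sign-flipped gradient pushes the shared encoders to make the discriminator's job harder. A minimal sketch, with illustrative sizes:

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips the gradient's sign in the backward pass."""

    @staticmethod
    def forward(ctx, x, weight):
        ctx.weight = weight
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.weight * grad_output, None

# A discriminator tries to predict the skill from the shared (universe/group) encoding.
num_skills, hid_dim = 90, 128                        # illustrative sizes
skill_discriminator = nn.Linear(hid_dim, num_skills)

def adversarial_loss(shared_encoding, skill_labels, weight=1.0):
    # Reversed gradients reward encodings from which the skill is hard to predict.
    reversed_enc = GradReverse.apply(shared_encoding, weight)
    logits = skill_discriminator(reversed_enc)
    return nn.functional.cross_entropy(logits, skill_labels)
```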
The second constraint is an orthogonality constraint. Because the outputs of the encoders are of fixed length, they can be interpreted as points in a multidimensional space. During training, the system is rewarded if the points produced by the different types of encoders tend to cluster in different regions of the space — that is, if the task-specific encoders and the shared encoders are capturing different information.
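A common way to encourage this separation in prior multitask work (and a reasonable guess at the formulation here, though the paper should be consulted for the exact loss) is to penalize the squared Frobenius norm of the correlation between a batch of shared encodings and the corresponding task-specific encodings; the penalty is zero when the two sets of representations occupy orthogonal subspaces. Continuing the PyTorch sketches above:

```python
def orthogonality_penalty(shared, specific):
    """Squared Frobenius norm of shared^T @ specific; zero when the shared and
    task-specific encoding spaces are orthogonal. Both inputs: (batch, hid_dim)."""
    correlation = shared.transpose(0, 1) @ specific   # (hid_dim, hid_dim)
    return (correlation ** 2).sum()

# Added to the training objective with small weights, e.g.:
# loss = task_loss + a * adversarial_loss(h_shared, skills) \
#        + b * orthogonality_penalty(h_shared, h_task)
```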
We tested our systems on three different data sets and compared their performance to four different single-task baseline systems. On 90 Alexa skills, two of the serial systems (Serial+Highway and Serial+Highway+Swap) yielded significantly better mean intent accuracy and slot F1 (a measure that factors in both false negatives and false positives) than the baseline systems. On any given test, one or another of the multitask systems was consistently the best performer, with improvements of up to 9% over baseline.
Acknowledgments: Shiva Pentyala, Markus Dreyer