
How Voice and Graphics Work Together to Enhance the Alexa Experience


Last week, Amazon announced the release of both a redesigned Echo Show with a bigger screen and the Alexa Presentation Language, which enables third-party developers to build “multimodal” skills that coordinate Alexa’s natural-language-understanding systems with on-screen graphics.

One way that multimodal interaction can improve Alexa customers’ experiences is by helping resolve ambiguous requests. If a customer says, “Alexa, play Harry Potter”, the Echo Show screen could display separate graphics representing a Harry Potter audiobook, a movie, and a soundtrack. If the customer follows up by saying “the last one”, the system must determine whether that means the last item in the on-screen list, the last Harry Potter movie, or something else.

Alexa’s ability to handle these types of interactions derives in part from research that my colleagues and I presented earlier this year at the annual meeting of the Association for the Advancement of Artificial Intelligence. In our paper, we consider three different neural-network designs that treat query resolution as an integrated problem involving both on-screen data and natural-language understanding.

We find that they consistently outperform a natural-language-understanding network that uses hand-coded rules to factor in on-screen data. And on inputs that consist of voice only, their performance is comparable to that of a system trained exclusively on speech inputs. That means that extending the network to consider on-screen data does not degrade accuracy for voice-only inputs.

The other models we investigated are derivatives of the voice-only model, so I’ll describe it first.

All of our networks were trained to classify utterances according to two criteria, intent and slot. An intent is the action that the customer wants Alexa to perform, such as PlayAction<Movie>. Slot values designate the entities on which the intents act, such as ‘Harry Potter’->Movie.name. We have found, empirically, that training a single network to perform both classifications works better than training a separate network for each.
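To make the setup concrete, here is a minimal sketch, in Python, of how a single utterance might be labeled for joint intent and slot classification. The tagging scheme and label names are illustrative assumptions, not the exact annotation format used in the paper.

```python
# Hypothetical annotation for one utterance: a single intent label for the
# whole utterance, plus one slot tag per token (a BIO-style scheme is assumed).
utterance = ["play", "harry", "potter"]
intent = "PlayAction<Movie>"                       # action the customer wants
slot_tags = ["O", "B-Movie.name", "I-Movie.name"]  # entities the intent acts on

# During training, one network predicts both outputs and is optimized on the
# sum of the two losses, e.g. total_loss = intent_loss + slot_loss.
```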

As inputs to the network, we use two different embeddings of each utterance. Embeddings represent words as points in a geometric space, such that strings with similar meanings (or functional roles) are clustered together. Our network learns one embedding from the data on which it is trained, so it is specifically tailored to typical Alexa commands. We also use a standard embedding, based on a much larger corpus of texts, which groups words together according to the words they co-occur with.
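As a rough illustration, the sketch below (PyTorch, with assumed vocabulary and dimension sizes) concatenates a task-specific learned embedding with a frozen pretrained one before the words are passed to the rest of the network.

```python
import torch
import torch.nn as nn

vocab_size, learned_dim, pretrained_dim = 10000, 100, 300

# Embedding learned from the training data (updated during training).
learned_emb = nn.Embedding(vocab_size, learned_dim)

# Pretrained embedding from a much larger general corpus, loaded from a weight
# matrix and kept frozen here; the random weights below are placeholders.
pretrained_weights = torch.randn(vocab_size, pretrained_dim)
pretrained_emb = nn.Embedding.from_pretrained(pretrained_weights, freeze=True)

token_ids = torch.tensor([[12, 457, 983]])  # "play harry potter" (made-up ids)
# Concatenate the two views of each word before feeding the bi-LSTM.
word_vectors = torch.cat([learned_emb(token_ids), pretrained_emb(token_ids)], dim=-1)
print(word_vectors.shape)  # torch.Size([1, 3, 400])
```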

The embeddings pass to a bidirectional long short-term memory network. A long short-term memory (LSTM) network processes inputs in order, and its judgment about any given input reflects its judgments about the preceding inputs. LSTMs are widely used in both speech recognition and natural-language processing because they can use context to resolve ambiguities. A bidirectional LSTM (bi-LSTM) is a pair of LSTMs that process an input utterance both backward and forward.

Intent classification is based on the final outputs of the forward and backward LSTMs, since the networks’ confidence in their intent classifications should increase the more of the utterance they see. Slot classification is based on the total output of the LSTMs, since the relevant slot values can occur anywhere in the utterance.
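A minimal PyTorch sketch of this arrangement, with assumed layer sizes and label counts, might look as follows: the intent head reads the concatenated final forward and backward states, while the slot head classifies every token from its bidirectional context.

```python
import torch
import torch.nn as nn

class JointIntentSlotModel(nn.Module):
    """Sketch of a joint bi-LSTM classifier; sizes and label counts are assumptions."""
    def __init__(self, emb_dim=400, hidden=128, n_intents=50, n_slots=120):
        super().__init__()
        self.bilstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.intent_head = nn.Linear(2 * hidden, n_intents)  # uses final states
        self.slot_head = nn.Linear(2 * hidden, n_slots)      # uses every time step

    def forward(self, word_vectors):
        # outputs: (batch, seq_len, 2*hidden); h_n: (2, batch, hidden)
        outputs, (h_n, _) = self.bilstm(word_vectors)
        # Intent: concatenate the final forward state and the final backward state.
        final_state = torch.cat([h_n[0], h_n[1]], dim=-1)
        intent_logits = self.intent_head(final_state)   # (batch, n_intents)
        # Slots: classify each token, since slot values can occur anywhere.
        slot_logits = self.slot_head(outputs)           # (batch, seq_len, n_slots)
        return intent_logits, slot_logits
```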

Figure: A diagram describing the architectures of all four neural models we evaluated. The baseline system, which doesn't use screen information, received only the (a) inputs. The three multimodal neural systems received, respectively, (a) and (b); (a), (b), and (c); and (a), (b), and (d).

The data on which we trained all our networks was annotated using the Alexa Meaning Representation Language, a formal language that captures more sophisticated relationships between the parts of an input sentence than earlier methods did. A team of Amazon researchers presented a paper describing the language earlier this year at the annual meeting of the North American chapter of the Association for Computational Linguistics.

The other four models we investigated factored in on-screen content in various ways. The first was a benchmark system that modifies the outputs of the voice-only network according to hand-coded rules.

If, for instance, a customer says, “Play Harry Potter,” the voice-only classifier, absent any other information, might estimate a 50% probability that the customer means the audiobook, a 40% probability that she means the movie, and a 10% probability that she means the soundtrack. If, however, the screen is displaying only movies, our rules would boost the probability that the customer wants the movie.

The factors by which our rules increase or decrease probabilities were determined by a “grid search” on a subset of the training data, in which an algorithm automatically swept through a range of possible modifications to find those that yielded the most accurate results.
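A toy version of this reweighting and grid search might look like the following; the intent-type names, boost factor, and accuracy helper are illustrative assumptions, not the production rules.

```python
import numpy as np

def apply_screen_rules(probs, intent_types, on_screen_types, boost):
    """Reweight intent probabilities when the matching data type is on-screen.

    probs: model probabilities per candidate interpretation; boost: the
    multiplicative factor found by grid search on held-out data.
    """
    adjusted = np.array([
        p * boost if t in on_screen_types else p
        for p, t in zip(probs, intent_types)
    ])
    return adjusted / adjusted.sum()  # renormalize to a probability distribution

# Toy grid search over the boost factor on a held-out subset; `dev_accuracy` is
# a hypothetical helper that scores the rules at a given boost value.
# best_boost = max(np.arange(1.0, 3.1, 0.1), key=lambda b: dev_accuracy(boost=b))

probs = [0.5, 0.4, 0.1]                     # audiobook, movie, soundtrack
intent_types = ["Book", "Movie", "MusicAlbum"]
print(apply_screen_rules(probs, intent_types, {"Movie"}, boost=2.0))
# -> the movie's probability rises to roughly 0.57 after reweighting
```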

The first of our experimental neural models takes as input both the embeddings of the customer’s utterances and a vector representing the types of data displayed on-screen, such as Onscreen_Movie or Onscreen_Book. We assume a fixed number of data types, so the input is a “one-hot” vector, with a bit for each type. If data of a particular type is currently displayed on-screen, its bit is set to 1; otherwise, its bit is set to 0.
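A minimal sketch of that indicator vector, with an assumed inventory of screen types:

```python
# The type inventory below is an assumption, not the system's actual list.
SCREEN_TYPES = ["Onscreen_Movie", "Onscreen_Book", "Onscreen_MusicAlbum", "Onscreen_Video"]

def screen_type_vector(displayed_types):
    """One bit per known data type: 1 if that type is currently on-screen."""
    return [1 if t in displayed_types else 0 for t in SCREEN_TYPES]

print(screen_type_vector({"Onscreen_Movie"}))  # [1, 0, 0, 0]
# This vector is fed to the network alongside the utterance embeddings.
```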

The next neural model takes as additional input not only the type of data displayed on-screen but the specific name of each data item — so not just Onscreen_Movie but also ‘Harry Potter’ or ‘The Black Panther’. Those names, too, undergo an embedding, which the network learns to perform during training.
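One simple way to sketch this, assuming each name is embedded token by token and then averaged, is shown below; the vocabulary and dimensions are assumptions.

```python
import torch
import torch.nn as nn

name_vocab_size, name_dim = 5000, 100
name_emb = nn.Embedding(name_vocab_size, name_dim)  # learned during training

def embed_name(token_ids):
    """Average the token embeddings of one on-screen name, e.g. 'Harry Potter'."""
    return name_emb(torch.tensor(token_ids)).mean(dim=0)   # (name_dim,)

onscreen_names = [[41, 87], [312, 998]]  # made-up ids for two displayed titles
name_vectors = torch.stack([embed_name(n) for n in onscreen_names])  # (2, name_dim)
```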

Our third and final neural model factors in the names of on-screen data items as well, but in a more complex way. During training, it uses convolutional filters to, essentially, identify the separate contribution that each name on the screen makes toward the accuracy of the final classification. During operation, it thus bases each of its classifications on the single most relevant name on-screen, rather than all the names at once.
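The sketch below illustrates the general idea: score each on-screen name independently with shared convolutional filters, then max-pool over names so only the most relevant one drives the classification. The filter configuration is an assumption, not the architecture from the paper.

```python
import torch
import torch.nn as nn

name_dim, n_filters = 100, 64
conv = nn.Conv1d(in_channels=name_dim, out_channels=n_filters, kernel_size=1)

# name_vectors: one embedding per displayed title, as in the previous sketch.
name_vectors = torch.randn(3, name_dim)               # three displayed titles
features = conv(name_vectors.t().unsqueeze(0))        # (1, n_filters, num_names)
most_relevant, _ = features.max(dim=-1)               # (1, n_filters)
# `most_relevant` keeps, per filter, the contribution of the single strongest
# on-screen name; it is then combined with the utterance representation.
```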

So, in all, we built, trained, and evaluated five different networks: the voice-only network; the voice-only network with hand-coded rules; the voice-and-data-type network; the voice, data type, and data name network; and the voice, data type, and convolutional-filter network.

We tested each of the five networks on four different data sets: slots with and without screen information and intents with and without screen information.

We evaluated performance according to two different metrics, micro-F1 and macro-F1. Macro-F1 scores the networks’ performance separately on each intent and slot, then averages the results. Micro-F1, by contrast, pools the scores across intents and slots and then averages. Macro-F1 thus gives more weight to intents and slots that are underrepresented in the data, micro-F1 less.
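The toy example below, using scikit-learn, shows how the two metrics diverge when one class is rare; the labels are made up purely for illustration.

```python
from sklearn.metrics import f1_score

# The rare class ("MusicAlbum") drags macro-F1 down because every class counts
# equally, while micro-F1 is dominated by the frequent, well-predicted classes.
y_true = ["Movie"] * 8 + ["Book"] * 8 + ["MusicAlbum"] * 2
y_pred = ["Movie"] * 8 + ["Book"] * 8 + ["Movie"] * 2   # rare class always missed

print(f1_score(y_true, y_pred, average="micro"))  # ~0.89: pooled over all examples
print(f1_score(y_true, y_pred, average="macro"))  # ~0.63: per-class F1, then averaged
```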

According to micro-F1, all three multimodal neural nets outperformed both the voice-only and the rule-based system across the board. The difference was dramatic on the test sets that included screen information, as might be expected, but the neural nets even had a slight edge on voice-only test sets. On all four test sets, the voice, data type, and data name network achieved the best results.

According to macro-F1, the neural nets generally outperformed the baseline systems, although the voice, data type, and data name network lagged slightly behind the baselines on voice-only slot classification. There was more variation in the top-performing system, too, with each of the three neural nets achieving the highest score on at least one test. Again, however, the neural nets dramatically outperformed the baseline systems on test sets that included screen information.

Acknowledgments: Angeliki Metallinou, Rahul Goel


