New Approach to Language Modeling Reduces Speech Recognition Errors by Up to 15%

Language models are a key component of automatic speech recognition systems, which convert speech into text. A language model captures the statistical likelihood of any particular string of words, so it can help decide between different interpretations of the same sequence of sounds.
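
To make the idea concrete, here is a toy bigram language model in Python. The training sentences, smoothing constant, and vocabulary size are all invented for illustration; real systems use far larger models, but the principle of scoring competing transcriptions is the same.

```python
from collections import Counter

# A toy add-alpha-smoothed bigram model, trained on a few invented sentences,
# showing how a language model can prefer one transcription of an ambiguous
# sound sequence over another.
CORPUS = [
    "turn on the red sox game".split(),
    "what is the red sox score".split(),
    "how do i make red sauce".split(),
    "add more red sauce to the pasta".split(),
]

bigram_counts = Counter()
unigram_counts = Counter()
for sentence in CORPUS:
    for w1, w2 in zip(sentence, sentence[1:]):
        bigram_counts[(w1, w2)] += 1
        unigram_counts[w1] += 1

def bigram_prob(w1, w2, alpha=0.1, vocab=100):
    """Smoothed estimate of P(w2 | w1)."""
    return (bigram_counts[(w1, w2)] + alpha) / (unigram_counts[w1] + alpha * vocab)

def score(words):
    """Probability the model assigns to a word sequence (product of bigram probabilities)."""
    p = 1.0
    for w1, w2 in zip(words, words[1:]):
        p *= bigram_prob(w1, w2)
    return p

# Two acoustically similar hypotheses; the model favors the word sequence
# that better matches the text it was trained on.
print(score("read me the red sox score".split()))
print(score("read me the red sauce score".split()))
```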

Automatic speech recognition works better when language models adapt to conversational context: the probabilities of the phrases “Red Sox” and “red sauce”, for instance, are very different depending on whether the customer is asking about sports or recipes. So when a voice service introduces a new capability, which creates a new set of conversational contexts, it makes sense to update the associated language model.

But building language models requires a large body of training data, which may be lacking for newly launched capabilities. New capabilities are frequently bootstrapped with formal grammars, which generate sample sentences by producing variations on a sometimes very complex set of templates.

Using a formal grammar to produce enough data to train a language model would be prohibitively time-consuming, so instead, AI researchers usually make do with a random sampling of the grammar’s output.

In a paper we’re presenting at this year’s IEEE Spoken Language Technologies conference, we propose an alternative. We describe an algorithm that can analyze a particular mathematical representation of a grammar’s rules — a graphical representation — and directly calculate the probability that the grammar will produce any given string of words.

We also describe a technique for integrating language models generated directly from grammars with existing language models, in a way that doesn’t degrade the performance of established capabilities.

In our experiments, language models produced by our method reduced the error rates of speech recognition systems by as much as 15%, relative to language models trained on random samples of the same grammars’ output. We believe that our method could improve the performance of newly launched Alexa capabilities, before their ordinary use begins to generate more realistic training data.

In natural-language-understanding (NLU) research, a grammar generally consists of a list of rules governing word or phrase substitutions. For instance, one rule might declare that in the phrase “I want to”, the words “need” or “would like” can be substituted for “want”. Other rules might link to catalogues of entity names — a list of song names that can follow the word “play,” for instance.

A sample grammar for a recipe application. The variable DISH_NAME links to a catalogue of entity names.
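
Here is one way such a grammar might be sketched in Python, as substitution rules plus an entity catalogue. Aside from DISH_NAME, the rule names and the dish list are invented for illustration; real grammars contain many more rules and also encode weights.

```python
import itertools

# A toy formal grammar, sketched as substitution rules plus an entity
# catalogue. Apart from DISH_NAME, the rule names and the dish list are
# invented for illustration.
RULES = {
    "REQUEST": ["i {WANT} to {ACTION}"],
    "WANT": ["want", "need", "would like"],
    "ACTION": ["make {DISH_NAME}", "cook {DISH_NAME}"],
    "DISH_NAME": ["lasagna", "pad thai", "chicken curry"],  # entity catalogue
}

def expand(symbol):
    """Recursively expand a rule into every sentence it can generate."""
    sentences = []
    for template in RULES[symbol]:
        # Non-terminals referenced by this template, e.g. {WANT}, {DISH_NAME}.
        slots = [token[1:-1] for token in template.split() if token.startswith("{")]
        if not slots:
            sentences.append(template)
            continue
        # Expand each slot and take every combination of the options.
        for combo in itertools.product(*(expand(slot) for slot in slots)):
            filled = template
            for slot, value in zip(slots, combo):
                filled = filled.replace("{" + slot + "}", value, 1)
            sentences.append(filled)
    return sentences

for sentence in expand("REQUEST"):
    print(sentence)
```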

NLU researchers usually implement grammars using so-called finite-state transducers, or FSTs. An FST can be represented as what computer scientists call a graph. Graphs, in this sense, are usually depicted as circles, or nodes, that are connected by line segments, or edges. A network diagram is a common example of a graph.

In the graph of a formal-grammar FST, the edges (line segments) represent valid linguistic substitutions, and the nodes (circles) represent states of progress in the production of a text string. So, for instance, if a given node of the graph represents the text string “I want,” it might have two edges, one representing the “need”/“want” substitution and the other representing the “would like”/“want” substitution. Traversing one of those edges leads to a different state — either “I need” or “I would like”.

When the FST generates sample sentences, it works its way through the graph, building up strings of text one word or phrase at a time. Associated with each edge is (in addition to a pair of substitutable words) a probability, which indicates how likely the FST is to choose one branch or another when constructing a sample sentence.
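
The following sketch shows what such a graph and the sampling procedure might look like in Python. The states, words, and edge probabilities are all invented, and an FST for a production grammar is vastly larger; this is only meant to make the random-walk picture concrete.

```python
import random

# A toy graph for a formal-grammar FST. Each node ("START", "S1", ...) is a
# state of progress in building a sentence; each edge carries a word or
# phrase, the probability of choosing it, and the next state. All names and
# numbers here are invented for illustration.
GRAPH = {
    "START": [("i", 1.0, "S1")],
    "S1": [("want", 0.5, "S2"), ("need", 0.3, "S2"), ("would like", 0.2, "S2")],
    "S2": [("to", 1.0, "S3")],
    "S3": [("make lasagna", 0.6, "END"), ("cook pad thai", 0.4, "END")],
    "END": [],
}

def sample_sentence(graph, start="START", end="END"):
    """Random walk from start to end, picking each edge with its probability."""
    words, state = [], start
    while state != end:
        edges = graph[state]
        word, _, state = random.choices(edges, weights=[p for _, p, _ in edges])[0]
        words.append(word)
    return " ".join(words)

for _ in range(3):
    print(sample_sentence(GRAPH))
```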

We use these probabilities to construct a language model. Our algorithm first identifies every string of text encoded by the FST, and then, for each of them, it identifies every path through the graph that could lead to it. Using the probabilities associated with the edges along all of those paths, it then computes the frequency with which the FST will produce that particular string.
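
Here is a simplified illustration of that idea on the same toy graph, using a brute-force depth-first search. This is not the algorithm from the paper, which has to cope with real FSTs that are far larger and may contain cycles, but it shows how summing probabilities over all paths yields an exact probability for each string rather than an estimate from a finite sample.

```python
from collections import defaultdict

# The same toy GRAPH as in the sampling sketch above (all values invented).
GRAPH = {
    "START": [("i", 1.0, "S1")],
    "S1": [("want", 0.5, "S2"), ("need", 0.3, "S2"), ("would like", 0.2, "S2")],
    "S2": [("to", 1.0, "S3")],
    "S3": [("make lasagna", 0.6, "END"), ("cook pad thai", 0.4, "END")],
    "END": [],
}

def string_probabilities(graph, start="START", end="END"):
    """For every word string the graph can produce, sum the probabilities of
    all paths that produce it. A brute-force depth-first search suffices here
    because the toy graph is tiny and acyclic."""
    totals = defaultdict(float)

    def walk(state, words, prob):
        if state == end:
            totals[" ".join(words)] += prob
            return
        for word, p, next_state in graph[state]:
            walk(next_state, words + [word], prob * p)

    walk(start, [], 1.0)
    return dict(totals)

for sentence, prob in sorted(string_probabilities(GRAPH).items(), key=lambda kv: -kv[1]):
    print(f"{prob:.3f}  {sentence}")
```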

To integrate our new language model with the existing one, we use a machine learning system to infer the optimal balance of the probabilities encoded in both. In the paper, we present three different ways of doing this (three different loss functions), depending on the type of training data available for the new NLU capability: no data, synthetic (grammar-generated) data, or real audio transcriptions.
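
As a rough illustration of the balancing idea, and not the method in the paper, the sketch below linearly interpolates two toy unigram models and picks the interpolation weight by a simple grid search on invented held-out sentences. The paper instead learns the balance with one of the three loss functions described above.

```python
import math

# A rough sketch of balancing two language models by linear interpolation.
# The unigram probabilities and held-out sentences are invented, and the
# grid search is only a stand-in for the learned balance in the paper.
existing_lm = {"play": 0.25, "the": 0.25, "red": 0.2, "sox": 0.2, "game": 0.1}
grammar_lm = {"i": 0.2, "want": 0.2, "to": 0.2, "make": 0.2, "lasagna": 0.2}

HELD_OUT = ["i want to make lasagna".split(), "play the red sox game".split()]
FLOOR = 1e-6  # probability assigned to words a model has never seen

def interpolated_prob(word, lam):
    """P(word) = lam * P_grammar(word) + (1 - lam) * P_existing(word)."""
    return lam * grammar_lm.get(word, FLOOR) + (1 - lam) * existing_lm.get(word, FLOOR)

def log_likelihood(lam):
    """Log-likelihood of the held-out sentences under the interpolated model."""
    return sum(math.log(interpolated_prob(w, lam)) for sent in HELD_OUT for w in sent)

# Choose the interpolation weight that best fits the held-out data.
best_lam = max((k / 100 for k in range(1, 100)), key=log_likelihood)
print(f"best interpolation weight: {best_lam:.2f}")
```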

We evaluated our system on three different NLU capabilities: one that looks up stock prices, one that finds and dictates recipes, and one that books airline tickets. We found that the improvement offered by our method reflected the complexity of the associated grammar, peaking at 15% for the flight-booking capability.

It makes intuitive sense that, the more complex the grammar, the more training data would be required to produce an accurate language model, and the less reliable the data-sampling approach would be. Consequently, we suspect that our method will provide greater gains on more challenging tasks.

Acknowledgments: Ariya Rastrow, Björn Hoffmeister


