
More-inclusive speech recognition with cross-utterance rescoring


Automatic-speech-recognition (ASR) models, which convert speech to text in voice agents, typically have two stages. The first stage involves a deep neural network that maps acoustic information representing an utterance to multiple hypotheses about the words spoken. The second stage is a language model that evaluates (rescores) the plausibility of these hypothesized word sequences.

The first stage — the acoustic model — is optimized for average performance on a large set of speakers; consequently, it tends to perform poorly on speech varieties that are underrepresented in the training set, such as pronunciations found in regional accents. Standard rescoring methods cannot correct for this type of majoritarian bias in the first-stage speech recognizer.

At this year’s International Conference on Acoustics, Speech, and Signal Processing (ICASSP), we presented a new approach to rescoring speech recognition hypotheses that can help recover from errors on speech that is underrepresented in, or otherwise mismatched to, the training data.

Our approach builds a graph from speech samples with different speakers but similar hypotheses, creating edges between utterances that sound similar. It then boosts the probabilities of hypotheses shared by adjacent nodes in the graph, so that similar-sounding utterances cause similar hypotheses to be boosted. As a result, pronunciations of words that are unlikely in isolation can support each other if they are consistent across multiple utterances.


In experiments, we tested the cross-utterance rescoring method on a database of regionally accented English. The speech recognizer had been trained mainly on North American English and therefore showed high error rates for speakers from England, Scotland, Ireland, India, and elsewhere. Our approach lowered the word error rate across the board, by an average of 44%.

The algorithm requires comparing entire sets of utterances, so it is immediately useful mainly in semi-supervised learning, where a single, typically large, teacher ASR model labels training data for another, usually more computationally efficient, student model. By attaching more-accurate labels to speech samples featuring underrepresented speech patterns, we can diversify the training data and ultimately help overcome the majoritarian bias.

This year, the ICASSP organizers generalized the concept of the best-paper award by recognizing the top 3% of papers accepted to the conference. We were honored that our paper was included in that group.

Graph construction

We consider the case in which the initial transcription hypotheses are produced by a fully trained, recurrent-neural-network-transducer (RNN-T) ASR model. An RNN-T model is an encoder-decoder model, meaning that it has an encoder module that maps inputs to a representational space and a decoder module that uses those mappings — known as embeddings — to generate ASR hypotheses.
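To make the encoder's role concrete, here is a minimal sketch of how per-frame encoder outputs could be pooled into a single fixed-size utterance embedding. The `encoder` callable and the mean pooling are illustrative assumptions, not the paper's exact recipe.

```python
# Minimal sketch (assumptions noted): pool per-frame RNN-T encoder outputs
# into one fixed-size utterance embedding via mean pooling over time.
import numpy as np

def utterance_embedding(frames: np.ndarray, encoder) -> np.ndarray:
    """Map acoustic frames of shape (num_frames, feat_dim) to one vector.

    encoder: any callable returning per-frame outputs of shape
    (num_frames, embed_dim), e.g., a trained RNN-T encoder module.
    """
    frame_embeddings = encoder(frames)    # (num_frames, embed_dim)
    return frame_embeddings.mean(axis=0)  # pool over time -> (embed_dim,)
```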

To rescore these hypotheses, we adapt the technique of graph-based label propagation, which spreads label information from labeled to unlabeled examples. In our case, the graph nodes represent speech embeddings, and the labels are the ASR hypotheses from the first recognition pass.

An overview of ASR hypothesis rescoring using graph-based label propagation (LP).

The first step in our graph construction method is to select the data for inclusion in the graph. We divide the data into groups of utterances with substantial overlap in their ASR hypotheses, and we construct a separate graph for each such group. A single graph, for instance, might consist largely of similarly phrased queries about the weather.
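As an illustration, the sketch below groups utterances by the word overlap of their top first-pass hypotheses. The Jaccard similarity, the greedy assignment, and the 0.5 threshold are assumptions for illustration, not the paper's exact grouping criterion.

```python
# Minimal sketch: greedily group utterances whose top ASR hypotheses
# share enough words (Jaccard similarity over word sets).
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def group_by_hypothesis_overlap(hypotheses: list[str], threshold: float = 0.5):
    groups: list[list[int]] = []  # each group holds utterance indices
    seeds: list[set] = []         # word set of each group's first member
    for i, hyp in enumerate(hypotheses):
        words = set(hyp.lower().split())
        for g, seed in enumerate(seeds):
            if jaccard(words, seed) >= threshold:
                groups[g].append(i)
                break
        else:  # no sufficiently similar group found; start a new one
            groups.append([i])
            seeds.append(words)
    return groups
```

For example, similarly phrased weather queries ("what's the weather today", "what is the weather like today") would land in one group and yield one graph.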


Once we know which utterances to include in the graph, we measure the distance between their embeddings. We experimented with several distance metrics but settled on one based on dynamic time warping (DTW). DTW was originally designed to measure distances between time series, but we treat each value in the embedding vector as, essentially, a separate time step. A DTW-based metric works well for this application because, empirically, it correlates well with the distances between utterance transcripts, as measured by edit distance.
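The sketch below shows such a DTW-based distance between two embedding vectors, with each vector index treated as a time step, as described above. The absolute-difference local cost is an illustrative choice.

```python
import numpy as np

def dtw_distance(x: np.ndarray, y: np.ndarray) -> float:
    """Classic dynamic-programming DTW with an |a - b| local cost;
    each index of a 1-D embedding vector is treated as a time step."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            # best of insertion, deletion, and match moves
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])
```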

On the basis of the distance measurements, we compute edges between graph nodes. We experimented with weighting the edges according to the DTW distance between nodes, but again, empirically, we found that binary edges worked best. From the data, we learn a distance threshold; all nodes whose distances from each other fall below that threshold are connected by edges, and those whose distances exceed that threshold remain unconnected.
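Given pairwise distances, constructing the binary graph is then straightforward. This sketch reuses the `dtw_distance` function above; the threshold, which we learn from data, is passed in as a parameter here.

```python
import numpy as np

def binary_adjacency(embeddings: list, threshold: float) -> np.ndarray:
    """Connect every pair of nodes whose DTW distance falls below the
    learned threshold; leave all other pairs unconnected."""
    n = len(embeddings)
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            if dtw_distance(embeddings[i], embeddings[j]) < threshold:
                A[i, j] = A[j, i] = 1  # undirected, unweighted edge
    return A
```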

Label propagation

In the setting of semi-supervised learning, the graphs include some annotated data, whose transcripts are highly accurate, and larger quantities of unannotated data. We use standard graph-based label propagation algorithms to distribute “goodness scores” for different ASR hypotheses across the graph. Essentially, these algorithms are designed to minimize radical discontinuities in label values between connected (i.e., similar) graph nodes.

The idea is that, even if the ASR model has assigned a low confidence score to the correct transcription of an utterance that features nonstandard pronunciations, the embedding of that utterance will share edges with utterances where the correct transcription receives high confidence scores. The correct transcription will then propagate across that region of the graph, and the odds will increase that the utterance with nonstandard pronunciation is transcribed correctly.
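Here is a minimal sketch of this kind of propagation, in the style of the classic iterative algorithm of Zhu and Ghahramani (2002): annotated nodes are clamped to their trusted scores, and every other node repeatedly averages the hypothesis scores of its neighbors. The score-matrix layout and iteration count are assumptions for illustration.

```python
import numpy as np

def propagate_scores(A: np.ndarray, scores: np.ndarray,
                     labeled: np.ndarray, num_iters: int = 50) -> np.ndarray:
    """Spread hypothesis 'goodness scores' across the graph.

    A:       (n, n) binary adjacency matrix (e.g., from binary_adjacency).
    scores:  (n, k) initial scores over k candidate hypotheses per node.
    labeled: (n,) boolean mask of nodes with trusted, annotated transcripts.
    """
    n = A.shape[0]
    W = A + np.eye(n, dtype=A.dtype)        # self-loops keep isolated nodes stable
    degrees = W.sum(axis=1, keepdims=True)  # row-normalization constants
    Y = scores.astype(float).copy()
    for _ in range(num_iters):
        Y = (W @ Y) / degrees         # each node averages itself and its neighbors
        Y[labeled] = scores[labeled]  # clamp annotated nodes to trusted scores
    return Y
```

After propagation, the highest-scoring hypothesis at each node becomes that utterance's rescored output.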


