Katrin Kirchhoff is the director of speech processing for Amazon Web Services, and her organization has a trio of papers at this year’s Interspeech conference, which begins next week.
“One paper is on novel evaluation metrics for speaker diarization,” Kirchhoff says. “Speaker diarization is the task of determining who speaks when, and errors in that domain can be due to vocal characteristics of speakers, but they can also be due to conversational patterns. So, for instance, speaker diarization is harder when you have a lot of short turns from speakers and very frequent speaker changes, and usually our metrics don’t really disentangle those different causes. So this paper proposes new ways of looking at the problem and of measuring those different contributions.
“Another paper is on adversarial learning for accented speech, and the third is on incorporating more contextual information into ASR [automatic speech recognition] for dialogue systems. So in the case where you have an ASR system as a front end for a dialogue system, it’s really important to actually model things like dialogue state and the longer conversational history to improve ASR performance. That’s the theme of the third paper.”
Speech at AWS
The diversity of those papers’ topics is a good indicator of the breadth of speech research at Amazon Web Services (AWS).
“My teams work on a wide range of science topics relevant to cloud-based spoken language processing, starting with robustness to different audio conditions like noise and reverberation, all the way to different machine learning techniques,” Kirchhoff says. “We look into unsupervised, semi-supervised, and self-supervised learning.”
“That’s actually a really broad trend these days, and also a trend that I see everywhere at Interspeech this year. Our machine learning models are very data-hungry, and labeled data is difficult to produce for speech. For a lot of tasks and a lot of languages, we simply don’t have those kinds of data resources.
“So everybody’s training self-supervised representations these days, which means that we use proxy tasks to make models learn something about the input signal without having explicit ground truth labels — by, say, predicting certain frequency bands from others, or by masking time slices and then trying to predict the content from the surrounding signal, or teaching the model which speech segments are from the same signal as opposed to different signals.
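As an illustration of the masked-prediction style of proxy task Kirchhoff describes, here is a minimal sketch in PyTorch: random time slices of an unlabeled spectrogram are hidden, and a model is trained to reconstruct them from the surrounding context. The model sizes, masking scheme, and loss here are illustrative assumptions, not AWS's actual setup.

```python
import torch
import torch.nn as nn

class MaskedFramePredictor(nn.Module):
    def __init__(self, n_mels=80, d_model=256, n_layers=4, n_heads=4):
        super().__init__()
        self.proj_in = nn.Linear(n_mels, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.proj_out = nn.Linear(d_model, n_mels)          # reconstruct mel frames
        self.mask_emb = nn.Parameter(torch.zeros(d_model))  # learned [MASK] vector

    def forward(self, mels, mask):
        # mels: (batch, time, n_mels); mask: (batch, time) bool, True = hidden frame
        x = self.proj_in(mels)
        x = torch.where(mask.unsqueeze(-1), self.mask_emb.expand_as(x), x)
        return self.proj_out(self.encoder(x))

def random_time_mask(batch, time, span=10, p=0.15):
    # Mask contiguous spans, so the model must use longer-range context rather
    # than just the immediately neighboring frames.
    mask = torch.zeros(batch, time, dtype=torch.bool)
    n_spans = max(1, int(p * time / span))
    for b in range(batch):
        for start in torch.randint(0, time - span, (n_spans,)).tolist():
            mask[b, start:start + span] = True
    return mask

# One illustrative training step on stand-in features; no transcripts are needed.
model = MaskedFramePredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
mels = torch.randn(8, 400, 80)                   # unlabeled log-mel "spectrograms"
mask = random_time_mask(8, 400)
pred = model(mels, mask)
loss = nn.functional.l1_loss(pred[mask], mels[mask])  # score only the hidden frames
opt.zero_grad()
loss.backward()
opt.step()
```

What gets reused downstream is the trained encoder; the reconstruction head is discarded once pretraining is done.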
“The question is, is there a single representation that’s universally best for various downstream processing tasks? That is, can you use the same representation as a starting point for tasks like ASR, speaker recognition, and language identification? And then taking that one step further, can we actually use that, not only for speech, but for audio processing more generally? So at AWS, we’re starting to look into that.
“Other areas of interest for us are fields like continual learning or few-shot learning, which means, again, ‘How can you learn models without a lot of labeled data?’ But rather than going the completely unsupervised way, we look at what you can do with just a very small number of samples from a given class or from a given task.
“ASR systems often need to process speech collected in vastly different scenarios and domains, which can include proper names or particular phrases, stylistic patterns, et cetera, that are rare overall but frequent in a particular application. You need to figure out how to prime your system to recognize them accurately, and how to do that with just a handful of observed samples.”
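One common way to prime a recognizer toward rare, domain-specific phrases is contextual biasing during decoding: hypotheses that contain a phrase from a small custom list receive a score bonus. This is a standard technique, not necessarily the approach Kirchhoff's team uses; the sketch below is purely hypothetical.

```python
def bias_score(hypothesis_words, base_score, custom_phrases, bonus=2.0):
    """Add a bonus to a hypothesis's score for each primed phrase it contains."""
    text = " ".join(hypothesis_words).lower()
    boost = sum(bonus for phrase in custom_phrases if phrase.lower() in text)
    return base_score + boost

# A handful of observed samples is enough to build the phrase list itself.
custom_phrases = ["Contact Lens", "metoprolol tartrate"]   # illustrative only
print(bias_score(["prescribe", "metoprolol", "tartrate"], -12.3, custom_phrases))
```

In a real system the bonus would typically be applied incrementally inside beam search rather than to finished hypotheses, but the priming idea is the same.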
Non-autoregressive processing
Some of the research in Kirchhoff’s organization involves real-time processing of short audio snippets, but several AWS products — such as Amazon Transcribe, Amazon Transcribe Medical, and Contact Lens — require transcription of longer audio files, such as movies, lectures, and dictations. In this context, the ASR model has the entire speech signal available to it before it begins transcribing.
This has fueled Kirchhoff’s interest in the topic of non-autoregressive processing. In fact, together with colleagues at Yahoo and Carnegie Mellon University, Kirchhoff is co-organizing a special session at Interspeech titled Non-Autoregressive Sequential Modeling for Speech Processing.
“Traditionally, you have a decoder in an ASR system that combines different knowledge sources and then generates an output hypothesis in a step-by-step fashion, where each step is conditioned on the previous time step,” Kirchhoff explains. “You essentially run over the speech signal in one direction, left to right, and each processing step is conditioned on the previous one.
“Non-autoregressive processing means that all decoding steps are conducted in parallel. So all steps happen simultaneously, and each step can be conditioned on a context in both directions. This challenges the intuitive notion that speech is generated sequentially in time and that, therefore, decoding should work in the same way. But it also means that the decoding process can be very heavily parallelized, and it can be much more efficient and much faster than traditional decoding approaches. And since it’s heavily parallelizable, it can also benefit much more from developments in deep-learning hardware.
“The question is, how do you get the same performance when you’re not conditioning each step on all of the previous steps? Because there’s clearly information flow that needs to happen across these different time steps. How do you still model that interaction?”
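To make the contrast concrete, the sketch below shows the two decoding styles side by side with toy stand-in models (the architectures are assumptions, not any particular AWS system): the autoregressive loop must emit tokens one at a time, each conditioned on the prefix so far, while the non-autoregressive decoder scores every output position in a single parallel pass over the bidirectional encoder context.

```python
import torch
import torch.nn as nn

VOCAB, D = 1000, 256

def autoregressive_decode(decoder_step, enc_out, max_len=50, bos=1, eos=2):
    """Left-to-right decoding: each new token is conditioned on all previously
    emitted tokens, so the loop cannot run in parallel across output positions."""
    tokens = [bos]
    for _ in range(max_len):
        logits = decoder_step(torch.tensor([tokens]), enc_out)  # (1, t, VOCAB)
        next_tok = logits[0, -1].argmax().item()
        tokens.append(next_tok)
        if next_tok == eos:
            break
    return tokens

def non_autoregressive_decode(parallel_decoder, enc_out):
    """All output positions are predicted in one pass, each conditioned on the
    full bidirectional encoder context rather than on earlier output tokens."""
    logits = parallel_decoder(enc_out)           # (1, T, VOCAB), one shot
    return logits.argmax(dim=-1)[0].tolist()     # every position decoded at once

# Toy stand-ins for the two decoders.
parallel_decoder = nn.Linear(D, VOCAB)           # per-frame classifier, CTC-style
embed = nn.Embedding(VOCAB, D)
prefix_rnn = nn.GRU(D, D, batch_first=True)
out_proj = nn.Linear(D, VOCAB)

def decoder_step(prefix_tokens, enc_out):
    # Toy autoregressive step: summarize the emitted prefix, add a pooled view
    # of the acoustics, and score the next token.
    h, _ = prefix_rnn(embed(prefix_tokens))
    return out_proj(h + enc_out.mean(dim=1, keepdim=True))

enc_out = torch.randn(1, 200, D)                 # acoustic encoder output (given)
with torch.no_grad():
    print(autoregressive_decode(decoder_step, enc_out)[:10])
    print(non_autoregressive_decode(parallel_decoder, enc_out)[:10])
```

The parallel pass is what maps well onto modern deep-learning hardware; the open question Kirchhoff raises is how to recover the cross-position dependencies that the left-to-right loop provided for free.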
Some of the papers at the special Interspeech session will address that question, but Kirchhoff’s group provided one provisional answer to it in June, at the annual meeting of the North American Chapter of the Association for Computational Linguistics (NAACL), in a paper titled “Align-Refine: Non-Autoregressive Speech Recognition via Iterative Realignment”.
“That is applying non-autoregressive decoding to speech recognition,” Kirchhoff says. “We call our approach ‘align-refine’. We essentially iterate the process: each iteration takes the decoding hypothesis from the previous iteration and tries to improve and refine it, rather than doing it in a single step. Since all decoding steps happen in parallel for each iteration, there’s still a vast gain in efficiency.”
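A schematic sketch of that iterative loop follows; it is not the exact Align-Refine architecture from the paper, and the refiner module here is a hypothetical stand-in. The key property is that each iteration re-predicts every position of the hypothesis in parallel, conditioned on both the audio encoding and the previous iteration's output.

```python
import torch
import torch.nn as nn

VOCAB, D, BLANK = 1000, 256, 0

class RefineStep(nn.Module):
    # Stand-in refiner: fuses the previous alignment with the acoustic encoding
    # and emits new logits for all positions at once.
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, D)
        layer = nn.TransformerEncoderLayer(D, 4, batch_first=True)
        self.mixer = nn.TransformerEncoder(layer, 2)
        self.out = nn.Linear(D, VOCAB)

    def forward(self, prev_alignment, enc_out):
        x = self.embed(prev_alignment) + enc_out    # condition on both sources
        return self.out(self.mixer(x))              # (batch, T, VOCAB), parallel

def align_refine_decode(refiner, enc_out, n_iters=4):
    B, T, _ = enc_out.shape
    alignment = torch.full((B, T), BLANK, dtype=torch.long)  # trivial first guess
    for _ in range(n_iters):
        logits = refiner(alignment, enc_out)   # every position updated in parallel
        alignment = logits.argmax(dim=-1)      # becomes the next iteration's input
    return alignment                           # collapse repeats/blanks afterwards

enc_out = torch.randn(1, 200, D)               # acoustic encoder output (given)
with torch.no_grad():
    hyp = align_refine_decode(RefineStep(), enc_out)
print(hyp.shape)                               # torch.Size([1, 200])
```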
“What I really liked about the special session is that we had submissions both from ASR and from other areas of speech processing, like TTS [text-to-speech],” Kirchhoff adds. “It’s very interesting that you can generalize approaches across different fields, because traditionally they’ve been quite separate — non-autoregressive decoding originated in machine translation. So there’s increasingly a convergence between natural-language processing, ASR, and TTS. There’s a lot of commonality in the approaches that we use.”