ICASSP: What “signal processing” has come to mean

The International Conference on Acoustics, Speech, and Signal Processing (ICASSP), which starts today, is now in its 45th year, and according to Google Scholar’s rankings, it’s the highest-impact conference in the field of signal processing.

But as speech-related technologies have matured, the definition of signal processing has expanded. “ICASSP is a mix of a lot of different tracks,” says Ariya Rastrow, an Alexa principal research scientist who attended his first ICASSP in 2006. “It has the whole spectrum, from very low-level signal processing all the way to interpretation and natural-language understanding.”

Alexa senior principal scientist Ariya Rastrow (Credit: Jordan Stead)

This diversity, Rastrow explains, simply reflects that of the human audio-processing system. The brain doesn’t rely exclusively on acoustic signals to recognize words, and neither should computer systems.

“The interaction between language and acoustics is very dynamic from the human perspective,” Rastrow says. “If I’m talking to you in a very clean environment, we are capable of following on the acoustic level at very high resolution. But if we’re sitting in a noisy bar, you as a human are going to rely more on your prior — on a semantic level, what are the things that the other person might say? what are the topics that they might talk about? —and use that to enhance your recognition.”

Traditionally, the task of spoken-language understanding has been broken into two components: automatic speech recognition (ASR), which converts an acoustic speech signal into text, and natural-language understanding (NLU), which makes sense of the text.

But in fact, speech recognition usually relies on higher-level linguistic features to identify words. The traditional ASR system consists of an acoustic model, which translates acoustic signals into low-level phonetic representations; a lexicon, which maps sequences of low-level phonetic representations to words; and a language model, which uses high-level statistics about words’ co-occurrence to adjudicate between competing interpretations of the acoustic signal.

“Twenty, twenty-five years ago, there was this pragmatic idea to build factored systems,” Rastrow explains. “You have clear-cut boundaries between components of the system. Traditional speech recognition systems are built over an architecture that we call a hidden Markov model (HMM) architecture. The HMM architecture will put these multiple knowledge sources together at inference time. But the acoustic model and the language model are trained separately.”
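To make the factored setup concrete, here is a minimal Python sketch of how separately trained knowledge sources might be combined only at inference time. The phone inventory, pronunciations, and probabilities are all invented for illustration, and a real decoder works over lattices of time-aligned hypotheses rather than simple dictionaries.

```python
# Minimal sketch of factored ASR scoring: an acoustic model, a lexicon, and a
# language model are trained separately and only combined at inference time.
# All scores below are hypothetical log-probabilities.
import math

def score_hypothesis(phone_log_probs, lexicon, lm_log_probs, words, lm_weight=0.8):
    """Combine separately trained knowledge sources for one word-sequence hypothesis.

    phone_log_probs: dict mapping phone -> acoustic log-probability (acoustic model)
    lexicon:         dict mapping word -> phone sequence (pronunciation lexicon)
    lm_log_probs:    dict mapping word -> log-probability (toy unigram language model)
    words:           candidate word sequence to score
    """
    acoustic_score = 0.0
    lm_score = 0.0
    for word in words:
        phones = lexicon[word]                       # lexicon: word -> phones
        acoustic_score += sum(phone_log_probs[p] for p in phones)
        lm_score += lm_log_probs[word]               # language-model prior on the word
    # The language-model weight is tuned separately; the components never see
    # each other during training, only at this combination step.
    return acoustic_score + lm_weight * lm_score

# Toy example: adjudicate between two acoustically similar hypotheses.
phone_log_probs = {"r": -0.2, "eh": -0.3, "k": -0.5, "ao": -0.4,
                   "g": -0.6, "n": -0.3, "ay": -0.4, "z": -0.5}
lexicon = {"recognize": ["r", "eh", "k", "ao", "g", "n", "ay", "z"],
           "wreck": ["r", "eh", "k"], "a": ["ao"], "nice": ["n", "ay", "z"]}
lm_log_probs = {"recognize": math.log(0.01), "wreck": math.log(0.0005),
                "a": math.log(0.05), "nice": math.log(0.002)}

for hyp in (["recognize"], ["wreck", "a", "nice"]):
    print(hyp, round(score_hypothesis(phone_log_probs, lexicon, lm_log_probs, hyp), 3))
```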

Shared representations

Recently, however, this approach has begun to give way to end-to-end training of large, neural-network-based architectures. That is, a single neural network is trained on examples that consist of acoustic inputs and fully transcribed outputs, and it directly learns the relationships previously encoded in the ASR system’s separate components.
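A rough sketch of what end-to-end training looks like in practice, assuming PyTorch and an illustrative CTC-based setup; the model sizes, character inventory, and random data are placeholders, not any production architecture.

```python
# Sketch of end-to-end ASR: one network maps acoustic features directly to
# character probabilities and is trained with CTC loss on (audio, transcript)
# pairs, instead of separately trained factored components.
import torch
import torch.nn as nn

class EndToEndASR(nn.Module):
    def __init__(self, n_mels=80, hidden=256, n_chars=29):  # 26 letters + space + apostrophe + blank
        super().__init__()
        self.encoder = nn.LSTM(n_mels, hidden, num_layers=2,
                               batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, n_chars)     # per-frame character logits

    def forward(self, features):                              # features: (batch, time, n_mels)
        encoded, _ = self.encoder(features)
        return self.classifier(encoded)                       # (batch, time, n_chars)

model = EndToEndASR()
ctc = nn.CTCLoss(blank=0)

# One toy training step on random tensors standing in for log-mel features and
# character-index transcripts.
features = torch.randn(4, 200, 80)                            # 4 utterances, 200 frames each
targets = torch.randint(1, 29, (4, 20))                       # 20-character transcripts
log_probs = model(features).log_softmax(dim=-1).transpose(0, 1)  # CTC expects (time, batch, chars)
loss = ctc(log_probs, targets,
           input_lengths=torch.full((4,), 200),
           target_lengths=torch.full((4,), 20))
loss.backward()    # gradients flow through the whole network jointly
```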

“This has many benefits,” Rastrow says, “one being that by doing joint training you build systems that are more optimized in terms of accuracy. If you build factored systems, often you train each component for a specific objective function, and at inference time, they don’t know how to handle disfluencies and errors. By virtue of advances in architectures and doing joint training and multitask training, the systems are becoming more robust to those types of confusions.”

“That’s one benefit,” Rastrow continues. “Another is that the system gains in efficiency. By having a mechanism to do knowledge transfer, joint training, or shared representation, you get to the point where different parts of the system can rely on the same types of representations or shared layers [of the network]. This can result in compression of the overall size of the system, execution speedups, and opportunities to deploy such systems on low-resource devices and hardware.

“For example, if you’re doing acoustic-event detection, and you’re also doing wake word detection and whisper detection, which are different types of audio-based classification tasks, one way is to build all the systems separately. The other way is that you can do knowledge transfer and shared representation learning, and by virtue of those shared network components and layers, you can gain efficiency beyond the obvious accuracy improvements.

“Also, the whole system is done in neural-network execution that we know how to accelerate both on the software and the hardware side, versus this explicit knowledge representation — lexicon versus language model. Traditionally, these are not deep-learning based, so we could not leverage these efficiency mechanisms. For the last two to three years, we have been pursuing this direction.”
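As a rough illustration of the shared-representation idea Rastrow describes, the following PyTorch sketch gives three audio classification tasks a common encoder with small task-specific heads. All dimensions, layer choices, and task definitions are hypothetical.

```python
# Sketch of shared-representation multitask learning: one shared encoder feeds
# small task-specific heads for acoustic-event detection, wake word detection,
# and whisper detection. Layer sizes are illustrative only.
import torch
import torch.nn as nn

class SharedAudioClassifier(nn.Module):
    def __init__(self, n_mels=80, hidden=128, n_event_classes=10):
        super().__init__()
        # Shared layers: trained jointly, reused by every task.
        self.shared_encoder = nn.GRU(n_mels, hidden, batch_first=True)
        # Per-task heads: cheap compared with the shared encoder.
        self.event_head = nn.Linear(hidden, n_event_classes)   # acoustic-event detection
        self.wakeword_head = nn.Linear(hidden, 2)               # wake word present / absent
        self.whisper_head = nn.Linear(hidden, 2)                # whispered / normal speech

    def forward(self, features):                                 # (batch, time, n_mels)
        _, last_hidden = self.shared_encoder(features)
        pooled = last_hidden[-1]                                  # (batch, hidden)
        return {
            "event": self.event_head(pooled),
            "wakeword": self.wakeword_head(pooled),
            "whisper": self.whisper_head(pooled),
        }

model = SharedAudioClassifier()
outputs = model(torch.randn(2, 100, 80))
# Joint training would sum the per-task losses so the shared encoder learns a
# representation useful for all three classifiers.
print({name: logits.shape for name, logits in outputs.items()})
```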

Total integration

Allowing a single large model to integrate the ASR system’s low-level acoustic-signal processing and high-level language modeling raises the prospect of taking advantage of still higher-level linguistic features. In one of the 19 Amazon papers at this year’s ICASSP, for instance, Alexa researchers report using semantic features to help distinguish between utterances intended for Alexa and those that are not, where in the past, Alexa’s “device directedness” detector relied solely on acoustic features.
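Purely as an illustration of the general idea, and not the architecture described in the paper, a directedness classifier that consumes both acoustic and semantic embeddings might look like the sketch below, with all dimensions hypothetical.

```python
# Illustrative sketch only: a device-directedness classifier that concatenates
# an acoustic embedding with a semantic embedding of the recognized text,
# rather than relying on acoustic features alone.
import torch
import torch.nn as nn

class DirectednessClassifier(nn.Module):
    def __init__(self, acoustic_dim=128, semantic_dim=64):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(acoustic_dim + semantic_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 2),                    # device-directed vs. not
        )

    def forward(self, acoustic_emb, semantic_emb):
        combined = torch.cat([acoustic_emb, semantic_emb], dim=-1)
        return self.classifier(combined)

# The embeddings would come from upstream acoustic and NLU encoders; random
# tensors stand in for them here.
clf = DirectednessClassifier()
logits = clf(torch.randn(8, 128), torch.randn(8, 64))
```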

The end point of all this integration, of course, would be a single neural network that executed the entire task of spoken-language understanding — both ASR and NLU.

“There is emerging research that shows that at least for a subset of interactions, you can build a single, small-footprint network that can directly translate audio to the semantic level,” Rastrow says. “You get even better latency. You don’t have to do stage-wise execution. Also, there are studies showing that humans don’t do recognition word by word. We carry information on the parts of the speech that are semantically important for the topic, for the conversation.”
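A minimal sketch of the “audio straight to semantics” idea, again with hypothetical dimensions and an invented intent inventory: a single small network predicts an intent label from acoustic features with no intermediate transcript.

```python
# Sketch of direct audio-to-semantics SLU: one small-footprint network maps
# acoustic features to an intent label in a single forward pass, with no
# staged ASR-then-NLU execution. All sizes are placeholders.
import torch
import torch.nn as nn

class AudioToIntent(nn.Module):
    def __init__(self, n_mels=80, hidden=96, n_intents=20):
        super().__init__()
        self.encoder = nn.GRU(n_mels, hidden, batch_first=True)
        self.intent_head = nn.Linear(hidden, n_intents)

    def forward(self, features):                   # (batch, time, n_mels)
        _, last_hidden = self.encoder(features)
        return self.intent_head(last_hidden[-1])   # intent logits, no intermediate text

intent_logits = AudioToIntent()(torch.randn(1, 150, 80))   # single-stage execution helps latency
```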

“But challenges remain,” Rastrow says. “These all-neural systems thrive on data. And once you move closer to the understanding layer, you have to cope more and more with data sparsity and the nuances of unique interactions. On the acoustic level, for the sound /p/, even across languages, you can get a lot of examples. But as you go closer to the semantic and sentence-level understanding, the patterns become more unique.

“One challenge is how we combine these new architectures for doing direct audio to NLU with our advances in semi-supervised learning and unsupervised learning. Another challenge is how to combine very data-oriented learning systems with some kind of reasoning or logic.

“I’ll give you an example. If you say, ‘Alexa, turn on the bedroom light,’ and Alexa misinterprets and turns on the kitchen light, and you follow that by saying, ‘No, Alexa, don’t turn on the kitchen light,’ now you have the negation problem. When you say ‘Don’t turn it on,’ you really mean ‘Turn it off.’ It is very hard to find those examples in data. Traditionally, we know how to address that problem with rules and logic and reasoning, but relying merely on data might not give us a good representation of those unique patterns. So the questions for the next two or three years of research will be how to combine those systems with either semi-supervised or unsupervised learning and how to combine them with knowledge and logic.”
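To illustrate the kind of rule-plus-data combination Rastrow alludes to, here is a hypothetical sketch in which explicit logic post-processes a data-driven NLU hypothesis; the hypothesis format and the rule itself are invented for the example.

```python
# Sketch of layering explicit logic over a data-driven NLU hypothesis, using
# the negation example from the text. The NLU output format and the rule are
# hypothetical.

def apply_negation_rule(nlu_hypothesis, last_action):
    """If the user negates the action the system just took, invert that action.

    nlu_hypothesis: dict like {"intent": "TurnOn", "device": "kitchen light", "negated": True}
    last_action:    dict describing what the system just did
    """
    negated = nlu_hypothesis.get("negated", False)
    same_action = (nlu_hypothesis["intent"] == last_action["intent"]
                   and nlu_hypothesis["device"] == last_action["device"])
    if negated and same_action and nlu_hypothesis["intent"] == "TurnOn":
        # "Don't turn on the kitchen light," said right after it was turned on,
        # really means "turn it off" -- a pattern a rule captures more reliably
        # than sparse training data.
        return {"intent": "TurnOff", "device": nlu_hypothesis["device"]}
    return nlu_hypothesis

hypothesis = {"intent": "TurnOn", "device": "kitchen light", "negated": True}
last_action = {"intent": "TurnOn", "device": "kitchen light"}
print(apply_negation_rule(hypothesis, last_action))   # -> {'intent': 'TurnOff', 'device': 'kitchen light'}
```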


