
Advances in text-to-speech technologies help computers find their voice


Editor’s Note: The Alexa team recently introduced a new longform speaking style so Alexa sounds more natural when reading long pieces of content, like this article. If you prefer to listen to this story rather than read it, the audio version below uses the longform speaking style.

The spoken word is important to people. We love the sound of our child’s voice, of a favorite song, or of our favorite movie star reciting a classic line.

Computer-generated, synthesized speech is also becoming increasingly common. Alexa, Amazon’s popular voice service, has been responding to customers’ questions and requests for more than five years, and is now available on hundreds of millions of devices from Amazon and third-party device manufacturers. Other businesses are also taking advantage of computer-generated speech to handle customer service calls, market products, and more.

How we make Alexa sound more human-like

Language and speech are incredibly complex. Words have meaning, sure. So do the context of those words, the emotion behind them, and the response of the person listening. It might seem that the subtleties of the spoken word would be beyond the reach of even the most sophisticated computers. But in recent years, advances in text-to-speech (TTS) technologies – the ability of computers to convert sequences of words into natural-sounding, intelligible audio – have made it possible for computers to sound more human-like.

Amazon scientists and engineers are helping break new ground in an era where computers not only sound friendly and knowledgeable, but can also predict how the sentiment of an utterance might sound to an average listener and respond with human-like intonation.

A revolution within the field occurred in 2016, when WaveNet – a technology for generating raw audio – was introduced. Created by researchers at London-based artificial intelligence firm DeepMind, the technique could generate realistic voices using a neural network trained with recordings of real speech.

Andrew Breen, senior manager, TTS research

“This early research suggested that a new machine learning method offered equal or greater quality and the potential for more flexibility,” says Andrew Breen, senior manager of the TTS research team in Cambridge, UK. Breen has long worked on the problem of making computerized speech more responsive and authentic. Before joining Amazon in 2018, he was director of TTS research for Nuance, a Massachusetts-based company that develops conversational artificial intelligence solutions.

Modeled loosely on the human neural system, neural nets are networks of simple but densely interconnected processing nodes. Typically, those nodes are arranged into layers, and the output of each layer passes to the layer above it. The connections between layers have associated “weights” that determine how much the output of one node contributes to the computation performed by the next.
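In code, those layers of weighted connections amount to little more than repeated matrix multiplications. The toy Python sketch below is illustrative only, a generic two-layer network rather than anything resembling Alexa’s actual architecture:

```python
import numpy as np

# A generic two-layer network: each layer's output is a weighted sum of the
# previous layer's outputs, passed through a simple nonlinearity.
rng = np.random.default_rng(0)

x = rng.normal(size=4)            # inputs (the "outputs" of layer 0)
W1 = rng.normal(size=(8, 4))      # weights connecting layer 0 to layer 1
W2 = rng.normal(size=(2, 8))      # weights connecting layer 1 to layer 2

hidden = np.maximum(0.0, W1 @ x)  # layer 1: weighted sums, then a ReLU nonlinearity
output = W2 @ hidden              # layer 2: weighted sums of layer-1 outputs
print(output)

# During training, the weights W1 and W2 are adjusted so the network's output
# moves closer to the desired result.
```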

Combined with machine learning, neural networks have accelerated progress in improving computerized speech. “It’s really a gold rush of invention,” says Breen.

Generating natural-sounding speech

Generating natural-sounding, human-like speech has been a goal of scientists for decades. In the 1930s, Bell Labs scientist Homer Dudley developed the Voder, a primitive synthetic-speech machine that an operator worked like a piano keyboard – except rather than music, out came a squawking mechanical voice. By the 1980s, a computerized TTS application called DECtalk, developed by the Digital Equipment Corporation, had progressed to the point where the late Stephen Hawking could use a version of it, paired with a keyboard, to “talk.” The results were artificial-sounding but intelligible words that many people still associate with a talking machine.


By the early 2000s, more accurate speech synthesis had become common. The foremost approach at the time was hybrid unit concatenation. Amazon, for instance, used this approach until 2015 to build early versions of Alexa’s voice and to build voice capabilities into products like the Fire Tablet. Says Nikhil Sharma, a principal product manager in Amazon’s TTS group: “To create some of the early Alexa voices, we worked with voice talents in a studio for hours and had them say a wide variety of phrases. We broke that speech data down into diphones (a diphone combines the adjacent halves of two phonemes, the distinct units of sound) and put them in a large audio database. Then, when a request came to generate speech, we could tap into that database and select the best diphones to stitch together and create a sentence spoken by Alexa.”

Nikhil Sharma, principal product manager, TTS

That process worked fairly well. But hybrid unit concatenation has its limits. It needs large amounts of pre-recorded sound from professional voice talent for reference – sort of like a tourist constantly flipping through a thick French phrasebook to find particular phrases. “Because of that, we really couldn’t say a hybrid unit concatenation system ‘learned’ a language,” says Breen.
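The stitching step Sharma describes can be pictured as a lookup-and-join over that audio database. The Python sketch below is a deliberately simplified, hypothetical illustration; real unit-selection systems also score how smoothly adjacent units join, among many other costs:

```python
from dataclasses import dataclass

@dataclass
class Unit:
    diphone: str   # e.g. "h-e": the second half of /h/ joined to the first half of /e/
    pitch: float   # a stand-in acoustic feature used to rank candidates
    audio: bytes   # the recorded snippet for this unit

def select_and_stitch(targets, database):
    """For each requested diphone, pick the recorded candidate closest to the
    target pitch, then concatenate the chosen snippets into one audio stream."""
    chosen = []
    for diphone, target_pitch in targets:
        candidates = [u for u in database if u.diphone == diphone]
        chosen.append(min(candidates, key=lambda u: abs(u.pitch - target_pitch)))
    return b"".join(u.audio for u in chosen)

database = [
    Unit("h-e", 110.0, b"\x01"),
    Unit("h-e", 95.0, b"\x02"),
    Unit("e-l", 100.0, b"\x03"),
]
audio = select_and_stitch([("h-e", 100.0), ("e-l", 100.0)], database)
```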

Creating a computer that actually learns a language – not just memorizes phrases – became a goal of researchers. “That has been the Holy Grail, but nobody knew how to do it,” says Breen. “We were close but had a quality ceiling that limited its viability.”

Neural networks offered a way to do just that. In 2018, Amazon scientists demonstrated that a generative neural network approach to synthesizing speech could produce natural-sounding results. The approach also lets Alexa flex the way she speaks about certain content. For example, Amazon scientists created Alexa’s newscaster style of speech from just a few hours of training data, allowing customers to hear the news in a style to which they’ve become accustomed. This advance paved the way for Alexa and other Amazon services to adopt different speaking styles in different contexts, improving customer experiences.
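Neural TTS systems of this general kind typically share a two-stage structure: an acoustic model maps text to an intermediate spectrogram, and a vocoder turns that spectrogram into a waveform. The PyTorch sketch below shows only that structure in miniature; the layer choices and sizes are placeholders, not Amazon’s actual models:

```python
import torch
import torch.nn as nn

class TinyAcousticModel(nn.Module):
    """Maps a sequence of text tokens to a mel-spectrogram (one frame per token,
    for simplicity; real systems predict many frames per token)."""
    def __init__(self, vocab_size=64, hidden=128, n_mels=80):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.to_mel = nn.Linear(hidden, n_mels)

    def forward(self, token_ids):
        x = self.embed(token_ids)      # (batch, time, hidden)
        x, _ = self.encoder(x)
        return self.to_mel(x)          # (batch, time, n_mels)

class TinyVocoder(nn.Module):
    """Maps each spectrogram frame to a block of audio samples."""
    def __init__(self, n_mels=80, samples_per_frame=256):
        super().__init__()
        self.upsample = nn.Linear(n_mels, samples_per_frame)

    def forward(self, mel):
        audio = self.upsample(mel)     # (batch, time, samples_per_frame)
        return audio.flatten(start_dim=1)

tokens = torch.randint(0, 64, (1, 20))  # a fake 20-token "sentence"
mel = TinyAcousticModel()(tokens)
waveform = TinyVocoder()(mel)
print(mel.shape, waveform.shape)         # (1, 20, 80) and (1, 5120)
```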

Comparisons of Alexa synthesized speech

Amazon recently announced a new Amazon Polly feature called Brand Voice, which gives organizations the opportunity to work with the Amazon Polly team of AI research scientists and linguists to build an exclusive, high-quality, neural TTS voice that represents their brand’s persona. Early adopters Kentucky Fried Chicken (KFC) Canada and National Australia Bank (NAB) have used the service to create their own brand voices, built with the same deep learning technology that powers the voice of Alexa.

Amazon Polly is an AWS service that turns text into lifelike speech, allowing customers to build entirely new categories of speech-enabled products. Polly provides dozens of lifelike voices across a broad set of languages, so developers can build speech-enabled applications that work in many different countries.
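Developers reach Polly through the standard AWS SDKs. The short Python example below uses the boto3 client to synthesize a sentence with a neural voice; the voice, region, and file name are illustrative choices:

```python
import boto3

# Ask Polly to render a sentence with one of its neural voices and save the MP3.
polly = boto3.client("polly", region_name="us-east-1")

response = polly.synthesize_speech(
    Text="Hello from a neural text-to-speech voice.",
    VoiceId="Joanna",     # one of Polly's US English neural voices
    OutputFormat="mp3",
    Engine="neural",
)

with open("hello.mp3", "wb") as f:
    f.write(response["AudioStream"].read())
```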

Looking forward, Amazon researchers are working toward teaching computers to understand the meaning of a set of words, and speak those words using the appropriate affect. “If I gave a computer a news article, it would do a reasonable job of rendering the words in the article,” says Breen. “But it’s missing something. What is missing is the understanding of what is in the article, whether it’s good news or bad, and what is the focal point. It lacks that intuition.”

That is changing. Now, computers can be taught to say the same sentence with varying kinds of inflection. In the future, it’s possible they’ll recognize how they should be saying those words based simply on the context of the words, or the words themselves. “We want computers to be sensitive to the environment and to the listener, and adapt accordingly,” says Breen.
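With Amazon Polly, some of that control is already exposed through SSML markup: the same sentence can be requested in the newscaster speaking style or with altered pacing. The snippet below is illustrative only; tag support varies by voice and engine:

```python
import boto3

# The same sentence rendered two ways: in the newscaster speaking style, and
# again more slowly using a plain SSML prosody tag.
ssml = """<speak>
  <amazon:domain name="news">The forecast calls for rain this evening.</amazon:domain>
  <prosody rate="slow">The forecast calls for rain this evening.</prosody>
</speak>"""

polly = boto3.client("polly", region_name="us-east-1")
response = polly.synthesize_speech(
    Text=ssml,
    TextType="ssml",
    VoiceId="Matthew",    # a neural voice that supports the news domain style
    OutputFormat="mp3",
    Engine="neural",
)

with open("two_styles.mp3", "wb") as f:
    f.write(response["AudioStream"].read())
```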

There are numerous potential TTS applications, from customer service and remote learning to narration of news articles. Driving improvements in this technology is one approach Amazon scientists and engineers are taking to create better experiences, not only for Alexa customers, but for organizations worldwide.

“The ability for Alexa to adapt her speaking style based on the context of a customer’s request opens the possibility to deliver new and delightful experiences that were previously unthinkable,” says Breen. “These are really exciting times.”


