
More-natural prosody for synthesized speech


At this year’s Interspeech, the Amazon text-to-speech team presented two new papers about controlling prosody — the rhythm, emphasis, melody, duration, and loudness of speech — in speech synthesis.

One paper, “CopyCat: many-to-many fine-grained prosody transfer for neural text-to-speech”, is about transferring prosody from recorded speech to speech synthesized in a different voice. In particular, it addresses the problem of “source speaker leakage”, in which the speech synthesis model sometimes produces speech in the source speaker’s voice, rather than the target speaker’s voice.

According to listener studies using the industry-standard MUSHRA (multiple stimuli with hidden reference and anchor) methodology, the speech produced by our model improved over the state-of-the-art system’s by 47% in terms of naturalness and 14% in retention of speaker identity.

Speech with target identity + source prosody

The other paper, “Dynamic prosody generation for speech synthesis using linguistics-driven acoustic embedding selection”, is about achieving more dynamic and natural intonation in synthesized speech from TTS systems. It describes a model that uses syntactic and semantic properties of the utterance to determine the prosodic features.

Again according to tests using the MUSHRA methodology, our model reduced the discrepancy between the naturalness of synthesized speech and that of recorded speech by about 6% on complex utterances and 20% on long-form reading.

CopyCat

When prosody transfer (PT) involves very fine-grained characteristics — the inflections of individual words, as opposed to general speaking styles — it’s more likely to suffer from source speaker leakage. This issue is exacerbated when the PT model is trained on non-parallel data — i.e., without having the same utterances spoken by the source and target speaker.

The core of CopyCat is a novel reference encoder, whose inputs are a mel-spectrogram of the source speech (a snapshot of the frequency spectrum); an embedding, or vector representation, of the source speech phonemes (the smallest units of speech); and a vector indicating the speaker’s identity. 

The reference encoder outputs speaker-independent representations of the prosody of the input speech. These prosodic representations are robust to source speaker leakage despite being trained on non-parallel data. In the absence of parallel data, we train the model to transfer prosody from speakers onto themselves. 

The CopyCat architecture.
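To make the reference encoder's interface concrete, here is a minimal PyTorch sketch of a module with those inputs and outputs. The layer choices, dimensions, and names are illustrative assumptions, not the architecture from the paper.

```python
import torch
import torch.nn as nn

class ReferenceEncoder(nn.Module):
    """Maps source speech, aligned phonemes, and a speaker vector to per-frame
    prosody embeddings. Layer types and sizes are assumptions for illustration."""

    def __init__(self, n_mels=80, phoneme_dim=128, speaker_dim=64, prosody_dim=32):
        super().__init__()
        self.rnn = nn.GRU(n_mels + phoneme_dim + speaker_dim, 128, batch_first=True)
        self.proj = nn.Linear(128, prosody_dim)

    def forward(self, mels, phonemes, speaker):
        # mels:     (batch, frames, n_mels)       mel-spectrogram of the source speech
        # phonemes: (batch, frames, phoneme_dim)  phoneme embeddings aligned to frames
        # speaker:  (batch, speaker_dim)          speaker identity vector
        speaker = speaker.unsqueeze(1).expand(-1, mels.size(1), -1)
        hidden, _ = self.rnn(torch.cat([mels, phonemes, speaker], dim=-1))
        return self.proj(hidden)  # (batch, frames, prosody_dim) prosody representation
```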

During inference, the phonemes of the speech to be synthesized pass first through a phoneme encoder and then to the reference encoder. The output of the reference encoder, together with the encoded phonemes and the speaker identity vector, then passes to the decoder, which generates speech with the target speaker’s voice and the source speaker’s prosody.
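Wiring the components together at inference might then look roughly like the sketch below, with each component reduced to a single GRU for brevity (the middle GRU stands in for the reference encoder sketched above); all module types and tensor sizes are assumptions.

```python
import torch
import torch.nn as nn

batch, frames = 1, 200
phoneme_emb = torch.randn(batch, frames, 128)  # phonemes of the utterance to synthesize
source_mels = torch.randn(batch, frames, 80)   # mel-spectrogram of the source speech
speaker = torch.randn(batch, 64)               # target-speaker identity vector
speaker_seq = speaker.unsqueeze(1).expand(-1, frames, -1)

phoneme_encoder = nn.GRU(128, 128, batch_first=True)
reference_encoder = nn.GRU(80 + 128 + 64, 32, batch_first=True)  # cf. the sketch above
decoder = nn.GRU(128 + 32 + 64, 80, batch_first=True)            # emits mel frames

encoded_phonemes, _ = phoneme_encoder(phoneme_emb)
prosody, _ = reference_encoder(torch.cat([source_mels, phoneme_emb, speaker_seq], dim=-1))
mels_out, _ = decoder(torch.cat([encoded_phonemes, prosody, speaker_seq], dim=-1))
# mels_out: (batch, frames, 80) speech in the target voice with the source prosody
```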

To evaluate the efficacy of our method, we compared CopyCat to a state-of-the-art model over five target voices, onto which the source prosody from 12 different unseen speakers had been transferred. CopyCat showed a statistically significant 47% increase in prosody transfer quality over the baseline. In another evaluation involving native speakers of American English, CopyCat showed a statistically significant 14% improvement over the baseline in its ability to retain the target speaker's identity. CopyCat achieves both results with a significantly simpler decoder than the baseline requires, with no drop in naturalness. 

Prosody selection

Text-to-speech (TTS) has improved dramatically in recent years, but it still lacks the dynamic variation and adaptability of human speech.

One popular way to encode prosody in TTS systems is to use a variational autoencoder (VAE), which learns a distribution of prosodic characteristics from sample speech. Selecting a prosodic style for a synthetic utterance is a matter of picking a point — an acoustic embedding — in that distribution. 

In practice, most VAE-based TTS systems simply choose a point in the center of the distribution — a centroid — for all utterances. But rendering all the samples with the exact same prosody gets monotonous. 
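The difference between the two strategies can be sketched in a few lines; the latent dimension, the selector network, and the feature vector below are illustrative assumptions rather than components of the actual system.

```python
import torch
import torch.nn as nn

latent_dim = 64    # size of the VAE prosody space (assumed)
feature_dim = 800  # size of the per-utterance linguistic feature vector (assumed)

# Centroid selection: every utterance is rendered from the same point in the
# prosody space, e.g. the mean of the prior.
centroid = torch.zeros(latent_dim)

# Dynamic selection: a small, hypothetical selector network maps the linguistic
# features of each utterance to its own acoustic embedding.
selector = nn.Sequential(
    nn.Linear(feature_dim, 256),
    nn.Tanh(),
    nn.Linear(256, latent_dim),
)

linguistic_features = torch.randn(1, feature_dim)   # stand-in for syntax/BERT features
acoustic_embedding = selector(linguistic_features)  # a different embedding per utterance
```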

In our Interspeech paper, we present a novel way of exploiting linguistic information to select acoustic embeddings in VAE systems to achieve a more dynamic and natural intonation in TTS systems, particularly for stylistic speech such as the newscaster speaking style.

Syntax, semantics, or both?

We experiment with three different systems for generating vector representations of the inputs to a TTS system, allowing us to explore the impact of both syntax and semantics on the overall quality of speech synthesis.

The first system uses syntactic information only; the second relies solely on BERT embeddings, which capture semantic information about strings of text, on the basis of word co-occurrence in large text corpora; and the third uses a combination of BERT and syntactic information. Based on these representations, our model selects acoustic embeddings to characterize the prosody of synthesized utterances.
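A rough sketch of how the three input variants might be assembled is shown below; the BERT checkpoint, the mean pooling, and the placeholder syntactic feature vector are assumptions made only to keep the example short.

```python
import torch
from transformers import BertModel, BertTokenizer

# Semantic features from BERT (checkpoint and pooling are assumptions).
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")

sentence = "The brown fox is quick, and it is jumping over the lazy dog"
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    semantic = bert(**inputs).last_hidden_state.mean(dim=1)  # (1, 768)

# Syntactic features, e.g. syntactic distances from a constituency parse
# (see the constituency-tree sketch later in this post); a placeholder here.
syntactic = torch.zeros(1, 32)

# The three input variants explored in the paper, fed to the embedding selector:
syntax_only = syntactic
bert_only = semantic
bert_plus_syntax = torch.cat([semantic, syntactic], dim=-1)
```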

To explore whether syntactic information can aid prosody selection, we use the notion of syntactic distance, a measure based on constituency trees, which map syntactic relationships between the words of a sentence. Large syntactic distances correlate with acoustically relevant events such as phrasing breaks or prosodic resets.

A constituency tree featuring syntactic-distance measures (orange circles).

credit: Glynis Condon

Above is the constituency tree of the sentence “The brown fox is quick, and it is jumping over the lazy dog”. Parts of speech are labeled with Penn Treebank part-of-speech tags: “DT”, for instance, indicates a determiner; “VBZ” indicates a third-person singular present verb, while “VBG” indicates a gerund or present participle; and so on.

The structure of the tree indicates syntactic relationships: for instance, “the”, “brown”, and “fox” together compose a noun phrase (NP), while “is” and “quick” compose a verb phrase (VP). 

Syntactic distance is a rank ordering that reflects the heights, within the tree, of the lowest common ancestors of consecutive words; any set of values that preserves that ordering is valid.

One valid distance vector for this sentence is d = [0 2 1 3 1 8 7 6 5 4 3 2 1]. The completion of the subject noun phrase (after “fox”) triggers a prosodic reset, reflected in the distance of 3 between “fox” and “is”. There should also be a more emphasized reset at the end of the first clause, represented by the distance of 8 between “quick” and “and”.
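As a simplified illustration, the sketch below uses NLTK to derive distance-like values from a hand-written constituency parse of the example sentence, taking each distance to be the height of the lowest common ancestor of consecutive words. The parse string and the resulting values are assumptions and need not match the vector above, but they show the same pattern: small values inside phrases, large values at clause boundaries.

```python
# A simplified sketch, assuming each distance is the height of the lowest common
# ancestor of consecutive words; requires nltk, and the parse is hand-written.
from nltk import Tree

tree = Tree.fromstring("""
(S
  (S
    (NP (DT The) (JJ brown) (NN fox))
    (VP (VBZ is) (ADJP (JJ quick))))
  (CC and)
  (S
    (NP (PRP it))
    (VP (VBZ is)
      (VP (VBG jumping)
        (PP (IN over)
          (NP (DT the) (JJ lazy) (NN dog)))))))
""")

def syntactic_distances(tree):
    """Height of the lowest common ancestor of each pair of consecutive words."""
    leaves = tree.treepositions("leaves")
    dists = [0]  # the first word has no left neighbour
    for prev, cur in zip(leaves, leaves[1:]):
        depth = 0  # longest common prefix of positions = lowest common ancestor
        while prev[depth] == cur[depth]:
            depth += 1
        dists.append(tree[prev[:depth]].height())
    return dists

print(syntactic_distances(tree))
# [0, 3, 3, 5, 4, 8, 8, 7, 6, 5, 4, 3, 3]: large at "and" (the clause boundary),
# small inside noun phrases such as "the lazy dog"
```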

We compared VAE models with linguistically informed acoustic-embedding selection against a VAE model that uses centroid selection on two tasks, sentence synthesis and long-form reading.

The sentence synthesis data set had four categories: complex utterances; sentences with compound nouns; and two types of questions with their characteristic prosody (the rising inflection at the end, for instance), namely questions beginning with “wh” words (who, what, why, etc.) and “or” questions, which present a choice.

The model that uses syntactic information alone improves on the baseline model across the board, while the addition of semantic information improves performance still further in some contexts. 

On the “wh” questions, the combination of syntactic and semantic data delivered an 8% improvement over the baseline, and on the “or” questions, the improvement was 21%. This suggests that questions have closely related syntactic structures, which the model can exploit to produce more natural prosody.

On long-form reading, the syntactic model alone delivered the best results, reducing the gap between the baseline and recorded speech by approximately 20%.


