Interspeech is a technical conference focused on speech processing and its applications, emphasizing interdisciplinary approaches to all aspects of speech science and technology, from basic theory to advanced applications. Amazon was a platinum sponsor of the 2020 event, held Oct. 25-29. To showcase recent advancements in speech science, Amazon Alexa scientists Shiv Vitaladevuni, Ariya Rastrow, Andrew Breen, and Chao Wang hosted a lightning talks session and live Q&A.
Their talks covered the current state of the field, new developments, and recent announcements related to Alexa speech technologies, along with some of the research presented at this year's Interspeech. Watch their talks below to learn more about the areas of speech research bringing Alexa to life — including wake word detection, automatic speech recognition (ASR), text-to-speech (TTS), and acoustic event detection — as well as the teams behind it all.
An overview of wake word technology
Amazon Alexa speech lightning talks: Shiv Vitaladevuni, director of applied science
Automatic speech recognition
Amazon Alexa speech lightning talks: Ariya Rastrow, senior principal scientist
Making Alexa sound more natural
Amazon Alexa speech lightning talks: Andrew Breen, senior manager, TTS research
Acoustic event detection
Amazon Alexa speech lightning talks: Chao Wang, senior manager, applied science