Alexa launched in 2014, and in the more than six years since, we’ve been making good on our promise to make Alexa smarter every day. In addition to foundational improvements in Alexa’s core AI technologies, such as speech recognition and natural-language-understanding systems, Alexa scientists have developed technologies that continue to delight our customers, such as whispered speech and Alexa’s new live translation service.
But some of the technologies we’ve begun to introduce, together with others we’re now investigating, are harbingers of a step change in Alexa’s development — and in the field of AI itself. Collectively, these technologies will bring a new level of generalizability and autonomy to both the Alexa voice service and the tools available to Alexa developers, ushering in what I like to think of as a new “age of self” in artificial intelligence, an age in which AI systems such as Alexa become more self-aware and more self-learning, and in which they lend themselves to self-service by experienced developers and even end users.
By self-awareness, I mean the ability to maintain an awareness of ambient state (e.g., time of day, thermostat readings, and recent actions) and to employ commonsense reasoning to make inferences that reflect that awareness and prior/world knowledge. Alexa hunches can already recognize anomalies in customers’ daily routines and suggest corrections — noticing that a light was left on at night and offering to turn it off, for instance. Powered by commonsense reasoning, self-awareness goes further: for instance, if a customer turns on the television five minutes before the kids’ soccer practice is scheduled to end, an AI of the future might infer that the customer needs a reminder about pickup.
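To make the idea concrete, here is a minimal Python sketch of how a simple commonsense rule might combine ambient state with calendar information to produce a proactive suggestion. The function, state fields, and rule are hypothetical illustrations, not Alexa's implementation.

```python
from datetime import datetime, timedelta

# Hypothetical illustration, not Alexa's implementation: combine ambient state
# (here, the most recent device action) with calendar information to decide
# whether a proactive reminder makes sense.

def suggest_reminder(ambient_state, calendar_events, now=None):
    """Return a suggestion string if a simple commonsense rule fires."""
    now = now or datetime.now()
    for event in calendar_events:
        ends_soon = timedelta(0) < (event["end"] - now) <= timedelta(minutes=10)
        # The customer just turned on the TV right before a scheduled pickup time.
        if ends_soon and ambient_state.get("last_action") == "tv_on":
            return f"{event['name']} ends at {event['end']:%H:%M}. Want a pickup reminder?"
    return None

state = {"last_action": "tv_on", "time_of_day": "evening"}
events = [{"name": "Kids' soccer practice", "end": datetime.now() + timedelta(minutes=5)}]
print(suggest_reminder(state, events))
```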
Self-learning is Alexa’s ability to improve and expand its capabilities without human intervention. And like self-awareness, self-learning employs reasoning: for example, does the customer’s response to an action indicate dissatisfaction with that action? Similarly, when a customer issues an unfamiliar command, a truly self-learning Alexa would be able to infer what it might mean — perhaps by searching the web or exploring a knowledge base — and suggest possibilities.
Self-service means, essentially, the democratization of AI. Alexa customers with no programming experience should be able to customize Alexa’s services and even create new Alexa capabilities, and skill developers without machine learning experience should be able to build complex yet robust conversational skills. Colloquially, these are the conversational-AI equivalents of no-code and low-code development environments.
To be clear, the age of self is not yet upon us, and its dawning will require the maturation of technologies still under development, at Amazon and elsewhere. But some of Alexa’s recently launched capabilities herald a lightening in the Eastern sky.
Self-awareness
In 2018, we launched Alexa hunches for the smart home, with Alexa suggesting actions to take in response to anomalous sensor data. By early 2021, the science had advanced enough for us to launch an opt-in service in which Alexa can take action immediately and automatically. In the meantime, we've also been working to expand hunches to Alexa services beyond the smart home.
But commonsense reasoning requires something more — the ability to infer customers’ implicit intentions from observable temporal patterns. For instance, what does it mean if the customer turns down the thermostat, turns out the lights, locks the front door, and opens the garage? What if the customer initiates an interaction with a query like “Alexa, what’s playing at Rolling Hills Cine Plaza?”
In 2020, we took steps toward commonsense reasoning with a new Alexa function that can infer a customer’s latent goal — the ultimate aim that lies behind a sequence of requests. When a customer asks for the weather at the beach, for instance, Alexa might use that query, in combination with other contextual information, to infer that the customer may be interested in a trip to the beach. Alexa could then offer the current driving time to the beach.
To retrieve that information, Alexa has to know to map the location of the weather request to the destination variable in the route-planning function. This illustrates another aspect of self-awareness: the ability to track information across contexts.
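A minimal sketch of that kind of cross-context slot carryover, with hypothetical function and slot names rather than Alexa's actual interfaces: the location slot from a weather request is tracked in a dialogue context and reused as the destination parameter of a route-planning call.

```python
# Hypothetical sketch: the location from a weather query is tracked in the
# dialogue context and becomes the destination of a route-planning call.
# Function and slot names are illustrative, not Alexa's actual APIs.

def handle_weather(slots, context):
    context["last_location"] = slots["location"]   # track info across turns
    return f"Here's the weather for {slots['location']}."

def estimate_drive_time(origin, destination):
    return 35  # stand-in for a real route-planning service

def suggest_latent_goal(context, home="home"):
    destination = context.get("last_location")
    if destination is None:
        return None
    minutes = estimate_drive_time(origin=home, destination=destination)
    return f"By the way, driving to {destination} currently takes about {minutes} minutes."

context = {}
print(handle_weather({"location": "the beach"}, context))
print(suggest_latent_goal(context))
```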
That ability is at the core of the night-out experience we’ve developed, which engages the customer in a multiturn conversation to plan a complete night out, from buying movie tickets to making restaurant and ride-share reservations. The night-out experience tracks times and locations across skills, revising them on the fly as customers evaluate different options. To build the experience, we leveraged the machinery of Alexa Conversations, a service that enables developers to quickly and easily create dialogue-driven skills, and we drew on our growing body of research on dialogue state tracking.
Self-awareness, however, includes an understanding not only of the conversational context but also of the customer’s physical context. In 2020, we demonstrated natural turn-taking on Alexa-enabled devices with cameras. When multiple speakers are engaging with Alexa, Alexa can use visual cues to distinguish between speech the customers are directing at each other and speech they’re directing at Alexa. In ongoing work, we’re working to expand this functionality to devices without cameras, by relying solely on acoustic and linguistic signals.
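At its core, natural turn-taking depends on a device-directedness decision. The toy sketch below fuses visual, acoustic, and lexical cues into a single score with a weighted sum; the features, weights, and threshold are illustrative stand-ins, not Alexa's model.

```python
# Toy sketch, not Alexa's model: fuse simple visual, acoustic, and lexical cues
# into a single device-directedness score with a weighted sum. Feature names,
# weights, and the decision threshold are all illustrative.

WEIGHTS = {"gaze_at_device": 0.5, "speech_energy": 0.2, "lexical_match": 0.3}

def device_directed_score(features):
    return sum(WEIGHTS[name] * features.get(name, 0.0) for name in WEIGHTS)

def is_device_directed(features, threshold=0.5):
    return device_directed_score(features) >= threshold

# A speaker looking at the device and using request-like wording.
print(is_device_directed({"gaze_at_device": 1.0, "speech_energy": 0.6, "lexical_match": 0.8}))
# Side talk between two people in the room.
print(is_device_directed({"gaze_at_device": 0.0, "speech_energy": 0.7, "lexical_match": 0.1}))
```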
Finally, self-awareness also entails the capacity for self-explanation. Today, most machine learning models are black boxes; even their creators can't say exactly how they do what they do. That opacity has made explainable, or interpretable, AI a popular research topic.
Amazon actively publishes on explainable-AI topics. In addition, the Alexa Fund, an Amazon venture capital investment program, invested in fiddler.ai, a startup that uses techniques based on the game-theoretic concept of Shapley values to explain model predictions.
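For readers unfamiliar with the technique, here is a brute-force sketch of exact Shapley-value feature attribution on a toy model. Production explainability tools rely on sampling or model-specific approximations; this version simply illustrates the game-theoretic definition.

```python
from itertools import combinations
from math import factorial

# Brute-force Shapley-value attribution for a toy model; real tools use
# sampling or model-specific approximations instead of enumerating every
# feature subset.

def shapley_values(model, instance, baseline):
    """Attribute model(instance) - model(baseline) across the input features."""
    n = len(instance)
    features = list(range(n))

    def value(subset):
        # Evaluate the model with features in `subset` taken from the instance
        # and all other features taken from the baseline.
        x = [instance[i] if i in subset else baseline[i] for i in features]
        return model(x)

    attributions = []
    for i in features:
        others = [f for f in features if f != i]
        phi = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi += weight * (value(set(subset) | {i}) - value(set(subset)))
        attributions.append(phi)
    return attributions

# For a linear model, the attributions recover weight * (instance - baseline).
model = lambda x: 2 * x[0] + 3 * x[1] + 0.5 * x[2]
print(shapley_values(model, instance=[1, 1, 1], baseline=[0, 0, 0]))  # ~[2.0, 3.0, 0.5]
```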
Self-learning
Historically, the AI development cycle has involved collection of data, annotation of that data, and retraining of models on the newly annotated data — all of which add up to a laborious process.
In 2019, we launched Alexa’s self-learning system, which automatically learns to correct errors — both customer errors and errors in Alexa’s language-understanding models — without human involvement. The system relies on implicit signals that a request was improperly handled, as when a customer interrupts a response and rephrases the same request.
Currently, that fully automatic system is correcting 15% of defects. But those are defects that occur across a spectrum of users; only when enough people implicitly identify the same flaw does the system address it. We are working to adapt the same machinery to individual customers’ preferences — so that, for instance, Alexa can learn that when a particular customer asks for the song “Wow”, she means not the 2019 Post Malone hit but the 1978 Kate Bush song.
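A simplified sketch of the implicit-signal idea: treat a barge-in followed by a near-duplicate request as evidence of a defect, and adopt the rewrite once it recurs often enough. The similarity test and the adoption threshold are hypothetical simplifications, not the production system.

```python
from collections import Counter
from difflib import SequenceMatcher

# Illustrative only: a barge-in followed by a near-duplicate request is taken
# as an implicit defect signal; a rewrite observed often enough is adopted.

rewrite_counts = Counter()

def similar(a, b, cutoff=0.5):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= cutoff

def observe_turn(prev_request, interrupted, new_request, adopt_after=3):
    """Record an implicit correction; return a learned rewrite once it recurs."""
    pair = (prev_request, new_request)
    if interrupted and similar(prev_request, new_request):
        rewrite_counts[pair] += 1
    if rewrite_counts[pair] >= adopt_after:
        return {prev_request: new_request}
    return None

# The customer cuts Alexa off and rephrases the same request three times;
# the third occurrence crosses the adoption threshold.
for _ in range(3):
    learned = observe_turn("play wow", True, "play wow by kate bush")
print(learned)  # {'play wow': 'play wow by kate bush'}
```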
Customers today also have the option of explicitly teaching Alexa their preferences. In the fall of 2020, we launched interactive teaching by customers, a capability that enables customers to instruct Alexa how they want certain requests to be handled. For instance, a customer can teach Alexa that the command “reading mode” means the lights turned all the way up, while “movie mode” means the lights dimmed to twenty percent.
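A minimal sketch of what such taught preferences amount to under the hood: a customer-specific mapping from phrases to actions, consulted before falling back to a clarification question. The phrase names and the lighting call are hypothetical.

```python
# Minimal sketch of storing and applying customer-taught command meanings.
# The taught phrases and the device-control call are hypothetical stand-ins.

taught_commands = {}

def teach(phrase, action):
    """Store what the customer says a phrase should do."""
    taught_commands[phrase.lower()] = action

def set_lights(brightness):
    return f"Setting living-room lights to {brightness}%."

def handle_utterance(phrase):
    action = taught_commands.get(phrase.lower())
    if action is None:
        return "Sorry, I don't know that one yet. Want to teach me?"
    return set_lights(brightness=action["brightness"])

teach("reading mode", {"brightness": 100})
teach("movie mode", {"brightness": 20})
print(handle_utterance("movie mode"))   # Setting living-room lights to 20%.
```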
Self-service
Interactive teaching is also an early example of how Alexa is enabling more self-service. It extends prior Alexa features, like blueprints, which let customers build their own simple skills from preexisting templates, and routines, which let customers chain together sequences of actions under individual commands.
In March 2021, we announced the public release of Alexa Conversations, which allows developers to create dialogue-driven skills by uploading sample dialogues. Alexa Conversations’ machine learning models use those dialogues as templates to generate a much larger corpus of synthetic training data, and from that data it automatically trains the machine learning model that powers the new skill.
Alexa Conversations does, however, require the developer to specify the set of entities that the new model should act upon and an application programming interface for the skill. So while it requires little familiarity with machine learning, it assumes some programming experience.
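The schematic below, written in plain Python rather than Alexa Conversations' actual format or tooling, shows the underlying idea: a few developer-supplied sample dialogues plus an entity catalog can be expanded into a much larger synthetic training corpus.

```python
from itertools import product

# Schematic only; this is not Alexa Conversations' actual format or tooling.
# A few sample dialogues plus a developer-supplied entity catalog are expanded
# into a much larger synthetic training corpus by template substitution.

sample_dialogue = [
    ("user", "Book a table at {restaurant} for {party_size}"),
    ("assistant", "What time would you like the reservation at {restaurant}?"),
    ("user", "{time}"),
]

entity_catalog = {
    "restaurant": ["Rolling Hills Bistro", "Harbor Grill"],
    "party_size": ["two", "four"],
    "time": ["7 pm", "8:30 pm"],
}

def expand(dialogue, catalog):
    keys = list(catalog)
    for values in product(*(catalog[k] for k in keys)):
        binding = dict(zip(keys, values))
        yield [(speaker, text.format(**binding)) for speaker, text in dialogue]

synthetic = list(expand(sample_dialogue, entity_catalog))
print(len(synthetic))        # 8 synthetic dialogues from one template
print(synthetic[0][0][1])    # "Book a table at Rolling Hills Bistro for two"
```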
We are steadily chipping away at even that requirement by making development for Alexa easier and more intuitive. As Alexa’s repertoire of skills grows, for instance, entities are frequently reused, and we already have systems that can inform developers about entity types that they might not have thought to add to their skills. This is a step toward a self-service model in which developers no longer have to provide exhaustive lists of entities — or, in some cases, any entities at all.
Another technique that makes it easier to build machine learning models is few-shot learning, in which an existing model is generalized to a related task using only a handful of new training examples. This is an active area of research at Alexa: earlier this year, for example, we presented a paper at the Spoken Language Technology Workshop that described a new approach to few-shot learning for natural-language-understanding tasks. Compared to its predecessors, our approach reduced the error rate on certain natural-language-understanding tasks by up to 12.4% when each model was trained on only 10 examples.
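As a generic illustration of few-shot classification, not the method in the paper, the sketch below builds a "prototype" for each intent from a handful of labeled utterances and classifies new utterances by cosine similarity to the nearest prototype; the embed() function is a hypothetical stand-in for a pretrained sentence encoder.

```python
import numpy as np

# Generic few-shot sketch (not the method in the SLT paper): build a class
# "prototype" from a handful of labeled utterances per intent and classify new
# utterances by cosine similarity to the nearest prototype.

def embed(text, dim=64):
    # Placeholder: a text-seeded random vector stands in for a real sentence
    # embedding so the sketch runs without a pretrained encoder.
    seed = sum(ord(c) for c in text)
    return np.random.default_rng(seed).standard_normal(dim)

def build_prototypes(examples_by_intent):
    return {intent: np.mean([embed(t) for t in texts], axis=0)
            for intent, texts in examples_by_intent.items()}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def classify(text, prototypes):
    v = embed(text)
    return max(prototypes, key=lambda intent: cosine(v, prototypes[intent]))

support = {  # a handful of examples per intent, as in few-shot training
    "BookRide": ["get me a car to the airport", "call a taxi", "order a ride"],
    "PlayMusic": ["play some jazz", "put on my workout playlist", "play wow"],
}
prototypes = build_prototypes(support)
# With real embeddings, paraphrases would also resolve to the right intent.
print(classify("play wow", prototypes))  # PlayMusic
```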
These advances, along with the others reported on Amazon Science, demonstrate that the Alexa AI team continues to accelerate its pace of invention. More exciting announcements lie just over the horizon. I’ll be stopping back here every once in a while to update you on Alexa’s journey into the age of self.