
A general approach to solving bandit problems


Bandit problems are problems in which an agent interacting with the environment tries to simultaneously maximize some reward and learn how to maximize that reward. The name comes from the situation of a gambler trying to discover which of a casino’s slot machines — its one-armed bandits — offers the best payout, while minimizing the amount of money fed into machines that offer a lower chance of winning.


Bandit problems arise in a wide range of contexts, but designing and deploying machine learning systems to solve them is often too complicated to be practical. So we’ve developed a simple, flexible framework for solving bandit problems, which lets us bring the benefits of powerful statistical tools to applications that are not as high-impact as ranking content on the Amazon home page but still make a difference in the quality of our customers’ experiences.

At this year’s International Conference on Information and Knowledge Management (CIKM), we are presenting two applications of this framework, as an initial demonstration of its flexibility and ease of use. In ongoing work, we’re applying the framework to other problems, as well.

The first CIKM paper is about the learning-to-rank problem, or determining the order in which a list of items should be presented to a customer. The classic learning-to-rank problem focuses on ordering search results, but the same methods apply to any presentation of information, such as the layout of a web page or a ranked list of music recommendations.

The second paper is about a specific application of learning-to-rank to natural-language understanding (NLU), of the kind Alexa does when processing customer requests. Where an utterance has multiple possible NLU interpretations, learning-to-rank allows us to select the best one for a specific customer. If, for instance, a customer says to Alexa, “Play Dark Side of the Moon”, it’s unclear whether this refers to the album by Pink Floyd or the song by Lil Wayne. Alexa’s NLU models output lists of possible NLU interpretations, scored by probability. Our system re-ranks those lists on the basis of individual customers’ listening histories.

We tested our learning-to-rank approach by using it to determine the order in which Amazon Music presents music recommendations to customers. Compared to a learning-to-rank algorithm that uses matrix factorization, our method led to a 7.6% increase in the frequency with which customers selected one of the recommended songs for playback and a 7.2% increase in the listening duration of the selected songs.

Similarly, we tested the NLU re-ranking system on spoken-language music requests, using accepted playbacks as an implicit signal that a song was correctly selected.

The re-ranking was limited to a relatively small percentage of traffic, where the top NLU interpretation had not worked well in the past. On those requests, we observed a dramatic increase in accepted playbacks, in the range of 50% to 70%.

Where’s the action?

Our framework models each interaction in the bandit setting as the ordering of a finite list of given actions. An action could be playing a song or displaying a search result or a layout element at a particular location on-screen.

We model each action as a vector of fixed length. This permits the later addition of actions that are not yet known when the model is created. The vector may also include contextual information, which can lead the model to make different choices in different situations. For instance, if a customer says to a voice agent, “Play exile”, the model might rank either the Taylor Swift song “Exile” or music by the band Exile higher, depending on what the contextual information indicates about the customer’s listening history.
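
To make this representation concrete, here is a minimal sketch of how such a fixed-length action vector might be assembled by concatenating action features with contextual features; the specific features and values are illustrative assumptions, not taken from the papers.

```python
import numpy as np

def action_vector(action_features, context_features):
    # Concatenate fixed-length action features with fixed-length context
    # features, so actions unknown at model-creation time can still be encoded.
    return np.concatenate([np.asarray(action_features, dtype=float),
                           np.asarray(context_features, dtype=float)])

# Hypothetical encoding of the two readings of "Play exile", sharing the same
# context feature (the customer's estimated affinity for Taylor Swift).
swift_exile = action_vector([1.0, 0.0], [0.9])  # the song "Exile"
band_exile  = action_vector([0.0, 1.0], [0.9])  # music by the band Exile
```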

After the model presents its list of actions, it receives feedback about one or more of the actions. If the voice agent plays a song, and the customer cuts it off after only a few seconds, that’s an indication of dissatisfaction with the song choice. If a website presents the customer with a list of song options, and the customer clicks on three of them, that’s an indication that those songs should have been at the top of the list.
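
As an illustration of how such implicit signals could be turned into numeric rewards for the bandit, the sketch below uses made-up thresholds; the actual reward definitions in the papers may differ.

```python
def playback_reward(seconds_played, track_length_seconds):
    # Treat an early cutoff as dissatisfaction (zero reward) and longer
    # listening as increasingly positive feedback, capped at 1.0.
    if seconds_played < 10:  # assumed cutoff threshold
        return 0.0
    return min(seconds_played / track_length_seconds, 1.0)
```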

In the bandit setting, the goal is to both explore the environment — to learn what actions elicit the greatest rewards — and exploit the knowledge gained — to maximize the reward. After each interaction with the environment, the agent has new information on which to base the ordering of its next list. The idea is to select the sequence of orderings that best manages the explore/exploit trade-off.

In our CIKM papers, we adapted two well-known learning algorithms to work with our bandit model: the upper-confidence-bound (UCB) algorithm and Thompson sampling. But the framework is flexible enough to permit the use of other algorithms as well.
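
As a minimal sketch of how one of these algorithms can order a list of action vectors, the code below implements Thompson sampling on a standard Bayesian linear-reward model; this is a generic textbook formulation, not the exact adaptation described in the papers.

```python
import numpy as np

class LinearThompsonRanker:
    """Thompson sampling over a Bayesian linear model of expected reward."""

    def __init__(self, dim, prior_var=1.0, noise_var=1.0):
        self.A = np.eye(dim) / prior_var   # posterior precision matrix
        self.b = np.zeros(dim)             # precision-weighted posterior mean
        self.noise_var = noise_var

    def rank(self, action_vectors):
        # Sample one plausible reward model from the posterior and
        # return action indices ordered by its predicted rewards.
        cov = np.linalg.inv(self.A)
        mean = cov @ self.b
        theta = np.random.multivariate_normal(mean, cov)
        scores = np.asarray(action_vectors) @ theta
        return list(np.argsort(-scores))   # best first

    def update(self, action_vector, reward):
        # Incorporate observed feedback for one action.
        x = np.asarray(action_vector, dtype=float)
        self.A += np.outer(x, x) / self.noise_var
        self.b += reward * x / self.noise_var
```

At each interaction, the agent would call rank() on the current list of action vectors, present that ordering, and then call update() for each action on which it received feedback; sampling the model rather than using the posterior mean is what drives exploration.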

In the learning-to-rank paper, we extended our model to account for position bias, or the influence of an item’s position in a list on the customer’s decision to select it: items toward the top of a list tend to be selected more frequently, even if they’re not the best matches for the customer’s query. We thus model the probability that an item is selected as a combination of both its relevance to the query and its position on the list.
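
One common way to express that combination is the position-based click model, in which the click probability factors into an examination probability that depends only on the position and a relevance probability that depends only on the item. The sketch below uses that generic model with assumed examination probabilities; it is not necessarily the exact formulation in the paper.

```python
def click_probability(relevance, position,
                      examination=(1.0, 0.7, 0.5, 0.35, 0.25)):
    # Position-based click model:
    # P(click) = P(examined at this position) * P(item is relevant).
    # `examination` holds assumed per-position probabilities (illustrative).
    return examination[position] * relevance

# The same item (relevance 0.6) is clicked far more often at the top
# of the list than further down, even though its relevance is unchanged.
print(click_probability(0.6, 0))   # 0.6
print(click_probability(0.6, 3))   # 0.21
```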

In the NLU interpretation paper, the crucial adaptation was determining which contextual information to include in the action vector. The popularity of the song or album to be played is one such factor, as is an indication of the customer’s “affinity” for the artist, based on listening history.
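
For illustration, the action vector for a single NLU interpretation might look like the hypothetical sketch below; the feature names and numbers are assumptions for the sake of the example, not values from the paper.

```python
def interpretation_vector(nlu_score, popularity, artist_affinity):
    # nlu_score:       the NLU model's probability for this interpretation
    # popularity:      popularity of the song or album, scaled to [0, 1]
    # artist_affinity: the customer's affinity for the artist, from history
    return [nlu_score, popularity, artist_affinity]

# "Play Dark Side of the Moon" for a customer who mostly listens to classic rock.
pink_floyd_album = interpretation_vector(0.55, 0.90, 0.80)
lil_wayne_song   = interpretation_vector(0.45, 0.60, 0.05)
```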

Interested readers can consult the papers for more details. But these are just two illustrative applications of a framework we are using to improve the quality of the experiences we provide our customers.


