As deep neural networks have come to dominate AI, the Conference on Neural Information Processing Systems (NeurIPS) has become the most popular conference in the field.
And at the most popular conference in the field, one of the most popular topics is reinforcement learning: at this year’s NeurIPS, 95 accepted papers use the term in their titles.
“Reinforcement learning is very, very powerful, because you can kind of learn anything, adaptively from the feedback, and by exploring the decision space,” says Shipra Agrawal, an Amazon Scholar, an assistant professor in Columbia University’s Industrial Engineering and Operations Research Department, and an area chair at NeurIPS, who studies reinforcement learning. “In concept, it’s very akin to how humans learn, by trial and error, and how they adapt to what they see — without requiring a loss function and so on, just by some kind of rewards or positive feedback.”
In reinforcement learning, an agent explores its environment, trying out different responses to different states of affairs, gradually learning a policy that will enable it to maximize some reward.
“Usually, when we say ‘classification’ or ‘regression’, it’s a very specific pattern recognition problem, and then you have to think carefully about where you can apply it,” Agrawal says. “But when you say ‘reinforcement learning,’ at a high level, it’s like, ‘Oh, I will just make some decisions, see some feedback, and then I’ll change my decisions accordingly.’ That’s it.
“But theoretically, it also has a lot of structure, which makes it possible to analyze it in a rigorous manner. So I think there’s this unique combination of theoretical curiosity and practical potential that makes it very attractive. I teach a course on reinforcement learning [at Columbia], which is a PhD class of 40, and it usually has a waiting list of 100.”
Markov decision processes
What gives the problem of reinforcement learning its theoretical structure, Agrawal explains, is a model of sequential decision making called a Markov decision process (MDP). An MDP consists of a sequence of decisions, and each decision elicits feedback: if, for instance, the decision is what products to display on a shopping site, the feedback might be whether a customer buys one of the displayed products.
Each new decision is based not only on the feedback to the previous decision but on the whole history of decisions and feedback that preceded it. That history is captured in a variable called the state.
“That makes the decision-making problem complex, because now you need to think not only about what decisions would generate good immediate feedback but also what decisions could take you to good states,” Agrawal says. “A decision may produce a good immediate reaction, but it may take the customer to a bad state later on. The Markov decision process is specified by the state transition model and the reward model, because both of them matter when you’re deciding what is a good policy to make decisions.”
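To make that structure concrete, here is a minimal sketch of how an MDP might be written down and solved when both models are known. The states, actions, transition probabilities, and rewards below are invented for illustration (a toy shopping scenario), not drawn from Agrawal's work; value iteration is a standard way to find a good policy once the MDP is fully specified.

```python
# Toy MDP sketch: all states, actions, probabilities, and rewards are
# hypothetical. P[s][a] lists (probability, next_state) pairs; R[s][a]
# is the expected immediate reward for taking action a in state s.
states = ["browsing", "engaged", "churned"]
actions = ["show_popular", "show_niche"]

P = {
    "browsing": {
        "show_popular": [(0.6, "engaged"), (0.4, "browsing")],
        "show_niche":   [(0.3, "engaged"), (0.5, "browsing"), (0.2, "churned")],
    },
    "engaged": {
        "show_popular": [(0.7, "engaged"), (0.3, "browsing")],
        "show_niche":   [(0.5, "engaged"), (0.3, "browsing"), (0.2, "churned")],
    },
    "churned": {
        "show_popular": [(1.0, "churned")],
        "show_niche":   [(1.0, "churned")],
    },
}
R = {
    "browsing": {"show_popular": 0.1, "show_niche": 0.05},
    "engaged":  {"show_popular": 1.0, "show_niche": 1.5},
    "churned":  {"show_popular": 0.0, "show_niche": 0.0},
}

def value_iteration(gamma=0.9, iters=200):
    """Compute state values and a greedy policy for the fully known MDP."""
    V = {s: 0.0 for s in states}
    for _ in range(iters):
        V = {
            s: max(
                R[s][a] + gamma * sum(p * V[s2] for p, s2 in P[s][a])
                for a in actions
            )
            for s in states
        }
    policy = {
        s: max(
            actions,
            key=lambda a: R[s][a] + gamma * sum(p * V[s2] for p, s2 in P[s][a]),
        )
        for s in states
    }
    return V, policy

V, policy = value_iteration()
print(policy)  # maps each state to the action with the best long-run value
```

Reinforcement learning is what happens when the tables P and R above are not given in advance.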
Framing the problem of reinforcement learning in terms of MDPs is straightforward, Agrawal says. “Just take an MDP and say, ‘I don’t know the state transition model, and I don’t know the reward model, and I need to learn them from the feedback itself,’” she says. “I can observe the state at the last time step, I know what the state is right now, and if I’ve seen enough of these transitions, I can infer what the state transition model must have been. In the same way, I can infer what the reward model must have been. The challenge is to combine this learning with the task of finding a good policy.”
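The kind of inference Agrawal describes can be sketched with simple counting: the empirical frequencies of observed transitions estimate the state transition model, and the empirical mean rewards estimate the reward model. The transition log below is hypothetical and only meant to show the shape of the computation.

```python
from collections import defaultdict

# Hypothetical log of observed (state, action, reward, next_state) transitions.
transitions = [
    ("browsing", "show_popular", 0.0, "engaged"),
    ("engaged", "show_niche", 2.0, "engaged"),
    ("engaged", "show_niche", 1.0, "browsing"),
    ("browsing", "show_popular", 0.0, "browsing"),
]

counts = defaultdict(lambda: defaultdict(int))   # (s, a) -> next_state -> count
reward_sums = defaultdict(float)                 # (s, a) -> total observed reward
visits = defaultdict(int)                        # (s, a) -> number of visits

for s, a, r, s_next in transitions:
    counts[(s, a)][s_next] += 1
    reward_sums[(s, a)] += r
    visits[(s, a)] += 1

# Empirical estimates of the transition and reward models.
P_hat = {
    sa: {s_next: c / visits[sa] for s_next, c in nexts.items()}
    for sa, nexts in counts.items()
}
R_hat = {sa: reward_sums[sa] / visits[sa] for sa in visits}

print(P_hat[("engaged", "show_niche")])  # {'engaged': 0.5, 'browsing': 0.5}
print(R_hat[("engaged", "show_niche")])  # 1.5
```

The hard part, as Agrawal notes, is not this estimation step by itself but interleaving it with the search for a good policy.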
MDPs have been studied since at least the 1950s. But, Agrawal says, “once you make machine learning a central feature in that problem, then suddenly, the goal shifts. How do I properly combine these two things? The initial solutions were based on a two-phase process: you learn the model first, and then you optimize it. But that doesn’t scale. How well do you need to learn the model? You can figure out how well you need to learn it in the worst case, but that’s too much: you can’t really learn the model that well, everywhere. And then you say, ‘Well, I don’t need to learn it everywhere.’ Who cares what happens in some situation which is never going to occur or which I never really want to go into anyway? Conceptually, the problem becomes very, very different once you combine these two things.”
Explore/exploit
In her own work, Agrawal is exploring another trade-off posed by reinforcement learning: the explore-exploit dilemma. During reinforcement learning, the agent is trying to learn how to maximize some reward (exploration), but it is also trying to collect as much of that reward as it can along the way (exploitation). How does it decide when to risk an untried response that may provide useful information but may also reduce its reward?
The explore-exploit dilemma can be modeled as a bandit problem, named after the situation of a gambler trying to identify the one slot machine (one-armed bandit) among many with the highest average payout. Bandit problems are another popular topic at NeurIPS: the word “bandit” shows up in the titles of 45 accepted papers.
“A lot of my work is based on this idea of posterior sampling — Thompson sampling — which is a Bayesian way to do exploration/exploitation,” Agrawal says. “It’s a theoretically rigorous method — you can prove guarantees on it — and at the same time, with posterior sampling, the cost of exploration is often as low as in some practically good, easy methods. So it kind of provides best of both worlds, at least in bandits. I’m very excited about taking these techniques and using them for reinforcement learning, especially for deep reinforcement learning.”
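For the bandit setting, posterior sampling has a particularly compact form. Below is a minimal sketch of Thompson sampling for a Bernoulli bandit with Beta posteriors; the arm payout probabilities are made up, and the code illustrates the general technique rather than any specific result of Agrawal's.

```python
import random

# Thompson sampling sketch for a Bernoulli bandit: each arm keeps a
# Beta(successes + 1, failures + 1) posterior over its payout probability,
# and at each step the agent plays the arm whose posterior sample is largest.
# The true payout probabilities below are invented for illustration.
true_probs = [0.3, 0.5, 0.7]
successes = [0] * len(true_probs)
failures = [0] * len(true_probs)

total_reward = 0
for t in range(10_000):
    # Sample a plausible payout rate for each arm from its posterior.
    samples = [random.betavariate(successes[i] + 1, failures[i] + 1)
               for i in range(len(true_probs))]
    arm = samples.index(max(samples))  # act greedily on the sampled beliefs
    reward = 1 if random.random() < true_probs[arm] else 0
    total_reward += reward
    # Update the chosen arm's posterior with the observed feedback.
    if reward:
        successes[arm] += 1
    else:
        failures[arm] += 1

print(total_reward)  # average reward per step approaches 0.7 as the best arm wins out
```

Exploration here is automatic: an arm that has been tried only a few times has a wide posterior, so it occasionally produces a large sample and gets played again, while arms known to pay poorly are sampled less and less often.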
Learn more about Amazon researchers’ involvement in NeurIPS 2020, the papers they’re presenting, and the workshops that Amazon is sponsoring and that our researchers are organizing and attending, and sign up to participate in a virtual panel on fairness in AI on December 11.