Amazon at ACL: How to teach machines to reason

As a senior area chair at this year’s meeting of the Association for Computational Linguistics (ACL), Dan Roth, who recently joined Amazon Web Services’ AI organization as science lead for natural-language processing, has a good vantage point on paper submissions to the conference. On this year’s program, one theme leaped out at him.

Dan Roth, science lead for natural-language processing in Amazon Web Services’ AI organization and the Glandt Distinguished Professor in the University of Pennsylvania’s Department of Computer and Information Science.

“I looked at some statistics of papers in ACL, and I saw that there are dozens of papers now that have ‘reasoning’ in the title,” says Roth, who is also the Glandt Distinguished Professor in the University of Pennsylvania’s Department of Computer and Information Science. “The title ‘learning to reason’ is now becoming sort of hot. I think a lot of AI is going in that direction.”

Machine reasoning, Roth says, is “the ability to make inferences, especially in ‘sparse’ situations that are unlikely to have been observed before.” The classic example is deduction: from the facts that all women are mortal and that Sappho is a woman, a machine reasoning system should infer that Sappho is mortal.
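
To make the idea concrete, here is a minimal sketch of that kind of deduction, a toy forward-chaining loop rather than any real system, applying the rule “every woman is mortal” to the fact “Sappho is a woman”:

```python
# Illustrative sketch only: a tiny forward-chaining deducer that derives
# "Sappho is mortal" from one rule and one fact.

facts = {("woman", "Sappho")}
rules = [("woman", "mortal")]  # if X is a woman, then X is mortal

changed = True
while changed:
    changed = False
    for premise, conclusion in rules:
        for predicate, entity in list(facts):
            if predicate == premise and (conclusion, entity) not in facts:
                facts.add((conclusion, entity))
                changed = True

print(("mortal", "Sappho") in facts)  # True
```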

Roth is well situated to review recent progress in the field, as it’s been a topic of his own research for more than 25 years. 

“This was actually my PhD work,” he says. “Learning theory was an emerging field at that time. The questions were basically, How can we formalize learning, and what does it mean that something is learnable or not learnable? What are the computational-complexity issues in learning? I was trying to move this towards questions in reasoning, which were never studied from a theoretical perspective or computational-complexity perspective.

“The assumption was that someone gives you an input — a knowledge base, for example — and you present reasoning queries to it, and in this context you want to show what can be computed. My PhD thesis was about showing that if you don’t start from a knowledge base, but you jointly do learning from data and reasoning from the resulting, intermediate representation, it’s easier than doing each one of them separately. You could say that end-to-end learning today is an instantiation of this learning-to-reason process, although just conceptually. Technically, the things are very, very different.”

Compositionality

Even though Roth is, in a sense, a pioneer of end-to-end reasoning models, he believes that more-complex reasoning problems will require more-complex modeling.

“We have a lot of hard problems that we are far from being able to address using just one model,” he says. “A lot of the problems will require thinking about things in a modular way. 

“I’ll give you a simple example. I want to ask my virtual assistant, ‘Are we going to make it to dinner before the movie?’ What does this assistant need to do in order to respond to my question? It needs to know where I am now, where the movie is, how long it’s going to take to get there — that’s easy to do today. How long is dinner? I didn’t say anything about it, but we have some idea of the typical length of dinner, maybe as a function of where dinner is. Do I need to find parking? I didn’t mention parking. It’s an implicit event, but we know that I have to park, maybe next to the dinner place, maybe next to the movie. I have to factor this in.

“So I have to have models that know how to compute things, have some common sense — typical time of dinner, typical time of finding parking, driving between these places. And then I need a model that knows how to put this together. It’s not going to be the same model, because I’m not going to train on each question. Many of the problems that we want to address are like that, where there’s modularity, and we will never be able to move forward without realizing that there is modularity.”
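
As a rough illustration of what such modular composition might look like, the sketch below wires hypothetical modules — a travel-time estimator and a commonsense duration estimator — into one composition function. The function names and numbers are invented placeholders, not any actual assistant’s API.

```python
from datetime import datetime, timedelta

# Hypothetical modules: in a real assistant each would be a separate
# learned model or service (a routing API, a commonsense duration
# estimator, and so on). The numbers here are placeholders.
def travel_minutes(origin, destination):
    return 20  # e.g., returned by a maps/routing service

def typical_minutes(event):
    return {"dinner": 75, "parking": 10}.get(event, 0)  # commonsense estimate

def will_make_the_movie(now, dinner_place, theater, movie_start):
    # The composition step: chain the modules in the right order --
    # drive to dinner, park, eat, drive to the theater, park again.
    total = (travel_minutes("current location", dinner_place)
             + typical_minutes("parking")
             + typical_minutes("dinner")
             + travel_minutes(dinner_place, theater)
             + typical_minutes("parking"))
    return now + timedelta(minutes=total) <= movie_start

now = datetime(2021, 8, 2, 17, 30)
print(will_make_the_movie(now, "bistro", "cinema",
                          datetime(2021, 8, 2, 20, 0)))  # True
```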

Symbolic reasoning

Moreover, Roth says, the systems that integrate these separate modules will almost certainly need to use symbolic reasoning, or rule-based manipulation of symbolic representations.

“The growth and the excitement around neural networks has left symbols behind,” Roth says. “Some people think that symbols are an evil invention of the old AI people. But symbols were invented because they’re useful, necessary abstractions. And also, explanations are symbolic, right? When you ask me, ‘Why did you decide this?’ or ‘Why is this implied by that?’, I need to explain it to you, and I need to use symbols when I do this. So I think we are beginning to explore this interesting space between models that are continuous, if you like, and interactions that are largely symbolic.

“I’ll give you an example. I’ve worked a lot on reasoning about time, as expressed in natural-language text. If you want to reason about events, you have to use the fact — and people do it all the time — that time is transitive. If A happens before B, and B happens before C, then A happens before C. This will never be written explicitly. So we kind of tell our models ‘Time is transitive’, and we can show that this helps a lot.”
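
A minimal sketch of how that never-stated transitivity might be imposed on a model’s output, with the events and orderings below purely illustrative:

```python
# Sketch: enforce the transitivity of "before" over event pairs a model
# has extracted from text. Event names are made up for illustration.
predicted = {("dinner", "movie"), ("movie", "drive home")}

def transitive_closure(pairs):
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        for a, b in list(closure):
            for c, d in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

# "dinner before drive home" was never stated, but transitivity implies it.
print(("dinner", "drive home") in transitive_closure(predicted))  # True
```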

The transitivity of time, however, is something that can be represented in the architecture of a neural network. That won’t always be the case, Roth explains.

“There are some cases where only in postprocessing are you aware of some declarative constraints,” Roth says. “Once you evaluate your model, once you decode, once you make the decision — only then do you want to impose a declarative constraint. Sometimes there are constraints that I was unaware of while I was training the model: the model is fixed, I trained it yesterday, but now I’m using it in a given situation where I’m aware of a constraint, and I want to be able to impose it. And there is very interesting theoretical work that people are doing now on trying to understand the advantages and disadvantages of these two paradigms — when which one is better. But the fact of the matter is that we need both.”
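
One way to picture the impose-it-at-decoding-time paradigm is a sketch like the following, in which an already-trained model’s scores are left untouched and a constraint learned only at inference time filters the candidates. This is purely illustrative, not a specific published method.

```python
# The model is already trained and its scores are fixed; a declarative
# constraint discovered only at use time filters the candidate outputs.
def constrained_decode(scored_outputs, satisfies):
    feasible = [(y, score) for y, score in scored_outputs if satisfies(y)]
    return max(feasible, key=lambda pair: pair[1])[0] if feasible else None

# Example: the model prefers one temporal ordering, but a constraint we
# only learned at inference time rules it out.
scored = [("A before B", 0.6), ("B before A", 0.4)]
print(constrained_decode(scored, lambda y: y.startswith("B")))  # "B before A"
```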

“In the last five years, deep neural networks have had a huge impact, especially in the context of natural language,” Roth adds. “There’s a lot of excitement, for good reason. But sooner or later, people get to the realization that that’s not sufficient. I think today, more and more people are beginning to think about reasoning problems and the need to decompose and compose to address them.”


