NeurIPS: Why causal-representation learning may be the future of AI

In a conversation right before the 2021 Conference on Neural Information Processing Systems (NeurIPS), Amazon vice president and distinguished scientist Bernhard Schölkopf — according to Google Scholar, the most highly cited researcher in the field of causal inference — said that the next frontier in artificial-intelligence research was causal-representation learning.

Where existing approaches to causal inference use machine learning to discover causal relationships between variables — say, the latencies of various interrelated services on a website — causal-representation learning learns the variables themselves. “These kinds of causal representations will also go toward reasoning, which we will ultimately need if we want to move away from this pure pattern recognition view of intelligence,” Schölkopf said.

Francesco Locatello, a senior applied scientist with Amazon Web Services, leads Amazon’s research on causal-representation learning, and he’s a coauthor on four papers at this year’s NeurIPS.

“Assaying out-of-distribution generalization in transfer learning” concerns one of the most compelling applications of causal inference in machine learning: generalizing models trained on data with a particular probability distribution to real-world data with a different distribution.

“When you do standard machine learning, you are drawing independent samples from some probability distribution, and then you train a model that’s going to generalize to the same distribution,” Locatello explains. “You’re describing a physical system using a single probability distribution. Causal models are different because they model every possible state that this physical system can take as a result of an intervention. So instead of having a single probability distribution, you have a set of distributions.

“What does it mean that your test data comes from a different distribution? You have the same underlying physical system; the causal structure is the same. It’s just a new intervention you have not seen. Your test distribution is different than the training, but now it’s not an arbitrary distribution. It’s well posed because it’s entailed by the causal structure, and it’s a meaningful distribution that may happen in the real world.”
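To make that idea concrete, here is a minimal, hypothetical sketch (not code from any of the papers discussed here) of a structural causal model in Python: the same causal structure, sampled under different interventions, yields a family of distributions rather than a single one. The variable names echo the website-latency example above and are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_scm(n, intervene_on=None, value=None):
    """Toy structural causal model: load -> latency -> errors.
    An intervention clamps one variable; the causal structure is unchanged,
    but the resulting distribution over all variables is different."""
    load = rng.normal(100, 10, n)
    if intervene_on == "load":
        load = np.full(n, float(value))
    latency = 0.5 * load + rng.normal(0, 5, n)
    if intervene_on == "latency":
        latency = np.full(n, float(value))
    errors = 0.1 * latency + rng.normal(0, 1, n)
    return load, latency, errors

# Observational (training) distribution vs. an interventional (test) distribution
_, latency_obs, _ = sample_scm(10_000)
_, latency_int, _ = sample_scm(10_000, intervene_on="load", value=150)
print(latency_obs.mean(), latency_int.mean())  # same system, shifted distribution
```

Each choice of intervention entails its own well-posed distribution over the same variables, which is the sense in which the test distribution, while different from the training one, is not arbitrary.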

In “Assaying out-of-distribution generalization in transfer learning”, Locatello explains, “what we do is to collect a huge variety of datasets that are constructed for or adapted to this scenario where you have a very narrow data set that you can use for transfer learning, and then you have a wide variety of test data that is all out of distribution. We look at the different approaches that have been studied in the literature and compare them on fair ground.”

Although none of the approaches canvassed in the paper explicitly considers causality, Locatello says, “causal approaches should eventually be able to do better on this benchmark, and this will allow us to evaluate our progress. That’s why we built it.”
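The evaluation protocol the paper describes, adapting to a narrow transfer set and then scoring on many held-out distributions, can be caricatured as follows. This is a hypothetical toy using synthetic data and scikit-learn, not the benchmark itself; `make_split` and the shift values are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_split(n, shift=0.0):
    """Synthetic binary-classification data; `shift` moves the feature distribution."""
    X = rng.normal(shift, 1.0, size=(n, 5))
    y = (X[:, 0] + 0.5 * X[:, 1] > shift).astype(int)
    return X, y

# A narrow transfer (training) set and several out-of-distribution test sets
X_train, y_train = make_split(500)
ood_tests = {"mild_shift": make_split(2_000, shift=0.5),
             "strong_shift": make_split(2_000, shift=1.5)}

model = LogisticRegression().fit(X_train, y_train)
for name, (X_test, y_test) in ood_tests.items():
    print(name, round(model.score(X_test, y_test), 3))  # same metric on every split
```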

Neural circuits

Today’s neural networks do representation learning as a matter of course: their inputs are usually raw data, and they learn during training which aspects of the data are most useful for the task at hand. As Schölkopf pointed out in conversation last year, causal-representation learning would simply bring causal machine learning models up to speed with conventional models.

“The important thing to realize is that most machine learning applications don’t come structured as a set of well-defined random variables that fully align with the underlying functioning of a physical system,” Locatello explains. “We still want to model these systems in terms of abstract variables, but nobody gives these variables to us. So you may want to learn them in order to be able to perform causal inference.”

Among his and his colleagues’ NeurIPS papers, Locatello says, the one that comes closest to the topic of causal-representation learning is “Neural attentive circuits”. Causal models typically represent causal relationships using graphs, and a neural network, too, can be thought of as an enormous graph. Locatello and his collaborators are trying to make that analogy explicit, by training a neural network to mimic the structure of a causal network.

“This is a follow-up on a paper we had last year in NeurIPS,” Locatello says. “The inspiration was to design architectures that behave more similarly to causal models, where you have the noise variables — that’s the data — and then you have variables that are being manipulated by functions, and they simply communicate with each other in a graph. And this graph can change dynamically when a distribution changes, for example, because of an intervention.

“In the first paper, we developed an architecture that behaves exactly like that: you have a set of neural functions that can be composed on the fly, depending on the data and the problem. The functions, the routing, and the stitching of the functions are learned. Everything is learned. But it turns out that dynamic stitching is not very scalable.

“In this new work, we essentially compiled the stitching of the functions so that for each sample it’s decided beforehand — where it’s going to go through the network, how the functions are going to be composed. Instead of doing it on the fly one layer at a time, you decide for the overall forward pass. And we demonstrated that these sparse learned connectivity patterns improve out-of-distribution generalization.”
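A rough sense of that design, deciding a sparse route through a pool of neural modules once per sample rather than layer by layer, is sketched below. This is an illustrative toy in PyTorch, not the Neural Attentive Circuits architecture; the module pool, router, and route length are all assumptions made for the example.

```python
import torch
import torch.nn as nn

class RoutedModules(nn.Module):
    """Toy sketch: a pool of small neural 'functions' plus a router that
    decides, once per sample, which modules the forward pass will use."""
    def __init__(self, dim=32, n_modules=8, k=3):
        super().__init__()
        self.pool = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU()) for _ in range(n_modules))
        self.router = nn.Linear(dim, n_modules)  # scores each module for this input
        self.k = k

    def forward(self, x):
        # Decide the whole route up front from the input (the "compiled" stitching),
        # instead of re-deciding it dynamically at every layer.
        route = torch.topk(self.router(x), self.k, dim=-1).indices  # (batch, k)
        out = x
        for step in range(self.k):
            # Apply, per sample, the module chosen for this step of the route.
            stacked = torch.stack([m(out) for m in self.pool], dim=1)  # (batch, n_modules, dim)
            idx = route[:, step].view(-1, 1, 1).expand(-1, 1, out.size(-1))
            out = stacked.gather(1, idx).squeeze(1)
        return out

x = torch.randn(4, 32)
print(RoutedModules()(x).shape)  # torch.Size([4, 32])
```

Because the whole route is fixed up front from the input, the forward pass can be planned before any module runs, which is the scalability point Locatello describes.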

Success stories

Locatello’s other NeurIPS papers are on more-conventional machine learning topics. “Self-supervised amodal video object segmentation” considers the problem of reconstructing the silhouette of an occluded object, which is crucial to robotics applications, including autonomous cars.

Segmentations of partially occluded objects, from “Self-supervised amodal video object segmentation”.

“We exploit the principle that you can build information about an object over time in a video,” Locatello explains. “Perhaps in past frames you’ve seen parts of the objects that are now occluded. If you can remember that you’ve seen this object before, and this was its segmentation mask, you can build up your segmentation over time.”
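The intuition of building up an object’s mask over time can be caricatured with a simple union of visible masks across frames. This hypothetical sketch ignores object motion and learned shape completion, both of which real amodal-segmentation methods need; it only illustrates the “remember what you have already seen” principle.

```python
import numpy as np

def accumulate_amodal_mask(visible_masks):
    """Toy illustration: union each frame's visible mask into a running memory,
    so parts of the object that are occluded now but were seen in earlier
    frames remain part of the estimated (amodal) mask."""
    memory = np.zeros_like(visible_masks[0], dtype=bool)
    amodal = []
    for mask in visible_masks:
        memory |= mask.astype(bool)   # fold new visible evidence into memory
        amodal.append(memory.copy())  # current amodal estimate for this frame
    return amodal

# Frame 1 sees the left half of an object, frame 2 only the right half.
f1 = np.array([[1, 1, 0, 0]])
f2 = np.array([[0, 0, 1, 1]])
print(accumulate_amodal_mask([f1, f2])[-1].astype(int))  # [[1 1 1 1]]
```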

The final paper, “Are two heads the same as one? Identifying disparate treatment in fair neural networks”, considers models whose training objectives are explicitly designed to minimize bias across different types of inputs. Locatello and his colleagues find that frequently, such models — purely through training, without any human intervention — develop two “heads”: that is, they learn two different pathways through the neural network, one for inputs in the sensitive class, and one for all other inputs.

The researchers argue that, since the network is learning two heads anyway, it might as well be designed with a two-headed architecture: that would improve performance while meeting the same fairness standard. But this approach hasn’t been adopted, as it runs afoul of rules prohibiting disparate treatment of different groups. In this case, however, disparate treatment could be the best way to ensure fair treatment.
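As a minimal sketch of what such a two-headed design might look like, assuming a shared trunk and one output head per group (the class name, dimensions, and head-selection scheme below are illustrative, not the paper’s implementation):

```python
import torch
import torch.nn as nn

class TwoHeadedClassifier(nn.Module):
    """Illustrative sketch: a shared trunk with one output head per group,
    making explicit the two pathways that fairness-constrained networks
    are observed to learn implicitly."""
    def __init__(self, in_dim=16, hidden=32, n_classes=2):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.heads = nn.ModuleList([nn.Linear(hidden, n_classes) for _ in range(2)])

    def forward(self, x, group):
        h = self.trunk(x)
        logits = torch.stack([head(h) for head in self.heads], dim=1)  # (batch, 2, classes)
        idx = group.view(-1, 1, 1).expand(-1, 1, logits.size(-1))
        return logits.gather(1, idx).squeeze(1)  # pick each sample's group-specific head

x = torch.randn(4, 16)
group = torch.tensor([0, 1, 0, 1])  # sensitive-group membership per sample
print(TwoHeadedClassifier()(x, group).shape)  # torch.Size([4, 2])
```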

These last two papers are only obliquely related to causality. But, Locatello says, “causal-representation learning is a very young field. So we are trying to identify success stories, and I think these papers are going in that direction.”

“It’s clear that causality will have a role in future machine learning,” he adds, “because there are a lot of open problems in machine learning that can at least be partially addressed when you start looking at causal models. And my goal really is to realize the benefits of causal models in mainstream machine learning applications. That’s why some of these works are not necessarily about causality, but closer to machine learning. Because ultimately, that’s our goal.”


