ICLR: The AI conference that helped redefine the field

The International Conference on Learning Representations (ICLR), which will be virtual this year and begins next week, is only eight years old. But according to Google Scholar’s rankings of the highest-impact publication venues in the field of AI, it’s second only to the enormously popular NeurIPS.

“That is quite impressive for a young conference,” says Stefano Soatto, the director of applied science for Amazon Web Services’ AI applications, who is on leave from the University of California, Los Angeles, where he’s a professor of computer science.

“ICLR was born as a niche conference but has become the mainstream,” Soatto explains. “It is specifically a conference on learning representations. Representations are functions of the data that are designed or learned so as to solve a given task. Because powerful data representations have been so central — thanks to the advent of deep learning — the difference between ICLR and the other AI conferences has shrunk.”

Stefano Soatto, director of applied science for Amazon Web Services’ AI applications. Credit: UCLA Samueli

Originally, Soatto explains, developing data representations required expertise in the relevant fields. For example, he says, consider SIFT, or the scale-invariant feature transform. As its name suggests, SIFT produces representations of visual features that are invariant with respect to scale: the features that characterize images of dogs, for example, should be the same whether the dog is photographed in long shot or closeup.
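To make scale invariance concrete: with OpenCV (version 4.4 or later, where SIFT ships in the main module), extracting and matching SIFT features takes only a few lines. The sketch below uses hypothetical filenames for a closeup and a long shot of the same dog; descriptors of the same physical structures should still match across the two scales.

```python
import cv2  # OpenCV >= 4.4; SIFT's patent has expired and it is built in

# Hypothetical filenames: two photos of the same dog at different scales.
closeup = cv2.imread("dog_closeup.jpg", cv2.IMREAD_GRAYSCALE)
longshot = cv2.imread("dog_longshot.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()

# Each call returns keypoints plus 128-dimensional descriptors.
kp1, desc1 = sift.detectAndCompute(closeup, None)
kp2, desc2 = sift.detectAndCompute(longshot, None)

# Because the descriptors are scale-invariant, the same physical
# features should match despite the difference in scale.
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(desc1, desc2, k=2)

# Lowe's ratio test keeps only distinctive matches.
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(f"{len(good)} scale-invariant matches between the two photos")
```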

“SIFT comes from two disciplines that have deep roots,” Soatto says. “One is harmonic analysis — all the literature on wavelets, filter banks, multiscale Fourier analysis, and so forth. The other is computational neuroscience, where, going back to Marr, people have noticed there is a certain organization in the processing of data in the visual cortex. So SIFT is kind of the summa: a sensible implementation of ideas from neuroscience and harmonic analysis that really required specific domain knowledge.

“But then neural networks come about, and with relatively simple operations from linear algebra and optimization, all of a sudden you could obtain results that are state of the art. So that was really a game changer.”

“I’m not suggesting that neural networks are easy,” he adds. “You need to be an expert to make these things work. But that expertise serves you across a broader spectrum of applications. In a sense, all of the effort that previously went into feature design now goes into architecture design and loss function design and optimization scheme design. The manual labor has been raised to a higher level of abstraction.”
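That shift is easy to see in code. In a framework like PyTorch, the three design choices Soatto lists are explicit, separate objects; the generic sketch below (a toy 10-class image model, not any Amazon system) labels each one.

```python
import torch
import torch.nn as nn

# 1. Architecture design: a small convolutional network whose learned
#    features replace hand-engineered ones such as SIFT.
model = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, 10),  # 10 output classes, an arbitrary choice here
)

# 2. Loss function design.
loss_fn = nn.CrossEntropyLoss()

# 3. Optimization scheme design.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

# One training step on a dummy batch, to show how the pieces fit together.
images = torch.randn(8, 3, 32, 32)
labels = torch.randint(0, 10, (8,))
loss = loss_fn(model(images), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```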

Versatility

Two of the four Amazon papers at ICLR are on the topic of meta-learning, or learning how to learn, and the other two are on transfer learning, or improving a network’s performance in a domain where data are sparse by pre-training it on a related domain where data are abundant. But all four papers are about adapting machine learning systems to new tasks.
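For readers unfamiliar with the transfer-learning recipe, the standard pattern is to take a backbone pre-trained on a data-abundant domain and fit only a new head on the data-sparse one. The sketch below is a generic torchvision version of that recipe, not the method of the ICLR papers themselves.

```python
import torch.nn as nn
from torchvision import models  # torchvision >= 0.13 for the weights API

# Backbone pre-trained on ImageNet, the data-abundant source domain.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the learned representation ...
for param in backbone.parameters():
    param.requires_grad = False

# ... and train only a fresh classifier head on the data-sparse target
# domain (5 target classes here, an arbitrary choice for the sketch).
backbone.fc = nn.Linear(backbone.fc.in_features, 5)
# Only backbone.fc now receives gradients during fine-tuning.
```

When target data are extremely scarce, freezing the backbone also acts as a regularizer; with more data, one can unfreeze it and fine-tune end to end.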

This is natural, Soatto says, given the current state of the field of learning representations.

“If you ask the question, ‘Given a particular set of data and given a task, what is the best possible representation one could construct?’, we have a good handle on that, both theoretically and practically,” Soatto says. “What remains a challenge are two complementary problems. One is, ‘Given a task, what is the best data I can get for it?’ That’s the problem of active learning, which Amazon Web Services is covering with Ground Truth, AutoML, and Custom Labels.”
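One common way to answer the “best data” question is uncertainty sampling: ask humans to label the examples the current model is least sure about. The sketch below is a minimal generic version of that idea, not the actual mechanism inside Ground Truth or Custom Labels.

```python
import numpy as np

def least_confident(probs: np.ndarray, budget: int) -> np.ndarray:
    """Pick the unlabeled examples whose top predicted probability is
    lowest, i.e., where the current model is least confident.

    probs: (n_examples, n_classes) predicted class probabilities.
    Returns indices of the `budget` examples to send for labeling.
    """
    confidence = probs.max(axis=1)
    return np.argsort(confidence)[:budget]

# Toy usage: model probabilities for 6 unlabeled examples, 3 classes.
probs = np.array([
    [0.98, 0.01, 0.01],  # confident -> skip
    [0.40, 0.35, 0.25],  # uncertain -> label
    [0.34, 0.33, 0.33],  # most uncertain -> label
    [0.90, 0.05, 0.05],
    [0.55, 0.30, 0.15],
    [0.80, 0.10, 0.10],
])
print(least_confident(probs, budget=2))  # -> [2 1]
```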

“The other is when you want to use a model trained for a particular learning task on a different task,” Soatto continues. “This is the problem of transfer learning and domain adaptation, where you know that your training set will be misaligned from the test sets.” It’s also the problem that the three ICLR papers from Soatto’s group at Amazon address.

Benchmarks

“‘A Baseline for Few-Shot Image Classification’ speaks to the gap between academic research and real-world research,” Soatto says. “There is a field called few-shot learning. The idea is, basically, you want to learn how to solve learning tasks given very few samples. And there are some benchmark data sets.

“Benchmarks are a sanity check that allows you to objectively compare with others. But sometimes the benchmarks are detrimental to progress because they incentivize playing to the benchmark, developing algorithms that do well on the benchmarks.

“When we started looking at few-shot learning, we noticed that the benchmarks are very strange in the sense that they force you to make specific choices of how many images you train with: either one or five. But if we have a service for few-shot learning — which we do, called Custom Labels — people bring in however many images they have. It could be a million; it could be a hundred; it could be ten; it could be one.

“Obviously, you’re not going to be able to serve a different model for every possible number of samples they bring. So what we said was, ‘Why don’t we try the simplest thing that we can think of that would work no matter what the few-shot conditions are?’ — with the expectation that this would be a baseline, the first thing that you can think of and easily implement that everybody should beat.

“And to our surprise, this trivial baseline beat every top-performing algorithm. Obviously, the paper is not saying this is how you should solve few-shot learning. It’s saying that we should rethink the way we evaluate few-shot learning, because if the simplest possible thing you can think of beats the state of the art, then there’s something wrong with the way we’re doing it.”
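In that spirit, here is a deliberately simple few-shot classifier: freeze a pre-trained backbone, average each class’s support features into a prototype, and assign queries to the nearest prototype. This is a simplified stand-in for illustration, not necessarily the exact baseline evaluated in the paper; note that nothing in it depends on the number of shots.

```python
import torch
import torch.nn.functional as F

def nearest_mean_baseline(support_feats, support_labels, query_feats):
    """Classify queries by nearest class mean over frozen features.

    support_feats: (n_support, d) features from a pre-trained backbone.
    support_labels: (n_support,) integer class labels.
    query_feats: (n_query, d) features of the images to classify.
    Works for any number of shots: 1, 5, 100, ...
    """
    classes = support_labels.unique()
    # One prototype per class: the mean of that class's support features.
    prototypes = torch.stack(
        [support_feats[support_labels == c].mean(dim=0) for c in classes]
    )
    # Cosine similarity between each query and each prototype.
    sims = F.normalize(query_feats, dim=1) @ F.normalize(prototypes, dim=1).T
    return classes[sims.argmax(dim=1)]

# Toy usage: 2 classes, 3 support shots each, 64-dimensional features.
support = torch.randn(6, 64)
labels = torch.tensor([0, 0, 0, 1, 1, 1])
queries = torch.randn(4, 64)
print(nearest_mean_baseline(support, labels, queries))
```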

“We are at a time in history where industry leads academia, in the sense that it defines problems that would not emerge just by sitting in your office and thinking of cool things to work on,” Soatto adds. “These papers offer some examples, but there are many others.”


