3 questions with Marzia Polito: Performing computer vision tasks at scale with few-shot learning

The first Amazon Web Services (AWS) Machine Learning Summit on June 2 will bring together customers, developers, and the science community to learn about advances in the practice of machine learning (ML). The event, which is free to attend, will feature four audience-focused tracks, including the Science of Machine Learning.

The science track is focused on the data science and advanced practitioner audience, and will highlight the work AWS and Amazon scientists are doing to advance machine learning. The track will comprise six sessions, each lasting 30 minutes, and a 45-minute fireside chat.

In the coming weeks, Amazon Science will feature interviews with speakers from the Science of Machine Learning track. For the first edition of the series, we spoke to Marzia Polito, senior manager of applied science at Amazon, about how AWS customers are training and deploying computer vision models in situations where there is a scarcity of training data.

Polito joined Amazon in June 2020. Previously, she spent a decade as a senior staff software engineer at Google, served as founder and chief scientist at Whozat, and was a senior research scientist at Intel.

Q. What will the subject of your talk at the ML Summit be?

I will be talking about how we support AWS customers, even those without machine learning expertise, who are developing high-quality computer vision models for visual tasks that can’t be solved with general-purpose tools. In many of these cases, the labeled training data commonly used to train machine learning models is scarce.

We can get around the paucity of training data by using transfer learning. Transfer learning takes advantage of patterns shared across domains. Human beings use transfer learning all the time. For example, we don’t have to look at thousands of robins to identify a new one. We just need to see a few before we associate them with a red breast. We transfer knowledge by complementing the robin sighting with our prior knowledge of what a bird is.

When it comes to computer vision, transfer learning can play an important role in specialized fields. To continue with the robin analogy, if you are an ornithologist, you’d want a computer vision model to look at an image of a robin and immediately identify the relevant subspecies. You’d even want it to recognize a new species that nobody has encountered before. That’s what our customers ultimately expect from our AI systems.
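To make the idea concrete, here is a minimal transfer-learning sketch in PyTorch. It is purely illustrative, not the approach behind any particular AWS service: an ImageNet-pretrained ResNet backbone is frozen, and only a small new classification head is trained on the handful of labeled examples.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 5  # e.g. five robin subspecies, each with only a few labeled images

# A backbone whose pretrained ImageNet weights already encode generic visual patterns.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained layers so the scarce data only has to fit the new head.
for param in backbone.parameters():
    param.requires_grad = False

# Replace the 1000-class ImageNet classifier with a head for the specialized task.
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One gradient update on a small batch of labeled example images."""
    optimizer.zero_grad()
    loss = criterion(backbone(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```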

Today, we are making this happen with “few-shot learning” techniques, which perform customized visual tasks, like image classification, from just a handful of labeled samples of the desired outcome.
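One common few-shot recipe is prototype-based classification. The sketch below, in the spirit of prototypical networks (an assumption chosen for illustration, not necessarily the technique behind any specific AWS service), classifies query images by their nearest class prototype, where each prototype is built from a handful of labeled support images; the `embed` argument stands in for any pretrained feature extractor.

```python
import torch

def few_shot_classify(embed, support_images, support_labels, query_images):
    """Assign each query image to the class whose prototype (mean support
    embedding) is nearest in feature space."""
    with torch.no_grad():
        support_emb = embed(support_images)   # (N, D) embeddings of the few labeled images
        query_emb = embed(query_images)       # (M, D) embeddings of the images to classify

    # One prototype per class: the mean embedding of its support examples.
    classes = support_labels.unique()
    prototypes = torch.stack(
        [support_emb[support_labels == c].mean(dim=0) for c in classes]
    )                                         # (K, D)

    # Nearest-prototype assignment by Euclidean distance.
    dists = torch.cdist(query_emb, prototypes)  # (M, K)
    return classes[dists.argmin(dim=1)]         # predicted class labels
```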

Q. Why is this topic especially relevant within the science community today?

Over the last few years, deep learning has allowed scientists to achieve results that were previously unthinkable. The community is thriving, and the maturity is there to make the science really useful to the world. Today, scientists are using computer vision to drive advancements in areas like self-driving navigation systems, medical imaging analysis, archaeology, and architecture.

However, a plethora of unsolved problems remain, and they present a real opportunity to advance the science. Data is abundant; labels are not. The landscape of objects, people, places, scenes, and actions that we want to recognize is growing more complex, and people don’t have the time or resources to label it all. By thinking about these real-world use cases, we are forced to operate under strict constraints and orchestrate complex solutions that go beyond training a simple model.

Few-shot image classification and object detection are great examples of what I’m talking about. Until very recently, we weren’t paying much attention to training computer vision models from a very limited set of images. And why would we? We are, after all, living in a world where the amount of video and images being shared is growing exponentially.

And yet, this explosion in video and images has meant that we don’t have the time or the resources to label all of this data. This is especially true for specialized fields like medicine or archaeology, where practitioners don’t have the bandwidth or the resources to label images, or develop deep neural networks to solve every task in isolation.

To function in a world of so many constraints, there’s an urgent need to advance the state of the art in areas like meta-learning, transductive learning, and semi-supervised learning. A universe of new settings and constraints for traditional computer vision problems like image classification or object detection has opened up. The good news is that these problems are very real, and so is the reward when we solve them.

Q. Can you provide some examples of how AWS customers are using few-shot learning to scale and automate tasks?

All customers of Amazon Rekognition Custom Labels and Amazon Lookout for Vision benefit from transfer learning. Custom Labels is used to recognize images and objects specific to a customer’s business needs, while Lookout for Vision is used to spot defects and anomalies in images.
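As a rough sketch of how a customer might call a trained Custom Labels model from Python with boto3 (the project-version ARN, bucket, and object key below are placeholders, not real resources):

```python
import boto3

# Placeholders: replace with a real project-version ARN and S3 location.
PROJECT_VERSION_ARN = (
    "arn:aws:rekognition:us-east-1:123456789012:project/"
    "example-project/version/example-version/1234567890123"
)

rekognition = boto3.client("rekognition", region_name="us-east-1")

# Ask the trained custom model to label a single image stored in S3.
response = rekognition.detect_custom_labels(
    ProjectVersionArn=PROJECT_VERSION_ARN,
    Image={"S3Object": {"Bucket": "example-bucket", "Name": "frames/frame-0001.jpg"}},
    MinConfidence=80.0,
)

for label in response["CustomLabels"]:
    print(label["Name"], label["Confidence"])
```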

For every business that uses these services, we first determine the most appropriate models and techniques. This ensures that any pre-existing knowledge is leveraged to its fullest extent. We have developed and published research covering effective ways to perform that selection.
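Purely as an illustration of what such a selection could look like (a hedged sketch, not the criteria from that research), one could score each candidate pretrained backbone by how well a simple nearest-class-mean classifier on its frozen features performs on the customer’s few labeled examples, and keep the best-scoring one:

```python
import torch
from torchvision import models

def score_backbone(feature_extractor, images, labels):
    """Nearest-class-mean accuracy of frozen features on the labeled examples.
    (With so little data a leave-one-out split would be less biased; this is
    kept simple for illustration.)"""
    feature_extractor.eval()
    with torch.no_grad():
        feats = feature_extractor(images)      # (N, D) feature vectors
    classes = labels.unique()
    protos = torch.stack([feats[labels == c].mean(dim=0) for c in classes])
    preds = classes[torch.cdist(feats, protos).argmin(dim=1)]
    return (preds == labels).float().mean().item()

# Candidate backbones with their ImageNet classification heads stripped off.
candidates = {}
for name, ctor, weights in [
    ("resnet18", models.resnet18, models.ResNet18_Weights.DEFAULT),
    ("resnet50", models.resnet50, models.ResNet50_Weights.DEFAULT),
]:
    model = ctor(weights=weights)
    model.fc = torch.nn.Identity()             # expose penultimate-layer features
    candidates[name] = model

# `images` and `labels` stand in for the customer's few labeled examples:
# best = max(candidates, key=lambda n: score_backbone(candidates[n], images, labels))
```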

Today, media businesses are using few-shot learning techniques to analyze millions of frames and retrieve a particular character. Medical companies are using these techniques in diagnostic imaging. Electrical companies have been able to prevent wildfires by using computer vision to identify and shut off defective equipment. These businesses, and many more, are using few-shot learning to tackle problems that, when solved, will deliver benefits for people around the world.

You can learn more about Marzia Polito’s research and register to watch her speak at the virtual AWS Machine Learning Summit on June 2.


