Amazon’s Janus framework takes continual learning to the next level

Watching items move down a conveyor belt toward a Robin robot arm at an Amazon fulfillment center is a great place to learn about the role continual learning plays in robotics.

The packages Robin encounters include boxes, cylinders, and padded mailers of different shapes, sizes, and colors, and no two jumbles are alike. Robin’s computer-vision system must make sense of them all by segmenting each pile into its individual packages.


This is something a child can do instinctively. But it took months of training for the Robin robotic arm to distinguish among the different package types.

Scientists initially trained Robin to identify the different packages using supervised learning, which graded the vision system’s accuracy as it tried to segment piles of packages in tens of thousands of images. Eventually, the system’s accuracy improved to the point where the robotic arms could be deployed in Amazon fulfillment centers.

Yet, there was a catch — the packages that Amazon delivers arrive in a constantly shifting variety of shapes and sizes.

“The problem with machine learning is that models must adapt to continually changing data conditions,” says Cassie Meeker, an Amazon Robotics applied scientist who is an expert user of Amazon’s continual learning system. “Amazon is a global company — the types of packages we ship and the distribution of these packages change frequently. Our models need to adapt to these changes while maintaining high performance. To do this, we require continual learning.”

To get there, Meeker’s team created a new learning system: a framework powerful and smart enough to adapt to distribution shifts in real time.

The framework, called Janus, automates some aspects of the retraining process. Named after the Roman god of transitions, Janus provides a robust framework for retraining Robin robotic arms and represents a major step toward development of a continual learning platform that will help Amazon retrain all its robots in the future.

A complex challenge

The concept of continual learning appears deceptively simple, says Hank Chen, an Amazon machine learning engineer who has worked on Janus since its inception. Robin, whose accuracy generally tops 99%, encounters some unexpected packaging; via continual learning, it adapts to account for it. But the challenge is far more complex than it sounds.

The first hurdle involves deciding which anomalous events require retraining. Chen breaks these into two different classes. The first involves the robot’s environment. Perhaps a light failed and it is too dark to identify packages or maybe a camera was knocked out of focus. These types of anomalies are fairly easy to identify and technicians can usually fix them quickly.

The second type of anomaly is informational.

“These events happen when something changes,” Chen says. “We might have a new package type, holiday art on packages, or a hot new toy with transparent packaging. Recently, our European fulfillment centers began using black bags and that threw Robin for a loop. These are the types of novel data we want to learn from and model.”

Once novel packages are identified, the continual learning team annotates images featuring them. This involves labeling the location, boundaries, shape, and classification of the packages in each scene. Amazon then trains its models on those annotated images.

When the team gathers enough annotated images, it can begin to retrain Robin’s models with fresh data, maintaining and even improving Robin’s ability to recognize both known and new packages.
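One simple way to retrain on fresh data while preserving performance on known packages is to mix the newly annotated images with a replay sample of historical ones. The sketch below illustrates the idea in plain Python; the function name, the ratio, and the string IDs standing in for images are all hypothetical, not Janus's actual pipeline.

```python
import random

def build_training_set(historical, novel, replay_ratio=1.0, seed=0):
    """Combine newly annotated images with a random replay sample of
    historical ones, so retraining improves on new packages without
    degrading performance on known ones."""
    rng = random.Random(seed)
    n_replay = min(int(len(novel) * replay_ratio), len(historical))
    mixed = list(novel) + rng.sample(historical, n_replay)
    rng.shuffle(mixed)
    return mixed

historical = [f"known_pkg_{i}" for i in range(100)]   # hypothetical IDs
novel = [f"novel_pkg_{i}" for i in range(6)]
train = build_training_set(historical, novel)         # 6 new + 6 replayed
```

Replaying old examples alongside new ones is a common guard against a model forgetting what it already knows while it learns from novel data.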

Efficiently training models, however, requires a wide variety of examples.

“When we get a good initial raw image, we do what is called augmentation,” explains Larry Li, a software development manager who manages the Janus team. “We shrink the image, flip it, rotate it, make it darker or brighter, discolor it, make it blurry, and juxtapose it with other images. This multiplies every image many times, giving us the large number of images we need to train our model.”
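The transformations Li describes can be illustrated with a toy version that operates on tiny grayscale images represented as nested lists of pixel values. A production system would use an image library, but the principle, one raw image becoming several training examples, is the same.

```python
def flip_h(img):
    """Mirror the image left-to-right."""
    return [row[::-1] for row in img]

def rotate90(img):
    """Rotate the image 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def brightness(img, delta):
    """Shift every pixel by delta, clamped to the 0-255 range."""
    return [[max(0, min(255, p + delta)) for p in row] for row in img]

def augment(img):
    """Generate several variants of one raw image."""
    return [img, flip_h(img), rotate90(img),
            brightness(img, 40), brightness(img, -40)]

base = [[0, 50], [100, 150]]   # a 2x2 "image"
augs = augment(base)           # one raw image becomes five examples
```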


To ensure that new data does not reduce the accuracy of existing models, Amazon tests retrained models on historical data to see if the machine retains — or, better still, improves — its level of performance. If the model succeeds, it moves to live testing.
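The promotion gate described above, in which a retrained model must retain or improve its performance on historical data before moving to live testing, can be sketched as a simple comparison of evaluation scores. The function name and tolerance here are illustrative, not Amazon's actual criteria.

```python
def should_promote(old_scores, new_scores, min_delta=-0.001):
    """Promote a retrained model only if its mean accuracy on the
    historical evaluation set does not regress beyond a small
    tolerance relative to the current production model."""
    old_acc = sum(old_scores) / len(old_scores)
    new_acc = sum(new_scores) / len(new_scores)
    return new_acc - old_acc >= min_delta

# Baseline got 3 of 4 historical scenes right; the retrained model got all 4.
promote = should_promote([1, 1, 0, 1], [1, 1, 1, 1])  # → True
```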

This takes place on a special station set up for testing prototype robots. Researchers create piles of test packages to ensure the robot can handle them all. If it can, they beta test it on one or two lines within the company’s fulfillment centers. Only after a robot proves its performance does Amazon deploy it more broadly.

Automating processes

Simultaneously capturing novel events, categorizing them based on recurrence, annotating images, creating training decks, and performing model training is a lot to manage — Janus has been designed to automate these processes.

“We want to automate how we retrain our models in response to changing conditions and new data,” Meeker says.

Janus, for example, automatically monitors when robots such as Robin encounter novel events.

“If a human was uncertain about something, they could tell us what caused that confusion,” Meeker notes. “But a robot can’t tell us what the problem was. Instead, we have to use other metrics to figure out when and why a model is not confident.

“When presented with a cluttered scene, for example, Robin’s model will segment the scene into individual packages — each box, padded mailer, et cetera is individually labeled and the package boundaries are marked. If the robot fails to pick up the package, drops the package, or picks up a different package, we can look at how the model segmented the scene to identify the problem.”
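One common proxy for the model confidence Meeker describes is the entropy of the model's per-package scores: scores near 0.5 produce high entropy, flagging scenes the model was unsure how to segment. This is a minimal sketch, assuming per-package confidence scores are available; it is not Janus's stated metric.

```python
import math

def mean_entropy(mask_probs):
    """Average binary entropy of per-package confidence scores.
    High entropy means the model hedged near 0.5, a signal that
    the scene may be worth flagging for human review."""
    def h(p):
        if p in (0.0, 1.0):
            return 0.0
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)
    return sum(h(p) for p in mask_probs) / len(mask_probs)

confident = mean_entropy([0.98, 0.97, 0.99])   # low entropy
uncertain = mean_entropy([0.55, 0.48, 0.52])   # near the maximum of 1.0
```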

Janus automatically identifies problematic packages for annotation. Those annotations make it easier to identify and rank the packages most likely to cause Robin challenges.
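Ranking packages for annotation can be as simple as counting how often each package type is involved in failure events and annotating the most frequent offenders first. The event records and field names below are hypothetical, not Janus's actual data model.

```python
from collections import Counter

def rank_for_annotation(failure_events, top_k=3):
    """Rank package types by how often they appear in pick-failure
    events; the most frequent offenders get annotated first."""
    counts = Counter(evt["package_type"] for evt in failure_events)
    return [ptype for ptype, _ in counts.most_common(top_k)]

events = [
    {"package_type": "black_bag"}, {"package_type": "black_bag"},
    {"package_type": "transparent_box"}, {"package_type": "black_bag"},
    {"package_type": "holiday_mailer"}, {"package_type": "transparent_box"},
]
priorities = rank_for_annotation(events)  # black_bag ranks first
```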

Performing these tasks in real time is computationally intensive. At the same time, Amazon’s fleet of Robin robots is growing. To minimize computing overhead, continual learning relies on Amazon Web Services to tap functions from the cloud on an as-needed basis.


“We leverage AWS components to create an ‘assembly line’ for computer learning,” Li says. “We also use a novel image detector to detect significant changes in our targets and environment. When those conditions happen, it triggers a batch job to sample the raw images and preserve them for potential retraining.”
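The trigger Li describes, detecting a significant change and then sampling raw images for potential retraining, might look like the sketch below. Plain Python stands in for the real AWS batch job, and total-variation distance between histograms of a simple image statistic serves as the change signal; both are assumptions for illustration, not the team's stated detector.

```python
import random

def drift_score(baseline_hist, current_hist):
    """Total-variation distance between two normalized histograms of
    a simple per-frame statistic (e.g. mean brightness)."""
    return 0.5 * sum(abs(b - c) for b, c in zip(baseline_hist, current_hist))

def maybe_trigger_batch_job(baseline_hist, current_hist, frames,
                            threshold=0.2, sample_size=5, seed=0):
    """If the current distribution drifts past a threshold, sample raw
    frames to preserve for annotation and possible retraining."""
    if drift_score(baseline_hist, current_hist) < threshold:
        return []
    rng = random.Random(seed)
    return rng.sample(frames, min(sample_size, len(frames)))

baseline = [0.9, 0.1]   # historically, mostly bright frames
current = [0.5, 0.5]    # the brightness distribution has shifted
picked = maybe_trigger_batch_job(baseline, current, list(range(20)))
```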

Reinforcement learning

Ultimately, Chen says, the continual learning team wants to transform Janus from a set of code libraries into an integrated service that any user could pull off the shelf and plug into their robot.

“Once they have the model, it would look for anomalies, pick out the most frequent novel events, and learn from them,” he says.

Janus may also evolve to embrace reinforcement learning.

“In reinforcement learning, it is up to the model to explore the possibilities and find the proper solution,” Li explains. “The results are markedly different from supervised learning because there is a closer coupling between perception and action. The actions a robot takes can be used to generate better outcomes. Humans, for example, might move a pile of packages around to pick one up, but how do we capture that capability with a robot without slowing down the line? Reinforcement learning might give us a way to do this.”


Robin’s ability to interpret images is already very good, Meeker says. Her group now wants to extend those capabilities to other robots.

“We want to create universal models that can segment packages with less training data,” Meeker explains. “We do this by pre-training a model with a large dataset collected from across different environments, different tasks and different backgrounds. Then we fine tune the model with small amounts of data from a new environment. With a relatively small amount of data, you can get high segmentation performance. A continuous learning framework like Janus allows us to scale our universal model, so we can train across many different tasks and environments.”
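The pretrain-then-fine-tune recipe Meeker outlines can be sketched by freezing a pretrained backbone, simulated here as precomputed feature vectors, and training only a small linear head on a handful of examples from the new environment. All names and numbers are illustrative.

```python
import math

def fine_tune_head(features, labels, lr=0.1, epochs=200):
    """Train a lightweight logistic-regression head on a small amount
    of new-environment data; the pretrained backbone stays frozen and
    is represented here by precomputed feature vectors."""
    dim = len(features[0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid prediction
            g = p - y                        # gradient of the loss
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

# Four "backbone features" from the new environment suffice for the head.
feats = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]]
labs = [1, 1, 0, 0]
w, b = fine_tune_head(feats, labs)
```

Because only the small head is trained, a relatively small amount of labeled data from the new environment can yield good performance, which is the point Meeker makes about universal models.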


