A fundamental theme at Amazon is movement: obtaining a product a customer has ordered and moving it as quickly and efficiently as possible from its source to the customer’s doorstep.
This video shows robots moving packages around an Amazon fulfillment center.
That journey will often take a package through multiple warehouses and include loadings, unloadings, sortings, and routings. Human associates are crucial to this process and so, increasingly, are robotic manipulators. A rising star in this department is the Robin robotic arm and the computer vision system that makes it possible.
Robin’s visual-perception algorithms can identify and locate packages on a conveyor belt below it, for example, and even distinguish individual packages and their type within a cluttered pile.
This perceptive ability is known as segmentation, and it is central to the development of flexible and adaptive robotic processes for Amazon fulfillment centers. That’s because packages vary enormously in their dimensions and physical characteristics, and they move amid an ever-changing mix of other packages and against varying backdrops.
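Amazon has not published the details of Robin’s perception stack, but the kind of output such a system needs — a per-pixel mask, a label, and a confidence score for every package in view — can be illustrated with an off-the-shelf instance segmentation model. The sketch below uses torchvision’s pretrained Mask R-CNN as a stand-in; the image file name and the confidence threshold are illustrative only.

```python
# Minimal instance-segmentation sketch, using a public pretrained Mask R-CNN
# as a stand-in for Amazon's (unpublished) production package segmenter.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Pretrained on COCO; a production package segmenter would be trained on
# warehouse imagery instead.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = Image.open("conveyor_frame.jpg").convert("RGB")  # hypothetical frame

with torch.no_grad():
    predictions = model([to_tensor(image)])[0]

# Each detection comes with a bounding box, a confidence score, and a
# per-pixel mask that separates one package from its neighbors.
for box, score, mask in zip(predictions["boxes"],
                            predictions["scores"],
                            predictions["masks"]):
    if score.item() > 0.7:
        # mask is a 1 x H x W probability map covering this one package.
        print(f"package candidate at {box.tolist()}, score {score.item():.2f}")
```

Those per-instance masks are what let a downstream system treat each package in a cluttered pile as a separate object it can grasp.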
Robin is a maturing technology, but there is a constant simmering of new ideas just below the surface at Amazon, with teams of scientists and engineers across the Amazon Robotics AI group and beyond collaborating to develop AI-powered robotic solutions to improve warehouse efficiency. A new modeling approach aims to serve them all.
An abundance of packages — but not data
The initial challenge for these early-stage collaborations is often the same.
“The biggest problem that new project teams usually face is data scarcity,” says Cassie Meeker, an Amazon Robotics AI applied scientist, based in Seattle. Obtaining images relevant to a warehouse process of interest takes time and resources, but that’s just the beginning.
“For some machine learning models, you must annotate each training image manually by drawing multiple polygons around the various packages in the picture,” Meeker explains. “It can take five minutes to annotate just one image if it’s cluttered.”
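The article doesn’t specify Amazon’s internal annotation format, but polygon labels of this kind are commonly stored in something like the widely used COCO convention, with one entry per package. A rough, hypothetical example of the label for a single image:

```python
# Hypothetical, COCO-style polygon annotation for one warehouse image.
# Every package in a cluttered picture needs its own polygon, which is
# why manual labeling is so slow.
annotation_for_one_image = {
    "image_id": 1042,
    "file_name": "tote_overhead_1042.jpg",  # hypothetical file
    "annotations": [
        {
            "category": "cardboard_box",
            # Flattened x, y vertex list tracing the package outline.
            "segmentation": [[102.5, 240.0, 310.2, 238.7, 315.0, 402.1, 99.8, 405.6]],
        },
        {
            "category": "plastic_bag",
            "segmentation": [[412.0, 88.3, 590.5, 101.0, 577.2, 260.4, 401.1, 245.9]],
        },
        # ...one entry per package, often dozens in a cluttered pile
    ],
}
```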
The lack of task-specific training data means teams might base their perceptual models on just a few hundred images, says Meeker: “If they’re lucky, they have a thousand. But even a thousand images aren’t a lot for training a model.”
Insufficient variety in that training data is just as much of a challenge as its scarcity.
“The production environment is typically very different to a prototyping environment, so when they go into the production phase on the warehouse floor, they will suddenly see all these things they’ve never seen before and that their perception system can’t identify,” says Meeker. “They could be setting themselves up for failure.”
This difficulty in obtaining data to train segmentation models is partly due to the very specific subject matter: packages. Many computer vision models are trained on enormous, publicly available datasets full of annotated imagery, including everything from aardvarks to zabaglione. A social media company might want to segment faces, or dogs or cats, because that’s what people have lots of pictures of.
“Many publicly available datasets are perfect for that,” says Meeker. “But at Amazon, we have such a specific application and annotation requirements. It just doesn’t translate well from cat pics.”
A ‘universal model’ for packages
In short, building a dataset big enough to train a demanding machine learning model requires time and resources, with no guarantee that the novel robotic process you are working toward will prove successful. This became a recurring issue for Amazon Robotics AI. So this year, work began in earnest to address the data scarcity problem. The solution: a “universal model” able to generalize to virtually any package segmentation task.
To develop the model, Meeker and her colleagues first used publicly available datasets to give their model basic classification skills — being able to distinguish boxes or packages from other things, for example. Next, they honed the model, teaching it to distinguish between many types of packaging in warehouse settings — from plastic bags to padded mailers to cardboard boxes of varying appearance — using a trove of training data compiled by the Robin program and half a dozen other Amazon teams over the last few years. This dataset comprised almost half a million annotated images.
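The article doesn’t name the framework or architecture behind the universal model; the sketch below illustrates the same pretrain-then-specialize recipe with torchvision’s Mask R-CNN, swapping its prediction heads so the model outputs package categories rather than the public dataset’s classes. The category list is illustrative.

```python
# Sketch of the pretrain-then-specialize recipe described above, assuming a
# Mask R-CNN backbone; the names and category list are illustrative only.
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

PACKAGE_CLASSES = ["background", "cardboard_box", "plastic_bag", "padded_mailer"]

# Stage 1: start from weights learned on a large public dataset (COCO),
# which already encode generic object and shape cues.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

# Stage 2: swap the prediction heads so the model predicts package
# categories, then fine-tune on annotated warehouse images.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, len(PACKAGE_CLASSES))

mask_in_features = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(
    mask_in_features, 256, len(PACKAGE_CLASSES)
)

# Fine-tuning then proceeds with an ordinary training loop over the
# annotated warehouse dataset (omitted here for brevity).
```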
Crucially, these images of packages were snapped from a variety of angles — not only straight down from above a conveyor belt — and against a variety of backgrounds. The sheer number and variation of images make the dataset useful in virtually any warehouse location that may benefit from robotic perception and manipulation.
Meeker estimates that starting a project with the universal model can slash the setup time required to develop vision-based ML solutions from six to twelve months to just one or two. And it has been made available to other Amazon teams in a user-friendly form, so extensive machine learning expertise is not required.
The universal model has already demonstrated its prowess, courtesy of a project run by Amazon Robotics, called Cardinal. Cardinal is a prototype robotic arm-based system that perceives and picks up packages and places them neatly into large containers ready for transport on delivery trucks. Cardinal’s perception system was developed before the universal model was available, so the team spent a lot of time creating a bespoke training dataset for it, says Cardinal’s perception lead, Jeroen van Baar, an Amazon Robotics senior applied scientist, based in North Reading, Massachusetts.
This video shows Cardinal training itself to distinguish between package types.
“We trained the system using 25,000 annotated training images that we created ourselves. But those early training images were taken using a setup with a different appearance to our prototype Cardinal workstation,” van Baar says. “To achieve the performance that we initially desired, we had to fine-tune our model using a thousand new training images taken from that prototype setting.”
Updated with only those new images, the universal model performed Cardinal’s task as accurately as the robust, bespoke model the Cardinal team had built for its workstation.
“Had it been available sooner, I would only have captured data specific to our setup and fine-tuned the universal model from there,” says van Baar. “Being able to shorten training time so significantly is a major benefit.”
And that’s the point. The universal model can quickly capitalize on any training data produced by a new-project team. This means that when new ideas are tested on the warehouse floor, or existing methods are transplanted to a new Amazon region where things are done slightly differently, the model will have enough data diversity to handle the differences.
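In practice, that adaptation amounts to a short fine-tuning run on a small, setup-specific dataset rather than training from scratch. Amazon’s actual tooling is internal; one plausible sketch, assuming a torchvision-style detection model, is to freeze the backbone and update only the lightweight prediction heads:

```python
# Illustrative only: adapt an already-trained "universal" segmentation model
# to a new workstation using a small dataset, by freezing the backbone and
# fine-tuning only the prediction heads.
import torch

def adapt_to_new_setup(universal_model, small_setup_dataset, epochs=5):
    # Freeze the feature extractor; its general package knowledge stays intact.
    for param in universal_model.backbone.parameters():
        param.requires_grad = False

    trainable = [p for p in universal_model.parameters() if p.requires_grad]
    optimizer = torch.optim.SGD(trainable, lr=0.005, momentum=0.9)

    universal_model.train()
    for _ in range(epochs):
        for images, targets in small_setup_dataset:  # e.g. ~1,000 annotated frames
            loss_dict = universal_model(images, targets)  # torchvision detection API
            loss = sum(loss_dict.values())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return universal_model
```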
Siddhartha Srinivasa, director of Robotics AI, thinks of the universal model as a supportive scaffold that you can use to build your house.
“We’re not advocating that everybody live in the same house,” he says. “We’re advocating that Amazon teams leverage the scaffolding we’re providing to build whatever house they want, because it’s already very powerful, and it is getting better every day.”
Tipping point
Only recently has all this become possible.
“The Robotics AI program is young,” says Meeker. “In the beginning, there was no reason to use other teams’ data, because no one had very much.” But a tipping point has arrived. “We now have enough mature teams in production that we are seeing a real diversity and scaling of data. It is finally generalizable.”
Indeed, while the immediate focus of the universal model is identifying and localizing various package types, diverse image data is now accumulating across a range of Amazon programs that cover more aspects of fulfillment centers.
The universal model now includes images of unpackaged items, too, allowing it to perform segmentation across a greater diversity of warehouse processes. Initiatives such as multimodal identification, which aims to visually identify items without needing to see a barcode, and the automated damage detection program are accruing product-specific data that could be fed into the universal model, as could the images taken on the fulfillment center floor by the autonomous robots that carry crates of products.
“We’re moving towards a situation in which even data collected by small projects run by interns can be fed into the universal base model, incrementally improving the productivity of the entire robot fleet,” says Srinivasa.
This diversity of data and its aggregation is particularly important for robotic perception within Amazon, especially given customers’ shifting needs, frequently novel Amazon packaging, and the company’s commitment to sustainability that means shipping more items in their own unique packaging.
All of this increases the visual variety of products and packages, making it harder for robots to identify from an image where one package ends and another begins.
Feeding the universal model in this way and having it available to new teams will accelerate the experimentation and deployment of future robotic processes. The use of the universal model is factored into Amazon’s immediate operational plans.
“We’re not doing this because it’s cool — though it really is cool — but because it is inevitable,” says Srinivasa.