The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) is the premier conference in the field of computer vision, and the Amazon papers accepted there this year range in topic from neural-architecture search to human-pose tracking to handwritten-text generation.
But retail sales are still at the heart of what Amazon does, and three of Amazon’s 10 CVPR papers report ways in which computer vision could help customers shop for clothes.
One paper describes a system that lets customers sharpen a product query by describing variations on a product image. The customer could, for instance, alter the image by typing or saying “I want it to have a light floral pattern”.
A second paper reports a system that suggests items to complement those the customer has already selected, based on features such as color, style, and texture.
The third paper reports a system that can synthesize an image of a model wearing clothes from different product pages, to demonstrate how they would work together as an ensemble. All three systems use neural networks.
Visiolinguistic product discovery
Using text to refine an image that matches a product query poses three main challenges. The first is finding a way to fuse textual descriptions and image features into a single representation. The second is performing that fusion at different levels of resolution: the customer should be able to say something as abstract as “Something more formal” or as precise as “change the neck style”. And the third is training the network to preserve some image features while following customers’ instructions to change others.
Yanbei Chen, a graduate student at Queen Mary University of London who was an Amazon intern when the work was done; her advisor, Shaogang Gong, professor of visual computation; and Loris Bazzani, a senior computer vision scientist at Amazon, address these challenges with a neural network trained on triples of inputs: a source image, a textual revision, and a target image that matches the revision.
Essentially, the three inputs pass through three different neural networks in parallel. But at three distinct points in the pipeline, the current representation of the source image is fused with the current representation of the text, and the fused representation is correlated with the current representation of the target image.
Because the lower levels of a neural network tend to represent lower-level features of the input (such as textures and colors) and higher levels higher-level features (such as sleeve length or tightness of fit), using this “hierarchical matching” objective to train the model ensures that it can handle textual modifications of different resolutions.
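To make the idea concrete, here is a minimal PyTorch-style sketch of a hierarchical matching objective of this general shape. The module, its dimensions, and the cosine-similarity loss are illustrative assumptions; the real model’s fusion step is attention-based rather than a simple linear layer (see the next sketch).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HierarchicalMatcher(nn.Module):
    """Illustrative sketch: fuse image and text features at three depths
    and score each fused representation against the target image."""
    def __init__(self, dims=(256, 512, 1024), text_dim=300):
        super().__init__()
        # One fusion layer per depth; a real system would use an
        # attention-based fusion module here instead of a linear layer.
        self.fusers = nn.ModuleList(
            nn.Linear(d + text_dim, d) for d in dims
        )

    def forward(self, src_feats, text_emb, tgt_feats):
        """src_feats / tgt_feats: lists of per-level image features
        (batch x dim); text_emb: batch x text_dim sentence embedding."""
        losses = []
        for fuse, src, tgt in zip(self.fusers, src_feats, tgt_feats):
            fused = fuse(torch.cat([src, text_emb], dim=-1))
            # Hierarchical matching: the fused source+text representation
            # should align with the target image at this level.
            sim = F.cosine_similarity(fused, tgt, dim=-1)
            losses.append((1.0 - sim).mean())
        return sum(losses) / len(losses)
```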
Each fusion of linguistic and visual representations is performed by a neural network with two components. One component uses a joint attention mechanism to identify visual features that should be the same in the source and target images. The other is a transformer network that uses self-attention to identify features that should change.
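The paper’s exact fusion module isn’t reproduced here; the sketch below only illustrates the two-component idea under stated assumptions, using a learned sigmoid gate to mark image features to preserve and a standard transformer encoder layer (self-attention) to model the features that should change.

```python
import torch
import torch.nn as nn

class FusionBlock(nn.Module):
    """Rough sketch of a two-part fusion of visual and textual features."""
    def __init__(self, dim=512):
        super().__init__()
        # Component 1: attention-like gate over image features,
        # conditioned on the text -- features to keep unchanged.
        self.preserve_gate = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.Sigmoid()
        )
        # Component 2: transformer self-attention over the concatenated
        # image/text tokens -- models the features that should change.
        self.transform = nn.TransformerEncoderLayer(
            d_model=dim, nhead=8, batch_first=True
        )

    def forward(self, img_tokens, text_tokens):
        """img_tokens: batch x N x dim, text_tokens: batch x M x dim."""
        # Pool the text to condition the preservation gate.
        text_ctx = text_tokens.mean(dim=1, keepdim=True).expand_as(img_tokens)
        keep = self.preserve_gate(torch.cat([img_tokens, text_ctx], dim=-1))
        changed = self.transform(torch.cat([img_tokens, text_tokens], dim=1))
        changed_img = changed[:, : img_tokens.size(1)]
        # Gated mix: preserved features pass through, the rest is rewritten.
        return keep * img_tokens + (1.0 - keep) * changed_img
```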
In tests, the researchers found that the new system could find a valid match to a textual modification 58% more frequently than its best-performing predecessor.
Complementary-item retrieval
Researchers have previously developed systems that take outfit items as inputs and predict their compatibility, but those systems were not optimized for large-scale retrieval.
Amazon applied scientist Yen-Liang Lin and his colleagues wanted a system that would enable product discovery at scale, and they wanted it to take multiple inputs, so that a customer could, for instance, select shirt, pants, and jacket and receive a recommendation for shoes.
The network they devised takes as inputs any number of garment images, together with a vector indicating the category of each — such as shirt, pants, or jacket. It also takes the category vector of the item the customer seeks.
The images pass through a convolutional neural network that produces a vector representation of each. Each representation then passes through a set of “masks”, which attenuate some representation features and amplify others.
The masks are learned during training, and the resulting representations encode product information (such as color and style) relevant to only a subset of complementary items. That is, some of the representations that result from the masking — called subspace representations — will be relevant to shoes, others to handbags, others to hats, and so on.
In parallel, another network takes as input the category for each input image and the category of the target item. Its output is a set of weights, for prioritizing the subspace representations.
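A rough sketch of how such masked subspace embeddings and category-conditioned weights might fit together appears below. The layer sizes, sigmoid-shaped masks, and the small weight network are assumptions made for illustration, not the paper’s exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SubspaceEmbedder(nn.Module):
    """Illustrative sketch: per-item embeddings are split into learned
    subspaces via masks, then mixed by category-conditioned weights."""
    def __init__(self, feat_dim=512, num_subspaces=5, num_categories=20):
        super().__init__()
        # Learned masks, one per subspace (attenuate/amplify features).
        self.masks = nn.Parameter(torch.randn(num_subspaces, feat_dim))
        # Maps (item category, target category) to subspace weights.
        self.weight_net = nn.Sequential(
            nn.Linear(2 * num_categories, 64),
            nn.ReLU(),
            nn.Linear(64, num_subspaces),
        )

    def forward(self, feats, item_cat, target_cat):
        """feats: batch x feat_dim CNN features; item_cat / target_cat:
        batch x num_categories one-hot category vectors."""
        # Subspace representations: batch x num_subspaces x feat_dim.
        sub = feats.unsqueeze(1) * torch.sigmoid(self.masks).unsqueeze(0)
        # Weights prioritizing the subspaces relevant to the target category.
        w = F.softmax(self.weight_net(torch.cat([item_cat, target_cat], -1)), -1)
        # Weighted combination -> final retrieval embedding (batch x feat_dim).
        return (w.unsqueeze(-1) * sub).sum(dim=1)
```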
The network is trained using an evaluation criterion that operates on the entire outfit. Each training example includes an outfit, an item that goes well with that outfit, and a group of items that do not.
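One way such an outfit-level criterion could look is a margin-based ranking loss over (outfit, compatible item, incompatible items) triples. The mean-pooling of the outfit’s item embeddings and the margin value below are assumptions for this sketch, not the paper’s stated loss.

```python
import torch
import torch.nn.functional as F

def outfit_ranking_loss(outfit_embs, pos_emb, neg_embs, margin=0.2):
    """Sketch of an outfit-level ranking criterion.

    outfit_embs: batch x num_items x dim  (embeddings of the partial outfit)
    pos_emb:     batch x dim              (item that goes well with it)
    neg_embs:    batch x num_neg x dim    (items that do not)
    """
    # Represent the whole outfit by the mean of its item embeddings
    # (an assumption made for this sketch).
    outfit = outfit_embs.mean(dim=1)
    pos_dist = F.pairwise_distance(outfit, pos_emb)         # batch
    neg_dist = torch.cdist(outfit.unsqueeze(1), neg_embs)   # batch x 1 x num_neg
    neg_dist = neg_dist.squeeze(1)                          # batch x num_neg
    # The compatible item should sit closer than every incompatible one.
    return F.relu(pos_dist.unsqueeze(1) - neg_dist + margin).mean()
```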
Once the network has been trained, it can produce a vector representation of every item in a catalogue. Finding the best complement for a particular outfit is then just a matter of looking up the corresponding vectors.
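In code, that lookup reduces to nearest-neighbor search over precomputed catalogue embeddings. The sketch below uses brute-force distances for clarity; a production system would more likely use an approximate-nearest-neighbor index.

```python
import torch

def retrieve_complements(outfit_query, catalogue_embs, catalogue_ids, k=10):
    """Minimal sketch: rank catalogue items by distance to the outfit's
    query embedding (outfit_query: dim; catalogue_embs: num_items x dim)."""
    dists = torch.cdist(outfit_query.unsqueeze(0), catalogue_embs).squeeze(0)
    topk = torch.topk(-dists, k=k).indices   # indices of the smallest distances
    return [catalogue_ids[i] for i in topk.tolist()]
```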
In experiments that used two standard measures in the literature on garment complementarity — fill-in-the-blank accuracy and compatibility area under the curve — the researchers’ system outperformed its three top predecessors, while enabling much more efficient item retrieval.
Virtual try-on network
Researchers have previously trained machine learning systems to synthesize images of figures wearing clothes from different sources, using training data that featured the same garment photographed from multiple perspectives. But that kind of data is extremely labor-intensive to produce.
Senior applied scientist Assaf Neuberger and his colleagues at Amazon’s Lab126 instead built a system that can be trained on single images, using generative adversarial networks, or GANs. A GAN has two components: a generator, which synthesizes images, and a discriminator, which, during training, learns to distinguish the generator’s output from real images. Simultaneously, the generator learns to fool the discriminator.
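For readers unfamiliar with GANs, here is a generic adversarial training step, not the paper’s specific architecture or losses: the discriminator learns to tell real images from generated ones while the generator is updated to fool it.

```python
import torch
import torch.nn as nn

def gan_step(G, D, real_images, conditioning, opt_g, opt_d):
    """Generic GAN update: G and D are nn.Modules, opt_g/opt_d their optimizers."""
    bce = nn.BCEWithLogitsLoss()
    fake_images = G(conditioning)

    # --- Discriminator update: separate real from generated images. ---
    opt_d.zero_grad()
    real_logits = D(real_images)
    fake_logits = D(fake_images.detach())
    d_loss = bce(real_logits, torch.ones_like(real_logits)) + \
             bce(fake_logits, torch.zeros_like(fake_logits))
    d_loss.backward()
    opt_d.step()

    # --- Generator update: make the fakes look real to D. ---
    opt_g.zero_grad()
    fake_logits = D(fake_images)
    g_loss = bce(fake_logits, torch.ones_like(fake_logits))
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```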
The researchers’ system has three components. The first is the shape generation network, whose inputs are a query image, which will serve as the template for the final image, and any number of reference images, which depict the clothes to be transferred onto the model in the query image.
In preprocessing, established techniques segment all the input images and compute the query figure’s body model, which represents pose and body shape. The segments selected for inclusion in the final image pass to the shape generation network, which combines them with the body model and updates the query image’s shape representation. That shape representation passes to a second network, called the appearance generation network.
The architecture of the appearance generation network is much like that of the shape generation network, except that it encodes information about texture and color rather than shape. The representation it produces is combined with the shape representation to produce a photorealistic visualization of the query model wearing the reference garments.
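The overall flow of these components might be sketched as follows; the submodule interfaces here are hypothetical placeholders standing in for the real shape and appearance networks.

```python
import torch.nn as nn

class TryOnPipeline(nn.Module):
    """High-level sketch of the flow described above, with hypothetical
    submodules standing in for the real shape / appearance networks."""
    def __init__(self, shape_net, appearance_net, renderer):
        super().__init__()
        self.shape_net = shape_net            # shape generation network
        self.appearance_net = appearance_net  # appearance generation network
        self.renderer = renderer              # combines shape + appearance

    def forward(self, query_segments, body_model, reference_segments):
        # Shape generation: update the query's shape representation using
        # the selected garment segments and the body (pose + shape) model.
        shape_repr = self.shape_net(query_segments, body_model,
                                    reference_segments)
        # Appearance generation: encode texture and color of the garments.
        appearance_repr = self.appearance_net(reference_segments)
        # Combine both representations into the final synthesized image.
        return self.renderer(shape_repr, appearance_repr)
```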
The third component of the network fine-tunes the parameters of the appearance generation network to preserve features such as logos or distinctive patterns without compromising the silhouette of the model.
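That per-input refinement might look roughly like the loop below; the masked L1 reconstruction objective and the garment_mask input are assumptions made for this sketch, not the paper’s stated procedure.

```python
import torch
import torch.nn.functional as F

def refine_appearance(appearance_net, renderer, shape_repr,
                      reference_garment, garment_mask,
                      steps=50, lr=1e-4):
    """Sketch: briefly fine-tune the appearance network's parameters for
    this particular input so fine details (logos, prints) are reproduced."""
    opt = torch.optim.Adam(appearance_net.parameters(), lr=lr)
    output = None
    for _ in range(steps):
        opt.zero_grad()
        appearance_repr = appearance_net(reference_garment)
        output = renderer(shape_repr, appearance_repr)
        # Compare only the garment region against the reference image, so
        # the rest of the figure's silhouette is untouched by this loss.
        loss = F.l1_loss(output * garment_mask, reference_garment * garment_mask)
        loss.backward()
        opt.step()
    return output
```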
The outputs of the new system are more natural looking than those of previous systems. In the figure below, the first column is the query image, the second the reference image, the third the output of the best-performing previous system, and the fourth and fifth the outputs of the new system, without and with appearance refinement, respectively.