
Prime Video’s work on 3-D scene reconstruction and image representation


At this year’s Conference on Computer Vision and Pattern Recognition (CVPR), Prime Video is presenting a pair of papers that indicate the range of problems we work on.

In one paper, “Depth-guided sparse structure-from-motion for movies and TV shows”, we present a method for determining the camera movement and 3-D geometry of scenes depicted in videos. An important application of this work is to enable the accurate insertion of digital objects into already recorded videos. Our approach, which leverages off-the-shelf depth estimators to enhance the standard geometric-optimization approach, results in improvements of 10% to 30% on six different performance measures, relative to the best-performing prior technique.

The Prime Video structure-from-motion system at work. At top is the input video. At lower left is the video with keypoints (colored circles) added. The keypoints are tracked accurately from frame to frame, and their color indicates their depth, as estimated by a machine learning model. At lower right is the 3-D model of the keypoints (whose rotation, to demonstrate the 3-D structure, is not synchronized with the video).

In the other paper, “Robust cross-modal representation learning with progressive self-distillation,” we expand on the CLIP method of using paired images and texts found online to train a model that produces image and text representations useful for downstream tasks, such as image classification or text-based image retrieval.

Where CLIP enforces a hard alignment between Web-crawled images and their associated texts, our method is more flexible, allowing for partial correspondences between a given image and texts associated with other images. We also use a self-distillation technique, in which our model progressively creates some of its own training targets, to steadily refine its representations.


In two different image classification settings, our method outperforms CLIP across the board, with significant margins of 30% to 90% on some datasets. Our method also consistently outperforms its CLIP counterpart on the tasks of image-based text retrieval and text-based image retrieval.

Structure-from-motion

Structure-from-motion is the problem of determining the 3-D structure of a scene from parallax — the relative displacement of objects in the scene as the camera moves. There are robust solutions for videos with large camera movements, but they don’t work as well for feature films and TV shows, where the camera movements tend to be more restrained.

The standard approach to determining structure from motion uses geometric optimization. First, the method estimates the location of a set of 3-D points in the scene, and then, based on that estimation, it re-projects them onto a 2-D image corresponding to each camera location. The optimization procedure minimizes the distance between points in the original 2-D image and the corresponding points of the 2-D projection.
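To make the geometric-optimization step concrete, here is a minimal sketch of a reprojection-error objective for a single pinhole camera. It is not the paper’s implementation; the function names and the use of NumPy are illustrative assumptions.

```python
# Minimal sketch (not the paper's code) of the standard reprojection objective.
# Assumes a pinhole camera with rotation R, translation t, and intrinsics K.
import numpy as np

def project(points_3d, R, t, K):
    """Project world-space 3-D points into a camera with pose (R, t) and intrinsics K."""
    cam = points_3d @ R.T + t          # world -> camera coordinates
    uv = cam @ K.T                     # apply camera intrinsics
    return uv[:, :2] / uv[:, 2:3]      # perspective divide -> 2-D pixel coordinates

def reprojection_error(points_3d, R, t, K, observed_2d):
    """Sum of squared distances between observed keypoints and their reprojections."""
    return np.sum((project(points_3d, R, t, K) - observed_2d) ** 2)
```

In a full structure-from-motion pipeline, an objective of this kind is minimized jointly over the estimated 3-D points and the camera poses for all frames.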

We improve on this approach by introducing depth estimates performed by off-the-shelf, pretrained models. Instead of minimizing only the difference between the original and the projected 2-D points, our approach minimizes both the reprojection error of the 2-D points and the depth measurement error, relative to the output of the depth estimation model.

Our approach jointly minimizes 2-D reprojection error and depth estimate error.
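As a rough sketch of how the depth term can be folded in, the illustrative objective below adds a penalty on the difference between each point’s camera-space depth and the depth predicted by the off-the-shelf estimator. The weighting factor `lam` and the squared-error form are assumptions for illustration, not the paper’s exact loss.

```python
# Sketch of a joint objective in the spirit of the paper: 2-D reprojection error
# plus a term comparing each point's camera-space depth with the depth predicted
# by an off-the-shelf estimator. The weight `lam` is an illustrative assumption.
import numpy as np

def joint_objective(points_3d, R, t, K, observed_2d, estimated_depth, lam=1.0):
    cam = points_3d @ R.T + t                               # camera-space coordinates
    uv = cam @ K.T
    reproj = uv[:, :2] / uv[:, 2:3]                         # projected 2-D points
    reproj_err = np.sum((reproj - observed_2d) ** 2)        # 2-D reprojection error
    depth_err = np.sum((cam[:, 2] - estimated_depth) ** 2)  # z vs. predicted depth
    return reproj_err + lam * depth_err
```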

Our approach begins by using a standard method to detect image keypoints — salient points in the image, usually at object corners and other edge intersections — and identify their correspondences across successive frames of video. Then, through bilinear interpolation, we use the depth map obtained from an off-the-shelf depth estimator to determine the ground-truth keypoint depths. We use the depth information not only during optimization but also during the initialization stage of the process, when we produce our initial estimates of 3-D scene structure and relative camera pose.
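The bilinear sampling step can be sketched as follows: an illustrative NumPy routine that interpolates a dense depth map at sub-pixel keypoint coordinates. The function name is hypothetical, and this is not the paper’s code.

```python
# Sketch of sampling a dense depth map at sub-pixel keypoint locations with
# bilinear interpolation, as described above. Names are illustrative.
import numpy as np

def sample_depth(depth_map, keypoints_xy):
    """Bilinearly interpolate depth_map (H x W) at float (x, y) keypoint positions."""
    h, w = depth_map.shape
    x = np.clip(keypoints_xy[:, 0], 0, w - 1.001)           # keep x1 = x0 + 1 in bounds
    y = np.clip(keypoints_xy[:, 1], 0, h - 1.001)
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    x1, y1 = x0 + 1, y0 + 1
    wx, wy = x - x0, y - y0                                  # fractional offsets
    top = (1 - wx) * depth_map[y0, x0] + wx * depth_map[y0, x1]
    bottom = (1 - wx) * depth_map[y1, x0] + wx * depth_map[y1, x1]
    return (1 - wy) * top + wy * bottom
```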

The Prime Video structure-from-motion technique identifies keypoints in input video, finds their correspondences across frames, and then estimates their depth using bilinear interpolation on a dense depth map.

We experimented with several different depth estimation models and found that the results of our approach were essentially the same with all of them. And, in all cases, our approach improved substantially on the state of the art.

Cross-modal representations

In natural-language processing, the best-performing models in recent years have been built on top of language models that learn generic linguistic representations from huge corpora of unannotated public texts. The language models can then be fine-tuned for specific tasks with minimal additional data.

CLIP (contrastive language-image pretraining) seeks to do something similar for computer vision, learning generic visual representations from images harvested from the Web and their associated texts.


Like many such weakly supervised models, CLIP is trained through contrastive learning. Intuitively, for each training image, the model is fed two texts: one, the positive training example, is the text associated with the image online; the other text, the negative example, is randomly chosen. CLIP learns a data representation that pulls the image and the positive text together in the representation space and pushes the image and the negative text apart.
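For readers who want the mechanics, here is a minimal PyTorch sketch of a CLIP-style contrastive loss over a batch of paired image and text embeddings. It illustrates the hard alignment described above; it is not OpenAI’s or Prime Video’s actual code, and the temperature value is an illustrative assumption.

```python
# Minimal sketch of a CLIP-style contrastive (InfoNCE) loss over a batch of
# paired image/text embeddings. Each image's paired text is its positive
# example; every other text in the batch serves as a negative.
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """image_emb, text_emb: (batch, dim) embeddings of paired images and texts."""
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature          # pairwise similarities
    targets = torch.arange(len(image_emb), device=logits.device)  # i-th image <-> i-th text
    loss_i2t = F.cross_entropy(logits, targets)              # image -> text direction
    loss_t2i = F.cross_entropy(logits.t(), targets)          # text -> image direction
    return (loss_i2t + loss_t2i) / 2
```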

Although CLIP has yielded impressive results on downstream computer vision tasks, its training approach has two drawbacks. First, the web-harvested data is noisy: the text associated with an image may in fact be semantically unrelated to it. Second, the text randomly selected as a negative example may in fact be semantically related to the image. CLIP's training can thus steer the model toward erroneous associations and away from correct ones.

Our method attempts to address this problem. Rather than learn a hard alignment between image and text, we learn a soft alignment, which gives the resulting model more interpretive flexibility.

For example, in one of our experiments, both the CLIP baseline and our model were trained on datasets that included images of goldfish. When presented with an image of a stained-glass window depicting a goldfish (a type of image not included in the training data), CLIP guessed that it was a guinea pig or perhaps a beer glass, while our model guessed that it was a goldfish or possibly a clown fish. That is, our model learned a representation general enough to accommodate the stained-glass artist's stylized rendering.

CLIP’s contrastive-learning procedure enforces connections between web-harvested images and their associated texts (green lines, at left) while dissociating them from other images’ texts (red lines). Our approach instead privileges associated texts but also learns softer, probabilistic alignments with other images’ texts (dotted blue lines).

Our model learns its soft alignments through a self-distillation process. First, the model learns an initial data representation through the same contrastive-loss function that CLIP uses.

Over the course of training, however, we use the model itself to make predictions about the training examples and use those predictions as additional training targets. At first, the loss function gives these self-predictions little weight, but it gradually increases the weight as training progresses.
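A rough sketch of this progressive self-distillation idea, under assumed details (a linear ramp for the blending weight and a soft cross-entropy loss), might look like the following. It is meant to illustrate the mechanism, not to reproduce the paper’s exact formulation or schedule.

```python
# Sketch, under stated assumptions, of progressive self-distillation: blend the
# hard one-hot contrastive targets with the model's own (detached) softmax
# predictions, ramping the blending weight `alpha` up over training.
# The linear ramp and alpha_max value are illustrative assumptions.
import torch
import torch.nn.functional as F

def self_distilled_targets(logits, step, total_steps, alpha_max=0.5):
    hard = torch.eye(logits.size(0), device=logits.device)   # one-hot CLIP-style targets
    soft = F.softmax(logits.detach(), dim=-1)                 # model's own predictions
    alpha = alpha_max * step / total_steps                    # weight grows with training
    return (1 - alpha) * hard + alpha * soft

def soft_cross_entropy(logits, targets):
    """Cross-entropy against soft (probabilistic) targets."""
    return -(targets * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()
```

In this sketch, training begins with `alpha` near zero, so the targets are essentially the usual one-hot contrastive targets, and weight gradually shifts toward the model’s own softened predictions as training progresses.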


The idea is that, over time, the model learns more reliable correlations between training images and texts. Self-distillation reinforces those correlations, so the model isn’t encouraged to break semantic connections between images and texts that may very well be present in the data. Similarly, over time, the model learns to give less weight to spurious connections between images and the texts initially associated with them.

The great virtue of general representation models like ours and CLIP is that they can be applied to a wide variety of computer vision problems. So the accuracy improvements that our approach affords should pay dividends for Prime Video customers in a range of contexts over the next few years.


