
Video classifiers learn to recognize actions they’ve never seen


Zero-shot learning is a way to train deep-learning models to generalize to categories they’ve never seen before. The way it’s typically done, the model learns to map inputs — say, videos — to a semantic space, where words are clustered according to their meanings. If all goes well, the model can classify videos it wasn’t trained on, by mapping them to the semantic space and picking the closest word. The technique has great promise for cases where the exact classes of interest are not available during training.
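
To make the classification step concrete, here is a minimal sketch in Python. It assumes the video network has already mapped a video to a vector and that each candidate label has a precomputed word embedding (for example, from word2vec); the function name, the toy three-dimensional vectors, and the cosine-similarity choice are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def classify_zero_shot(video_embedding, label_embeddings):
    """Pick the label whose word embedding is closest to the video's
    mapping in the shared semantic space (cosine similarity)."""
    names = list(label_embeddings)
    vecs = np.stack([label_embeddings[n] for n in names])
    # Normalize so that dot products equal cosine similarities.
    vecs = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)
    v = video_embedding / np.linalg.norm(video_embedding)
    return names[int(np.argmax(vecs @ v))]

# Toy semantic space: in practice these would be word2vec/GloVe vectors,
# and video_embedding would come from the trained video network.
labels = {"kayaking": np.array([0.9, 0.1, 0.0]),
          "snowboarding": np.array([0.0, 1.0, 0.2]),
          "windsurfing": np.array([0.7, 0.0, 0.6])}
print(classify_zero_shot(np.array([0.8, 0.2, 0.1]), labels))  # kayaking
```

Note that the candidate labels need never appear in the training data; any word with an embedding can serve as a class at test time, which is what gives the technique its promise.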

Zero-shot-learning systems map inputs — in this case, videos — to a semantic space, where words are clustered according to meaning. This image shows the video whose mapping is closest to each of six words. The words are labels from both training data (“windsurfing”, “snowboarding”, and so on) and classes of video that were removed from the training data (green border). The model is able to map a video of a kayak (blue border) close to the unseen label “kayaking”.

Research on zero-shot image recognition has seen great successes with end-to-end training, in which a single deep-learning model is trained to map raw inputs directly to outputs. But to our knowledge, this approach has never been applied to the related problem of video classification.

Instead, zero-shot video classifiers typically start with a standard video classifier — one trained to recognize only a limited set of actions — and pass its outputs through a number of special-purpose subsidiary networks that learn to map them to a semantic space. This has been seen as a necessary concession to the computational complexity of processing video.
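
The contrast between the two training regimes can be sketched in a few lines of PyTorch. The tiny backbone and the 300-dimensional semantic space below are placeholders rather than the architectures used in prior work; the point is only which parameters receive gradients in each regime.

```python
import torch
import torch.nn as nn

# Hypothetical building blocks: a 3D-CNN backbone producing clip features
# and a small head mapping those features into the word-embedding space.
backbone = nn.Sequential(nn.Conv3d(3, 16, 3, padding=1),
                         nn.ReLU(),
                         nn.AdaptiveAvgPool3d(1),
                         nn.Flatten())          # -> 16-d clip feature
head = nn.Linear(16, 300)                       # -> 300-d semantic space

# Conventional zero-shot pipeline: the backbone is a pretrained, frozen
# action classifier, and only the subsidiary mapping network is trained.
for p in backbone.parameters():
    p.requires_grad = False
staged_params = list(head.parameters())

# End-to-end alternative: gradients flow through the whole network.
end_to_end_params = list(backbone.parameters()) + list(head.parameters())

clip = torch.randn(2, 3, 16, 112, 112)          # (batch, C, T, H, W)
semantic = head(backbone(clip))                 # (2, 300) embeddings
```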

In a paper that we’re presenting (virtually) at the IEEE Conference on Computer Vision and Pattern Recognition, we apply end-to-end training to the problem of zero-shot video classification and find that it outperforms previous methods by a large margin.

When we compare our network to predecessors of the same capacity and depth, we find that, with around 500,000 training examples, it reduces the error rate of its best-performing predecessor by 29%.

Our end-to-end model (far right) is much simpler than its best-performing predecessors.

Our model is also simpler — and therefore easier to reproduce — than its predecessors. Creating a powerful baseline that is also easy to reproduce is key to our research: our goal is not only to develop a new model but to stimulate future work from other research teams, accelerating progress and maybe catching up with the zero-shot-learning systems that classify static images.

In evaluating our model, we used a new approach to separating data into training and test sets, one that better approximates real-world settings. Typically, researchers simply divide a single data set in two, using one part to train a model and the other to test it.

A distance threshold of 0.05 (red line) removes almost 40 classes from our training set.

We instead use different data sets for training and testing. Before training, we calculate the distance in the semantic space between each class in the training set and its nearest neighbor in the test set. We then discard every training class whose distance falls below a chosen threshold, so that no training class is a near duplicate of a test class.
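
A sketch of that pruning step, under the assumption that classes are compared by cosine distance between their word embeddings (the exact metric and the 0.05 threshold from the figure above are illustrative):

```python
import numpy as np

def prune_training_classes(train_emb, test_emb, threshold=0.05):
    """Drop every training class whose semantic-space distance to its
    nearest test class falls below `threshold`.

    train_emb / test_emb: dicts mapping class name -> word embedding.
    """
    test_vecs = np.stack(list(test_emb.values()))
    test_vecs = test_vecs / np.linalg.norm(test_vecs, axis=1, keepdims=True)
    kept = {}
    for name, vec in train_emb.items():
        v = vec / np.linalg.norm(vec)
        nearest = np.max(test_vecs @ v)       # highest cosine similarity
        if 1.0 - nearest >= threshold:        # cosine distance to nearest
            kept[name] = vec
    return kept
```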

This project grew out of the realization that existing methods for zero-shot video classification have prioritized the ability to handle long input videos, which is what forced them to reduce computational complexity by relying on pretrained classifiers and special-purpose modules.

But many of the most successful methods in traditional video classification — which are not zero-shot systems but handle a prescribed subset of classes — do exactly the opposite, extracting a small snapshot of the input video while training the full network end to end. We adapted the same concept to zero-shot learning. Among other advantages, this makes it possible to train the model on large amounts of data.
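
A minimal version of that snippet-sampling idea: rather than processing an entire video, training draws one short, random clip per example, so gradients can flow through the full network at manageable cost. The shapes and the 16-frame clip length below are illustrative assumptions.

```python
import torch

def sample_clip(video, clip_len=16):
    """Extract one short, temporally contiguous snippet from a video
    tensor of shape (T, C, H, W), starting at a random frame."""
    t = video.shape[0]
    start = torch.randint(0, max(t - clip_len, 0) + 1, (1,)).item()
    return video[start:start + clip_len]

video = torch.randn(300, 3, 112, 112)    # ~10 s of frames
clip = sample_clip(video)                # (16, 3, 112, 112)
```

Because each training step touches only a short clip, the memory and compute budget stays small enough to update every layer of the network, rather than freezing a pretrained classifier.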

We hope that our contribution will inspire other research teams to push the boundaries of zero-shot-learning video classification and that soon we will see this technology in commercially available products.


