Sizing neural networks to the available hardware

Neural-architecture search means automatically finding the best neural-network architecture for a particular task. Often, that involves finding an architecture that runs efficiently on a particular hardware platform.

This week, at the IEEE Conference on Computer Vision and Pattern Recognition, my colleagues and I presented a new way to explore neural architectures for convolutional neural networks, a type of neural net common in computer vision applications. In tests, we found that networks trained with our approach matched the run-time efficiency of the best-performing previous networks while improving accuracy.

Every layer of a convolutional neural net (CNN) applies multiple transformations to its inputs, and each of those transformations establishes a “channel” through the layer. In computer vision applications, a typical CNN might have three channels in the input layer — one per primary color — but 1,000 channels in the output layer.

Our technique finds the optimal number of channels for each layer of the network, given some latency constraint — a maximum allowable time for the completion of a computation.

Figure: With each training example in a data set, Amazon Go researchers' new machine learning method randomly varies the channel width of one layer (blue blocks) of a convolutional neural network being trained on a computer vision task.

Previous methods trained a network with a generous set of channels and then, layer by layer, greedily pruned away channels whose removal didn't compromise accuracy too badly. Our method optimizes the number of channels in all network layers simultaneously during training. This enables a better trade-off between the number of computations performed in each network layer and final accuracy.

A typical CNN divides input data into overlapping chunks and applies the same set of analyses — or “filters” — to each chunk. In the case of an image, a CNN might step through the image in eight-by-eight blocks of pixels, examining each block for the same visual features.

Those features are learned during training, but they might include things like the orientations of color gradations within the block (horizontal, vertical, diagonal, etc.); object shapes (circular, rectangular, etc.); or even distinctive visual features of animals like cats and dogs.

Each filter defines a separate channel, and at each layer of the network, several filters might be applied to the same inputs. The number of channels at each layer is known as the channel width.
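
To make that concrete, here is a minimal sketch in PyTorch; the layer dimensions are illustrative and not taken from our network. A convolutional layer that reads a three-channel image and applies 64 filters has a channel width of 64.

```python
import torch
import torch.nn as nn

# Illustrative only: a convolutional layer that reads a 3-channel RGB image
# and applies 64 learned filters. Each filter produces one output channel,
# so the "channel width" at this layer is 64.
layer = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3, padding=1)

image = torch.randn(1, 3, 224, 224)   # a batch of one 224x224 RGB image
features = layer(image)
print(features.shape)                 # torch.Size([1, 64, 224, 224])
```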

Previous work on optimizing channel width used a model based on a 14-layer CNN called MobileNet (v1), trained to recognize objects in images. In general, removing even a single channel from a network might result in a significant accuracy drop, so the researchers trained the network in such a way that it could lose some channels with only a gradual decrease in accuracy.

They then removed channels in a greedy way, layer by layer. That ended up improving the network’s accuracy, but it’s an approach that could still result in suboptimal network configurations.

Tailor made

We reasoned that determining channel width earlier in the training process would improve the network’s performance, as the settings of each channel would be better tailored to the final channel configuration. Furthermore, instead of greedy layer-by-layer channel selection, we perform a global optimization over channel configurations for all layers. This allows us to accurately capture dependencies between channel widths across all network layers.

The advantage of greedily pruning a fully trained network is that latency effects on the entire network can be measured directly. Our global-optimization approach, however, requires information about the latency effects of different channel widths at different layers. Directly measuring the latency of individual layers is nontrivial and computationally impractical, so we estimate it instead.

Past research has estimated latencies based on the numbers of operations performed in each layer. But this ignores idiosyncrasies of both hardware and software implementation, such as mechanisms for parallelizing computations, caching and memory allocation, and optimization of functions in standard software libraries.
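
For illustration, an operation-count proxy of that kind can be computed from a convolutional layer's dimensions alone. The sketch below uses made-up layer sizes and is not part of our method:

```python
def conv_macs(out_height, out_width, kernel_size, in_channels, out_channels):
    """Multiply-accumulate count of one standard convolutional layer.
    A proxy like this scales with channel width but says nothing about
    caching, parallelism, or library-level optimizations."""
    return out_height * out_width * kernel_size * kernel_size * in_channels * out_channels

# Example: a 3x3 convolution producing a 112x112 feature map,
# with 32 input channels and 64 output channels (illustrative numbers).
print(conv_macs(112, 112, 3, 32, 64))  # 231211008 multiply-accumulates
```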

We instead performed an empirical analysis, repeatedly running the same data through the network with different numbers of channels each time. The resulting measurements gave us a system of linear equations that we could solve to very accurately estimate the latencies incurred by various channel widths at different network layers. We saved the resulting per-layer, per-channel estimates in a table where they can be efficiently looked up during network training.
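
The sketch below illustrates the idea with synthetic numbers; the layer count, candidate widths, and timings are fabricated stand-ins for real measurements. Each measured end-to-end latency is modeled as the sum of unknown per-layer contributions, which yields a linear system that ordinary least squares can solve.

```python
import numpy as np

rng = np.random.default_rng(0)

num_layers = 4
widths = [8, 16, 32]                       # candidate channel widths per layer (illustrative)
num_unknowns = num_layers * len(widths)    # one latency entry per (layer, width) pair

# Hidden "true" per-layer, per-width latencies, used only to fabricate measurements.
true_latency = rng.uniform(0.1, 2.0, size=num_unknowns)

# Simulate timing many random configurations: each row of A marks which
# (layer, width) entry every layer of that configuration uses, and y holds
# the end-to-end latency (here simulated with a little timing noise).
num_runs = 200
A = np.zeros((num_runs, num_unknowns))
for i in range(num_runs):
    for layer in range(num_layers):
        w = rng.integers(len(widths))
        A[i, layer * len(widths) + w] = 1.0
y = A @ true_latency + rng.normal(0.0, 0.01, size=num_runs)

# Solve the linear system for per-layer latency estimates and store them in a
# lookup table keyed by (layer, width) for use during training.
estimates, *_ = np.linalg.lstsq(A, y, rcond=None)
latency_table = {
    (layer, widths[w]): estimates[layer * len(widths) + w]
    for layer in range(num_layers)
    for w in range(len(widths))
}

# Individual entries are identified only up to per-layer offsets, but those
# offsets cancel in the sums the optimization actually uses:
config = [16, 8, 32, 16]                   # one width per layer
predicted = sum(latency_table[(l, w)] for l, w in enumerate(config))
actual = sum(true_latency[l * len(widths) + widths.index(w)] for l, w in enumerate(config))
print(round(predicted, 3), round(actual, 3))
```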

Random mutation

As in the earlier work, we used MobileNet (v1) as our base network. During the initial rounds of training, with each new training example, we randomly varied the channel width of one network layer, from 20% to 150% of its original value. (It could be that expanding the channel width of one layer enabled reductions in the widths of other layers.)

For each training example, we compared the accuracy of the network with its current channel configuration to the accuracy of the full network. Over time, this gave us an aggregate measure of how much accuracy each channel configuration sacrificed.
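
A simplified, self-contained sketch of that sampling loop follows; the per-layer widths, the discrete set of scale factors, and the toy accuracy function are stand-ins for the real network and training signal.

```python
import random
from collections import defaultdict

random.seed(0)

base_widths = [32, 64, 128, 256, 512]              # illustrative per-layer channel widths
scales = [0.2, 0.35, 0.5, 0.75, 1.0, 1.25, 1.5]    # candidate width multipliers (illustrative)

def toy_accuracy(widths):
    """Stand-in for evaluating the network on one training example: narrower
    layers hurt accuracy, plus some noise. A real run would evaluate the
    shared network at the given widths."""
    return 1.0 - sum(1.0 / w for w in widths) * random.uniform(0.9, 1.1)

drop_sum = defaultdict(float)
counts = defaultdict(int)

for step in range(10_000):
    # Mutate one layer: scale its width to between 20% and 150% of the original.
    layer = random.randrange(len(base_widths))
    scale = random.choice(scales)
    mutated = list(base_widths)
    mutated[layer] = max(1, round(base_widths[layer] * scale))

    # Accuracy sacrificed by this configuration relative to the full network,
    # accumulated per (layer, width) choice.
    drop = toy_accuracy(base_widths) - toy_accuracy(mutated)
    key = (layer, mutated[layer])
    drop_sum[key] += drop
    counts[key] += 1

# Aggregate measure of how much accuracy each (layer, width) choice costs.
average_drop = {key: drop_sum[key] / counts[key] for key in counts}
```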

Finally, we used a standard statistical model called a Markov random field to model the effects of combining different channel widths at successive layers of the network. On the basis of both our measurements of accuracy loss and our estimate of latency variation, we solved for the combination of channel widths that would yield optimal performance within our latency constraints. Compared to the baseline that uses greedy channel width search, our method resulted in a 4% relative accuracy improvement at the same latency.
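
The following is a simplified, self-contained sketch of that final step; all numbers are synthetic stand-ins for the measured accuracy drops, the latency lookup table, and the pairwise terms, and the actual MRF formulation and solver differ in detail. It treats the layers as a chain, runs a Viterbi-style pass to pick one width per layer, and folds the latency constraint in through a multiplier that is increased until the selected configuration fits the budget.

```python
import random

random.seed(0)

num_layers = 5
widths = [16, 32, 64, 96, 128]      # candidate widths per layer (illustrative)
n = len(widths)

# Toy stand-ins for the measured quantities:
#   drop[l][w]    -- average accuracy sacrificed when layer l uses width index w
#   latency[l][w] -- estimated latency of layer l at width index w
#   pair[l][w][v] -- pairwise cost of width w at layer l and width v at layer l+1
drop = [[random.uniform(0.0, 0.05) * (widths[-1] / widths[w]) for w in range(n)]
        for _ in range(num_layers)]
latency = [[0.01 * widths[w] * random.uniform(0.8, 1.2) for w in range(n)]
           for _ in range(num_layers)]
pair = [[[0.001 * abs(widths[i] - widths[j]) / widths[-1] for j in range(n)]
         for i in range(n)] for _ in range(num_layers - 1)]

budget = 4.0                        # latency budget (arbitrary units)

def best_config(lam):
    """Viterbi pass over the chain: minimize accuracy drops plus pairwise costs
    plus lam * latency, so the latency constraint enters through the multiplier."""
    cost = [drop[0][w] + lam * latency[0][w] for w in range(n)]
    back = []
    for l in range(1, num_layers):
        new_cost, pointers = [], []
        for w in range(n):
            local = drop[l][w] + lam * latency[l][w]
            best_prev = min(range(n), key=lambda v: cost[v] + pair[l - 1][v][w])
            new_cost.append(cost[best_prev] + pair[l - 1][best_prev][w] + local)
            pointers.append(best_prev)
        cost, back = new_cost, back + [pointers]
    # Trace back the best path of width indices, first layer to last.
    w = min(range(n), key=lambda v: cost[v])
    path = [w]
    for pointers in reversed(back):
        w = pointers[w]
        path.append(w)
    return list(reversed(path))

# Increase the latency multiplier until the chosen configuration fits the budget.
lam = 0.0
config = best_config(lam)
while sum(latency[l][w] for l, w in enumerate(config)) > budget:
    lam += 0.1
    config = best_config(lam)
print([widths[w] for w in config])
```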

Over successive rounds of training, our approach identified some channel width variations that had disastrous effects on accuracy or latency. Thus, after an initial baselining phase, we steadily narrowed the range of configurations that the model could explore. In experiments, this gradual reduction of the search space brought another 1% relative improvement in accuracy.
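
In spirit, that narrowing of the search space might look like the sketch below, where the observed per-width accuracy drops and the tolerance are made up:

```python
# Illustrative: after a baselining phase, drop candidate widths whose observed
# average accuracy sacrifice exceeds a tolerance, so later training steps only
# sample from the surviving widths.
average_drop = {          # (layer, width) -> observed mean accuracy drop (made up)
    (0, 8): 0.09, (0, 16): 0.03, (0, 32): 0.01,
    (1, 8): 0.02, (1, 16): 0.01, (1, 32): 0.00,
}
TOLERANCE = 0.05

allowed_widths = {}
for (layer, width), drop in average_drop.items():
    if drop <= TOLERANCE:
        allowed_widths.setdefault(layer, []).append(width)

print(allowed_widths)     # {0: [16, 32], 1: [8, 16, 32]}
```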


