Accelerating parallel training of neural nets

Earlier this year, we reported a speech recognition system trained on a million hours of data, a feat possible through semi-supervised learning, in which training data is annotated by machines rather than by people.

These sorts of massive machine learning projects are becoming more common, and they require distributing the training process across multiple processors. Otherwise, training becomes too time consuming.

In theory, doubling the number of processors should halve the training time, but in practice, it doesn’t work that way: the need to synchronize distributed computations results in some inevitable communication overhead. Parallelization schemes always involve some trade-off between training efficiency and the accuracy of the resulting model.
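
To see why, consider a toy cost model (our illustration, not the paper's): per-step compute time shrinks with the number of workers, but the cost of synchronizing them grows. The 2% per-worker overhead below is an arbitrary illustrative assumption.

```python
# Toy model of how communication overhead erodes parallel speedup.
# The per-worker overhead value is an illustrative assumption, not a
# measurement from the paper.

def speedup(n_workers, comm_cost=0.02):
    """One training step: compute shrinks as 1/n, but synchronization
    cost grows with the number of workers that must communicate."""
    step_time = 1.0 / n_workers + comm_cost * n_workers
    return 1.0 / step_time

for n in (1, 2, 4, 8, 16, 32, 64):
    print(f"{n:3d} workers -> {speedup(n):5.2f}x speedup")
```

Under this toy model, speedup peaks at a moderate worker count and then degrades; the real trade-off is more complex, but the qualitative behavior is the same.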

In a paper we’re presenting at this year’s Interspeech, we describe a new approach to parallelizing the training of neural networks that combines two state-of-the-art methods and improves on both. One of the existing methods prioritizes model accuracy, and the other prioritizes training efficiency. In tests that involved training an acoustic model — a vital component of a speech recognition system — our hybrid method proved both more accurate than the accuracy-prioritizing method and more efficient than the efficiency-prioritizing method.

The accuracy-prioritizing method was proposed in 2015 by Amazon senior principal scientist Nikko Strom, who’s also a coauthor on the new paper. At the time, the method — known as synchronous GTC, for gradient threshold compression — was a breakthrough, enabling parallelization to as many as 32 processors with little loss in model accuracy. Our experiments indicate, however, that beyond 32 processors, communications overhead can make GTC much less efficient.

When we needed to increase the processor count to handle a million hours of data, we switched to a different parallelization method, called BMUF, for blockwise model update filtering. BMUF scales better than GTC, but at the price of accuracy. In experiments we report in our new paper, at 64 processors, BMUF triples the training rate achievable with GTC — but it also triples the loss in accuracy.

In our new work, we split the distributed processors into groups, and within each group, the processors use the highly accurate GTC method to synchronize their models after every minibatch. At regular intervals, a designated representative from each group shares its model with the other groups' representatives, using BMUF to re-synchronize models across all processors. Finally, each representative broadcasts the updated model to the rest of its group.
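
The communication pattern can be sketched as follows. The grouping, the interval, and the plain gradient average standing in for GTC are our illustrative simplifications; GTC's actual compression is described later in this post.

```python
import numpy as np

DIM, N_GROUPS, GROUP_SIZE, BMUF_INTERVAL, LR = 8, 4, 4, 50, 0.1
rng = np.random.default_rng(0)

# Every worker holds its own copy of the model; all copies start identical.
models = [[np.zeros(DIM) for _ in range(GROUP_SIZE)] for _ in range(N_GROUPS)]

for step in range(200):
    for group in models:
        # Each worker computes a gradient on its own minibatch (faked here),
        grads = [rng.standard_normal(DIM) for _ in group]
        # and the group synchronizes after every minibatch (GTC's role).
        avg_grad = np.mean(grads, axis=0)
        for i in range(GROUP_SIZE):
            group[i] = group[i] - LR * avg_grad
    if (step + 1) % BMUF_INTERVAL == 0:
        # Representatives (worker 0 of each group) average models globally,
        global_avg = np.mean([group[0] for group in models], axis=0)
        # then broadcast the re-synchronized model back within their groups.
        for group in models:
            for i in range(GROUP_SIZE):
                group[i] = global_avg.copy()
```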

A neural network consists of thousands or even millions of very simple processors, often called neurons. Each neuron receives data from several other neurons, processes it, and passes the results to still other neurons. Connections between neurons have associated weights, which determine how big a role the outputs of one neuron play in the computations performed by the next.
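
For concreteness, here is the computation a single layer of neurons performs; the layer sizes and the ReLU nonlinearity are generic illustrative choices, not specifics from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal((3, 5))  # connection weights: 5 inputs feed 3 neurons
inputs = rng.standard_normal(5)        # outputs received from upstream neurons

# Each neuron takes a weighted sum of its inputs, determined by the connection
# weights, and applies a simple nonlinearity before passing the result on.
outputs = np.maximum(0.0, weights @ inputs)
print(outputs)
```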

During training, a network will process a small amount of data, known as a minibatch, evaluate the results, and update its weights accordingly. Then it will process the next minibatch, and so on.
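
As a loop, single-processor training looks roughly like this; `grad_fn` is a hypothetical callback that evaluates the model on one minibatch and returns the gradient.

```python
import numpy as np

def train(model, data, grad_fn, lr=0.01, batch_size=32):
    """Process one minibatch at a time, updating the weights after each."""
    for start in range(0, len(data), batch_size):
        minibatch = data[start:start + batch_size]
        gradient = grad_fn(model, minibatch)  # evaluate results on this batch
        model -= lr * gradient                # update the weights accordingly
    return model
```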

The natural way to parallelize this procedure is to give each processor its own copy of the initial network weights and its own set of minibatches. After each minibatch, each processor will broadcast the updates it needs to make to its weights (technically, “gradients”) to all the other processors. A given processor simply combines all the updates it receives and applies them to its own, local copy of the network. Then it processes its next minibatch.
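
Simulated in a single process, that procedure looks like the sketch below; in a real system each worker would run on its own processor, and the averaging would be a network broadcast. The names are ours.

```python
import numpy as np

def data_parallel_step(models, minibatches, grad_fn, lr=0.01):
    """Each worker computes a gradient on its own minibatch and broadcasts
    it; every worker applies the combined update to its local model copy."""
    grads = [grad_fn(m, b) for m, b in zip(models, minibatches)]
    combined = np.mean(grads, axis=0)  # every worker receives every update
    for model in models:
        model -= lr * combined         # all local copies stay identical
```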

This method is equivalent to training a model on a single processor. But broadcasting a full set of weight updates after each minibatch is so bandwidth intensive that it eats up the time savings from parallelization. GTC modifies this procedure in two crucial ways.

First, it exploits the fact that the weight updates that follow the processing of a single minibatch tend to make major modifications to only a few neural connections. GTC establishes a threshold (the T in GTC) below which weight updates are not broadcast, saving bandwidth. (Updates that fall below the threshold are not discarded, however; they accumulate locally, where they factor into later computations.)

The update threshold, denoted by the Greek letter tau, is determined empirically and varies from application to application. In our experiments, the optimal setting of tau turned out to be 8.

Next, when broadcasting its weight updates, each processor sends only one of two values, tau or -tau. Those two values can be represented by a single bit of information, compressing (the C in GTC) the update message. (Like updates that fall below the threshold, the residuals of above-threshold updates, meaning the difference between the true update and the transmitted value, are stored locally and factor into later computations.)
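
Putting the threshold and the one-bit compression together, GTC's encoding step can be sketched as follows; the variable names are ours, and tau = 8 matches the setting reported above.

```python
import numpy as np

TAU = 8.0  # the empirically tuned threshold from the experiments

def gtc_encode(gradient, residual, tau=TAU):
    """Quantize a gradient to {-tau, 0, +tau}. Only entries at least tau in
    magnitude are sent (one sign bit each); everything left over accumulates
    in a local residual that feeds into later minibatches."""
    total = gradient + residual          # fold in updates withheld earlier
    send = np.where(total >= tau, tau,
           np.where(total <= -tau, -tau, 0.0))
    residual = total - send              # keep residuals of sent entries too
    return send, residual

# Usage: each worker keeps a persistent residual buffer across minibatches.
rng = np.random.default_rng(0)
residual = np.zeros(10)
for _ in range(5):
    grad = 4.0 * rng.standard_normal(10)
    msg, residual = gtc_encode(grad, residual)
    # 'msg' is what gets broadcast; each nonzero entry needs only an index
    # plus a single sign bit.
```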

These two modifications do sacrifice some accuracy. That’s why, in our experiments, we evaluate relative increase in error rate. All three methods we compare — GTC, BMUF, and our GTC-BMUF hybrid — increase error rate. The question is how much, and what we gain in efficiency in exchange.

With BMUF, each processor continually updates its own local copy of the neural model. After a fixed number of minibatches — say, 50 — it broadcasts its model, and all the processors update their models accordingly. This drastically cuts down on bandwidth consumption, but it decreases model accuracy.
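
A simplified sketch of that pattern appears below. Full BMUF also filters the block-level update with a momentum term, which this sketch omits, so it is closer to plain periodic model averaging than to the complete algorithm; `local_step` is a hypothetical callback for one minibatch of purely local training.

```python
import numpy as np

SYNC_INTERVAL = 50  # minibatches between model broadcasts (illustrative)

def bmuf_like_training(models, num_steps, local_step):
    """Workers train independently; every SYNC_INTERVAL minibatches their
    model copies are replaced by the average of all copies."""
    for step in range(num_steps):
        for model in models:
            local_step(model)  # purely local update, no communication
        if (step + 1) % SYNC_INTERVAL == 0:
            average = np.mean(models, axis=0)
            for i in range(len(models)):
                models[i] = average.copy()
```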

By combining these approaches — GTC locally, BMUF globally — we get the best of both worlds. On the acoustic-modeling task we used for testing, our hybrid is slightly less efficient than BMUF at 32 cores, offering a 31-fold speedup relative to single-core processing; BMUF’s speedup is actually superlinear, at 36.6-fold. But our method reduces the error rate, while BMUF increases it by 3.5%. GTC, with a 26-fold speedup, increases error rate by 1.4%.

At 64 cores, GTC offers the best accuracy, with an error rate increase of 2.8%, versus 3.1% for our method and 8.9% for BMUF. But GTC’s efficiency actually falls, to 17-fold, versus 57-fold for BMUF and 42-fold for our method.

At 128 cores, our method is the undisputed champion, offering a 97-fold speedup and a 4.7% error rate increase, versus 80-fold/9.6% for BMUF and 11-fold/15.6% for GTC.


