
Differential privacy for deep learning at GPT scale


Deep-learning models are data driven, and that data may contain sensitive information that requires privacy protection. Differential privacy (DP) is a formal framework for ensuring the privacy of individuals in datasets, so that adversarial users can’t learn whether any given data sample was or was not used to train a machine learning model. Employing DP in deep learning typically means capping the contribution that each training sample makes to the model’s parameter adjustments, an approach known as per-sample gradient clipping.
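To make the mechanics concrete, here is a minimal sketch of a single DP-SGD step with per-sample clipping, assuming a PyTorch-style model. The per-sample loop is for clarity only (practical implementations vectorize it), and names such as clip_norm and noise_multiplier are illustrative rather than taken from any particular library.

```python
# Minimal sketch of a DP-SGD step with per-sample gradient clipping (PyTorch assumed).
# The loop over samples is for clarity; clip_norm and noise_multiplier are illustrative.
import torch

def dp_sgd_step(model, loss_fn, batch_x, batch_y, optimizer,
                clip_norm=1.0, noise_multiplier=1.0):
    params = [p for p in model.parameters() if p.requires_grad]
    summed_grads = [torch.zeros_like(p) for p in params]

    for x, y in zip(batch_x, batch_y):
        # Per-sample gradient: forward/backward pass on a single example.
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)

        # Cap this sample's contribution: rescale so its gradient norm is <= clip_norm.
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = torch.clamp(clip_norm / (total_norm + 1e-6), max=1.0)
        for acc, g in zip(summed_grads, grads):
            acc.add_(g * scale)

    # Add calibrated Gaussian noise, average over the batch, and take the step.
    batch_size = len(batch_x)
    for p, acc in zip(params, summed_grads):
        noise = noise_multiplier * clip_norm * torch.randn_like(acc)
        p.grad = (acc + noise) / batch_size
    optimizer.step()
    optimizer.zero_grad()
```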

Per-sample gradient clipping, however, makes deep learning much more time consuming than it would be otherwise, impeding the development of large DP models — for instance, at the level of the GPT language models, with billions of parameters.

In 2022, in workshops at the International Conference on Machine Learning (ICML) and the Conference on Neural Information Processing Systems (NeurIPS), we presented two papers that advance DP for deep learning. In the first paper, “Automatic clipping: Differentially private deep learning made easier and stronger”, we described an automatic method that improves the efficiency of tuning the gradient-clipping process by an order of magnitude (roughly 5 to 10 times).

Typically, gradient clipping involves an expensive ablation study to select a clipping threshold above which a data sample’s contribution to the model’s parameter adjustments is cut off, or clipped. Our approach instead uses normalization, completely eliminating the tuning of the clipping threshold.


In the second paper, “Differentially private bias-term only fine-tuning of foundation models” (DP-BiTFiT), which won the Best Paper Award at the NeurIPS Workshop on Trustworthy and Socially Responsible Machine Learning (TSRML), we introduced a parameter-efficient method for fine-tuning foundation models under DP.

Generally speaking, a neural network has two types of parameters: the weights, which constitute more than 99% of the parameters and capture most of the information from the training data, and the biases, which shift (offset) the model output. We show that privately fine-tuning the bias terms alone is enough to achieve high accuracy under DP constraints, making DP learning 2 to 30 times faster, reducing memory use by 50% to 88%, and incurring only 1/1,000 the communication cost in a distributed environment.

Together, these two techniques have made fine-tuning a DP GPT-2 as efficient as parameter-efficient fine-tuning of a standard, non-private GPT-2. We have made both methods publicly available to encourage researchers to experiment with and benefit from faster DP deep learning.

Automatic clipping

The deep-learning process includes a tunable hyperparameter called the learning rate, which determines the degree to which the model weights can change during updates. The per-sample gradient clipping threshold is a similar hyperparameter, but it caps the contribution of each individual training sample. The existing approach to DP training requires an ablation study that tunes the clipping threshold and the learning rate simultaneously, so if K different clipping thresholds are evaluated (five, say, in practice), the hyperparameter tuning stage becomes K times more expensive.

Two sample ablation studies, considering different learning rates and per-sample gradient clipping thresholds. Left: GPT-2’s BLEU scores on the E2E dataset, trained with DP-AdamW. Right: Classification accuracy of ResNet18 on the ImageNet dataset, trained with DP-SGD. The different patterns of results illustrate the need to tune both hyperparameters simultaneously.
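As a rough illustration of that cost, the sketch below mimics the joint ablation: each candidate clipping threshold must be paired with each candidate learning rate, so K thresholds multiply the number of training runs by K. The train_and_eval function is a hypothetical stand-in for a full DP fine-tuning run.

```python
# Hypothetical sketch of the joint ablation over learning rate and clipping threshold.
def train_and_eval(lr, clip_norm):
    ...  # stand-in for a full DP training run that returns a validation score

learning_rates = [5e-4, 1e-3, 2e-3]
clip_thresholds = [0.1, 0.5, 1.0, 5.0, 10.0]  # K = 5 candidate thresholds

# K times as many runs as tuning the learning rate alone.
scores = {(lr, R): train_and_eval(lr=lr, clip_norm=R)
          for lr in learning_rates
          for R in clip_thresholds}
```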


To solve this problem, we introduced automatic clipping, which uses per-sample gradient normalization instead of per-sample gradient clipping. This (1) eliminates the clipping threshold, (2) enlarges the small gradients that previously went unclipped, and (3) provably optimizes performance. Equipped with automatic clipping, the DP stochastic gradient descent (DP-SGD) optimization algorithm has the same asymptotic convergence rate as standard (non-DP) SGD, even in the nonconvex-optimization setting in which deep-learning optimization takes place.
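A minimal sketch of the normalization step, assuming per-sample gradients are already available with a leading batch dimension, looks like this; the helper name and the stability constant gamma are illustrative.

```python
# Minimal sketch of automatic clipping: rescale each per-sample gradient by
# 1 / (norm + gamma) instead of min(1, R / norm), so no threshold R needs tuning.
# Assumes per_sample_grads is a list of tensors with a leading per-sample dimension.
import torch

def normalize_per_sample_grads(per_sample_grads, gamma=0.01):
    batch_size = per_sample_grads[0].shape[0]
    # Per-sample L2 norm taken over all parameters jointly.
    sq_norms = torch.zeros(batch_size, device=per_sample_grads[0].device)
    for g in per_sample_grads:
        sq_norms += g.reshape(batch_size, -1).pow(2).sum(dim=1)
    norms = sq_norms.sqrt()

    # Every sample now contributes a gradient of (roughly) unit norm,
    # which also enlarges the gradients that clipping would have left small.
    scale = 1.0 / (norms + gamma)
    return [g * scale.view(-1, *([1] * (g.dim() - 1))) for g in per_sample_grads]
```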

Our experiments across several computer vision and language tasks show that automatic clipping can achieve state-of-the-art DP accuracy, on par with per-sample clipping methods, without sacrificing the training efficiency or the privacy guarantee.

Performance of GPT-2 on the E2E dataset, measured by BLEU and ROUGE scores under DP and non-DP settings (higher is better). We compare full fine-tuning with automatic clipping to state-of-the-art fine-tuning methods such as LoRA. Additional performance measures are included in the full paper. The best two GPT-2 models for each row are marked in bold.

DP-BiTFiT

The first advantage of differentially private bias-term fine-tuning (DP-BiTFiT) is that it’s model-agnostic; we can apply it to any model by simply freezing all the weights during fine-tuning and updating only the bias terms. In sharp contrast, prior alternatives such as low-rank adaptation (LoRA) and adapters are applicable exclusively to transformers and involve extra tuning of the adaptation ranks.
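In a framework such as PyTorch, the freezing step takes only a few lines. The helper below is a minimal sketch, not the paper’s code; the function name and the optimizer in the usage comment are illustrative.

```python
# Minimal sketch of bias-term-only fine-tuning in PyTorch: freeze every weight and
# leave only parameters whose name ends in "bias" trainable.
import torch

def select_bias_terms(model):
    bias_params = []
    for name, param in model.named_parameters():
        if name.endswith("bias"):
            param.requires_grad = True
            bias_params.append(param)
        else:
            param.requires_grad = False
    return bias_params

# Usage (illustrative): hand only the bias parameters to the (DP) optimizer.
# optimizer = torch.optim.AdamW(select_bias_terms(model), lr=1e-3)
```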

The second advantage of DP-BiTFiT is its parameter efficiency. In a study that spanned a range of foundation models, we found that the bias terms constitute only around 0.1% of model parameters. This means that DP-BiTFiT provides large efficiency improvements in terms of training time, memory footprint, and communication cost in the distributed-learning setting.

Parameter efficiency of DP-BiTFiT. The last two columns count the total number of parameters and the percentage of trainable parameters. Note that DP-BiTFiT optimizes only about 0.1% of the total parameters.
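That fraction is easy to check for any particular model. The snippet below is a rough, illustrative check on an off-the-shelf GPT-2 checkpoint; the Hugging Face transformers dependency and the choice of model are assumptions, not part of the paper’s setup.

```python
# Rough check of the bias-parameter fraction for an off-the-shelf model
# (assumes the Hugging Face transformers package; the model choice is illustrative).
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("gpt2")
total = sum(p.numel() for p in model.parameters())
biases = sum(p.numel() for n, p in model.named_parameters() if n.endswith("bias"))
print(f"bias parameters: {biases:,} of {total:,} ({biases / total:.3%})")
```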

The third advantage of DP-BiTFiT is its computational efficiency relative to other parameter-efficient approaches, such as DP-LoRA. Even when both approaches fine-tune roughly the same number of parameters, DP-BiTFiT still enjoys a large memory saving, because computing the bias gradients does not require storing and accessing large activation tensors, which is unavoidable when computing weight gradients. We verify this rigorously through the chain rule of back-propagation, which shows that DP-BiTFiT has a much simpler computation graph because the activation tensors are never used.

The same computation graph of back-propagation (black) with modifications by three different DP procedures (red). Because DP-BiTFiT (lower right) modifies only the model biases, it requires far less computational overhead than prior approaches (left: GhostClip; top right: Opacus) and consequently has a simpler computation graph.
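The asymmetry is easiest to see for a single linear layer: the per-sample weight gradient is an outer product that needs the stored activation, whereas the per-sample bias gradient is just the output gradient summed over the non-batch dimensions. The sketch below illustrates this with arbitrary, purely illustrative shapes.

```python
# Why bias gradients are cheap: for a linear layer y = x @ W.T + b, the per-sample
# weight gradient needs the stored activation x, but the per-sample bias gradient
# needs only the gradient of the loss w.r.t. the layer output. Shapes are illustrative.
import torch

B, T, d_in, d_out = 8, 128, 768, 768
x = torch.randn(B, T, d_in)        # activation: must be kept around for weight grads
grad_y = torch.randn(B, T, d_out)  # gradient of the loss w.r.t. the layer output

# Per-sample weight gradients: require both grad_y and the activation x.
per_sample_grad_W = torch.einsum("bto,bti->boi", grad_y, x)  # shape (B, d_out, d_in)

# Per-sample bias gradients: just sum grad_y over the sequence dimension.
per_sample_grad_b = grad_y.sum(dim=1)                        # shape (B, d_out)
```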

Empirically, we have observed a substantial boost in efficiency when switching from DP full fine-tuning to DP-BiTFiT, while still maintaining state-of-the-art accuracy on large foundation models such as GPT-2-large, ResNet 152, RoBERTa-large, and Vision Transformers. For instance, we compare DP-BiTFiT to DP full fine-tuning and observe a four- to tenfold speedup and a two- to tenfold memory saving on GPT-2.

Maximum throughput and batch size by different fine-tuning methods. At left: E2E dataset with GPT2-small/medium/large. At right: 50,000 images of 512×512 pixels with ResNet 50/101/152. The speed and memory saving offered by DP-BiTFiT is substantial, especially on large models.

Acknowledgements: We would like to acknowledge our coauthors on the papers for their contributions: Sheng Zha and George Karypis. We thank Huzefa Rangwala for reviewing this post.


