Introduction

There are several strategies for training a deep learning model across multiple devices. To support this, deep learning frameworks provide features for distributed training such as:

  1. Data Parallelism
  2. Model Parallelism
  3. Pipeline Parallelism

Each parallelism scheme has its own pros and cons, and engineers should choose among them to exploit their devices efficiently.

Data Parallelism

Data parallelism is a well-known method for distributed training of deep learning models. The notion of data parallelism is not limited to deep learning; it appears in plenty of other domains. SIMD instructions process multiple data elements with a single instruction, which is one form of data parallelism, and the SPMD programming model helps engineers write parallel programs effectively. Data parallelism across multiple devices is known as batch-splitting: the task is split into subtasks and each device performs one subtask. For example, with a (256, 32, 32, 3)-shaped input and 4 GPUs, it is easy to divide the input into four (64, 32, 32, 3)-shaped inputs because there is no dependence along the batch axis in common deep learning tasks.
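As a minimal sketch of this splitting (plain PyTorch, no distributed setup assumed), chunking along the batch axis yields the four sub-batches:

    import torch

    # A (256, 32, 32, 3)-shaped input batch, as in the example above.
    batch = torch.randn(256, 32, 32, 3)

    # Split along the batch axis (dim 0) into 4 sub-batches of shape (64, 32, 32, 3).
    sub_batches = torch.chunk(batch, chunks=4, dim=0)

    for i, sub in enumerate(sub_batches):
        print(f"sub-batch {i}: {tuple(sub.shape)}")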

Of course, layers like Batch Normalization have to be synchronized across all subtasks so that the means and variances are the same across devices. We will come back to this later.
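For reference, PyTorch exposes synchronized batch statistics as torch.nn.SyncBatchNorm; a minimal sketch, assuming the converted model is later wrapped for distributed training:

    import torch.nn as nn

    model = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1),
        nn.BatchNorm2d(16),
        nn.ReLU(),
    )

    # Replace every BatchNorm layer with SyncBatchNorm so that batch statistics
    # are reduced across all processes during training.
    model = nn.SyncBatchNorm.convert_sync_batchnorm(model)

The conversion itself only swaps modules; the synchronization happens at training time once a process group is initialized.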

Implementation

Implementations of data parallelism vary. Here I introduce the common concept and algorithm of batch-splitting; a code sketch follows the list below.

  1. Copy all parameters to each device.
  2. For each iteration, split the training batch into sub-batches.
  3. Distribute one sub-batch to each device.
  4. Each device computes the forward and backward passes on its sub-batch.
  5. Sum the gradients across devices and distribute the sum (an all-reduce).
  6. Update the model parameters.
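A minimal sketch of steps 4-6 using torch.distributed primitives directly; the function name train_step and the MSE loss are illustrative, and the sketch assumes the process group has already been initialized and the parameters broadcast (steps 1-3). In practice, PyTorch's DistributedDataParallel performs the gradient all-reduce for you.

    import torch.distributed as dist
    import torch.nn.functional as F

    def train_step(model, optimizer, sub_batch, target, world_size):
        """One iteration on a single device (steps 4-6); assumes the process
        group is initialized and parameters were broadcast beforehand."""
        # Step 4: forward and backward passes on this device's sub-batch.
        loss = F.mse_loss(model(sub_batch), target)
        optimizer.zero_grad()
        loss.backward()

        # Step 5: sum gradients across devices and distribute the sum
        # (an all-reduce), then average so every replica applies the same update.
        for param in model.parameters():
            dist.all_reduce(param.grad, op=dist.ReduceOp.SUM)
            param.grad /= world_size

        # Step 6: update the identical model parameters on every device.
        optimizer.step()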

Model Parallelism

Although data parallelism is the dominant strategy for training on multiple devices, it cannot scale to very large models: each device must hold a full replica of the parameters, so GPU memory becomes the limit. Model parallelism addresses this by partitioning the model itself, its layers or parameters, across devices.
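As a minimal sketch (assuming two GPUs visible as cuda:0 and cuda:1; the module TwoGPUModel is illustrative), model parallelism places different layers on different devices and moves activations between them:

    import torch
    import torch.nn as nn

    class TwoGPUModel(nn.Module):
        """Illustrative model-parallel module: the first layer lives on cuda:0
        and the second on cuda:1, so neither GPU has to hold the full model."""

        def __init__(self):
            super().__init__()
            self.part1 = nn.Linear(1024, 4096).to("cuda:0")
            self.part2 = nn.Linear(4096, 10).to("cuda:1")

        def forward(self, x):
            x = self.part1(x.to("cuda:0"))
            # Move the intermediate activation to the device owning the next layer.
            return self.part2(x.to("cuda:1"))

Because no single GPU holds the full set of parameters, this allows models larger than a single device's memory.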

Pipeline Parallelism

Collective Communication

Frameworks

In Data Parallelism

In Model Parallelism

In Pipeline Parallelism

Frameworks for Parallelism

TensorFlow

Mesh-TensorFlow

PyTorch

DeepSpeed

Conclusion

References

  1. PyTorch Distributed: Experiences on Accelerating Data Parallel Training
  2. PyTorch Distributed Overview
  3. ZeRO: Memory Optimizations Toward Training Trillion Parameter Models
  4. Mesh-TensorFlow: Deep Learning for Supercomputers
  5. GPipe: Easy Scaling with Micro-Batch Pipeline Parallelism
  6. PipeDream: Fast and Efficient Pipeline Parallel DNN Training