1. DataParallel
DataParallel is easier to use (you simply wrap the single-GPU model):
model = nn.DataParallel(model)
It uses a single process that holds the model parameters. During each batch, the work is scattered to every GPU, each GPU computes its own gradients, the gradients are gathered on GPU 0 and averaged, GPU 0 performs the backward update of the parameters, and the updated parameters are then broadcast from GPU 0 back to the other GPUs.
Characteristics:
(1) What gets broadcast is the full set of model parameters, so it is slow and inefficient.
(2) Simple to use.
Communication therefore quickly becomes a bottleneck, and GPU utilization is usually low. nn.DataParallel requires all GPUs to be on the same node (it does not support distributed training), and it cannot be combined with Apex for mixed-precision training.
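As a minimal usage sketch (the model, tensor sizes and device count below are illustrative assumptions, not from the original code): wrap the single-GPU model once, keep it on the default GPU, and feed it full-size batches; DataParallel scatters each batch across the visible GPUs and gathers the outputs back on GPU 0.

import torch
import torch.nn as nn

model = nn.Linear(512, 10)                # any single-GPU model (placeholder)
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)        # single process, one worker thread per GPU
model = model.cuda()                      # parameters live on GPU 0

x = torch.randn(128, 512).cuda()          # full batch; DataParallel splits it across the GPUs
out = model(x)                            # per-GPU outputs are gathered back onto GPU 0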
https://zhuanlan.zhihu.com/p/113694038
1. DistributedDataParallel supports model parallelism while DataParallel does not, which means that if the model is too large to fit on a single GPU, only the former can be used;
2. DataParallel is single-process, multi-threaded and only works on a single machine, whereas DistributedDataParallel is multi-process and works for both single-machine and multi-machine setups, i.e. it implements true distributed training;
3. DistributedDataParallel trains more efficiently: each process is an independent Python interpreter, which avoids the GIL, and its lower communication cost makes training faster; DataParallel is essentially deprecated;
4. It must be noted that in DistributedDataParallel every process has its own optimizer and performs its own update step, but because the gradients are communicated to every process, all processes end up applying the same update (see the sketch after this list);
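Point 4 can be illustrated with a short sketch (the model, backend and learning rate are assumptions made for the example, and the script is assumed to be started by a multi-process launcher such as torchrun so that the process group can initialize): each process builds its own replica and its own optimizer; DistributedDataParallel averages the gradients across processes during backward(), so every optimizer independently applies the same update and the replicas stay in sync.

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

dist.init_process_group(backend="nccl")                    # one process per GPU
local_rank = dist.get_rank() % torch.cuda.device_count()
torch.cuda.set_device(local_rank)

model = torch.nn.Linear(512, 10).cuda()                    # placeholder model
model = DDP(model, device_ids=[local_rank])
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)    # one optimizer per process

x = torch.randn(32, 512).cuda()
loss = model(x).sum()
loss.backward()       # gradients are all-reduced (averaged) across processes here
optimizer.step()      # every process applies the same averaged gradients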
2. DistributedDataParallel
https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html#torch.nn.parallel.DistributedDataParallel
Official documentation link
import os
import time

import torch
import torch.distributed as dist

main_proc = True
device = torch.device("cuda")
is_distributed = os.environ.get("LOCAL_RANK")  # If local rank exists, distributed env
print("distributed: ", is_distributed)
if is_distributed:
    device_id = args.local_rank
    torch.cuda.set_device(device_id)
    print(f"Setting CUDA Device to {device_id}")
    os.environ['NCCL_IB_DISABLE'] = '0'
    dist.init_process_group(backend="nccl")
    print("distributed finished........")
    main_proc = device_id == 0  # Main process handles saving of models and reporting

if is_distributed:
    train_sampler = torch.utils.data.distributed.DistributedSampler(train_set, shuffle=True)
    # train_sampler = db2sampler(SequentialSampler(train_set), batch_size, False, bucket_size_multiplier=len(train_set)//batch_size)
else:
    train_sampler = torch.utils.data.RandomSampler(train_set)
    # train_sampler = db1sampler(SequentialSampler(train_set), batch_size, False, bucket_size_multiplier=len(train_set)//batch_size)

train_loader = torch.utils.data.DataLoader(train_set, batch_size, sampler=train_sampler,
                                           num_workers=args.workers, collate_fn=pad_collate)
valid_loader = torch.utils.data.DataLoader(valid_set, valid_batch_size,
                                           num_workers=args.workers, collate_fn=pad_collate)

if is_distributed:
    WAP_model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(WAP_model)  # fixes BatchNorm: sync statistics across processes

if is_distributed:
    WAP_model = torch.nn.parallel.DistributedDataParallel(WAP_model, device_ids=[device_id],
                                                          find_unused_parameters=True)

for eidx in range(max_epochs):
    n_samples = 0
    ud_epoch = time.time()
    if is_distributed:
        train_sampler.set_epoch(epoch=eidx)
    for i, (x, y, x_idx, x_name) in enumerate(train_loader):
        WAP_model.train()
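The snippet above expects to be started by a multi-process launcher, e.g. torchrun --nproc_per_node=4 train.py (the script name and GPU count are assumptions), which spawns one process per GPU and sets LOCAL_RANK / RANK / WORLD_SIZE in each process's environment. If the launcher does not pass a --local_rank command-line argument, the local rank can be read directly from the environment instead of args.local_rank:

import os
local_rank = int(os.environ.get("LOCAL_RANK", 0))   # set by torchrun for each spawned process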
Note: with DataParallel, the batch size must be set to n times the single-GPU batch size, but with DistributedDataParallel the batch size should be set the same as for a single GPU.
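To make the difference concrete (the numbers are only an example): DataParallel uses a single DataLoader whose batch is split across the GPUs, whereas DistributedDataParallel runs one DataLoader per process, so each loader produces a single-GPU batch.

n_gpus = 4                               # example values, not from the original setup
per_gpu_batch = 32

dp_batch_size = per_gpu_batch * n_gpus   # DataParallel: one loader, batch split across 4 GPUs
ddp_batch_size = per_gpu_batch           # DDP: one loader per process, single-GPU sized batch
# Either way, 128 samples are consumed per optimization step in total.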
Compared with DataParallel, DistributedDataParallel cuts the training time by several times.
Always use DistributedDataParallel. When using a DistributedSampler, call set_epoch at the start of every epoch so that the shuffling order changes from epoch to epoch:
if is_distributed:
    train_sampler.set_epoch(epoch=eidx)
https://zhuanlan.zhihu.com/p/97115875
A personal summary of hands-on experience with PyTorch (distributed) data parallelism
Pitfalls:
(1) With DistributedDataParallel, set the batch size the same as for a single GPU.