It has been a long while since I last updated this blog. I've been buried in chores and writing documentation to the point of nausea, with no time to learn anything new; the moment I got some free time, I went back to relearn the PyTorch I had mostly forgotten. PyTorch is quite similar to TensorFlow, so I won't belabor the basics; readers who still have questions can study the official docs:
1.https://pytorch.org/tutorials/beginner/deep_learning_60min_blitz.html
2.https://pytorch-cn.readthedocs.io/zh/latest/package_references/torch-nn/#class-torchnndataparallelmodule-device_idsnone-output_devicenone-dim0source
What I think is worth highlighting is the data-parallel part. The official tutorial code boils down to the following:
# -*- coding: utf-8 -*-
"""
Created on Tue Apr 16 14:32:58 2019

@author: kofzh
"""
import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader

# Parameters and DataLoaders
input_size = 5
output_size = 2

batch_size = 30
data_size = 100

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")


class RandomDataset(Dataset):
    """A dataset of random tensors, just to give the model something to eat."""

    def __init__(self, size, length):
        self.len = length
        self.data = torch.randn(length, size)

    def __getitem__(self, index):
        return self.data[index]

    def __len__(self):
        return self.len


rand_loader = DataLoader(dataset=RandomDataset(input_size, data_size),
                         batch_size=batch_size, shuffle=True)


class Model(nn.Module):
    # Our model: one linear layer that logs the batch size it actually sees.

    def __init__(self, input_size, output_size):
        super(Model, self).__init__()
        self.fc = nn.Linear(input_size, output_size)

    def forward(self, input):
        output = self.fc(input)
        print("\tIn Model: input size", input.size(),
              "output size", output.size())
        return output


model = Model(input_size, output_size)
if torch.cuda.device_count() > 1:
    print("Let's use", torch.cuda.device_count(), "GPUs!")
    # dim = 0: [30, xxx] -> [10, ...], [10, ...], [10, ...] on 3 GPUs
    model = nn.DataParallel(model)
model.to(device)

for data in rand_loader:
    input = data.to(device)
    output = model(input)
    print("Outside: input size", input.size(),
          "output_size", output.size())
The results below come from running the code above with batch_size = 30, once on CPU and once on a 4-GPU machine.
CPU result with batch_size = 30:
In Model: input size torch.Size([30, 5]) output size torch.Size([30, 2])
Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
In Model: input size torch.Size([30, 5]) output size torch.Size([30, 2])
Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
In Model: input size torch.Size([30, 5]) output size torch.Size([30, 2])
Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
In Model: input size torch.Size([10, 5]) output size torch.Size([10, 2])
Outside: input size torch.Size([10, 5]) output_size torch.Size([10, 2])
4-GPU result with batch_size = 30 (the parallel workers' prints interleave on the console; they are untangled here for readability):
Let's use 4 GPUs!
In Model: input size torch.Size([8, 5]) output size torch.Size([8, 2])
In Model: input size torch.Size([8, 5]) output size torch.Size([8, 2])
In Model: input size torch.Size([8, 5]) output size torch.Size([8, 2])
In Model: input size torch.Size([6, 5]) output size torch.Size([6, 2])
Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
In Model: input size torch.Size([8, 5]) output size torch.Size([8, 2])
In Model: input size torch.Size([8, 5]) output size torch.Size([8, 2])
In Model: input size torch.Size([8, 5]) output size torch.Size([8, 2])
In Model: input size torch.Size([6, 5]) output size torch.Size([6, 2])
Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
In Model: input size torch.Size([8, 5]) output size torch.Size([8, 2])
In Model: input size torch.Size([8, 5]) output size torch.Size([8, 2])
In Model: input size torch.Size([8, 5]) output size torch.Size([8, 2])
In Model: input size torch.Size([6, 5]) output size torch.Size([6, 2])
Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
In Model: input size torch.Size([3, 5]) output size torch.Size([3, 2])
In Model: input size torch.Size([3, 5]) output size torch.Size([3, 2])
In Model: input size torch.Size([3, 5]) output size torch.Size([3, 2])
In Model: input size torch.Size([1, 5]) output size torch.Size([1, 2])
Outside: input size torch.Size([10, 5]) output_size torch.Size([10, 2])
The same code run with batch_size = 25, again on CPU and on 4 GPUs.
CPU result with batch_size = 25:
In Model: input size torch.Size([25, 5]) output size torch.Size([25, 2])
Outside: input size torch.Size([25, 5]) output_size torch.Size([25, 2])
In Model: input size torch.Size([25, 5]) output size torch.Size([25, 2])
Outside: input size torch.Size([25, 5]) output_size torch.Size([25, 2])
In Model: input size torch.Size([25, 5]) output size torch.Size([25, 2])
Outside: input size torch.Size([25, 5]) output_size torch.Size([25, 2])
In Model: input size torch.Size([25, 5]) output size torch.Size([25, 2])
Outside: input size torch.Size([25, 5]) output_size torch.Size([25, 2])
4-GPU result with batch_size = 25 (again de-interleaved for readability):
Let's use 4 GPUs!
In Model: input size torch.Size([7, 5]) output size torch.Size([7, 2])
In Model: input size torch.Size([7, 5]) output size torch.Size([7, 2])
In Model: input size torch.Size([7, 5]) output size torch.Size([7, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
Outside: input size torch.Size([25, 5]) output_size torch.Size([25, 2])
In Model: input size torch.Size([7, 5]) output size torch.Size([7, 2])
In Model: input size torch.Size([7, 5]) output size torch.Size([7, 2])
In Model: input size torch.Size([7, 5]) output size torch.Size([7, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
Outside: input size torch.Size([25, 5]) output_size torch.Size([25, 2])
In Model: input size torch.Size([7, 5]) output size torch.Size([7, 2])
In Model: input size torch.Size([7, 5]) output size torch.Size([7, 2])
In Model: input size torch.Size([7, 5]) output size torch.Size([7, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
Outside: input size torch.Size([25, 5]) output_size torch.Size([25, 2])
In Model: input size torch.Size([7, 5]) output size torch.Size([7, 2])
In Model: input size torch.Size([7, 5]) output size torch.Size([7, 2])
In Model: input size torch.Size([7, 5]) output size torch.Size([7, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
Outside: input size torch.Size([25, 5]) output_size torch.Size([25, 2])
My takeaway: nn.DataParallel splits each batch along dim 0 into near-equal chunks, one per GPU, and the per-GPU chunk sizes always sum to the batch size (e.g. a batch of 30 on 4 GPUs becomes 8 + 8 + 8 + 6).
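The even-split rule can be checked with a few lines of plain Python. The helper below is hypothetical (it is not part of the torch API); it simply mirrors the scatter behavior observed in the logs, where each GPU takes ceil(batch_size / num_gpus) samples and the last chunk absorbs whatever is left:

```python
import math

def dataparallel_split(batch_size, num_gpus):
    """Hypothetical helper: the chunk sizes DataParallel was observed
    to produce when scattering a batch across num_gpus devices."""
    chunk = math.ceil(batch_size / num_gpus)  # near-equal chunk size
    sizes = []
    remaining = batch_size
    while remaining > 0:
        take = min(chunk, remaining)  # last chunk may be smaller
        sizes.append(take)
        remaining -= take
    return sizes

print(dataparallel_split(30, 4))  # [8, 8, 8, 6]
print(dataparallel_split(25, 4))  # [7, 7, 7, 4]
print(dataparallel_split(10, 4))  # [3, 3, 3, 1]
```

These sizes match the logged splits above, including the ragged last batch of 10 (which still fans out to all four GPUs as 3 + 3 + 3 + 1).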
Notes:
1. The CPU result figure on the official tutorial page is wrong, probably a mis-taken screenshot; my disdain goes to the blogs that copied that figure verbatim.
2. This post covers the single-server, multi-GPU case; please don't confuse it with multi-machine training.
If you spot any mistakes, please point them out!