This article shows how to implement multi-GPU training in PyTorch, covering both a from-scratch manual implementation and a concise implementation based on ResNet-18. The complete code can be run directly on a machine with multiple GPUs.
1. Environment Setup and Imports
```python
import torch
from torch import nn
from torch.nn import functional as F
from d2l import torch as d2l
from torchvision import models
```
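Before distributing anything, it is worth confirming how many GPUs PyTorch can actually see. A quick check along the following lines (the exact output depends on your machine) avoids surprises later:

```python
# Sanity check of the available devices (output depends on your machine)
print(torch.__version__)
print('GPUs visible to PyTorch:', torch.cuda.device_count())
print('Devices d2l will use:', d2l.try_all_gpus())  # falls back to [cpu] if no GPU is found
```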
2. Distributing Parameters Across GPUs
Copy the model parameters to the specified device and enable gradient computation on the copies:
```python
def get_params(params, device):
    # Copy each parameter to the target device as a new leaf tensor
    new_params = [p.detach().clone().to(device) for p in params]
    # Enable gradient tracking on the copies
    for p in new_params:
        p.requires_grad_(True)
    return new_params
```
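A quick way to see what `get_params` produces (assuming at least one GPU is available): copy the parameters of a toy layer to the first device and inspect the device and gradient flag of one copy.

```python
# get_params demo on a toy layer (assumes at least one GPU; d2l.try_gpu falls back to CPU otherwise)
toy = nn.Linear(4, 2)
new_params = get_params(list(toy.parameters()), d2l.try_gpu(0))
print(new_params[1])                                   # the bias copy, now on the target device
print('requires_grad =', new_params[1].requires_grad)  # True
```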
3. Gradient Synchronization (AllReduce)
Sum the gradients from all GPUs and broadcast the result back to every device:
```python
def allreduce(data):
    # Accumulate the tensors from all GPUs onto the first GPU
    for i in range(1, len(data)):
        data[0][:] += data[i].to(data[0].device)
    # Broadcast the result back to all GPUs (in place, so .grad tensors are updated)
    for i in range(1, len(data)):
        data[i][:] = data[0].to(data[i].device)
```
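To see what `allreduce` does, the toy example below (assuming at least two GPUs) creates a different tensor on each device and checks that both hold the same sum afterwards:

```python
# Toy allreduce demo; assumes at least two GPUs (d2l.try_gpu falls back to CPU otherwise)
devices = [d2l.try_gpu(0), d2l.try_gpu(1)]
data = [torch.ones((1, 2), device=devices[i]) * (i + 1) for i in range(2)]
print('before allreduce:', data[0], data[1])
allreduce(data)
print('after allreduce: ', data[0], data[1])  # both should now be [[3., 3.]]
```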
4. Splitting the Data
Distribute a minibatch evenly across multiple GPUs:
```python
def split_batch(x, y, devices):
    # Split x and y into one shard per device
    assert x.shape[0] == y.shape[0]  # features and labels must contain the same number of examples
    return (nn.parallel.scatter(x, devices),
            nn.parallel.scatter(y, devices))
```
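A small check of the sharding behaviour (again assuming two GPUs): a batch of 4 samples should come back as two shards of 2 samples, each on its own device.

```python
# Sharding demo; assumes two GPUs
devices = [d2l.try_gpu(0), d2l.try_gpu(1)]
x = torch.randn(4, 1, 28, 28)   # a fake Fashion-MNIST minibatch
y = torch.arange(4)
x_shards, y_shards = split_batch(x, y, devices)
for xs, ys in zip(x_shards, y_shards):
    print(xs.shape, ys.shape, xs.device)  # e.g. torch.Size([2, 1, 28, 28]) torch.Size([2]) cuda:0
```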
5. Training a Single Minibatch
The core logic of multi-GPU training. Since each GPU holds its own copy of the parameters, the example network (a LeNet-style CNN) is written as a function of its parameter list, so that a specific device's copies can be passed in explicitly:
```python
# Per-example loss, so that .sum() below gives the total loss of each shard
loss = nn.CrossEntropyLoss(reduction='none')

# The example network: a LeNet-style CNN expressed as a function of its parameters
def net(x, params):
    h = F.relu(F.conv2d(x, params[0], params[1]))
    h = F.max_pool2d(h, kernel_size=2, stride=2)
    h = F.relu(F.conv2d(h, params[2], params[3]))
    h = F.max_pool2d(h, kernel_size=2, stride=2)
    h = h.flatten(start_dim=1)
    h = F.relu(F.linear(h, params[4], params[5]))
    h = F.relu(F.linear(h, params[6], params[7]))
    return F.linear(h, params[8], params[9])

def train_batch(x, y, device_params, devices, lr):
    x_shards, y_shards = split_batch(x, y, devices)  # shard the data
    # Compute the loss of each shard on its own GPU
    ls = [loss(net(x_shard, params), y_shard).sum()
          for x_shard, y_shard, params in zip(x_shards, y_shards, device_params)]
    # Backpropagation runs independently on each GPU
    for l in ls:
        l.backward()
    # Sum the gradients over all GPUs and broadcast the result
    with torch.no_grad():
        for i in range(len(device_params[0])):
            allreduce([params[i].grad for params in device_params])
    # Update every GPU's parameter copy with mini-batch SGD
    for params in device_params:
        d2l.sgd(params, lr, x.shape[0])
```
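Because `allreduce` leaves identical gradients on every device and each copy receives the same SGD update, the parameter copies should stay in sync. A small sanity check along these lines can be run after any training step (the helper name `check_params_in_sync` is just for illustration):

```python
# Illustrative check: after a training step, every GPU should hold identical parameters
def check_params_in_sync(device_params):
    reference = [p.detach().cpu() for p in device_params[0]]
    for params in device_params[1:]:
        for p_ref, p in zip(reference, params):
            assert torch.allclose(p_ref, p.detach().cpu()), 'parameter copies diverged'
    print('all parameter copies are in sync')
```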
6. The Full Training Routine
The parameters are initialized from an `nn.Sequential` with the same architecture as `net()` above and then copied to every GPU:

```python
def train(num_gpus, batch_size, lr):
    train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size)
    devices = [d2l.try_gpu(i) for i in range(num_gpus)]
    # Initialize the model parameters (an nn.Sequential mirroring net() above)
    init_net = nn.Sequential(
        nn.Conv2d(1, 6, kernel_size=5), nn.ReLU(),
        nn.MaxPool2d(kernel_size=2, stride=2),
        nn.Conv2d(6, 16, kernel_size=5), nn.ReLU(),
        nn.MaxPool2d(kernel_size=2, stride=2),
        nn.Flatten(),
        nn.Linear(16 * 4 * 4, 120), nn.ReLU(),
        nn.Linear(120, 84), nn.ReLU(),
        nn.Linear(84, 10))
    params = list(init_net.parameters())
    # Copy the parameters to every GPU
    device_params = [get_params(params, d) for d in devices]
    # Training loop
    for epoch in range(10):
        for X, y in train_iter:
            train_batch(X, y, device_params, devices, lr)
```
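To monitor progress, d2l's GPU accuracy helper can be called on the first GPU's parameter copy; a minimal sketch, meant to sit at the end of the epoch loop above:

```python
# Illustrative: evaluate on the test set using GPU 0's parameter copy (place inside the epoch loop)
test_acc = d2l.evaluate_accuracy_gpu(
    lambda x: net(x, device_params[0]), test_iter, devices[0])
print(f'epoch {epoch + 1}, test acc {test_acc:.3f}')
```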
7. Concise Implementation: A Modified ResNet-18
```python
def resnet18(num_classes, in_channels=1):
    def resnet_block(in_channels, out_channels, num_residuals, first_block=False):
        blk = []
        for i in range(num_residuals):
            if i == 0 and not first_block:
                # Halve the spatial size and change the channel count; the skip
                # connection needs a 1x1 convolution to match
                blk.append(d2l.Residual(in_channels, out_channels,
                                        use_1x1conv=True, strides=2))
            else:
                blk.append(d2l.Residual(out_channels, out_channels))
        return nn.Sequential(*blk)

    # The full network
    net = nn.Sequential(
        nn.Conv2d(in_channels, 64, kernel_size=7, stride=2, padding=3),
        nn.BatchNorm2d(64), nn.ReLU(),
        nn.MaxPool2d(kernel_size=3, stride=2, padding=1))
    net.add_module("resnet_block1", resnet_block(64, 64, 2, first_block=True))
    net.add_module("resnet_block2", resnet_block(64, 128, 2))
    net.add_module("resnet_block3", resnet_block(128, 256, 2))
    net.add_module("resnet_block4", resnet_block(256, 512, 2))
    net.add_module("global_avg_pool", nn.AdaptiveAvgPool2d((1, 1)))
    net.add_module("flatten", nn.Flatten())
    net.add_module("fc", nn.Linear(512, num_classes))
    return net

# Wrap the model with DataParallel (its parameters must live on device_ids[0])
net = nn.DataParallel(resnet18(10).cuda(), device_ids=[0, 1])
```
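The wrapped model is then trained like any single-GPU model: `DataParallel` scatters each input batch across the listed devices and gathers the outputs back on GPU 0. Below is a minimal training-loop sketch under that assumption (the helper name `train_concise` and the hyperparameters are illustrative):

```python
# Minimal training sketch for the DataParallel-wrapped ResNet-18
def train_concise(net, num_epochs=10, batch_size=256, lr=0.1):
    train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size)
    device = torch.device('cuda:0')            # DataParallel gathers outputs on the first device
    loss = nn.CrossEntropyLoss()
    trainer = torch.optim.SGD(net.parameters(), lr=lr)
    for epoch in range(num_epochs):
        net.train()
        for X, y in train_iter:
            X, y = X.to(device), y.to(device)  # the batch is scattered to all GPUs automatically
            trainer.zero_grad()
            l = loss(net(X), y)
            l.backward()
            trainer.step()
        test_acc = d2l.evaluate_accuracy_gpu(net, test_iter, device)
        print(f'epoch {epoch + 1}, test acc {test_acc:.3f}')
```

Calling `train_concise(net)` with the DataParallel-wrapped model above runs the whole loop.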
8. Running the Examples
```python
if __name__ == "__main__":
    # From-scratch implementation
    train(num_gpus=2, batch_size=256, lr=0.1)

    # Concise implementation
    model = resnet18(10).cuda()                        # parameters must live on cuda:0
    model = nn.DataParallel(model, device_ids=[0, 1])  # replicate the model across both GPUs
```
Key Points
- Data parallelism: the data and copies of the model parameters are distributed across multiple GPUs; each GPU computes its gradients independently, and the gradients are then synchronized.
- Gradient synchronization: an AllReduce operation keeps the parameter copies on all GPUs consistent.
- Device management: `nn.parallel.scatter` handles the data sharding automatically.
- Concise implementation: use `nn.DataParallel`, or `DistributedDataParallel` (the option PyTorch itself recommends for multi-GPU training); see the sketch below.
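For completeness, here is a heavily simplified `DistributedDataParallel` sketch that is not part of the walkthrough above. It assumes the `resnet18` defined in Section 7 and a launch with `torchrun --nproc_per_node=2 train_ddp.py` (the script name is arbitrary):

```python
import os
import torch
from torch import nn
from torch.utils.data import DataLoader, DistributedSampler
from torchvision import datasets, transforms

def main():
    # torchrun sets LOCAL_RANK for each spawned process (one process per GPU)
    torch.distributed.init_process_group(backend='nccl')
    local_rank = int(os.environ['LOCAL_RANK'])
    torch.cuda.set_device(local_rank)

    model = resnet18(10).cuda(local_rank)  # resnet18 as defined in Section 7
    model = nn.parallel.DistributedDataParallel(model, device_ids=[local_rank])

    dataset = datasets.FashionMNIST(root='./data', train=True, download=True,
                                    transform=transforms.ToTensor())
    sampler = DistributedSampler(dataset)  # gives each process its own shard of the data
    loader = DataLoader(dataset, batch_size=128, sampler=sampler)

    loss_fn = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    for epoch in range(10):
        sampler.set_epoch(epoch)               # reshuffle the shards every epoch
        for X, y in loader:
            X, y = X.cuda(local_rank), y.cuda(local_rank)
            optimizer.zero_grad()
            loss_fn(model(X), y).backward()    # DDP all-reduces gradients during backward
            optimizer.step()

    torch.distributed.destroy_process_group()

if __name__ == '__main__':
    main()
```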
The full workflow above targets a machine with at least two GPUs; PyTorch 1.8 or later is recommended. If you run into problems, feel free to leave a comment and discuss!
Hopefully this article helps you get up to speed quickly with multi-GPU training in PyTorch!