Basic Introduction
The model featured in today's hands-on exercise is one of the best-known models in computer vision: ResNet. ResNet is a residual network architecture. It introduces the idea of "residual learning" to overcome the training difficulties that arise as networks grow deeper, which makes it possible to train much deeper architectures; many of today's very deep models are influenced by it to some degree. Today's main task is transfer learning with ResNet50. In transfer learning, instead of training a model from scratch, we take a model pretrained on a very large dataset, use its weights as the initialization, and then apply it to a specific task. The model can still be trained on that task, but some of its parameters are frozen: typically only the classifier head is trained, while the feature-extraction layers are left untouched. The details of today's exercise follow.
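The core of "residual learning" is that each block outputs F(x) + x rather than F(x) alone, so the block only has to learn the residual between its input and the desired output. A minimal numpy sketch of this idea (a toy transform, not the actual ResNet block):

```python
import numpy as np

def residual_block(x, transform):
    """Residual learning: output F(x) + x, so the block only
    needs to learn the residual F(x) = y - x."""
    return transform(x) + x

# A toy "layer" whose weights are near zero: the block then behaves
# almost like an identity mapping, which is what makes very deep
# stacks of such blocks easy to optimize.
x = np.array([1.0, 2.0, 3.0])
near_zero_transform = lambda v: 0.01 * v
out = residual_block(x, near_zero_transform)
print(out)  # [1.01 2.02 3.03] — close to the input itself
```

When the optimal mapping is close to the identity, the block only has to push F(x) toward zero, which is far easier than learning the identity from scratch through a deep stack of layers.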
Dataset Preparation
The dataset used here is a wolf-vs-dog dataset extracted from ImageNet, with roughly 120 training images and 30 validation images per class. It can be downloaded directly via Huawei Cloud OBS and the related APIs, and then loaded with the APIs provided by MindSpore. Since the data also needs some preprocessing before being fed to the model, the loading and preprocessing steps can be wrapped in a single function:
def create_dataset_canidae(dataset_path, usage):
    """Load and preprocess the dataset"""
    data_set = ds.ImageFolderDataset(dataset_path,
                                     num_parallel_workers=workers,
                                     shuffle=True)
    # Data augmentation parameters
    mean = [0.485 * 255, 0.456 * 255, 0.406 * 255]
    std = [0.229 * 255, 0.224 * 255, 0.225 * 255]
    scale = 32

    if usage == "train":
        # Define map operations for the training dataset
        trans = [
            vision.RandomCropDecodeResize(size=image_size, scale=(0.08, 1.0), ratio=(0.75, 1.333)),
            vision.RandomHorizontalFlip(prob=0.5),
            vision.Normalize(mean=mean, std=std),
            vision.HWC2CHW()
        ]
    else:
        # Define map operations for the inference dataset
        trans = [
            vision.Decode(),
            vision.Resize(image_size + scale),
            vision.CenterCrop(image_size),
            vision.Normalize(mean=mean, std=std),
            vision.HWC2CHW()
        ]

    # Apply the map operations
    data_set = data_set.map(operations=trans,
                            input_columns='image',
                            num_parallel_workers=workers)
    # Batch the dataset
    data_set = data_set.batch(batch_size)
    return data_set
With the function above, the dataset can be loaded conveniently, whether for training or for visualization.
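One detail worth noting in the data-loading function: the ImageNet mean and std are multiplied by 255 because Normalize here operates on pixel values in the [0, 255] range. Scaling the statistics by 255 is mathematically equivalent to first rescaling the image to [0, 1] and then normalizing with the usual statistics. A quick numpy check of this equivalence (independent of MindSpore, using an arbitrary pixel value):

```python
import numpy as np

mean = np.array([0.485, 0.456, 0.406])
std = np.array([0.229, 0.224, 0.225])

# An arbitrary RGB pixel with values in [0, 255]
pixel = np.array([120.0, 60.0, 200.0])

# Path 1: normalize the [0, 255] values with statistics scaled by 255
a = (pixel - mean * 255) / (std * 255)
# Path 2: rescale to [0, 1] first, then use the usual statistics
b = (pixel / 255 - mean) / std

print(np.allclose(a, b))  # True
```

Both paths produce the same normalized tensor, so scaling the statistics simply saves an extra rescaling pass over the image.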
Model Construction
With the dataset ready, the next step is building the model. ResNet50 is a very common architecture; its structure and implementations for the various AI frameworks are easy to find, and MindSpore provides an official implementation, which we use directly:
class ResNet(nn.Cell):
    def __init__(self, block: Type[Union[ResidualBlockBase, ResidualBlock]],
                 layer_nums: List[int], num_classes: int, input_channel: int) -> None:
        super(ResNet, self).__init__()
        self.relu = nn.ReLU()
        # First convolution layer: 3 input channels (RGB image), 64 output channels
        self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, weight_init=weight_init)
        self.norm = nn.BatchNorm2d(64)
        # Max pooling layer to reduce the spatial size
        self.max_pool = nn.MaxPool2d(kernel_size=3, stride=2, pad_mode='same')
        # Residual block stages
        self.layer1 = make_layer(64, block, 64, layer_nums[0])
        self.layer2 = make_layer(64 * block.expansion, block, 128, layer_nums[1], stride=2)
        self.layer3 = make_layer(128 * block.expansion, block, 256, layer_nums[2], stride=2)
        self.layer4 = make_layer(256 * block.expansion, block, 512, layer_nums[3], stride=2)
        # Average pooling layer
        self.avg_pool = nn.AvgPool2d()
        # Flatten layer
        self.flatten = nn.Flatten()
        # Fully connected layer
        self.fc = nn.Dense(in_channels=input_channel, out_channels=num_classes)

    def construct(self, x):
        x = self.conv1(x)
        x = self.norm(x)
        x = self.relu(x)
        x = self.max_pool(x)
        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)
        x = self.avg_pool(x)
        x = self.flatten(x)
        x = self.fc(x)
        return x


def _resnet(model_url: str, block: Type[Union[ResidualBlockBase, ResidualBlock]],
            layers: List[int], num_classes: int, pretrained: bool, pretrained_ckpt: str,
            input_channel: int):
    model = ResNet(block, layers, num_classes, input_channel)
    if pretrained:
        # Download and load the pretrained checkpoint
        download(url=model_url, path=pretrained_ckpt, replace=True)
        param_dict = load_checkpoint(pretrained_ckpt)
        load_param_into_net(model, param_dict)
    return model


def resnet50(num_classes: int = 1000, pretrained: bool = False):
    """ResNet50 model"""
    resnet50_url = "https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/models/application/resnet50_224_new.ckpt"
    resnet50_ckpt = "./LoadPretrainedModel/resnet50_224_new.ckpt"
    return _resnet(resnet50_url, ResidualBlock, [3, 4, 6, 3], num_classes,
                   pretrained, resnet50_ckpt, 2048)
Training the Model
We train with fixed features, which requires freezing every layer except the last one. The last layer is simply the classifier head; the layers before it extract features. In MindSpore, a parameter can be frozen by setting requires_grad = False so that no gradient is computed for it during backpropagation. The code is as follows:
import mindspore as ms
import matplotlib.pyplot as plt
import os
import time

net_work = resnet50(pretrained=True)

# Size of the fully connected layer's input
in_channels = net_work.fc.in_channels
# The number of output channels equals the number of classes (2: wolves and dogs)
head = nn.Dense(in_channels, 2)
# Reset the fully connected layer
net_work.fc = head

# Average pooling layer with kernel size 7
avg_pool = nn.AvgPool2d(kernel_size=7)
# Reset the average pooling layer
net_work.avg_pool = avg_pool

# Freeze all parameters except those of the last layer
for param in net_work.get_parameters():
    if param.name not in ["fc.weight", "fc.bias"]:
        param.requires_grad = False

# Define the optimizer and loss function
opt = nn.Momentum(params=net_work.trainable_params(), learning_rate=lr, momentum=0.5)
loss_fn = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')


def forward_fn(inputs, targets):
    logits = net_work(inputs)
    loss = loss_fn(logits, targets)
    return loss


grad_fn = ms.value_and_grad(forward_fn, None, opt.parameters)


def train_step(inputs, targets):
    loss, grads = grad_fn(inputs, targets)
    opt(grads)
    return loss


# Instantiate the model
model1 = train.Model(net_work, loss_fn, opt, metrics={"Accuracy": train.Accuracy()})
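The freezing step above is simply a filter on parameter names: everything except fc.weight and fc.bias is excluded from gradient computation, so the optimizer (which receives only trainable_params()) ever updates just the classifier head. The effect can be sketched framework-independently (the Param class and parameter names here are hypothetical stand-ins for illustration):

```python
# A minimal stand-in for a network's named parameters (hypothetical names).
class Param:
    def __init__(self, name):
        self.name = name
        self.requires_grad = True

params = [Param(n) for n in [
    "conv1.weight", "layer1.0.conv1.weight", "layer4.2.conv3.weight",
    "fc.weight", "fc.bias",
]]

# The same filtering rule as in the MindSpore code above
for p in params:
    if p.name not in ["fc.weight", "fc.bias"]:
        p.requires_grad = False

trainable = [p.name for p in params if p.requires_grad]
print(trainable)  # ['fc.weight', 'fc.bias']
```

Only the two classifier parameters remain trainable; the convolutional backbone keeps its pretrained weights unchanged throughout training.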
Once everything is ready, training can begin. Thanks to the pretrained weights, training is several times faster than training from scratch; after just 5 epochs the results were already very good:
Visualizing Model Predictions
With a trained model in hand, we naturally want to see how well it performs. Beyond the evaluation metrics, the most intuitive check is to actually run the model, and since this is an image classification task, we visualize its predictions. The complete code for this is:
def visualize_model(best_ckpt_path, val_ds):
    net = resnet50()
    # Size of the fully connected layer's input
    in_channels = net.fc.in_channels
    # The number of output channels equals the number of classes (2: wolves and dogs)
    head = nn.Dense(in_channels, 2)
    # Reset the fully connected layer
    net.fc = head
    # Average pooling layer with kernel size 7
    avg_pool = nn.AvgPool2d(kernel_size=7)
    # Reset the average pooling layer
    net.avg_pool = avg_pool
    # Load the trained parameters
    param_dict = ms.load_checkpoint(best_ckpt_path)
    ms.load_param_into_net(net, param_dict)
    model = train.Model(net)
    # Take one batch from the validation set
    data = next(val_ds.create_dict_iterator())
    images = data["image"].asnumpy()
    labels = data["label"].asnumpy()
    class_name = {0: "dogs", 1: "wolves"}
    # Predict the image classes
    output = model.predict(ms.Tensor(data['image']))
    pred = np.argmax(output.asnumpy(), axis=1)
    # Show the images with their predicted labels
    plt.figure(figsize=(5, 5))
    for i in range(4):
        plt.subplot(2, 2, i + 1)
        # Blue title for a correct prediction, red for an incorrect one
        color = 'blue' if pred[i] == labels[i] else 'red'
        plt.title('predict:{}'.format(class_name[pred[i]]), color=color)
        picture_show = np.transpose(images[i], (1, 2, 0))
        mean = np.array([0.485, 0.456, 0.406])
        std = np.array([0.229, 0.224, 0.225])
        picture_show = std * picture_show + mean
        picture_show = np.clip(picture_show, 0, 1)
        plt.imshow(picture_show)
        plt.axis('off')
    plt.show()
Calling this function produces the visualization results shown below; the predictions turn out to be quite accurate.
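The `std * picture_show + mean` step inside visualize_model is just the inverse of the Normalize transform: multiply by std, add back the mean, and clip to [0, 1] so matplotlib can display the image. A quick numpy round-trip check of this inversion (using an arbitrary pixel value for illustration):

```python
import numpy as np

mean = np.array([0.485, 0.456, 0.406])
std = np.array([0.229, 0.224, 0.225])

# An arbitrary RGB pixel already rescaled to [0, 1]
original = np.array([0.8, 0.5, 0.2])

# Forward: what Normalize does to the pixel
normalized = (original - mean) / std
# Inverse: what visualize_model does before plt.imshow
restored = np.clip(std * normalized + mean, 0, 1)

print(np.allclose(restored, original))  # True
```

Without this inversion the normalized values would fall well outside [0, 1] and the displayed images would look washed out or clipped.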