Table of Contents
Deep Learning Week 18: Learning Residual Networks and the ResNet50V2 Algorithm
I. Preface
II. My Environment
III. Paper Interpretation
3.1 Pre-Activation Design
3.2 Residual Unit Structure
IV. Model Reproduction
4.1 Residual Block
4.2 Stacking Residual Blocks
4.3 ResNet50V2 Architecture Reproduction
I. Preface
- 🍨 This post is a learning log from the 🔗 365-day deep learning training camp
- 🍖 Original author: K同學啊 | tutoring and custom projects available
With finals approaching, I was held up by all sorts of things this week and my studying suffered, but I still want to keep up the check-ins and show my learning progress. I may take a week off to sort things out and then revisit this week's material. This week is therefore mainly code reproduction; the deeper study, including dataset validation and a PyTorch reproduction, will come in the next two weeks.
II. My Environment
- OS: Windows 10
- Language environment: Python 3.8.0
- IDE: PyCharm 2023.2.3
- Deep learning framework: TensorFlow
- GPU and VRAM: RTX 3060, 8 GB
III. Paper Interpretation
I spent the week reading through Kaiming He's paper (Identity Mappings in Deep Residual Networks, the ResNetV2 paper). Given the time pressure, I can only offer my own understanding, which may contain mistakes; corrections are welcome.
3.1 Pre-Activation Design
- ResNet: uses the traditional post-activation design, where batch normalization (BN) and the ReLU activation sit after each convolution layer.
- ResNetV2: introduces the pre-activation design, moving BN and ReLU to before each convolution layer. This arrangement, called "pre-activation", changes the information flow and the gradient flow and makes optimization easier (see the sketch below).
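To make the two orderings concrete, here is a minimal sketch of my own (not from the original post; the function names are placeholders) of what each design looks like in Keras:

import tensorflow.keras.layers as layers

def post_activation(x, filters):
    # ResNet (v1) style: Conv -> BN -> ReLU
    x = layers.Conv2D(filters, 3, padding='same')(x)
    x = layers.BatchNormalization()(x)
    return layers.Activation('relu')(x)

def pre_activation(x, filters):
    # ResNetV2 style: BN -> ReLU -> Conv
    x = layers.BatchNormalization()(x)
    x = layers.Activation('relu')(x)
    return layers.Conv2D(filters, 3, padding='same')(x)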
The figure in the paper compares five designs: (a) the original residual unit, (b) BN after the addition, (c) ReLU before the addition, (d) ReLU-only pre-activation, and (e) full pre-activation. In other words, Kaiming He tried four new arrangements, and the best result is (e) full pre-activation, with (a) original coming next.
Since the concept is abstract, I asked Kimi.ai to explain it and tried to understand it through a relay-race analogy (due to time constraints, a lot of this week's content came from asking AI; I don't think AI is necessarily accurate, so it still needs careful verification):
- Original method: after each runner finishes, we give them an encouraging clap (the ReLU activation) to lift their spirits, then they hand the baton to the next runner, who does some warm-up exercises (batch normalization, BN) before taking it.
- First variation: this time the runner does the warm-up after finishing and then gets the clap. The handover becomes a bit chaotic, and performance is worse than before.
- Second variation: we clap for the runner before they take the baton, so they may be more motivated while running, but the warm-up is insufficient and the result is mediocre.
- Third variation: we only clap and skip the warm-up. Performance is about the same as the original, but without the warm-up their potential is not fully realized.
- Fourth variation: the runner does the warm-up and gets the clap before taking the baton. Prepared and encouraged, they run faster, and this works best.
So we find that pre-activation simplifies the information flow and makes the network easier to optimize.
3.2 Residual Unit Structure
In deep learning, adding more layers should in theory make a network perform better, since more capacity is available to learn complex features. In practice, however, very deep networks become hard to train and performance actually degrades. The residual unit was born to address this.
A residual unit consists of an Identity Path and a Residual Function.
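In formulas (following the paper's notation, with identity mappings on the shortcut and after the addition), a residual unit computes

$$x_{l+1} = x_l + \mathcal{F}(x_l, W_l)$$

where the first term $x_l$ is the Identity Path and $\mathcal{F}(x_l, W_l)$ is the Residual Function learned by the convolution layers.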
The Identity Path simply passes the input straight through to the unit's output without any processing, like a "shortcut" or "skip connection". As shown in the figure below, Kaiming He proposed six different ways of using the shortcut in a residual network and studied how each affects information propagation.
They are: (a) the original identity shortcut; (b) a constant 0.5 scaling factor that attenuates the signal; (c), (d), and (e), which I don't fully understand (in the paper they are two gating variants and a 1x1 convolution shortcut); and (f), which applies dropout to randomly discard some of the information. I think the main goals are to prevent overfitting and to make the model more efficient. Their results are as follows:
The original structure, (a) original, turns out to be the best; in other words, the identity mapping is the best shortcut.
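The paper's explanation (as I understand it) is that identity shortcuts let the signal propagate directly across any number of units. For any deeper unit $L$ and shallower unit $l$,

$$x_L = x_l + \sum_{i=l}^{L-1} \mathcal{F}(x_i, W_i),$$

and by the chain rule the gradient of the loss $\mathcal{E}$,

$$\frac{\partial \mathcal{E}}{\partial x_l} = \frac{\partial \mathcal{E}}{\partial x_L}\left(1 + \frac{\partial}{\partial x_l}\sum_{i=l}^{L-1}\mathcal{F}(x_i, W_i)\right),$$

always contains a direct "1" term that is unlikely to vanish. Any scaling, gating, or dropout on the shortcut multiplies into this term and blocks the clean propagation, which is why those variants do worse.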
IV. Model Reproduction
(Because I have been so busy recently, I copied and pasted this part of the code directly. That is not good practice, and I will fix it as soon as possible!!)
- Official API call
tf.keras.applications.resnet_v2.ResNet50V2(
    include_top=True,
    weights='imagenet',
    input_tensor=None,
    input_shape=None,
    pooling=None,
    classes=1000,
    classifier_activation='softmax'
)
# ResNet50V2, ResNet101V2 and ResNet152V2 are built in exactly the same way;
# they differ only in the number of stacked Residual Blocks.

import tensorflow as tf
import tensorflow.keras.layers as layers
from tensorflow.keras.models import Model
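As an aside, here is a minimal hedged sketch of my own (not from the original post; the class count of 10 is a placeholder) showing how the official build is commonly used as a frozen feature extractor:

base = tf.keras.applications.resnet_v2.ResNet50V2(
    include_top=False, weights='imagenet', input_shape=(224, 224, 3))
base.trainable = False  # freeze the pretrained backbone
x = layers.GlobalAveragePooling2D()(base.output)
outputs = layers.Dense(10, activation='softmax')(x)  # 10 classes is a placeholder
model = Model(base.input, outputs)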
4.1 Residual Block
"""
殘差塊Arguments:x: 輸入張量filters: integer, filters of the bottleneck layer.kernel_size: 默認是3, kernel size of the bottleneck layer.stride: default 1, stride of the first layer.conv_shortcut: default False, use convolution shortcut if True,otherwise identity shortcut.name: string, block label.Returns:Output tensor for the residual block.
"""
def block2(x, filters, kernel_size=3, stride=1, conv_shortcut=False, name=None):preact = layers.BatchNormalization(name=name + '_preact_bn')(x)preact = layers.Activation('relu', name=name + '_preact_relu')(preact)if conv_shortcut:shortcut = layers.Conv2D(4 * filters, 1, strides=stride, name=name + '_0_conv')(preact)else:shortcut = layers.MaxPooling2D(1, strides=stride)(x) if stride > 1 else xx = layers.Conv2D(filters, 1, strides=1, use_bias=False, name=name + '_1_conv')(preact)x = layers.BatchNormalization(name=name + '_1_bn')(x)x = layers.Activation('relu', name=name + '_1_relu')(x)x = layers.ZeroPadding2D(padding=((1, 1), (1, 1)), name=name + '_2_pad')(x)x = layers.Conv2D(filters,kernel_size,strides=stride,use_bias=False,name=name + '_2_conv')(x)x = layers.BatchNormalization(name=name + '_2_bn')(x)x = layers.Activation('relu', name=name + '_2_relu')(x)x = layers.Conv2D(4 * filters, 1, name=name + '_3_conv')(x)x = layers.Add(name=name + '_out')([shortcut, x])return x
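A quick shape check I added myself (not part of the original post) to confirm the block behaves as expected:

inputs = layers.Input(shape=(56, 56, 64))
outputs = block2(inputs, filters=64, conv_shortcut=True, name='test')
print(Model(inputs, outputs).output_shape)  # expected: (None, 56, 56, 256)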
For comparison, this is how the shortcut is built in ResNet50 (v1) versus ResNet50V2:

# ResNet50 (v1): the shortcut branches off the raw input x and gets its own BN
if conv_shortcut:
    shortcut = layers.Conv2D(4 * filters, 1, strides=stride, name=name + '_0_conv')(x)
    shortcut = layers.BatchNormalization(axis=bn_axis, epsilon=1.001e-5, name=name + '_0_bn')(shortcut)
else:
    shortcut = x

# ResNet50V2: the difference is quite clear; the convolution shortcut
# branches off the pre-activated tensor instead
if conv_shortcut:
    shortcut = layers.Conv2D(4 * filters, 1, strides=stride, name=name + '_0_conv')(preact)
else:
    # note the extra conditional expression at the end
    shortcut = layers.MaxPooling2D(1, strides=stride)(x) if stride > 1 else x
4.2 Stacking Residual Blocks
def stack2(x, filters, blocks, stride1=2, name=None):
    # The first block uses a convolution (projection) shortcut
    x = block2(x, filters, conv_shortcut=True, name=name + '_block1')
    # The middle blocks use identity shortcuts
    for i in range(2, blocks):
        x = block2(x, filters, name=name + '_block' + str(i))
    # The last block downsamples with stride1
    x = block2(x, filters, stride=stride1, name=name + '_block' + str(blocks))
    return x
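Another small self-check of my own (hypothetical, not from the original post): stacking three blocks for the conv2 stage should expand the channels to 256 and let the final stride-2 block halve the spatial size:

inputs = layers.Input(shape=(56, 56, 64))
outputs = stack2(inputs, 64, 3, name='conv2')
print(Model(inputs, outputs).output_shape)  # expected: (None, 28, 28, 256)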
4.3 ResNet50V2 Architecture Reproduction
def ResNet50V2(include_top=True,          # whether to include the fully connected layer at the top of the network
               preact=True,               # whether to use pre-activation
               use_bias=True,             # whether the convolution layers use a bias
               weights='imagenet',
               input_tensor=None,         # optional Keras tensor to use as the model's image input
               input_shape=None,
               pooling=None,
               classes=1000,              # optional number of classes to classify images into
               classifier_activation='softmax'):  # activation function of the classification layer
    img_input = layers.Input(shape=input_shape)

    x = layers.ZeroPadding2D(padding=((3, 3), (3, 3)), name='conv1_pad')(img_input)
    x = layers.Conv2D(64, 7, strides=2, use_bias=use_bias, name='conv1_conv')(x)

    if not preact:
        x = layers.BatchNormalization(name='conv1_bn')(x)
        x = layers.Activation('relu', name='conv1_relu')(x)

    x = layers.ZeroPadding2D(padding=((1, 1), (1, 1)), name='pool1_pad')(x)
    x = layers.MaxPooling2D(3, strides=2, name='pool1_pool')(x)

    x = stack2(x, 64, 3, name='conv2')
    x = stack2(x, 128, 4, name='conv3')
    x = stack2(x, 256, 6, name='conv4')
    x = stack2(x, 512, 3, stride1=1, name='conv5')

    if preact:
        x = layers.BatchNormalization(name='post_bn')(x)
        x = layers.Activation('relu', name='post_relu')(x)

    if include_top:
        x = layers.GlobalAveragePooling2D(name='avg_pool')(x)
        x = layers.Dense(classes, activation=classifier_activation, name='predictions')(x)
    else:
        if pooling == 'avg':
            # GlobalAveragePooling2D averages each channel over its spatial
            # positions, so the width/height dimensions disappear and only
            # (batch, channels) remain; you can think of the feature maps
            # as being turned into single-pixel images.
            x = layers.GlobalAveragePooling2D(name='avg_pool')(x)
        elif pooling == 'max':
            x = layers.GlobalMaxPooling2D(name='max_pool')(x)

    model = Model(img_input, x)
    return model
if __name__ == '__main__':
    model = ResNet50V2(input_shape=(224, 224, 3))
    model.summary()

The tail of the printed summary (earlier layers omitted here):
Model: "model"
__________________________________________________________________________________________________
conv5_block1_1_relu (Activation (None, 7, 7, 512) 0 conv5_block1_1_bn[0][0]
__________________________________________________________________________________________________
conv5_block1_2_pad (ZeroPadding (None, 9, 9, 512) 0 conv5_block1_1_relu[0][0]
__________________________________________________________________________________________________
conv5_block1_2_conv (Conv2D) (None, 7, 7, 512) 2359296 conv5_block1_2_pad[0][0]
__________________________________________________________________________________________________
conv5_block1_2_bn (BatchNormali (None, 7, 7, 512) 2048 conv5_block1_2_conv[0][0]
__________________________________________________________________________________________________
conv5_block1_2_relu (Activation (None, 7, 7, 512) 0 conv5_block1_2_bn[0][0]
__________________________________________________________________________________________________
conv5_block1_0_conv (Conv2D) (None, 7, 7, 2048) 2099200 conv5_block1_preact_relu[0][0]
__________________________________________________________________________________________________
conv5_block1_3_conv (Conv2D) (None, 7, 7, 2048) 1050624 conv5_block1_2_relu[0][0]
__________________________________________________________________________________________________
conv5_block1_out (Add) (None, 7, 7, 2048) 0 conv5_block1_0_conv[0][0] conv5_block1_3_conv[0][0]
__________________________________________________________________________________________________
conv5_block2_preact_bn (BatchNo (None, 7, 7, 2048) 8192 conv5_block1_out[0][0]
__________________________________________________________________________________________________
conv5_block2_preact_relu (Activ (None, 7, 7, 2048) 0 conv5_block2_preact_bn[0][0]
__________________________________________________________________________________________________
conv5_block2_1_conv (Conv2D) (None, 7, 7, 512) 1048576 conv5_block2_preact_relu[0][0]
__________________________________________________________________________________________________
conv5_block2_1_bn (BatchNormali (None, 7, 7, 512) 2048 conv5_block2_1_conv[0][0]
__________________________________________________________________________________________________
conv5_block2_1_relu (Activation (None, 7, 7, 512) 0 conv5_block2_1_bn[0][0]
__________________________________________________________________________________________________
conv5_block2_2_pad (ZeroPadding (None, 9, 9, 512) 0 conv5_block2_1_relu[0][0]
__________________________________________________________________________________________________
conv5_block2_2_conv (Conv2D) (None, 7, 7, 512) 2359296 conv5_block2_2_pad[0][0]
__________________________________________________________________________________________________
conv5_block2_2_bn (BatchNormali (None, 7, 7, 512) 2048 conv5_block2_2_conv[0][0]
__________________________________________________________________________________________________
conv5_block2_2_relu (Activation (None, 7, 7, 512) 0 conv5_block2_2_bn[0][0]
__________________________________________________________________________________________________
conv5_block2_3_conv (Conv2D) (None, 7, 7, 2048) 1050624 conv5_block2_2_relu[0][0]
__________________________________________________________________________________________________
conv5_block2_out (Add) (None, 7, 7, 2048) 0 conv5_block1_out[0][0] conv5_block2_3_conv[0][0]
__________________________________________________________________________________________________
conv5_block3_preact_bn (BatchNo (None, 7, 7, 2048) 8192 conv5_block2_out[0][0]
__________________________________________________________________________________________________
conv5_block3_preact_relu (Activ (None, 7, 7, 2048) 0 conv5_block3_preact_bn[0][0]
__________________________________________________________________________________________________
conv5_block3_1_conv (Conv2D) (None, 7, 7, 512) 1048576 conv5_block3_preact_relu[0][0]
__________________________________________________________________________________________________
conv5_block3_1_bn (BatchNormali (None, 7, 7, 512) 2048 conv5_block3_1_conv[0][0]
__________________________________________________________________________________________________
conv5_block3_1_relu (Activation (None, 7, 7, 512) 0 conv5_block3_1_bn[0][0]
__________________________________________________________________________________________________
conv5_block3_2_pad (ZeroPadding (None, 9, 9, 512) 0 conv5_block3_1_relu[0][0]
__________________________________________________________________________________________________
conv5_block3_2_conv (Conv2D) (None, 7, 7, 512) 2359296 conv5_block3_2_pad[0][0]
__________________________________________________________________________________________________
conv5_block3_2_bn (BatchNormali (None, 7, 7, 512) 2048 conv5_block3_2_conv[0][0]
__________________________________________________________________________________________________
conv5_block3_2_relu (Activation (None, 7, 7, 512) 0 conv5_block3_2_bn[0][0]
__________________________________________________________________________________________________
conv5_block3_3_conv (Conv2D) (None, 7, 7, 2048) 1050624 conv5_block3_2_relu[0][0]
__________________________________________________________________________________________________
conv5_block3_out (Add) (None, 7, 7, 2048) 0 conv5_block2_out[0][0] conv5_block3_3_conv[0][0]
__________________________________________________________________________________________________
post_bn (BatchNormalization) (None, 7, 7, 2048) 8192 conv5_block3_out[0][0]
__________________________________________________________________________________________________
post_relu (Activation) (None, 7, 7, 2048) 0 post_bn[0][0]
__________________________________________________________________________________________________
avg_pool (GlobalAveragePooling2 (None, 2048) 0 post_relu[0][0]
__________________________________________________________________________________________________
predictions (Dense) (None, 1000) 2049000 avg_pool[0][0]
==================================================================================================
Total params: 25,613,800
Trainable params: 25,568,360
Non-trainable params: 45,440