SPP and SPPF
1. Background of SPP
In convolutional neural networks we often see designs that require a fixed input size. But what if our inputs cannot be a fixed size?
Generally speaking, there are several options:
(1) Resize the inputs so they all match the input size the layers were designed for. But this is crude and direct: it may discard a lot of information or introduce information that should not be there (image distortion, etc.), hurting the final result.
(2) Replace the fully connected layers in the network by applying global average pooling after the last convolutional layer; global average pooling depends only on the number of channels, not on the spatial size of the feature map.
(3) And finally, the SPP structure that this post is about.
Note:
In YOLOv5, however, the purpose of SPP/SPPF is different: it fuses local features and global features at the feature-map level.
2. SPP Structure Analysis
SPP, also known as spatial pyramid pooling, converts a feature map of arbitrary size into a fixed-size feature vector.
Next, let's walk through how SPP works.
Input layer: a feature map of arbitrary size w * h.
Output layer: 21 neurons, i.e., we want to extract 21 features.
The analysis is shown in the figure below: partition the feature map into a 1 * 1 grid, a 2 * 2 grid and a 4 * 4 grid, and take the maximum within each cell (i.e., the max inside each blue box); this is max pooling. The extracted feature values (the maxima) total 1 * 1 + 2 * 2 + 4 * 4 = 21, and they are then concatenated together.
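To make this concrete, here is a minimal sketch of the classic SPP pooling using PyTorch's AdaptiveMaxPool2d (the helper name spp_forward is illustrative, not YOLOv5 code):

import torch
import torch.nn as nn

# Classic SPP sketch: pool an arbitrary-size feature map over 1x1, 2x2 and 4x4
# grids, flatten each grid, and concatenate -> 1 + 4 + 16 = 21 values per channel.
def spp_forward(x, levels=(1, 2, 4)):
    pooled = [nn.AdaptiveMaxPool2d(n)(x).flatten(start_dim=2) for n in levels]
    return torch.cat(pooled, dim=2)

out = spp_forward(torch.rand(1, 256, 13, 17))  # arbitrary h, w
print(out.shape)  # torch.Size([1, 256, 21]) regardless of input size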
The SPP structure used in YOLOv5 is shown in the figure below:
Here a CBL block is added before and after the pooling, and the middle branches use kernel sizes of 1 * 1, 5 * 5, 9 * 9 and 13 * 13 (the 1 * 1 branch is effectively the identity branch, as the code below shows).
3. SPPF Structure Analysis
Structurally, SPPF replaces SPP's parallel 5 * 5, 9 * 9 and 13 * 13 max-pools with a single 5 * 5 max-pool applied three times in sequence: two stacked 5 * 5 pools (stride 1) cover the same receptive field as one 9 * 9 pool, and three cover a 13 * 13 pool, so the concatenated outputs are identical while much less computation is done. Also note that CBL (Conv + BN + LeakyReLU) has been renamed CBS (Conv + BN + SiLU); I hadn't noticed the name change before.
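This equivalence between stacked small pools and one large pool is easy to verify directly; a minimal check (PyTorch's MaxPool2d uses implicit negative-infinity padding, so the stacked pools compose exactly):

import torch
import torch.nn as nn

x = torch.rand(1, 1, 16, 16)
p5 = nn.MaxPool2d(5, 1, padding=2)
p9 = nn.MaxPool2d(9, 1, padding=4)
p13 = nn.MaxPool2d(13, 1, padding=6)
print(torch.equal(p5(p5(x)), p9(x)))       # True: two 5x5 pools == one 9x9 pool
print(torch.equal(p5(p5(p5(x))), p13(x)))  # True: three 5x5 pools == one 13x13 pool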
4. YOLOv5 SPP/SPPF Source Code Analysis (with annotated comments)
The comments in the code correspond to the SPP structure diagram above.
import warnings
import torch
import torch.nn as nn

# Conv is YOLOv5's standard Conv + BN + activation block, defined in models/common.py

class SPP(nn.Module):
    def __init__(self, c1, c2, k=(5, 9, 13)):  # 5, 9, 13 are the pooling kernel sizes
        super().__init__()
        c_ = c1 // 2  # hidden channels
        self.cv1 = Conv(c1, c_, 1, 1)  # this is the first CBL
        self.cv2 = Conv(c_ * (len(k) + 1), c2, 1, 1)  # this is the last CBL of the SPP block
        # the core SPP operation: parallel max-pooling with 5x5, 9x9 and 13x13 kernels
        self.m = nn.ModuleList([nn.MaxPool2d(kernel_size=x, stride=1, padding=x // 2) for x in k])

    def forward(self, x):
        x = self.cv1(x)
        with warnings.catch_warnings():
            warnings.simplefilter('ignore')  # suppress torch 1.9.0 max_pool2d() warning
            return self.cv2(torch.cat([x] + [m(x) for m in self.m], 1))  # torch.cat performs the concat
# SPPF structure
class SPPF(nn.Module):
    # Spatial Pyramid Pooling - Fast (SPPF) layer for YOLOv5 by Glenn Jocher
    def __init__(self, c1, c2, k=5):  # equivalent to SPP(k=(5, 9, 13))
        super().__init__()
        c_ = c1 // 2  # hidden channels
        self.cv1 = Conv(c1, c_, 1, 1)
        self.cv2 = Conv(c_ * 4, c2, 1, 1)
        self.m = nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2)

    def forward(self, x):
        x = self.cv1(x)  # the first CBL halves the channel count
        with warnings.catch_warnings():
            warnings.simplefilter('ignore')  # suppress torch 1.9.0 max_pool2d() warning
            y1 = self.m(x)
            y2 = self.m(y1)  # the two successive max-poolings
            # concat the original x, y1 (pooled once), y2 (pooled twice) and
            # self.m(y2) (pooled three times), then apply the final CBL
            return self.cv2(torch.cat([x, y1, y2, self.m(y2)], 1))
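Assuming the YOLOv5 repo context (where Conv is defined in models/common.py alongside these classes), a quick shape check shows the two modules are drop-in replacements for each other; the 512-channel setting below is illustrative:

x = torch.rand(1, 512, 20, 20)
print(SPP(512, 512)(x).shape)   # torch.Size([1, 512, 20, 20])
print(SPPF(512, 512)(x).shape)  # torch.Size([1, 512, 20, 20])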
Experimental Comparison
Below is a small experiment comparing the outputs and speed of SPP and SPPF. The code follows (note that the 1x1 convolution layers at the start and end of SPPF are removed here, so only the MaxPool parts are compared):
import time
import torch
import torch.nn as nn


class SPP(nn.Module):
    def __init__(self):
        super().__init__()
        self.maxpool1 = nn.MaxPool2d(5, 1, padding=2)
        self.maxpool2 = nn.MaxPool2d(9, 1, padding=4)
        self.maxpool3 = nn.MaxPool2d(13, 1, padding=6)

    def forward(self, x):
        o1 = self.maxpool1(x)
        o2 = self.maxpool2(x)
        o3 = self.maxpool3(x)
        return torch.cat([x, o1, o2, o3], dim=1)


class SPPF(nn.Module):
    def __init__(self):
        super().__init__()
        self.maxpool = nn.MaxPool2d(5, 1, padding=2)

    def forward(self, x):
        o1 = self.maxpool(x)
        o2 = self.maxpool(o1)
        o3 = self.maxpool(o2)
        return torch.cat([x, o1, o2, o3], dim=1)


def main():
    input_tensor = torch.rand(8, 32, 16, 16)
    spp = SPP()
    sppf = SPPF()
    output1 = spp(input_tensor)
    output2 = sppf(input_tensor)
    print(torch.equal(output1, output2))

    t_start = time.time()
    for _ in range(100):
        spp(input_tensor)
    print(f"spp time: {time.time() - t_start}")

    t_start = time.time()
    for _ in range(100):
        sppf(input_tensor)
    print(f"sppf time: {time.time() - t_start}")


if __name__ == '__main__':
    main()

"""Results"""
True
spp time: 0.5373051166534424
sppf time: 0.20780706405639648
More SPP Variants
1.1 SPP(Spatial Pyramid Pooling)
The SPP module was proposed by Kaiming He in the 2015 paper "Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition".
SPP stands for spatial pyramid pooling; it was designed mainly to solve two problems:
(1) it avoids the image distortion caused by cropping or warping image regions to a fixed size;
(2) it eliminates the CNN's repeated feature extraction over overlapping candidate regions, greatly speeding up proposal generation and saving computation.
class SPP(nn.Module):
    # Spatial Pyramid Pooling (SPP) layer https://arxiv.org/abs/1406.4729
    def __init__(self, c1, c2, k=(5, 9, 13)):
        super().__init__()
        c_ = c1 // 2  # hidden channels
        self.cv1 = Conv(c1, c_, 1, 1)
        self.cv2 = Conv(c_ * (len(k) + 1), c2, 1, 1)
        self.m = nn.ModuleList([nn.MaxPool2d(kernel_size=x, stride=1, padding=x // 2) for x in k])

    def forward(self, x):
        x = self.cv1(x)
        with warnings.catch_warnings():
            warnings.simplefilter('ignore')  # suppress torch 1.9.0 max_pool2d() warning
            return self.cv2(torch.cat([x] + [m(x) for m in self.m], 1))
1.2 SPPF(Spatial Pyramid Pooling - Fast)
This was proposed by YOLOv5's author Glenn Jocher based on SPP. It is much faster than SPP, hence the name SPP-Fast.
class SPPF(nn.Module):
    # Spatial Pyramid Pooling - Fast (SPPF) layer for YOLOv5 by Glenn Jocher
    def __init__(self, c1, c2, k=5):  # equivalent to SPP(k=(5, 9, 13))
        super().__init__()
        c_ = c1 // 2  # hidden channels
        self.cv1 = Conv(c1, c_, 1, 1)
        self.cv2 = Conv(c_ * 4, c2, 1, 1)
        self.m = nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2)

    def forward(self, x):
        x = self.cv1(x)
        with warnings.catch_warnings():
            warnings.simplefilter('ignore')  # suppress torch 1.9.0 max_pool2d() warning
            y1 = self.m(x)
            y2 = self.m(y1)
            return self.cv2(torch.cat((x, y1, y2, self.m(y2)), 1))
1.3 SimSPPF(Simplified SPPF)
This module comes from Meituan's YOLOv6. It differs from SPPF only in the activation function; in a quick test, a single ConvBNReLU block ran about 18% faster than ConvBNSiLU (a sketch of such a test follows the code below).
class SimConv(nn.Module):
    '''Normal Conv with ReLU activation'''
    def __init__(self, in_channels, out_channels, kernel_size, stride, groups=1, bias=False):
        super().__init__()
        padding = kernel_size // 2
        self.conv = nn.Conv2d(
            in_channels,
            out_channels,
            kernel_size=kernel_size,
            stride=stride,
            padding=padding,
            groups=groups,
            bias=bias,
        )
        self.bn = nn.BatchNorm2d(out_channels)
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

    def forward_fuse(self, x):
        return self.act(self.conv(x))


class SimSPPF(nn.Module):
    '''Simplified SPPF with ReLU activation'''
    def __init__(self, in_channels, out_channels, kernel_size=5):
        super().__init__()
        c_ = in_channels // 2  # hidden channels
        self.cv1 = SimConv(in_channels, c_, 1, 1)
        self.cv2 = SimConv(c_ * 4, out_channels, 1, 1)
        self.m = nn.MaxPool2d(kernel_size=kernel_size, stride=1, padding=kernel_size // 2)

    def forward(self, x):
        x = self.cv1(x)
        with warnings.catch_warnings():
            warnings.simplefilter('ignore')
            y1 = self.m(x)
            y2 = self.m(y1)
            return self.cv2(torch.cat([x, y1, y2, self.m(y2)], 1))
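For reference, a rough micro-benchmark along the lines of that ReLU-vs-SiLU comparison might look like the sketch below. The setup is illustrative, not the author's exact test; the measured gap depends on hardware and tensor sizes:

import time
import torch
import torch.nn as nn

def bench(act, runs=100):
    # one Conv + BN + activation block, timed over `runs` forward passes
    block = nn.Sequential(nn.Conv2d(32, 32, 3, padding=1), nn.BatchNorm2d(32), act)
    block.eval()
    x = torch.rand(8, 32, 64, 64)
    with torch.no_grad():
        block(x)  # warm-up
        t0 = time.time()
        for _ in range(runs):
            block(x)
    return time.time() - t0

print(f"ConvBNReLU: {bench(nn.ReLU()):.3f}s")
print(f"ConvBNSiLU: {bench(nn.SiLU()):.3f}s")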
1.4 ASPP(Atrous Spatial Pyramid Pooling)
Inspired by SPP, the semantic segmentation model DeepLabv2 proposed the ASPP module (atrous spatial pyramid pooling), which uses multiple parallel atrous (dilated) convolution layers with different sampling rates. The features extracted at each rate are further processed in separate branches and fused to generate the final result. By varying the dilation rate, the module builds kernels with different receptive fields to capture multi-scale object information. The structure is fairly simple, as shown in the figure below:
ASPP was introduced in DeepLab and refined in later DeepLab versions (adding BN layers, depthwise separable convolutions, etc.), but the basic idea is unchanged.
# without BN version
import torch.nn.functional as F

class ASPP(nn.Module):
    def __init__(self, in_channel=512, out_channel=256):
        super(ASPP, self).__init__()
        self.mean = nn.AdaptiveAvgPool2d((1, 1))  # (1, 1) is the output spatial size
        self.conv = nn.Conv2d(in_channel, out_channel, 1, 1)
        self.atrous_block1 = nn.Conv2d(in_channel, out_channel, 1, 1)
        self.atrous_block6 = nn.Conv2d(in_channel, out_channel, 3, 1, padding=6, dilation=6)
        self.atrous_block12 = nn.Conv2d(in_channel, out_channel, 3, 1, padding=12, dilation=12)
        self.atrous_block18 = nn.Conv2d(in_channel, out_channel, 3, 1, padding=18, dilation=18)
        self.conv_1x1_output = nn.Conv2d(out_channel * 5, out_channel, 1, 1)

    def forward(self, x):
        size = x.shape[2:]
        # image-level branch: global average pool, 1x1 conv, then upsample back
        image_features = self.mean(x)
        image_features = self.conv(image_features)
        # F.upsample is deprecated; F.interpolate is the equivalent call
        image_features = F.interpolate(image_features, size=size, mode='bilinear', align_corners=False)
        atrous_block1 = self.atrous_block1(x)
        atrous_block6 = self.atrous_block6(x)
        atrous_block12 = self.atrous_block12(x)
        atrous_block18 = self.atrous_block18(x)
        net = self.conv_1x1_output(torch.cat([image_features, atrous_block1, atrous_block6,
                                              atrous_block12, atrous_block18], dim=1))
        return net
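A quick usage check (given the ASPP class above): because each dilated 3x3 branch uses padding equal to its dilation, all branches preserve the spatial size, so the output matches the input resolution:

aspp = ASPP(in_channel=512, out_channel=256)
x = torch.rand(2, 512, 32, 32)
print(aspp(x).shape)  # torch.Size([2, 256, 32, 32]) (spatial size preserved)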
1.5 RFB(Receptive Field Block)
The RFB module was proposed in "Receptive Field Block Net for Accurate and Fast Object Detection" (ECCV 2018). The paper's starting point is to mimic the receptive fields of human vision in order to strengthen the network's feature extraction. Structurally, RFB borrows from Inception, mainly adding dilated convolutions on top of it to effectively enlarge the receptive field.
Architectures of RFB and RFB-s. RFB-s, used to mimic the smaller pRFs in shallow layers of the human retinotopic map, uses more branches with smaller kernels.
class BasicConv(nn.Module):
    def __init__(self, in_planes, out_planes, kernel_size, stride=1, padding=0, dilation=1, groups=1, relu=True, bn=True):
        super(BasicConv, self).__init__()
        self.out_channels = out_planes
        if bn:
            self.conv = nn.Conv2d(in_planes, out_planes, kernel_size=kernel_size, stride=stride, padding=padding, dilation=dilation, groups=groups, bias=False)
            self.bn = nn.BatchNorm2d(out_planes, eps=1e-5, momentum=0.01, affine=True)
            self.relu = nn.ReLU(inplace=True) if relu else None
        else:
            self.conv = nn.Conv2d(in_planes, out_planes, kernel_size=kernel_size, stride=stride, padding=padding, dilation=dilation, groups=groups, bias=True)
            self.bn = None
            self.relu = nn.ReLU(inplace=True) if relu else None

    def forward(self, x):
        x = self.conv(x)
        if self.bn is not None:
            x = self.bn(x)
        if self.relu is not None:
            x = self.relu(x)
        return x


class BasicRFB(nn.Module):
    def __init__(self, in_planes, out_planes, stride=1, scale=0.1, map_reduce=8, vision=1, groups=1):
        super(BasicRFB, self).__init__()
        self.scale = scale
        self.out_channels = out_planes
        inter_planes = in_planes // map_reduce

        self.branch0 = nn.Sequential(
            BasicConv(in_planes, inter_planes, kernel_size=1, stride=1, groups=groups, relu=False),
            BasicConv(inter_planes, 2 * inter_planes, kernel_size=(3, 3), stride=stride, padding=(1, 1), groups=groups),
            BasicConv(2 * inter_planes, 2 * inter_planes, kernel_size=3, stride=1, padding=vision, dilation=vision, relu=False, groups=groups)
        )
        self.branch1 = nn.Sequential(
            BasicConv(in_planes, inter_planes, kernel_size=1, stride=1, groups=groups, relu=False),
            BasicConv(inter_planes, 2 * inter_planes, kernel_size=(3, 3), stride=stride, padding=(1, 1), groups=groups),
            BasicConv(2 * inter_planes, 2 * inter_planes, kernel_size=3, stride=1, padding=vision + 2, dilation=vision + 2, relu=False, groups=groups)
        )
        self.branch2 = nn.Sequential(
            BasicConv(in_planes, inter_planes, kernel_size=1, stride=1, groups=groups, relu=False),
            BasicConv(inter_planes, (inter_planes // 2) * 3, kernel_size=3, stride=1, padding=1, groups=groups),
            BasicConv((inter_planes // 2) * 3, 2 * inter_planes, kernel_size=3, stride=stride, padding=1, groups=groups),
            BasicConv(2 * inter_planes, 2 * inter_planes, kernel_size=3, stride=1, padding=vision + 4, dilation=vision + 4, relu=False, groups=groups)
        )

        self.ConvLinear = BasicConv(6 * inter_planes, out_planes, kernel_size=1, stride=1, relu=False)
        self.shortcut = BasicConv(in_planes, out_planes, kernel_size=1, stride=stride, relu=False)
        self.relu = nn.ReLU(inplace=False)

    def forward(self, x):
        x0 = self.branch0(x)
        x1 = self.branch1(x)
        x2 = self.branch2(x)
        out = torch.cat((x0, x1, x2), 1)
        out = self.ConvLinear(out)
        short = self.shortcut(x)
        out = out * self.scale + short
        out = self.relu(out)
        return out
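A quick usage check (given the classes above; the channel sizes are illustrative): with map_reduce=8, each branch outputs 2 * inter_planes channels, so the concat carries 6 * inter_planes channels before ConvLinear maps it back to out_planes:

rfb = BasicRFB(in_planes=128, out_planes=128)  # inter_planes = 128 // 8 = 16
x = torch.rand(2, 128, 32, 32)
print(rfb(x).shape)  # torch.Size([2, 128, 32, 32])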
1.6 SPPCSPC
This is the SPP variant used in YOLOv7. It performs better than SPPF, but the parameter count and computation increase considerably.
class SPPCSPC(nn.Module):
    # CSP https://github.com/WongKinYiu/CrossStagePartialNetworks
    def __init__(self, c1, c2, n=1, shortcut=False, g=1, e=0.5, k=(5, 9, 13)):
        super(SPPCSPC, self).__init__()
        c_ = int(2 * c2 * e)  # hidden channels
        self.cv1 = Conv(c1, c_, 1, 1)
        self.cv2 = Conv(c1, c_, 1, 1)
        self.cv3 = Conv(c_, c_, 3, 1)
        self.cv4 = Conv(c_, c_, 1, 1)
        self.m = nn.ModuleList([nn.MaxPool2d(kernel_size=x, stride=1, padding=x // 2) for x in k])
        self.cv5 = Conv(4 * c_, c_, 1, 1)
        self.cv6 = Conv(c_, c_, 3, 1)
        self.cv7 = Conv(2 * c_, c2, 1, 1)

    def forward(self, x):
        x1 = self.cv4(self.cv3(self.cv1(x)))
        y1 = self.cv6(self.cv5(torch.cat([x1] + [m(x1) for m in self.m], 1)))
        y2 = self.cv2(x)
        return self.cv7(torch.cat((y1, y2), dim=1))
# Grouped SPPCSPC: with grouped convolutions the parameter count and computation
# are not far from the original; its actual effectiveness is untested
class SPPCSPC_group(nn.Module):
    def __init__(self, c1, c2, n=1, shortcut=False, g=1, e=0.5, k=(5, 9, 13)):
        super(SPPCSPC_group, self).__init__()
        c_ = int(2 * c2 * e)  # hidden channels
        self.cv1 = Conv(c1, c_, 1, 1, g=4)
        self.cv2 = Conv(c1, c_, 1, 1, g=4)
        self.cv3 = Conv(c_, c_, 3, 1, g=4)
        self.cv4 = Conv(c_, c_, 1, 1, g=4)
        self.m = nn.ModuleList([nn.MaxPool2d(kernel_size=x, stride=1, padding=x // 2) for x in k])
        self.cv5 = Conv(4 * c_, c_, 1, 1, g=4)
        self.cv6 = Conv(c_, c_, 3, 1, g=4)
        self.cv7 = Conv(2 * c_, c2, 1, 1, g=4)

    def forward(self, x):
        x1 = self.cv4(self.cv3(self.cv1(x)))
        y1 = self.cv6(self.cv5(torch.cat([x1] + [m(x1) for m in self.m], 1)))
        y2 = self.cv2(x)
        return self.cv7(torch.cat((y1, y2), dim=1))
1.7 SPPFCSPC
Borrowing the idea behind SPPF, I optimized SPPCSPC into SPPFCSPC, which gains speed while keeping the receptive field unchanged. I showed this module to the YOLOv7 author and it was not rejected; the detailed reply can be found in the linked Issue.
This structure has since been adopted by YOLOv6 3.0 with good results; see the YOLOv6 3.0 paper for detailed experiments.
class SPPFCSPC(nn.Module):
    def __init__(self, c1, c2, n=1, shortcut=False, g=1, e=0.5, k=5):
        super(SPPFCSPC, self).__init__()
        c_ = int(2 * c2 * e)  # hidden channels
        self.cv1 = Conv(c1, c_, 1, 1)
        self.cv2 = Conv(c1, c_, 1, 1)
        self.cv3 = Conv(c_, c_, 3, 1)
        self.cv4 = Conv(c_, c_, 1, 1)
        self.m = nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2)
        self.cv5 = Conv(4 * c_, c_, 1, 1)
        self.cv6 = Conv(c_, c_, 3, 1)
        self.cv7 = Conv(2 * c_, c2, 1, 1)

    def forward(self, x):
        x1 = self.cv4(self.cv3(self.cv1(x)))
        x2 = self.m(x1)
        x3 = self.m(x2)
        y1 = self.cv6(self.cv5(torch.cat((x1, x2, x3, self.m(x3)), 1)))
        y2 = self.cv2(x)
        return self.cv7(torch.cat((y1, y2), dim=1))
2 Parameter Comparison
Here I replaced the SPP module in yolov5s.yaml with each of the modules above to compare their parameter counts; a scripted sketch of such a comparison follows.
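A simple way to reproduce this kind of comparison is to count parameters directly. This is a sketch assuming the classes above plus YOLOv5's Conv are in scope; the 512-channel setting mirrors the SPP slot in yolov5s and is illustrative:

def n_params(m):
    # total number of learnable parameters in a module
    return sum(p.numel() for p in m.parameters())

for module in (SPP(512, 512), SPPF(512, 512), SimSPPF(512, 512),
               SPPCSPC(512, 512), SPPFCSPC(512, 512)):
    print(f"{type(module).__name__}: {n_params(module):,} parameters")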
Adapted from the Zhihu post 深度學習中小知識點系列(六) 解讀SPP / SPPF / SimSPPF / ASPP / RFB / SPPCSPC.