YOLOv8 accuracy-boosting trick: add the Swin Transformer attention mechanism to improve object-detection performance

Table of Contents

Abstract

Swin Transformer Principles

Code Implementation

YOLOv8 Integration Steps

YAML File Contents

one_swinTrans

three_swinTrans

Launch Command

Full Code Download


Abstract

Swin Transformer is distinguished by its hierarchical attention design: attention is computed within non-overlapping local windows, which sharply reduces computational complexity. The network is organized into stages, each consisting of a group of basic blocks that capture feature representations at a different level, forming a hierarchical feature-extraction pipeline. This multi-scale attention lets the model attend to features of different sizes simultaneously, improving its perception of information at different scales in an image. On several image-classification benchmarks, Swin Transformer matches or surpasses other state-of-the-art models while using comparatively few parameters and little compute. Its modular design also makes it readily transferable to other computer-vision tasks such as object detection and semantic segmentation.

Swin Transformer Principles

A key design element of the Swin Transformer is the shift of the window partition between consecutive self-attention layers, as illustrated in the figure below. The shifted windows bridge the windows of the preceding layer, providing connections between them that significantly enhance modeling power. The strategy is also efficient in terms of real-world latency: all query patches within a window share the same key set, which is friendly to memory access in hardware. By contrast, earlier sliding-window self-attention approaches suffer poor latency on general-purpose hardware because each query pixel attends to a different key set.

Figure: illustration of the shifted-window approach for computing self-attention in the Swin Transformer architecture.
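The shift itself is just a cyclic roll of the feature map, undone after attention; a toy sketch of the mechanism:

import torch

window_size = 4
shift = window_size // 2                 # Swin shifts by half a window
x = torch.arange(64.).view(1, 8, 8, 1)   # [B, H, W, C] toy feature map
# cyclic shift: the next layer's windows now straddle the previous layer's window borders
shifted = torch.roll(x, shifts=(-shift, -shift), dims=(1, 2))
# ... window attention runs on `shifted` ... then the shift is reversed:
restored = torch.roll(shifted, shifts=(shift, shift), dims=(1, 2))
assert torch.equal(x, restored)          # the roll is exactly invertible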

The figure below gives an overview of the Swin Transformer architecture (the tiny version, Swin-T, is shown). The input RGB image is first split into non-overlapping patches by a patch-partition module, as in ViT. Each patch is treated as a "token" whose feature is the concatenation of the raw pixel RGB values. The implementation uses a patch size of 4 × 4, so each patch has a feature dimension of 4 × 4 × 3 = 48. A linear embedding layer is then applied to this raw feature to project it to an arbitrary dimension, denoted C.

Figure: Swin Transformer architecture.

Several Transformer blocks with modified self-attention (Swin Transformer blocks) are applied to these patch tokens. These blocks keep the number of tokens at H/4 × W/4 and, together with the linear embedding, are referred to as "Stage 1".
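For intuition, the patch partition plus linear embedding is equivalent to a single strided convolution. A minimal sketch (C = 96, the Swin-T value, is assumed here):

import torch
import torch.nn as nn

# Patch partition + linear embedding in one op: each 4x4 patch (4*4*3 = 48 raw values)
# is projected to C = 96 dimensions by a conv with kernel = stride = patch size.
patch_embed = nn.Conv2d(3, 96, kernel_size=4, stride=4)
img = torch.randn(1, 3, 224, 224)
feat = patch_embed(img)                   # [1, 96, 56, 56] -> H/4 x W/4 tokens
tokens = feat.flatten(2).transpose(1, 2)  # [1, 3136, 96], one 96-dim feature per token
print(tokens.shape)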

Figure: two successive Swin Transformer blocks.

A Swin Transformer block is built by replacing the standard multi-head self-attention (MSA) module in a Transformer block with a shifted-window-based module, leaving the other layers unchanged. As shown above, a Swin Transformer block consists of a shifted-window MSA module followed by a 2-layer MLP with a GELU non-linearity in between. A LayerNorm (LN) layer is applied before each MSA module and each MLP, and a residual connection is applied after each module.
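Written out, two consecutive blocks (regular windows in block l, shifted windows in block l+1) compute:

  ẑ^l = W-MSA(LN(z^(l-1))) + z^(l-1)
  z^l = MLP(LN(ẑ^l)) + ẑ^l
  ẑ^(l+1) = SW-MSA(LN(z^l)) + z^l
  z^(l+1) = MLP(LN(ẑ^(l+1))) + ẑ^(l+1)

where ẑ^l and z^l denote the outputs of the (S)W-MSA module and the MLP module of block l, respectively.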

Code Implementation
import torch
import torch.nn as nn
import torch.nn.functional as F
from timm.models.layers import DropPath  # stochastic depth; requires `pip install timm`

# Conv is the standard ultralytics conv block; it is already defined in
# ultralytics/nn/modules/conv.py, where this code is meant to be pasted.


class WindowAttention(nn.Module):
    def __init__(self, dim, window_size, num_heads, qkv_bias=True, qk_scale=None, attn_drop=0., proj_drop=0.):
        super().__init__()
        self.dim = dim
        self.window_size = window_size  # Wh, Ww
        self.num_heads = num_heads
        head_dim = dim // num_heads
        self.scale = qk_scale or head_dim ** -0.5
        # define a parameter table of relative position bias
        self.relative_position_bias_table = nn.Parameter(
            torch.zeros((2 * window_size[0] - 1) * (2 * window_size[1] - 1), num_heads))  # 2*Wh-1 * 2*Ww-1, nH
        # get pair-wise relative position index for each token inside the window
        coords_h = torch.arange(self.window_size[0])
        coords_w = torch.arange(self.window_size[1])
        coords = torch.stack(torch.meshgrid([coords_h, coords_w]))  # 2, Wh, Ww
        coords_flatten = torch.flatten(coords, 1)  # 2, Wh*Ww
        relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :]  # 2, Wh*Ww, Wh*Ww
        relative_coords = relative_coords.permute(1, 2, 0).contiguous()  # Wh*Ww, Wh*Ww, 2
        relative_coords[:, :, 0] += self.window_size[0] - 1  # shift to start from 0
        relative_coords[:, :, 1] += self.window_size[1] - 1
        relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1
        relative_position_index = relative_coords.sum(-1)  # Wh*Ww, Wh*Ww
        self.register_buffer("relative_position_index", relative_position_index)
        self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
        self.attn_drop = nn.Dropout(attn_drop)
        self.proj = nn.Linear(dim, dim)
        self.proj_drop = nn.Dropout(proj_drop)
        nn.init.normal_(self.relative_position_bias_table, std=.02)
        self.softmax = nn.Softmax(dim=-1)

    def forward(self, x, mask=None):
        B_, N, C = x.shape
        qkv = self.qkv(x).reshape(B_, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4)
        q, k, v = qkv[0], qkv[1], qkv[2]  # make torchscript happy (cannot use tensor as tuple)
        q = q * self.scale
        attn = (q @ k.transpose(-2, -1))
        relative_position_bias = self.relative_position_bias_table[self.relative_position_index.view(-1)].view(
            self.window_size[0] * self.window_size[1], self.window_size[0] * self.window_size[1], -1)  # Wh*Ww,Wh*Ww,nH
        relative_position_bias = relative_position_bias.permute(2, 0, 1).contiguous()  # nH, Wh*Ww, Wh*Ww
        attn = attn + relative_position_bias.unsqueeze(0)
        if mask is not None:
            nW = mask.shape[0]
            attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask.unsqueeze(1).unsqueeze(0)
            attn = attn.view(-1, self.num_heads, N, N)
        attn = self.softmax(attn)
        attn = self.attn_drop(attn)
        try:
            x = (attn @ v).transpose(1, 2).reshape(B_, N, C)
        except Exception:
            # fall back to half precision when attn and v dtypes mismatch (e.g. under AMP)
            x = (attn.half() @ v).transpose(1, 2).reshape(B_, N, C)
        x = self.proj(x)
        x = self.proj_drop(x)
        return x


class SwinTransformer(nn.Module):
    # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks
    def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5):  # ch_in, ch_out, number, shortcut, groups, expansion
        super(SwinTransformer, self).__init__()
        c_ = int(c2 * e)  # hidden channels
        self.cv1 = Conv(c1, c_, 1, 1)
        self.cv2 = Conv(c1, c_, 1, 1)
        self.cv3 = Conv(2 * c_, c2, 1, 1)
        num_heads = c_ // 32
        self.m = SwinTransformerBlock(c_, c_, num_heads, n)
        # self.m = nn.Sequential(*[Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)])

    def forward(self, x):
        y1 = self.m(self.cv1(x))
        y2 = self.cv2(x)
        return self.cv3(torch.cat((y1, y2), dim=1))


class SwinTransformerB(nn.Module):
    # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks
    def __init__(self, c1, c2, n=1, shortcut=False, g=1, e=0.5):  # ch_in, ch_out, number, shortcut, groups, expansion
        super().__init__()  # fixed: the original referenced the wrong class name (Swin_Transformer_B)
        c_ = int(c2)  # hidden channels
        self.cv1 = Conv(c1, c_, 1, 1)
        self.cv2 = Conv(c_, c_, 1, 1)
        self.cv3 = Conv(2 * c_, c2, 1, 1)
        num_heads = c_ // 32
        self.m = SwinTransformerBlock(c_, c_, num_heads, n)

    def forward(self, x):
        x1 = self.cv1(x)
        y1 = self.m(x1)
        y2 = self.cv2(x1)
        return self.cv3(torch.cat((y1, y2), dim=1))


class SwinTransformerC(nn.Module):
    # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks
    def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5):  # ch_in, ch_out, number, shortcut, groups, expansion
        super().__init__()  # fixed: the original referenced the wrong class name (Swin_Transformer_C)
        c_ = int(c2 * e)  # hidden channels
        self.cv1 = Conv(c1, c_, 1, 1)
        self.cv2 = Conv(c1, c_, 1, 1)
        self.cv3 = Conv(c_, c_, 1, 1)
        self.cv4 = Conv(2 * c_, c2, 1, 1)
        num_heads = c_ // 32
        self.m = SwinTransformerBlock(c_, c_, num_heads, n)

    def forward(self, x):
        y1 = self.cv3(self.m(self.cv1(x)))
        y2 = self.cv2(x)
        return self.cv4(torch.cat((y1, y2), dim=1))


class Mlp(nn.Module):
    def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.SiLU, drop=0.):
        super().__init__()
        out_features = out_features or in_features
        hidden_features = hidden_features or in_features
        self.fc1 = nn.Linear(in_features, hidden_features)
        self.act = act_layer()
        self.fc2 = nn.Linear(hidden_features, out_features)
        self.drop = nn.Dropout(drop)

    def forward(self, x):
        x = self.fc1(x)
        x = self.act(x)
        x = self.drop(x)
        x = self.fc2(x)
        x = self.drop(x)
        return x


def window_partition(x, window_size):
    B, H, W, C = x.shape
    assert H % window_size == 0, 'feature map h and w can not divide by window size'
    x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
    windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C)
    return windows


def window_reverse(windows, window_size, H, W):
    B = int(windows.shape[0] / (H * W / window_size / window_size))
    x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1)
    x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1)
    return x


class SwinTransformerLayer(nn.Module):
    def __init__(self, dim, num_heads, window_size=8, shift_size=0,
                 mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0., drop_path=0.,
                 act_layer=nn.SiLU, norm_layer=nn.LayerNorm):
        super().__init__()
        self.dim = dim
        self.num_heads = num_heads
        self.window_size = window_size
        self.shift_size = shift_size
        self.mlp_ratio = mlp_ratio
        # unlike the original Swin code, input_resolution is not needed here;
        # feature maps smaller than the window size are handled by padding in forward()
        assert 0 <= self.shift_size < self.window_size, "shift_size must in 0-window_size"
        self.norm1 = norm_layer(dim)
        self.attn = WindowAttention(
            dim, window_size=(self.window_size, self.window_size), num_heads=num_heads,
            qkv_bias=qkv_bias, qk_scale=qk_scale, attn_drop=attn_drop, proj_drop=drop)
        self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
        self.norm2 = norm_layer(dim)
        mlp_hidden_dim = int(dim * mlp_ratio)
        self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop)

    def create_mask(self, H, W):
        # calculate attention mask for SW-MSA
        img_mask = torch.zeros((1, H, W, 1))  # 1 H W 1
        h_slices = (slice(0, -self.window_size),
                    slice(-self.window_size, -self.shift_size),
                    slice(-self.shift_size, None))
        w_slices = (slice(0, -self.window_size),
                    slice(-self.window_size, -self.shift_size),
                    slice(-self.shift_size, None))
        cnt = 0
        for h in h_slices:
            for w in w_slices:
                img_mask[:, h, w, :] = cnt
                cnt += 1
        mask_windows = window_partition(img_mask, self.window_size)  # nW, window_size, window_size, 1
        mask_windows = mask_windows.view(-1, self.window_size * self.window_size)
        attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2)
        attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0))
        return attn_mask

    def forward(self, x):
        # reshape x[b c h w] to x[b l c]
        _, _, H_, W_ = x.shape
        Padding = False
        if min(H_, W_) < self.window_size or H_ % self.window_size != 0 or W_ % self.window_size != 0:
            Padding = True
            pad_r = (self.window_size - W_ % self.window_size) % self.window_size
            pad_b = (self.window_size - H_ % self.window_size) % self.window_size
            x = F.pad(x, (0, pad_r, 0, pad_b))
        B, C, H, W = x.shape
        L = H * W
        x = x.permute(0, 2, 3, 1).contiguous().view(B, L, C)  # b, L, c
        # the mask is created here rather than in __init__ because H and W vary per input
        if self.shift_size > 0:
            attn_mask = self.create_mask(H, W).to(x.device)
        else:
            attn_mask = None
        shortcut = x
        x = self.norm1(x)
        x = x.view(B, H, W, C)
        # cyclic shift
        if self.shift_size > 0:
            shifted_x = torch.roll(x, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2))
        else:
            shifted_x = x
        # partition windows
        x_windows = window_partition(shifted_x, self.window_size)  # nW*B, window_size, window_size, C
        x_windows = x_windows.view(-1, self.window_size * self.window_size, C)  # nW*B, window_size*window_size, C
        # W-MSA/SW-MSA
        attn_windows = self.attn(x_windows, mask=attn_mask)  # nW*B, window_size*window_size, C
        # merge windows
        attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C)
        shifted_x = window_reverse(attn_windows, self.window_size, H, W)  # B H' W' C
        # reverse cyclic shift
        if self.shift_size > 0:
            x = torch.roll(shifted_x, shifts=(self.shift_size, self.shift_size), dims=(1, 2))
        else:
            x = shifted_x
        x = x.view(B, H * W, C)
        # FFN
        x = shortcut + self.drop_path(x)
        x = x + self.drop_path(self.mlp(self.norm2(x)))
        x = x.permute(0, 2, 1).contiguous().view(-1, C, H, W)  # b c h w
        if Padding:
            x = x[:, :, :H_, :W_]  # reverse padding
        return x


class SwinTransformerBlock(nn.Module):
    def __init__(self, c1, c2, num_heads, num_layers, window_size=8):
        super().__init__()
        self.conv = None
        if c1 != c2:
            self.conv = Conv(c1, c2)
        # alternate regular (shift 0) and shifted (shift window_size // 2) layers
        self.blocks = nn.Sequential(*[
            SwinTransformerLayer(dim=c2, num_heads=num_heads, window_size=window_size,
                                 shift_size=0 if (i % 2 == 0) else window_size // 2)
            for i in range(num_layers)])

    def forward(self, x):
        if self.conv is not None:
            x = self.conv(x)
        x = self.blocks(x)
        return x
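As a quick sanity check (a minimal sketch, assuming the code above has been pasted into conv.py or is otherwise importable):

if __name__ == "__main__":
    # 64 in/out channels -> c_ = 32 hidden channels, num_heads = 32 // 32 = 1
    m = SwinTransformer(64, 64, n=1)
    x = torch.randn(1, 64, 36, 36)  # 36 is not a multiple of window_size=8, exercising the padding path
    print(m(x).shape)               # expected: torch.Size([1, 64, 36, 36])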
YOLOv8 Integration Steps

1. Paste the code above into ultralytics/nn/modules/conv.py

2. Register SwinTransformer in ultralytics/nn/modules/__init__.py

3. Register SwinTransformer in ultralytics/nn/tasks.py (registration is needed in two places)

4. SwinTransformer is now added; a sketch of what the edits in steps 2 and 3 look like follows below.
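Steps 2 and 3 vary slightly across ultralytics versions; the sketch below shows the general shape of the edits (the exact import lists and the parse_model branch you extend are assumptions, so match them against your installed source):

# -- ultralytics/nn/modules/__init__.py -------------------------------------
# Re-export the new block next to the existing ones and append it to __all__:
from .conv import SwinTransformer  # assumes the code was pasted into conv.py

# -- ultralytics/nn/tasks.py ------------------------------------------------
# First registration point: the module imports near the top of the file.
from ultralytics.nn.modules import SwinTransformer
# Second registration point: inside parse_model(), add SwinTransformer to the
# branch that handles C2f-style blocks, so its repeat count n and channel
# arguments are resolved the same way, e.g.:
#   if m in (Conv, C2f, SPPF, Bottleneck, ..., SwinTransformer):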

YAML File Contents
one_swinTrans
# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLOv8 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 6  # number of classes
scales: # model compound scaling constants, i.e. 'model=yolov8n.yaml' will call yolov8.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.33, 0.25, 1024]  # YOLOv8n summary: 225 layers,  3157200 parameters,  3157184 gradients,   8.9 GFLOPs
  s: [0.33, 0.50, 1024]  # YOLOv8s summary: 225 layers, 11166560 parameters, 11166544 gradients,  28.8 GFLOPs
  m: [0.67, 0.75, 768]   # YOLOv8m summary: 295 layers, 25902640 parameters, 25902624 gradients,  79.3 GFLOPs
  l: [1.00, 1.00, 512]   # YOLOv8l summary: 365 layers, 43691520 parameters, 43691504 gradients, 165.7 GFLOPs
  x: [1.00, 1.25, 512]   # YOLOv8x summary: 365 layers, 68229648 parameters, 68229632 gradients, 258.5 GFLOPs

# YOLOv8.0n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]]  # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]]  # 1-P2/4
  - [-1, 3, C2f, [128, True]]
  - [-1, 1, Conv, [256, 3, 2]]  # 3-P3/8
  - [-1, 6, SwinTransformer, [256, True]]
  - [-1, 1, Conv, [512, 3, 2]]  # 5-P4/16
  - [-1, 6, C2f, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]]  # 7-P5/32
  - [-1, 3, C2f, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]]  # 9

# YOLOv8.0n head
head:
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 6], 1, Concat, [1]]  # cat backbone P4
  - [-1, 3, C2f, [512]]  # 12
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 4], 1, Concat, [1]]  # cat backbone P3
  - [-1, 3, C2f, [256]]  # 15 (P3/8-small)
  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 12], 1, Concat, [1]]  # cat head P4
  - [-1, 3, C2f, [512]]  # 18 (P4/16-medium)
  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 9], 1, Concat, [1]]  # cat head P5
  - [-1, 3, C2f, [1024]]  # 21 (P5/32-large)
  - [[15, 18, 21], 1, Detect, [nc]]  # Detect(P3, P4, P5)
three_swinTrans
# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLOv8 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 6  # number of classes
scales: # model compound scaling constants, i.e. 'model=yolov8n.yaml' will call yolov8.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.33, 0.25, 1024]  # YOLOv8n summary: 225 layers,  3157200 parameters,  3157184 gradients,   8.9 GFLOPs
  s: [0.33, 0.50, 1024]  # YOLOv8s summary: 225 layers, 11166560 parameters, 11166544 gradients,  28.8 GFLOPs
  m: [0.67, 0.75, 768]   # YOLOv8m summary: 295 layers, 25902640 parameters, 25902624 gradients,  79.3 GFLOPs
  l: [1.00, 1.00, 512]   # YOLOv8l summary: 365 layers, 43691520 parameters, 43691504 gradients, 165.7 GFLOPs
  x: [1.00, 1.25, 512]   # YOLOv8x summary: 365 layers, 68229648 parameters, 68229632 gradients, 258.5 GFLOPs

# YOLOv8.0n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]]  # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]]  # 1-P2/4
  - [-1, 3, C2f, [128, True]]
  - [-1, 1, Conv, [256, 3, 2]]  # 3-P3/8
  - [-1, 6, SwinTransformer, [256, True]]
  - [-1, 1, Conv, [512, 3, 2]]  # 5-P4/16
  - [-1, 6, SwinTransformer, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]]  # 7-P5/32
  - [-1, 3, SwinTransformer, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]]  # 9

# YOLOv8.0n head
head:
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 6], 1, Concat, [1]]  # cat backbone P4
  - [-1, 3, C2f, [512]]  # 12
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 4], 1, Concat, [1]]  # cat backbone P3
  - [-1, 3, C2f, [256]]  # 15 (P3/8-small)
  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 12], 1, Concat, [1]]  # cat head P4
  - [-1, 3, C2f, [512]]  # 18 (P4/16-medium)
  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 9], 1, Concat, [1]]  # cat head P5
  - [-1, 3, C2f, [1024]]  # 21 (P5/32-large)
  - [[15, 18, 21], 1, Detect, [nc]]  # Detect(P3, P4, P5)
Launch Command
from ultralytics import YOLO

# Load a model
# model = YOLO('yolov8s.yaml')  # build a new model from YAML
model = YOLO('/ultralytics/cfg/models/v8/yolov8_swinTrans.yaml')  # build the modified model from YAML
# model = YOLO('yolov8s.yaml').load('yolov8s.pt')  # build from YAML and transfer weights

# Train the model
if __name__ == '__main__':
    model.train()
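The bare model.train() call runs with all defaults; in practice you pass at least a dataset config. A hedged example (the dataset path and hyperparameter values below are placeholders, not from the original post):

if __name__ == '__main__':
    model.train(
        data='coco128.yaml',  # placeholder dataset config; point this at your own data
        epochs=100,
        imgsz=640,
        batch=16,
    )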
Full Code Download

https://download.csdn.net/download/m0_67647321/88890624

