Deep Learning Project: Breast Cancer Image Classification with DenseNet, 90%+ Accuracy, a PyTorch Reproduction

  • 🍨 This post is a study log from the 🔗 365-Day Deep Learning Training Camp
  • 🍖 Original author: K同學啊

Preface

  • If we list the most classic neural networks, ResNet is certainly one of them. Since ResNet's release many variants have followed, and DenseNet is arguably the most successful: it uses dense connectivity, concatenating feature maps along the channel dimension (see the sketch after this list)
  • Building on the previous post's DenseNet121 reproduction, this post applies the model to breast cancer image classification; the results are decent, with accuracy above 0.9
  • Intro to the classic CNN "DenseNet", source-code study and reproduction (PyTorch): https://blog.csdn.net/weixin_74085818/article/details/146102290?spm=1001.2014.3001.5501
  • Feel free to bookmark and follow; I will keep updating
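
To make "concatenating along the channel dimension" concrete, here is a minimal sketch; the tensor shapes are illustrative only and not taken from the post:

import torch

# Two feature maps with identical spatial size but different channel counts
x = torch.randn(1, 64, 56, 56)             # accumulated input features (N, C, H, W)
new_features = torch.randn(1, 32, 56, 56)  # one layer's new output (growth_rate = 32)

# Dense connectivity concatenates along dim=1 (channels) rather than adding, as ResNet does
out = torch.cat([x, new_features], dim=1)
print(out.shape)  # torch.Size([1, 96, 56, 56]) -> channels accumulate: 64 + 32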

Table of Contents

    • 1. Loading the Data
      • 1. Importing Libraries
      • 2. Inspecting and Loading the Data
      • 3. Visualizing Samples
      • 4. Loading the Dataset
      • 5. Splitting the Data
      • 6. Batching with DataLoader
    • 2. Building the DenseNet121 Network
    • 3. Model Training
      • 1. Training Function
      • 2. Test Function
      • 3. Setting Hyperparameters
    • 4. Model Training
    • 5. Visualizing Results
    • 6. Model Evaluation

1. Loading the Data

1. Importing Libraries

import torch  
import torch.nn as nn
import torchvision 
import numpy as np 
import os, PIL, pathlib 
from collections import OrderedDict
import re
from torch.hub import load_state_dict_from_url

# Set the device
device = "cuda" if torch.cuda.is_available() else "cpu"
device
'cuda'

2. Inspecting and Loading the Data

data_dir = "./data/"
data_dir = pathlib.Path(data_dir)

# Class names (the subfolder names under data_dir)
classnames = [str(path).split("\\")[0] for path in os.listdir(data_dir)]
classnames
['0', '1']
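
The split("\\") above assumes Windows-style paths; a more portable sketch that yields the same class list on this folder layout:

# List the immediate subdirectory names under data_dir (works on any OS)
classnames = sorted(p.name for p in data_dir.iterdir() if p.is_dir())
print(classnames)  # expected: ['0', '1']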

3. Visualizing Samples

import matplotlib.pylab as plt
from PIL import Image

# Collect file names
data_path_name = "./data/0/"  # class 0: no disease
data_path_list = [f for f in os.listdir(data_path_name) if f.endswith(('jpg', 'png'))]

# Create the figure
fig, axes = plt.subplots(2, 8, figsize=(16, 6))
for ax, img_file in zip(axes.flat, data_path_list):
    path_name = os.path.join(data_path_name, img_file)
    img = Image.open(path_name)  # open the image
    ax.imshow(img)               # display it
    ax.axis('off')
plt.show()

[Figure: a 2 x 8 grid of sample images from class 0 (benign)]

4. Loading the Dataset

from torchvision import transforms, datasets

# Unify the image format
img_height = 224
img_width = 224

data_tranforms = transforms.Compose([
    transforms.Resize([img_height, img_width]),
    transforms.ToTensor(),
    transforms.Normalize(   # normalize with ImageNet statistics
        mean=[0.485, 0.456, 0.406],
        std=[0.229, 0.224, 0.225]
    )
])

# Load all data
total_data = datasets.ImageFolder(root="./data/", transform=data_tranforms)
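
It is worth confirming how ImageFolder mapped the folder names to integer labels; a quick check using standard torchvision attributes:

# ImageFolder assigns labels to subfolders in sorted order
print(total_data.class_to_idx)  # expected: {'0': 0, '1': 1}
print(len(total_data))          # total number of images found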

5. Splitting the Data

# 80 : 20 split
train_size = int(len(total_data) * 0.8)
test_size = len(total_data) - train_size

train_data, test_data = torch.utils.data.random_split(total_data, [train_size, test_size])
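
Note that random_split draws a fresh permutation on every run, so train/test membership changes between runs. If you want a reproducible split, a sketch with a seeded generator (the seed value 42 is arbitrary):

# Fix the split across runs with a seeded generator
generator = torch.Generator().manual_seed(42)
train_data, test_data = torch.utils.data.random_split(
    total_data, [train_size, test_size], generator=generator)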

6. Batching with DataLoader

batch_size = 64

train_dl = torch.utils.data.DataLoader(train_data,
                                       batch_size=batch_size,
                                       shuffle=True)
test_dl = torch.utils.data.DataLoader(test_data,
                                      batch_size=batch_size,
                                      shuffle=False)

# Check the tensor shapes
for data, labels in train_dl:
    print("data shape[N, C, H, W]: ", data.shape)
    print("labels: ", labels)
    break
data shape[N, C, H, W]:  torch.Size([64, 3, 224, 224])
labels:  tensor([1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1,1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0,1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1])

2. Building the DenseNet121 Network

import torch.nn.functional as F

# The building block inside a DenseBlock: the DenseLayer
'''
1. BN + ReLU: pre-activation; normalize first, then apply the ReLU activation
2. Bottleneck layer: a 1 x 1 convolution followed by a 3 x 3 convolution. Bottlenecks are
   also common in YOLOv5, where they serve feature extraction plus dimensionality reduction;
   here the 1 x 1 convolution's purpose is to reduce the number of input feature channels
3. BN + ReLU: normalize and activate the bottleneck output; normalization keeps gradient
   descent stable
4. 3 x 3 convolution: produces the new feature maps
'''
class _DenseLayer(nn.Sequential):
    def __init__(self, num_input_features, growth_rate, bn_size, drop_rate):
        '''
        num_input_features: number of input channels; in DenseNet every layer receives the
            outputs of all previous layers as input, so this value grows with network depth
        growth_rate: the core DenseNet hyperparameter; it sets how many feature maps each
            layer contributes to the global state and also determines the bottleneck width
        bn_size: bottleneck multiplier; the 1 x 1 convolution expands to
            bn_size * growth_rate output channels
        drop_rate: dropout probability
        '''
        super(_DenseLayer, self).__init__()
        self.add_module("norm1", nn.BatchNorm2d(num_input_features))
        self.add_module("relu1", nn.ReLU(inplace=True))
        # Output channels: bn_size * growth_rate; 1 x 1 kernel, stride 1, pure feature mixing
        self.add_module("conv1", nn.Conv2d(num_input_features, bn_size * growth_rate, stride=1, kernel_size=1, bias=False))
        self.add_module("norm2", nn.BatchNorm2d(bn_size * growth_rate))
        self.add_module("relu2", nn.ReLU(inplace=True))
        # Output channels: growth_rate; spatial size unchanged (3 x 3 kernel, padding 1)
        self.add_module("conv2", nn.Conv2d(bn_size * growth_rate, growth_rate, stride=1, kernel_size=3, padding=1, bias=False))
        self.drop_rate = drop_rate

    def forward(self, x):
        new_features = super(_DenseLayer, self).forward(x)  # run the layers above
        if self.drop_rate > 0:
            # self.training is inherited from nn.Module and tracks train/eval mode
            new_features = F.dropout(new_features, p=self.drop_rate, training=self.training)
        # Feature fusion: concatenate input and new features along the channel axis
        return torch.cat([x, new_features], dim=1)  # (N, C, H, W): channels become C + growth_rate
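
A quick shape check makes the channel bookkeeping concrete. A minimal sketch with DenseNet-121-style hyperparameters (the input channel count 64 is chosen purely for illustration):

# 64 input channels, growth_rate 32, bn_size 4 -> output has 64 + 32 = 96 channels
layer = _DenseLayer(num_input_features=64, growth_rate=32, bn_size=4, drop_rate=0.0)
x = torch.randn(2, 64, 56, 56)
print(layer(x).shape)  # torch.Size([2, 96, 56, 56])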
'''
The core of DenseNet is the DenseBlock, and a DenseBlock is built from DenseLayers. As the
DenseLayer code shows, connections inside a DenseBlock are dense: each layer's input contains
not only the previous layer's output but the outputs of all earlier layers in the block.
'''
# Build the DenseBlock module
class _DenseBlock(nn.Sequential):
    # num_layers: how many DenseLayer modules the block contains
    def __init__(self, num_layers, num_input_features, bn_size, growth_rate, drop_rate):
        super(_DenseBlock, self).__init__()
        for i in range(num_layers):
            layer = _DenseLayer(num_input_features + i * growth_rate, growth_rate, bn_size, drop_rate)
            self.add_module("denselayer%d" % (i + 1), layer)
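
The same kind of check on a whole block confirms that a DenseBlock ends with num_input_features + num_layers * growth_rate channels; a small sketch under the same illustrative settings:

# A 6-layer block starting from 64 channels ends at 64 + 6 * 32 = 256 channels
block = _DenseBlock(num_layers=6, num_input_features=64, bn_size=4, growth_rate=32, drop_rate=0.0)
x = torch.randn(2, 64, 56, 56)
print(block(x).shape)  # torch.Size([2, 256, 56, 56])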
# Transition layer, used to compress dimensions between DenseBlocks
# Composition: one 1 x 1 convolution + one average-pooling layer
class _Transition(nn.Sequential):
    def __init__(self, num_init_features, num_out_features):
        super(_Transition, self).__init__()
        self.add_module("norm", nn.BatchNorm2d(num_init_features))
        self.add_module("relu", nn.ReLU(inplace=True))
        self.add_module("conv", nn.Conv2d(num_init_features, num_out_features, kernel_size=1, stride=1, bias=False))
        # Halve the spatial resolution
        self.add_module("pool", nn.AvgPool2d(2, stride=2))
# Assemble the DenseNet network
class DenseNet(nn.Module):
    def __init__(self, growth_rate=32, block_config=(6, 12, 24, 16), num_init_features=64,
                 bn_size=4, compression_rate=0.5, drop_rate=0.5, num_classes=1000):
        '''
        growth_rate, num_init_features, bn_size, drop_rate: same meaning as in _DenseLayer
        block_config: the number of layers in each Dense Block, e.g.:
            DenseNet-121: block_config=(6, 12, 24, 16), i.e. the first Dense Block has
                6 layers, the second 12, the third 24, the fourth 16
            DenseNet-169: block_config=(6, 12, 32, 32)
            DenseNet-201: block_config=(6, 12, 48, 32)
            DenseNet-264: block_config=(6, 12, 64, 48)
        compression_rate: the Transition-layer compression factor; it controls how strongly
            feature channels are compressed between consecutive Dense Blocks
        '''
        super(DenseNet, self).__init__()

        # Stem: the first convolution
        # OrderedDict keeps the layers in insertion order
        self.features = nn.Sequential(OrderedDict([
            # Output size: ((w - k + 2 * p) / s) + 1
            ("conv0", nn.Conv2d(3, num_init_features, kernel_size=7, stride=2, padding=3, bias=False)),
            ("norm0", nn.BatchNorm2d(num_init_features)),
            ("relu0", nn.ReLU(inplace=True)),
            ("pool0", nn.MaxPool2d(3, stride=2, padding=1))  # downsample
        ]))

        # Stack the DenseBlocks
        num_features = num_init_features
        for i, num_layers in enumerate(block_config):
            block = _DenseBlock(num_layers, num_features, bn_size, growth_rate, drop_rate)
            self.features.add_module("denseblock%d" % (i + 1), block)
            # A key DenseNet property: each layer adds growth_rate feature maps, and the new
            # maps are passed to all later layers in this block and to the next Dense Block.
            num_features += num_layers * growth_rate  # accumulate channels
            # Insert a Transition layer between blocks (not after the last one)
            if i != len(block_config) - 1:
                transition = _Transition(num_features, int(num_features * compression_rate))
                self.features.add_module("transition%d" % (i + 1), transition)
                num_features = int(num_features * compression_rate)  # update the channel count

        # Final BN + ReLU
        self.features.add_module("norm5", nn.BatchNorm2d(num_features))
        self.features.add_module("relu5", nn.ReLU(inplace=True))

        # Classification head
        self.classifier = nn.Linear(num_features, num_classes)

        # Parameter initialization
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                # Conv2d weights use Kaiming normal initialization, which pairs well with
                # ReLU and helps mitigate vanishing gradients in deep networks
                nn.init.kaiming_normal_(m.weight)
            elif isinstance(m, nn.BatchNorm2d):
                # BatchNorm: bias -> 0 and scale -> 1, i.e. initially an identity transform
                nn.init.constant_(m.bias, 0)
                nn.init.constant_(m.weight, 1)
            elif isinstance(m, nn.Linear):
                # Linear layers: only the bias is initialized, to 0
                nn.init.constant_(m.bias, 0)

    def forward(self, x):
        features = self.features(x)
        out = F.avg_pool2d(features, 7, stride=1).view(x.size(0), -1)
        out = self.classifier(out)
        return out

model = DenseNet(num_init_features=64, growth_rate=32, block_config=(6, 12, 12, 16))
model.to(device)
DenseNet(
  (features): Sequential(
    (conv0): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
    (norm0): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (relu0): ReLU(inplace=True)
    (pool0): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
    (denseblock1): _DenseBlock(
      (denselayer1): _DenseLayer(
        (norm1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu1): ReLU(inplace=True)
        (conv1): Conv2d(64, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu2): ReLU(inplace=True)
        (conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      )
      ... (denselayer2-denselayer6 repeat the pattern; norm1/conv1 input channels grow by 32 per layer: 96, 128, 160, 192, 224)
    )
    (transition1): _Transition(
      (norm): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (pool): AvgPool2d(kernel_size=2, stride=2, padding=0)
    )
    (denseblock2): _DenseBlock( ... 12 dense layers, input channels 128, 160, ..., 480 ... )
    (transition2): _Transition( ... Conv2d(512, 256, kernel_size=(1, 1)), AvgPool2d(2, stride=2) ... )
    (denseblock3): _DenseBlock( ... 12 dense layers, input channels 256, 288, ..., 608 ... )
    (transition3): _Transition( ... Conv2d(640, 320, kernel_size=(1, 1)), AvgPool2d(2, stride=2) ... )
    (denseblock4): _DenseBlock( ... 16 dense layers, input channels 320, 352, ..., 800 ... )
    (norm5): BatchNorm2d(832, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (relu5): ReLU(inplace=True)
  )
  (classifier): Linear(in_features=832, out_features=1000, bias=True)
)
(repr abridged: repeated _DenseLayer entries elided for readability)
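
Two things are worth noting about the instantiation above. First, block_config=(6, 12, 12, 16) is a lighter variant than the standard DenseNet-121 configuration (6, 12, 24, 16), which is why the final feature count is 832 rather than 1024. Second, the classifier keeps the default num_classes=1000 even though this is a binary task; training still works because CrossEntropyLoss only ever indexes logits 0 and 1, but passing num_classes=2 would be cleaner. A quick parameter count, using the standard PyTorch idiom:

# Count trainable parameters
num_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"Trainable parameters: {num_params:,}")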

3. Model Training

1. Training Function

def train(dataloader, model, loss_fn, optimizer):
    size = len(dataloader.dataset)
    num_batches = len(dataloader)
    train_acc, train_loss = 0, 0

    for X, y in dataloader:
        X, y = X.to(device), y.to(device)

        # Forward pass
        pred = model(X)
        loss = loss_fn(pred, y)

        # Backpropagation and gradient descent
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        # Accumulate metrics
        train_loss += loss.item()
        train_acc += (pred.argmax(1) == y).type(torch.float).sum().item()

    train_acc /= size
    train_loss /= num_batches
    return train_acc, train_loss

2. Test Function

def test(dataloader, model, loss_fn):
    size = len(dataloader.dataset)
    num_batches = len(dataloader)
    test_acc, test_loss = 0, 0

    # No gradients are needed during evaluation
    with torch.no_grad():
        for X, y in dataloader:
            X, y = X.to(device), y.to(device)
            pred = model(X)
            loss = loss_fn(pred, y)
            test_loss += loss.item()
            test_acc += (pred.argmax(1) == y).type(torch.float).sum().item()

    test_acc /= size
    test_loss /= num_batches
    return test_acc, test_loss

3. Setting Hyperparameters

loss_fn = nn.CrossEntropyLoss()   # loss function
learn_lr = 1e-4                   # learning rate
optimizer = torch.optim.Adam(model.parameters(), lr=learn_lr)   # optimizer

4. Model Training

Experimentation showed that about 20 epochs works best.

import copy

train_acc = []
train_loss = []
test_acc = []
test_loss = []

epoches = 20
best_acc = 0    # track the best test accuracy as the criterion for the best model

for i in range(epoches):
    model.train()
    epoch_train_acc, epoch_train_loss = train(train_dl, model, loss_fn, optimizer)

    model.eval()
    epoch_test_acc, epoch_test_loss = test(test_dl, model, loss_fn)

    # Keep a copy of the best model so far
    if epoch_test_acc > best_acc:
        best_acc = epoch_test_acc
        best_model = copy.deepcopy(model)

    train_acc.append(epoch_train_acc)
    train_loss.append(epoch_train_loss)
    test_acc.append(epoch_test_acc)
    test_loss.append(epoch_test_loss)

    # Current learning rate
    lr = optimizer.state_dict()['param_groups'][0]['lr']

    # Log progress
    template = ('Epoch:{:2d}, Train_acc:{:.1f}%, Train_loss:{:.3f}, Test_acc:{:.1f}%, Test_loss:{:.3f}')
    print(template.format(i + 1, epoch_train_acc*100, epoch_train_loss, epoch_test_acc*100, epoch_test_loss))

# Save the best weights
PATH = './best_model.pth'
torch.save(best_model.state_dict(), PATH)

print("Done")
Epoch: 1, Train_acc:79.3%, Train_loss:1.948, Test_acc:84.6%, Test_loss:1.079
Epoch: 2, Train_acc:85.3%, Train_loss:0.395, Test_acc:85.2%, Test_loss:0.721
Epoch: 3, Train_acc:87.3%, Train_loss:0.318, Test_acc:86.5%, Test_loss:0.526
Epoch: 4, Train_acc:89.0%, Train_loss:0.277, Test_acc:86.6%, Test_loss:0.494
Epoch: 5, Train_acc:89.0%, Train_loss:0.266, Test_acc:87.9%, Test_loss:0.400
Epoch: 6, Train_acc:89.6%, Train_loss:0.252, Test_acc:84.6%, Test_loss:0.524
Epoch: 7, Train_acc:90.3%, Train_loss:0.239, Test_acc:85.5%, Test_loss:0.445
Epoch: 8, Train_acc:90.2%, Train_loss:0.235, Test_acc:87.6%, Test_loss:0.359
Epoch: 9, Train_acc:90.0%, Train_loss:0.235, Test_acc:89.3%, Test_loss:0.298
Epoch:10, Train_acc:91.0%, Train_loss:0.220, Test_acc:89.5%, Test_loss:0.307
Epoch:11, Train_acc:90.8%, Train_loss:0.222, Test_acc:88.3%, Test_loss:0.316
Epoch:12, Train_acc:91.4%, Train_loss:0.210, Test_acc:83.3%, Test_loss:0.516
Epoch:13, Train_acc:91.5%, Train_loss:0.208, Test_acc:91.3%, Test_loss:0.247
Epoch:14, Train_acc:91.5%, Train_loss:0.206, Test_acc:90.1%, Test_loss:0.269
Epoch:15, Train_acc:92.0%, Train_loss:0.199, Test_acc:91.1%, Test_loss:0.242
Epoch:16, Train_acc:92.1%, Train_loss:0.194, Test_acc:89.4%, Test_loss:0.285
Epoch:17, Train_acc:92.4%, Train_loss:0.193, Test_acc:91.0%, Test_loss:0.229
Epoch:18, Train_acc:92.4%, Train_loss:0.188, Test_acc:88.0%, Test_loss:0.317
Epoch:19, Train_acc:92.7%, Train_loss:0.182, Test_acc:89.2%, Test_loss:0.285
Epoch:20, Train_acc:92.6%, Train_loss:0.182, Test_acc:78.5%, Test_loss:0.728
Done

5. Visualizing Results

import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings("ignore")   # suppress warnings

epochs_range = range(epoches)

plt.figure(figsize=(12, 3))

plt.subplot(1, 2, 1)
plt.plot(epochs_range, train_acc, label='Training Accuracy')
plt.plot(epochs_range, test_acc, label='Test Accuracy')
plt.legend(loc='lower right')
plt.title('Training Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, train_loss, label='Training Loss')
plt.plot(epochs_range, test_loss, label='Test Loss')
plt.legend(loc='upper right')
plt.title('Training Loss')

plt.show()

[Figure: training/test accuracy (left) and loss (right) curves over the 20 epochs]

The test accuracy swings sharply at epoch 20; judging from several runs, this one was a fluke. The test loss otherwise stays stable around 0.3, and the test accuracy hovers around 0.8 to 0.90.
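
If the late-epoch swings are a concern, one common stabilizer (not used in this post) is to decay the learning rate once the test loss plateaus; a hedged sketch:

# Optional: halve the LR when the test loss stops improving for 3 epochs
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min',
                                                       factor=0.5, patience=3)
# then call scheduler.step(epoch_test_loss) once per epoch inside the training loop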

6. Model Evaluation

# Load the saved weights into the model
best_model.load_state_dict(torch.load(PATH, map_location=device))
epoch_test_acc, epoch_test_loss = test(test_dl, best_model, loss_fn)

print(epoch_test_acc, epoch_test_loss)
0.9134651249533756 0.24670581874393283
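
As a usage sketch, here is how the restored model could classify a single image; the file path './data/1/example.png' is a placeholder, not a file from the post:

# Minimal single-image inference sketch
img = Image.open('./data/1/example.png').convert('RGB')   # placeholder path
x = data_tranforms(img).unsqueeze(0).to(device)           # shape (1, 3, 224, 224)

best_model.eval()
with torch.no_grad():
    pred = best_model(x)
print("predicted class:", classnames[pred.argmax(1).item()])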
