深度學習項目--基于DenseNet網絡的“乳腺癌圖像識別”,準確率090%+,pytorch復現

  • 🍨 本文為🔗365天深度學習訓練營 中的學習記錄博客
  • 🍖 原作者:K同學啊

前言

  • 如果說最經典的神經網絡,ResNet肯定是一個,從ResNet發布后,很多人做了修改,denseNet網絡無疑是最成功的一個,它采用密集型連接,將通道數連接在一起
  • 本文是基于上一篇復現DenseNet121模型,做一個乳腺癌圖像識別,效果還行,準確率0.9+;
  • CNN經典網絡之“DenseNet”簡介,源碼研究與復現(pytorch): https://blog.csdn.net/weixin_74085818/article/details/146102290?spm=1001.2014.3001.5501
  • 歡迎收藏 + 關注,本人將會持續更新

文章目錄

    • 1、導入數據
      • 1、導入庫
      • 2、查看數據信息和導入數據
      • 3、展示數據
      • 4、數據導入
      • 5、數據劃分
      • 6、動態加載數據
    • 2、構建DenseNet121網絡
    • 3、模型訓練
      • 1、構建訓練集
      • 2、構建測試集
      • 3、設置超參數
    • 4、模型訓練
    • 5、結果可視化
    • 6、模型評估

1、導入數據

1、導入庫

import torch  
import torch.nn as nn
import torchvision 
import numpy as np 
import os, PIL, pathlib 
from collections import OrderedDict
import re
from torch.hub import load_state_dict_from_url# 設置設備
device = "cuda" if torch.cuda.is_available() else "cpu"device 
'cuda'

2、查看數據信息和導入數據

data_dir = "./data/"data_dir = pathlib.Path(data_dir)# 類別數量
classnames = [str(path).split("\\")[0] for path in os.listdir(data_dir)]classnames
['0', '1']

3、展示數據

import matplotlib.pylab as plt  
from PIL import Image # 獲取文件名稱
data_path_name = "./data/0/"  # 不患病的
data_path_list = [f for f in os.listdir(data_path_name) if f.endswith(('jpg', 'png'))]# 創建畫板
fig, axes = plt.subplots(2, 8, figsize=(16, 6))for ax, img_file in zip(axes.flat, data_path_list):path_name = os.path.join(data_path_name, img_file)img = Image.open(path_name) # 打開# 顯示ax.imshow(img)ax.axis('off')plt.show()

?
在這里插入圖片描述

?

4、數據導入

from torchvision import transforms, datasets # 數據統一格式
img_height = 224
img_width = 224 data_tranforms = transforms.Compose([transforms.Resize([img_height, img_width]),transforms.ToTensor(),transforms.Normalize(   # 歸一化mean=[0.485, 0.456, 0.406],std=[0.229, 0.224, 0.225] )
])# 加載所有數據
total_data = datasets.ImageFolder(root="./data/", transform=data_tranforms)

5、數據劃分

# 大小 8 : 2
train_size = int(len(total_data) * 0.8)
test_size = len(total_data) - train_size train_data, test_data = torch.utils.data.random_split(total_data, [train_size, test_size])

6、動態加載數據

batch_size = 64train_dl = torch.utils.data.DataLoader(train_data,batch_size=batch_size,shuffle=True
)test_dl = torch.utils.data.DataLoader(test_data,batch_size=batch_size,shuffle=False
)
# 查看數據維度
for data, labels in train_dl:print("data shape[N, C, H, W]: ", data.shape)print("labels: ", labels)break
data shape[N, C, H, W]:  torch.Size([64, 3, 224, 224])
labels:  tensor([1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1,1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0,1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1])

2、構建DenseNet121網絡

import torch.nn.functional as F# 實現DenseBlock中的部件:DenseLayer
'''  
1、BN + ReLU: 處理部分,首先進行歸一化,然后在用激活函數ReLU
2、Bottlenck Layer:稱為瓶頸層,這個層在yolov5中常用,但是yolov5中主要用于特征提取+維度降維,這里采用1 * 1卷積核 + 3 * 3的卷積核進行卷積操作,目的:減少輸入輸入特征維度
3、BN + ReLU:對 瓶頸層 數據進行歸一化,ReLU激活函數,歸一化可以確保梯度下降的時候較為平穩
4、3 * 3 生成新的特征圖
'''
class _DenseLayer(nn.Sequential):def __init__(self, num_input_features, growth_rate, bn_size, drop_rate):'''  num_input_features: 輸入特征數,也就是通道數,在DenseNet中,每一層都會接受之前層的輸出作為輸入,故,這個數值通常會隨著網絡深度增加而增加growth_rate: 增長率,這個是 DenseNet的核心概念,決定了每一層為全局狀態貢獻的特征數量,他的用處主要在于決定了中間瓶頸層的輸出通道,需要結合代碼去研究bn_size: 瓶頸層中輸出通道大小,含義:在使用1 * 1卷積核去提取特征數時,目標通道需要擴展到growth_rate的多少倍倍數, bn_size * growth_rate(輸出維度)drop_rate: 使用Dropout的參數'''super(_DenseLayer, self).__init__()self.add_module("norm1", nn.BatchNorm2d(num_input_features))self.add_module("relu1", nn.ReLU(inplace=True))# 輸出維度: bn_size * growth_rate, 1 * 1卷積核,步伐為1,只起到特征提取作用self.add_module("conv1", nn.Conv2d(num_input_features, bn_size * growth_rate, stride=1, kernel_size=1, bias=False))self.add_module("norm2", nn.BatchNorm2d(bn_size * growth_rate))self.add_module("relu2", nn.ReLU(inplace=True))# 輸出通道:growth_rate, 維度計算:不變self.add_module("conv2", nn.Conv2d(bn_size * growth_rate, growth_rate, stride=1, kernel_size=3, padding=1, bias=False))self.drop_rate = drop_ratedef forward(self, x):new_features = super(_DenseLayer, self).forward(x)  # 傳播if self.drop_rate > 0:new_features = F.dropout(new_features, p=self.drop_rate, training=self.training)  # self.training 繼承nn.Sequential,是否訓練模式# 模型融合,即,特征通道融合,形成新的特征圖return torch.cat([x, new_features], dim=1)  # (N, C, H, W)  # 即 C1 + C2,通道上融合'''  
DenseNet網絡核心由DenseBlock模塊組成,DenseBlock網絡由DenseLayer組成,從 DenseLayer 可以看出,DenseBlock是密集連接,每一層的輸入不僅包含前一層的輸出,還包含網絡中所有之前層的輸出
'''
# 構建DenseBlock模塊, 通過上圖
class _DenseBlock(nn.Sequential):# num_layers 幾層DenseLayer模塊def __init__(self, num_layers, num_input_features, bn_size, growth_rate, drop_rate):super(_DenseBlock, self).__init__()for i in range(num_layers):layer = _DenseLayer(num_input_features + i * growth_rate, growth_rate, bn_size, drop_rate)self.add_module("denselayer%d" % (i + 1), layer)# Transition層,用于維度壓縮# 組成:一個卷積層 + 一個池化層
class _Transition(nn.Sequential):def __init__(self, num_init_features, num_out_features):super(_Transition, self).__init__()self.add_module("norm", nn.BatchNorm2d(num_init_features))self.add_module("relu", nn.ReLU(inplace=True))self.add_module("conv", nn.Conv2d(num_init_features, num_out_features, kernel_size=1, stride=1, bias=False))# 降維self.add_module("pool", nn.AvgPool2d(2, stride=2))# 搭建DenseNet網絡
class DenseNet(nn.Module):def __init__(self, growth_rate=32, block_config=(6, 12, 24, 16), num_init_features=64, bn_size=4, compression_rate=0.5, drop_rate=0.5, num_classes=1000):'''  growth_rate、num_init_features、num_init_features、drop_rate 和denselayer一樣block_config : 參數在 DenseNet 架構中用于指定每個 Dense Block 中包含的層數, 如:DenseNet-121: block_config=(6, 12, 24, 16) 表示第一個 Dense Block 包含 6 層,第二個包含 12 層,第三個包含 24 層,第四個包含 16 層。DenseNet-169: block_config=(6, 12, 32, 32)DenseNet-201: block_config=(6, 12, 48, 32)DenseNet-264: block_config=(6, 12, 64, 48)compression_rate: 壓縮維度, DenseNet 中用于 Transition Layer(過渡層)的一個重要參數,它控制了從一個 Dense Block 到下一個 Dense Block 之間特征維度的壓縮程度'''super(DenseNet, self).__init__()# 第一層卷積# OrderedDict,讓模型層有序排列self.features = nn.Sequential(OrderedDict([# 輸出維度:((w - k + 2 * p) / s) + 1("conv0", nn.Conv2d(3, num_init_features, kernel_size=7, stride=2, padding=3, bias=False)),("norm0", nn.BatchNorm2d(num_init_features)),("relu0", nn.ReLU(inplace=True)),("pool0", nn.MaxPool2d(3, stride=2, padding=1))  # 降維]))# 搭建DenseBlock層num_features = num_init_features# num_layers: 層數for i, num_layers in enumerate(block_config):block = _DenseBlock(num_layers, num_features, bn_size, growth_rate, drop_rate)# nn.Module 中features封裝了nn.Sequentialself.features.add_module("denseblock%d" % (i + 1), block)'''  # 這個計算反映了 DenseNet 中的一個關鍵特性:每一層輸出的特征圖(即新增加的通道數)由 growth_rate 決定,# 并且這些新生成的特征圖會被傳遞給該 Dense Block 中的所有后續層以及下一個 Dense Block。'''num_features += num_layers * growth_rate  # 疊加,每一次疊加# 判斷是否需要使用Transition層if i != len(block_config) - 1:transition = _Transition(num_features, int(num_features*compression_rate)) # compression_rate 作用self.features.add_module("transition%d" % (i + 1), transition)num_features = int(num_features*compression_rate)  # 更新維度# 最后一層self.features.add_module("norm5", nn.BatchNorm2d(num_features))self.features.add_module("relu5", nn.ReLU(inplace=True))# 分類層         self.classifier = nn.Linear(num_features, num_classes)# params initialization         for m in self.modules():             if isinstance(m, nn.Conv2d):         '''如果當前模塊是一個二維卷積層 (nn.Conv2d),那么它的權重 (m.weight) 將通過 Kaiming 正態分布 (kaiming_normal_) 進行初始化。這種初始化方式特別適合與ReLU激活函數一起使用,有助于緩解深度網絡中的梯度消失問題,促進有效的訓練。  '''       nn.init.kaiming_normal_(m.weight)             elif isinstance(m, nn.BatchNorm2d):      '''  對于二維批歸一化層 (nn.BatchNorm2d),偏置項 (m.bias) 被初始化為0,而尺度因子 (m.weight) 被初始化為1。這意味著在沒有數據經過的情況下,批歸一化層不會對輸入進行額外的縮放或偏移,保持輸入不變。'''           nn.init.constant_(m.bias, 0)                 nn.init.constant_(m.weight, 1)             elif isinstance(m, nn.Linear):        '''  對于全連接層 (nn.Linear),只對其偏置項 (m.bias) 進行了初始化,設置為0'''         nn.init.constant_(m.bias, 0)def forward(self, x):features = self.features(x)out = F.avg_pool2d(features, 7, stride=1).view(x.size(0), -1)out = self.classifier(out)return outmodel = DenseNet(num_init_features=64, growth_rate=32, block_config=(6, 12, 12, 16))model.to(device)
DenseNet((features): Sequential((conv0): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)(norm0): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu0): ReLU(inplace=True)(pool0): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)(denseblock1): _DenseBlock((denselayer1): _DenseLayer((norm1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu1): ReLU(inplace=True)(conv1): Conv2d(64, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu2): ReLU(inplace=True)(conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(denselayer2): _DenseLayer((norm1): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu1): ReLU(inplace=True)(conv1): Conv2d(96, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu2): ReLU(inplace=True)(conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(denselayer3): _DenseLayer((norm1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu1): ReLU(inplace=True)(conv1): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu2): ReLU(inplace=True)(conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(denselayer4): _DenseLayer((norm1): BatchNorm2d(160, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu1): ReLU(inplace=True)(conv1): Conv2d(160, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu2): ReLU(inplace=True)(conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(denselayer5): _DenseLayer((norm1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu1): ReLU(inplace=True)(conv1): Conv2d(192, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu2): ReLU(inplace=True)(conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(denselayer6): _DenseLayer((norm1): BatchNorm2d(224, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu1): ReLU(inplace=True)(conv1): Conv2d(224, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu2): ReLU(inplace=True)(conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)))(transition1): _Transition((norm): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu): ReLU(inplace=True)(conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)(pool): AvgPool2d(kernel_size=2, stride=2, padding=0))(denseblock2): _DenseBlock((denselayer1): _DenseLayer((norm1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu1): ReLU(inplace=True)(conv1): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu2): ReLU(inplace=True)(conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(denselayer2): _DenseLayer((norm1): BatchNorm2d(160, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu1): ReLU(inplace=True)(conv1): Conv2d(160, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu2): ReLU(inplace=True)(conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(denselayer3): _DenseLayer((norm1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu1): ReLU(inplace=True)(conv1): Conv2d(192, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu2): ReLU(inplace=True)(conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(denselayer4): _DenseLayer((norm1): BatchNorm2d(224, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu1): ReLU(inplace=True)(conv1): Conv2d(224, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu2): ReLU(inplace=True)(conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(denselayer5): _DenseLayer((norm1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu1): ReLU(inplace=True)(conv1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu2): ReLU(inplace=True)(conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(denselayer6): _DenseLayer((norm1): BatchNorm2d(288, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu1): ReLU(inplace=True)(conv1): Conv2d(288, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu2): ReLU(inplace=True)(conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(denselayer7): _DenseLayer((norm1): BatchNorm2d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu1): ReLU(inplace=True)(conv1): Conv2d(320, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu2): ReLU(inplace=True)(conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(denselayer8): _DenseLayer((norm1): BatchNorm2d(352, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu1): ReLU(inplace=True)(conv1): Conv2d(352, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu2): ReLU(inplace=True)(conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(denselayer9): _DenseLayer((norm1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu1): ReLU(inplace=True)(conv1): Conv2d(384, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu2): ReLU(inplace=True)(conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(denselayer10): _DenseLayer((norm1): BatchNorm2d(416, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu1): ReLU(inplace=True)(conv1): Conv2d(416, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu2): ReLU(inplace=True)(conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(denselayer11): _DenseLayer((norm1): BatchNorm2d(448, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu1): ReLU(inplace=True)(conv1): Conv2d(448, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu2): ReLU(inplace=True)(conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(denselayer12): _DenseLayer((norm1): BatchNorm2d(480, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu1): ReLU(inplace=True)(conv1): Conv2d(480, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu2): ReLU(inplace=True)(conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)))(transition2): _Transition((norm): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu): ReLU(inplace=True)(conv): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)(pool): AvgPool2d(kernel_size=2, stride=2, padding=0))(denseblock3): _DenseBlock((denselayer1): _DenseLayer((norm1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu1): ReLU(inplace=True)(conv1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu2): ReLU(inplace=True)(conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(denselayer2): _DenseLayer((norm1): BatchNorm2d(288, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu1): ReLU(inplace=True)(conv1): Conv2d(288, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu2): ReLU(inplace=True)(conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(denselayer3): _DenseLayer((norm1): BatchNorm2d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu1): ReLU(inplace=True)(conv1): Conv2d(320, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu2): ReLU(inplace=True)(conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(denselayer4): _DenseLayer((norm1): BatchNorm2d(352, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu1): ReLU(inplace=True)(conv1): Conv2d(352, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu2): ReLU(inplace=True)(conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(denselayer5): _DenseLayer((norm1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu1): ReLU(inplace=True)(conv1): Conv2d(384, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu2): ReLU(inplace=True)(conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(denselayer6): _DenseLayer((norm1): BatchNorm2d(416, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu1): ReLU(inplace=True)(conv1): Conv2d(416, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu2): ReLU(inplace=True)(conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(denselayer7): _DenseLayer((norm1): BatchNorm2d(448, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu1): ReLU(inplace=True)(conv1): Conv2d(448, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu2): ReLU(inplace=True)(conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(denselayer8): _DenseLayer((norm1): BatchNorm2d(480, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu1): ReLU(inplace=True)(conv1): Conv2d(480, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu2): ReLU(inplace=True)(conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(denselayer9): _DenseLayer((norm1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu1): ReLU(inplace=True)(conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu2): ReLU(inplace=True)(conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(denselayer10): _DenseLayer((norm1): BatchNorm2d(544, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu1): ReLU(inplace=True)(conv1): Conv2d(544, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu2): ReLU(inplace=True)(conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(denselayer11): _DenseLayer((norm1): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu1): ReLU(inplace=True)(conv1): Conv2d(576, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu2): ReLU(inplace=True)(conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(denselayer12): _DenseLayer((norm1): BatchNorm2d(608, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu1): ReLU(inplace=True)(conv1): Conv2d(608, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu2): ReLU(inplace=True)(conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)))(transition3): _Transition((norm): BatchNorm2d(640, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu): ReLU(inplace=True)(conv): Conv2d(640, 320, kernel_size=(1, 1), stride=(1, 1), bias=False)(pool): AvgPool2d(kernel_size=2, stride=2, padding=0))(denseblock4): _DenseBlock((denselayer1): _DenseLayer((norm1): BatchNorm2d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu1): ReLU(inplace=True)(conv1): Conv2d(320, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu2): ReLU(inplace=True)(conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(denselayer2): _DenseLayer((norm1): BatchNorm2d(352, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu1): ReLU(inplace=True)(conv1): Conv2d(352, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu2): ReLU(inplace=True)(conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(denselayer3): _DenseLayer((norm1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu1): ReLU(inplace=True)(conv1): Conv2d(384, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu2): ReLU(inplace=True)(conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(denselayer4): _DenseLayer((norm1): BatchNorm2d(416, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu1): ReLU(inplace=True)(conv1): Conv2d(416, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu2): ReLU(inplace=True)(conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(denselayer5): _DenseLayer((norm1): BatchNorm2d(448, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu1): ReLU(inplace=True)(conv1): Conv2d(448, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu2): ReLU(inplace=True)(conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(denselayer6): _DenseLayer((norm1): BatchNorm2d(480, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu1): ReLU(inplace=True)(conv1): Conv2d(480, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu2): ReLU(inplace=True)(conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(denselayer7): _DenseLayer((norm1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu1): ReLU(inplace=True)(conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu2): ReLU(inplace=True)(conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(denselayer8): _DenseLayer((norm1): BatchNorm2d(544, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu1): ReLU(inplace=True)(conv1): Conv2d(544, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu2): ReLU(inplace=True)(conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(denselayer9): _DenseLayer((norm1): BatchNorm2d(576, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu1): ReLU(inplace=True)(conv1): Conv2d(576, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu2): ReLU(inplace=True)(conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(denselayer10): _DenseLayer((norm1): BatchNorm2d(608, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu1): ReLU(inplace=True)(conv1): Conv2d(608, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu2): ReLU(inplace=True)(conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(denselayer11): _DenseLayer((norm1): BatchNorm2d(640, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu1): ReLU(inplace=True)(conv1): Conv2d(640, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu2): ReLU(inplace=True)(conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(denselayer12): _DenseLayer((norm1): BatchNorm2d(672, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu1): ReLU(inplace=True)(conv1): Conv2d(672, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu2): ReLU(inplace=True)(conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(denselayer13): _DenseLayer((norm1): BatchNorm2d(704, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu1): ReLU(inplace=True)(conv1): Conv2d(704, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu2): ReLU(inplace=True)(conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(denselayer14): _DenseLayer((norm1): BatchNorm2d(736, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu1): ReLU(inplace=True)(conv1): Conv2d(736, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu2): ReLU(inplace=True)(conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(denselayer15): _DenseLayer((norm1): BatchNorm2d(768, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu1): ReLU(inplace=True)(conv1): Conv2d(768, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu2): ReLU(inplace=True)(conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False))(denselayer16): _DenseLayer((norm1): BatchNorm2d(800, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu1): ReLU(inplace=True)(conv1): Conv2d(800, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)(norm2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu2): ReLU(inplace=True)(conv2): Conv2d(128, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)))(norm5): BatchNorm2d(832, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)(relu5): ReLU(inplace=True))(classifier): Linear(in_features=832, out_features=1000, bias=True)
)

3、模型訓練

1、構建訓練集

def train(dataloader, model, loss_fn, optimizer):size = len(dataloader.dataset)batch_size = len(dataloader)train_acc, train_loss = 0, 0 for X, y in dataloader:X, y = X.to(device), y.to(device)# 訓練pred = model(X)loss = loss_fn(pred, y)# 梯度下降法optimizer.zero_grad()loss.backward()optimizer.step()# 記錄train_loss += loss.item()train_acc += (pred.argmax(1) == y).type(torch.float).sum().item()train_acc /= sizetrain_loss /= batch_sizereturn train_acc, train_loss

2、構建測試集

def test(dataloader, model, loss_fn):size = len(dataloader.dataset)batch_size = len(dataloader)test_acc, test_loss = 0, 0 with torch.no_grad():for X, y in dataloader:X, y = X.to(device), y.to(device)pred = model(X)loss = loss_fn(pred, y)test_loss += loss.item()test_acc += (pred.argmax(1) == y).type(torch.float).sum().item()test_acc /= sizetest_loss /= batch_sizereturn test_acc, test_loss

3、設置超參數

loss_fn = nn.CrossEntropyLoss()  # 損失函數     
learn_lr = 1e-4             # 超參數
optimizer = torch.optim.Adam(model.parameters(), lr=learn_lr)   # 優化器

4、模型訓練

通過實驗發現,還是設置20輪次附件最好

import copytrain_acc = []
train_loss = []
test_acc = []
test_loss = []epoches = 20best_acc = 0    # 設置一個最佳準確率,作為最佳模型的判別指標for i in range(epoches):model.train()epoch_train_acc, epoch_train_loss = train(train_dl, model, loss_fn, optimizer)model.eval()epoch_test_acc, epoch_test_loss = test(test_dl, model, loss_fn)# 保存最佳模型到 best_model     if epoch_test_acc > best_acc:         best_acc   = epoch_test_acc         best_model = copy.deepcopy(model)train_acc.append(epoch_train_acc)train_loss.append(epoch_train_loss)test_acc.append(epoch_test_acc)test_loss.append(epoch_test_loss)# 獲取當前的學習率     lr = optimizer.state_dict()['param_groups'][0]['lr']# 輸出template = ('Epoch:{:2d}, Train_acc:{:.1f}%, Train_loss:{:.3f}, Test_acc:{:.1f}%, Test_loss:{:.3f}')print(template.format(i + 1, epoch_train_acc*100, epoch_train_loss, epoch_test_acc*100, epoch_test_loss))PATH = './best_model.pth'  # 保存的參數文件名 
torch.save(best_model.state_dict(), PATH)print("Done")
Epoch: 1, Train_acc:79.3%, Train_loss:1.948, Test_acc:84.6%, Test_loss:1.079
Epoch: 2, Train_acc:85.3%, Train_loss:0.395, Test_acc:85.2%, Test_loss:0.721
Epoch: 3, Train_acc:87.3%, Train_loss:0.318, Test_acc:86.5%, Test_loss:0.526
Epoch: 4, Train_acc:89.0%, Train_loss:0.277, Test_acc:86.6%, Test_loss:0.494
Epoch: 5, Train_acc:89.0%, Train_loss:0.266, Test_acc:87.9%, Test_loss:0.400
Epoch: 6, Train_acc:89.6%, Train_loss:0.252, Test_acc:84.6%, Test_loss:0.524
Epoch: 7, Train_acc:90.3%, Train_loss:0.239, Test_acc:85.5%, Test_loss:0.445
Epoch: 8, Train_acc:90.2%, Train_loss:0.235, Test_acc:87.6%, Test_loss:0.359
Epoch: 9, Train_acc:90.0%, Train_loss:0.235, Test_acc:89.3%, Test_loss:0.298
Epoch:10, Train_acc:91.0%, Train_loss:0.220, Test_acc:89.5%, Test_loss:0.307
Epoch:11, Train_acc:90.8%, Train_loss:0.222, Test_acc:88.3%, Test_loss:0.316
Epoch:12, Train_acc:91.4%, Train_loss:0.210, Test_acc:83.3%, Test_loss:0.516
Epoch:13, Train_acc:91.5%, Train_loss:0.208, Test_acc:91.3%, Test_loss:0.247
Epoch:14, Train_acc:91.5%, Train_loss:0.206, Test_acc:90.1%, Test_loss:0.269
Epoch:15, Train_acc:92.0%, Train_loss:0.199, Test_acc:91.1%, Test_loss:0.242
Epoch:16, Train_acc:92.1%, Train_loss:0.194, Test_acc:89.4%, Test_loss:0.285
Epoch:17, Train_acc:92.4%, Train_loss:0.193, Test_acc:91.0%, Test_loss:0.229
Epoch:18, Train_acc:92.4%, Train_loss:0.188, Test_acc:88.0%, Test_loss:0.317
Epoch:19, Train_acc:92.7%, Train_loss:0.182, Test_acc:89.2%, Test_loss:0.285
Epoch:20, Train_acc:92.6%, Train_loss:0.182, Test_acc:78.5%, Test_loss:0.728
Done

5、結果可視化

import matplotlib.pyplot as plt
#隱藏警告
import warnings
warnings.filterwarnings("ignore")               #忽略警告信息epochs_range = range(epoches)plt.figure(figsize=(12, 3))
plt.subplot(1, 2, 1)plt.plot(epochs_range, train_acc, label='Training Accuracy')
plt.plot(epochs_range, test_acc, label='Test Accuracy')
plt.legend(loc='lower right')
plt.title('Training Accuracy')plt.subplot(1, 2, 2)
plt.plot(epochs_range, train_loss, label='Training Loss')
plt.plot(epochs_range, test_loss, label='Test Loss')
plt.legend(loc='upper right')
plt.title('Training= Loss')
plt.show()

?
在這里插入圖片描述

?

在20輪測試集準確率變化比較大,從跑的幾次實驗來看,這次是偶然事件,測試集損失率后面一直穩定在0.3附件,測試準確率一直在0.8、0.89、0.90附件徘徊

6、模型評估

# 將參數加載到model當中 
best_model.load_state_dict(torch.load(PATH, map_location=device)) 
epoch_test_acc, epoch_test_loss = test(test_dl, best_model, loss_fn)print(epoch_test_acc, epoch_test_loss)
0.9134651249533756 0.24670581874393283

本文來自互聯網用戶投稿,該文觀點僅代表作者本人,不代表本站立場。本站僅提供信息存儲空間服務,不擁有所有權,不承擔相關法律責任。
如若轉載,請注明出處:http://www.pswp.cn/bicheng/73237.shtml
繁體地址,請注明出處:http://hk.pswp.cn/bicheng/73237.shtml
英文地址,請注明出處:http://en.pswp.cn/bicheng/73237.shtml

如若內容造成侵權/違法違規/事實不符,請聯系多彩編程網進行投訴反饋email:809451989@qq.com,一經查實,立即刪除!

相關文章

優化用戶體驗:關鍵 Web 性能指標的獲取、分析、優化方法

前言 在當今互聯網高速發展的時代用戶對于網頁的加載速度和響應時間越來越敏感。一個性能表現不佳的網頁不僅會影響用戶體驗,還可能導致用戶流失。 因此,了解和優化網頁性能指標是每個開發者的必修課。今天我們就來聊聊常見的網頁性能指標以及如何獲取這…

vs code配置 c/C++

1、下載VSCode Visual Studio Code - Code Editing. Redefined 安裝目錄可改 勾選創建桌面快捷方式 安裝即可 2、漢化VSCode 點擊確定 下載MinGW 由于vsCode 只是一個編輯器,他沒有自帶編譯器,所以需要下載一個編譯器"MinGW". https://…

Kotlin關鍵字`when`的詳細用法

Kotlin關鍵字when的詳細用法 在Kotlin中,when是一個強大的控制流語句,相當于其他語言中的switch語句,但更加強大且靈活。本文將詳細講解when的用法及其常見場景,并與Java的switch語句進行對比。 一、基本語法 基本的when語法如…

MFCday01、模式對話框

對話框類和應用程序類。 MFC中 Combo Box List Box List Control三種列表控件,日期控件Date Time Picker

接口測試筆記

4、接口測試自動化 接口自動化概述 HttpClient HttpClient開發過程 創建Java工程 新建libs庫目錄 HttpClient 工具下載及引入 https://hc.apache.org/index.html工程中引入jar包 Get請求 HttpGet方法---發起Get請求 創建HttpClient對象 CloseableHttpClient httpclient …

查找sql中涉及的表名稱

import pandas as pd import datetime todaystr(datetime.date.today())filepath/Users/kangyongqing/Documents/kangyq/202303/分析模版/sql表引用提取/ file101試聽課明細.txt newfilefile1.title().split(.)[0]with open(filepathfile1,r) as file:contentfile.read().lower…

如何在Ubuntu上構建編譯LLVM和ISPC,以及Ubuntu上ISPC的使用方法

之前一直在 Mac 上使用 ISPC,奈何核心/線程太少了。最近想在 Ubuntu 上搞搞,但是 snap 安裝的 ISPC不知道為什么只能單核,很奇怪,就想著編譯一下,需要 Clang 和 LLVM。但是 Ubuntu 很搞,他的很多軟件版本是…

【Spring IOC/AOP】

IOC 參考: Spring基礎 - Spring核心之控制反轉(IOC) | Java 全棧知識體系 (pdai.tech) 概述: Ioc 即 Inverse of Control (控制反轉),是一種設計思想,就是將原本在程序中手動創建對象的控制權&#xff…

電感與電容的具體應用

文章目錄 一、電感應用1.?電源濾波:2. 儲能——平滑“電流波浪”? ?3. 調諧——校準“頻率樂器”?4. 限流——防止“洪水災害”?二、電容應用1.核心特性理解2.應用場景 三.電容電感對比 一、電感應用 1.?電源濾波: ?場景:工業設備中…

前端面試:axios 請求的底層依賴是什么?

在前端開發中,Axios 是一個流行的 JavaScript 庫,用于發送 HTTP 請求。它簡化了與 RESTful APIs 的交互,并提供了許多便利的方法與配置選項。要理解 Axios 的底層依賴,需要從以下幾個方面進行分析: 1. Axios 基于 XML…

springboot 3 集成Redisson

maven 依賴 <parent><groupId>org.springframework.boot</groupId><artifactId>spring-boot-starter-parent</artifactId><version>3.2.12</version></parent><dependencies><dependency><groupId>org.red…

C#中繼承的核心定義?

1. 繼承的核心定義? ?繼承? 是面向對象編程&#xff08;OOP&#xff09;的核心特性之一&#xff0c;允許一個類&#xff08;稱為?子類/派生類?&#xff09;基于另一個類&#xff08;稱為?父類/基類?&#xff09;構建&#xff0c;自動獲得父類的成員&#xff08;字段、屬…

Deep research深度研究:ChatGPT/ Gemini/ Perplexity/ Grok哪家最強?(實測對比分析)

目前推出深度研究和深度檢索的AI大模型有四家&#xff1a; OpenAI和Gemini 的deep research&#xff0c;以及Perplexity 和Grok的deep search&#xff0c;都能生成帶參考文獻引用的主題報告。 致力于“幾分鐘之內生成一份完整的主題調研報告&#xff0c;解決人力幾小時甚至幾天…

Android SharedPreference 詳解

前提&#xff1a;基于 Android API 30 1. 認識 SharedPreference SharedPreference 是 Android 提供的輕量級的&#xff0c;線程安全的數據存儲機制&#xff0c;使用 key-value 鍵值對的方式將數據存儲在 xml 文件中&#xff0c;存儲路徑為 /data/data/yourPackageName/share…

自動化測試腳本語言選擇

測試人員在選擇自動化測試腳本語言時面臨多種選項。Python、Java、C#、JavaScript 和 Ruby 都是常見選擇&#xff0c;但哪種語言最適合&#xff1f;本文將詳細分析這些語言的特點、適用場景和優劣勢&#xff0c;結合行業趨勢和社會現象&#xff0c;為測試人員提供全面指導。 選…

【Java項目】基于JSP的KTV點歌系統

【Java項目】基于JSP的KTV點歌系統 技術簡介&#xff1a;采用JSP技術、B/S結構、MYSQL數據庫等實現。 系統簡介&#xff1a;KTV點歌系統的主要使用者分為管理員和用戶&#xff0c;實現功能包括管理員&#xff1a;個人中心、用戶管理、歌曲庫管理、歌曲類型管理、點歌信息管理&a…

element-plus文檔解析之Layout布局(el-row,el-col)

前言 這是element-plus提供的響應式布局組件。可以非常方便的實現響應式布局以及快速按比例分塊。 例如實現下面的效果&#xff1a; 第一行&#xff1a;寬度占100% 第二行&#xff1a;寬度1&#xff1a;1 第三行&#xff1a;1&#xff1a;1&#xff1a;1 第四行&#xff1a;1…

【Java】——數據類型和變量

個人主頁&#xff1a;User_芊芊君子 &#x1f389;歡迎大家點贊&#x1f44d;評論&#x1f4dd;收藏?文章 文章目錄&#xff1a; 1.Java中的注釋1.1.基本規則1.2.注釋規范 2.標識符3.關鍵字4.字面常量5.數據類型6.變量6.1變量的概念6.2語法6.3整型變量6.3.1整型變量6.3.2長整…

串口數據記錄儀DIY,體積小,全開源

作用 產品到客戶現場出現異常情況&#xff0c;這個時候就需要一個日志記錄儀、黑匣子&#xff0c;可以記錄產品的工作情況&#xff0c;當出現異常時&#xff0c;可以搜集到上下文的數據&#xff0c;從而判斷問題原因。 之前從網上買過&#xff0c;但是出現過丟數據的情況耽誤…

JVM中是如何定位一個對象的

在 Java 中&#xff0c;對象定位指的是如何通過引用&#xff08;Reference&#xff09;在堆內存中找到對象實例及其元數據&#xff08;如類型信息&#xff09;。JVM 主要通過 直接指針訪問 和 句柄訪問 兩種方式實現&#xff0c;各有其優缺點和應用場景&#xff1a; 一、直接指…