PyTorch Deep Learning Neural Network MNIST Handwritten Digit Recognition System Source Code (with GUI and Handwriting Canvas)

Step 1: Prepare the Data

The open-source MNIST dataset.
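torchvision can download the dataset automatically on first use (the same call appears in the training script in Step 3). A quick sketch for pulling the data into ./data and checking its size:

from torchvision import datasets
from torchvision.transforms import ToTensor

# Downloads the MNIST files into ./data on first run
train_data = datasets.MNIST(root='./data', train=True, download=True, transform=ToTensor())
test_data = datasets.MNIST(root='./data', train=False, download=True, transform=ToTensor())

print(len(train_data), len(test_data))   # 60000 10000
image, label = train_data[0]
print(image.shape, label)                # torch.Size([1, 28, 28]) 5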

Step 2: Build the Model

Here we build a LeNet-5 network.

Reference code:

import torch
from torch import nn


class Reshape(nn.Module):
    def forward(self, x):
        return x.view(-1, 1, 28, 28)


class LeNet5(nn.Module):
    def __init__(self):
        super(LeNet5, self).__init__()
        self.net = nn.Sequential(
            Reshape(),
            # CONV1, ReLU1, POOL1
            nn.Conv2d(in_channels=1, out_channels=6, kernel_size=5, padding=2),
            # nn.Conv2d(in_channels=1, out_channels=6, kernel_size=5),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2),
            # CONV2, ReLU2, POOL2
            nn.Conv2d(in_channels=6, out_channels=16, kernel_size=5),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2),
            nn.Flatten(),
            # FC1
            nn.Linear(in_features=16 * 5 * 5, out_features=120),
            nn.ReLU(),
            # FC2
            nn.Linear(in_features=120, out_features=84),
            nn.ReLU(),
            # FC3
            nn.Linear(in_features=84, out_features=10)
        )
        # Softmax layer (dim=1: normalize over the 10 class scores)
        self.softmax = nn.Softmax(dim=1)

    def forward(self, x):
        logits = self.net(x)
        # Convert logits to class probabilities
        prob = self.softmax(logits)
        return prob


if __name__ == '__main__':
    model = LeNet5()
    X = torch.rand(size=(256, 1, 28, 28), dtype=torch.float32)
    for layer in model.net:
        X = layer(X)
        print(layer.__class__.__name__, '\toutput shape: \t', X.shape)
    X = torch.rand(size=(1, 1, 28, 28), dtype=torch.float32)
    print(model(X))
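Running the shape check in the __main__ block above should print roughly the following, confirming the 16 × 5 × 5 feature map that feeds the first fully connected layer:

Reshape     output shape:    torch.Size([256, 1, 28, 28])
Conv2d      output shape:    torch.Size([256, 6, 28, 28])
ReLU        output shape:    torch.Size([256, 6, 28, 28])
MaxPool2d   output shape:    torch.Size([256, 6, 14, 14])
Conv2d      output shape:    torch.Size([256, 16, 10, 10])
ReLU        output shape:    torch.Size([256, 16, 10, 10])
MaxPool2d   output shape:    torch.Size([256, 16, 5, 5])
Flatten     output shape:    torch.Size([256, 400])
Linear      output shape:    torch.Size([256, 120])
ReLU        output shape:    torch.Size([256, 120])
Linear      output shape:    torch.Size([256, 84])
ReLU        output shape:    torch.Size([256, 84])
Linear      output shape:    torch.Size([256, 10])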

Step 3: Training Code

import torch
from torch import nn
from torchvision import datasets
from torchvision.transforms import ToTensor
from torch.utils.data import DataLoader

from model import LeNet5

# DATASET
train_data = datasets.MNIST(
    root='./data',
    train=True,          # training split
    download=True,
    transform=ToTensor()
)
test_data = datasets.MNIST(
    root='./data',
    train=False,         # test split
    download=True,
    transform=ToTensor()
)

# PREPROCESS
batch_size = 256
train_dataloader = DataLoader(dataset=train_data, batch_size=batch_size)
test_dataloader = DataLoader(dataset=test_data, batch_size=batch_size)
for X, y in train_dataloader:
    print(X.shape)      # torch.Size([256, 1, 28, 28])
    print(y.shape)      # torch.Size([256])
    break

# MODEL
device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = LeNet5().to(device)

# TRAIN MODEL
loss_func = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(params=model.parameters())


def train(dataloader, model, loss_func, optimizer, epoch):
    model.train()
    data_size = len(dataloader.dataset)
    for batch, (X, y) in enumerate(dataloader):
        X, y = X.to(device), y.to(device)
        # Forward pass, loss, backward pass, parameter update
        y_hat = model(X)
        loss = loss_func(y_hat, y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        loss, current = loss.item(), batch * len(X)
    # Report the loss of the last batch of the epoch
    print(f'EPOCH{epoch+1}\tloss: {loss:>7f}', end='\t')


# Test model
def test(dataloader, model, loss_fn):
    size = len(dataloader.dataset)
    num_batches = len(dataloader)
    model.eval()
    test_loss, correct = 0, 0
    with torch.no_grad():
        for X, y in dataloader:
            X, y = X.to(device), y.to(device)
            pred = model(X)
            test_loss += loss_fn(pred, y).item()
            correct += (pred.argmax(1) == y).type(torch.float).sum().item()
    test_loss /= num_batches
    correct /= size
    print(f'Test Error: Accuracy: {(100 * correct):>0.1f}%, Average loss: {test_loss:>8f}\n')


if __name__ == '__main__':
    epoches = 80
    for epoch in range(epoches):
        train(train_dataloader, model, loss_func, optimizer, epoch)
        test(test_dataloader, model, loss_func)
    # Save model
    torch.save(model.state_dict(), 'model.pth')
    print('Saved PyTorch LeNet5 State to model.pth')
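Once model.pth has been saved, it can be reloaded for inference. A minimal sketch, assuming model.py from Step 2 and model.pth are in the working directory:

import torch
from torchvision import datasets
from torchvision.transforms import ToTensor

from model import LeNet5

device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = LeNet5().to(device)
model.load_state_dict(torch.load('model.pth', map_location=device))
model.eval()

# Predict a single test image
test_data = datasets.MNIST(root='./data', train=False, download=True, transform=ToTensor())
x, y = test_data[0]
with torch.no_grad():
    prob = model(x.unsqueeze(0).to(device))   # shape (1, 10), already softmax-ed
print('predicted:', prob.argmax(1).item(), 'ground truth:', y)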

Step 4: Training Log

EPOCH1	loss: 1.908403	Test Error: Accuracy: 58.3%, Average loss: 1.943602
EPOCH2	loss: 1.776060	Test Error: Accuracy: 72.2%, Average loss: 1.750917
EPOCH3	loss: 1.717706	Test Error: Accuracy: 73.6%, Average loss: 1.730332
EPOCH4	loss: 1.719344	Test Error: Accuracy: 76.0%, Average loss: 1.703456
EPOCH5	loss: 1.659312	Test Error: Accuracy: 76.6%, Average loss: 1.694500
EPOCH6	loss: 1.647946	Test Error: Accuracy: 76.9%, Average loss: 1.691286
EPOCH7	loss: 1.653712	Test Error: Accuracy: 77.0%, Average loss: 1.690819
EPOCH8	loss: 1.653270	Test Error: Accuracy: 76.8%, Average loss: 1.692459
EPOCH9	loss: 1.649021	Test Error: Accuracy: 77.5%, Average loss: 1.686158
EPOCH10	loss: 1.648204	Test Error: Accuracy: 78.3%, Average loss: 1.678802
EPOCH11	loss: 1.647159	Test Error: Accuracy: 78.4%, Average loss: 1.676133
EPOCH12	loss: 1.647390	Test Error: Accuracy: 78.6%, Average loss: 1.674455
EPOCH13	loss: 1.646807	Test Error: Accuracy: 78.4%, Average loss: 1.675752
EPOCH14	loss: 1.630824	Test Error: Accuracy: 79.1%, Average loss: 1.668470
EPOCH15	loss: 1.524222	Test Error: Accuracy: 86.3%, Average loss: 1.599240
EPOCH16	loss: 1.524022	Test Error: Accuracy: 86.7%, Average loss: 1.594947
EPOCH17	loss: 1.524296	Test Error: Accuracy: 87.1%, Average loss: 1.588946
EPOCH18	loss: 1.523599	Test Error: Accuracy: 87.3%, Average loss: 1.588275
EPOCH19	loss: 1.523655	Test Error: Accuracy: 87.5%, Average loss: 1.586576
EPOCH20	loss: 1.523659	Test Error: Accuracy: 88.2%, Average loss: 1.579286
EPOCH21	loss: 1.523733	Test Error: Accuracy: 87.9%, Average loss: 1.582472
EPOCH22	loss: 1.523748	Test Error: Accuracy: 88.2%, Average loss: 1.578699
EPOCH23	loss: 1.523788	Test Error: Accuracy: 88.0%, Average loss: 1.579700
EPOCH24	loss: 1.523708	Test Error: Accuracy: 88.1%, Average loss: 1.579758
EPOCH25	loss: 1.523683	Test Error: Accuracy: 88.4%, Average loss: 1.575913
EPOCH26	loss: 1.523646	Test Error: Accuracy: 88.7%, Average loss: 1.572831
EPOCH27	loss: 1.523654	Test Error: Accuracy: 88.9%, Average loss: 1.570528
EPOCH28	loss: 1.523642	Test Error: Accuracy: 89.0%, Average loss: 1.570223
EPOCH29	loss: 1.523663	Test Error: Accuracy: 89.0%, Average loss: 1.570385
EPOCH30	loss: 1.523658	Test Error: Accuracy: 88.9%, Average loss: 1.571195
EPOCH31	loss: 1.523653	Test Error: Accuracy: 88.4%, Average loss: 1.575981
EPOCH32	loss: 1.523653	Test Error: Accuracy: 89.0%, Average loss: 1.570087
EPOCH33	loss: 1.523642	Test Error: Accuracy: 88.9%, Average loss: 1.571018
EPOCH34	loss: 1.523649	Test Error: Accuracy: 89.0%, Average loss: 1.570439
EPOCH35	loss: 1.523629	Test Error: Accuracy: 90.4%, Average loss: 1.555473
EPOCH36	loss: 1.461187	Test Error: Accuracy: 97.1%, Average loss: 1.491042
EPOCH37	loss: 1.461230	Test Error: Accuracy: 97.7%, Average loss: 1.485049
EPOCH38	loss: 1.461184	Test Error: Accuracy: 97.7%, Average loss: 1.485653
EPOCH39	loss: 1.461156	Test Error: Accuracy: 98.2%, Average loss: 1.479966
EPOCH40	loss: 1.461335	Test Error: Accuracy: 98.2%, Average loss: 1.479197
EPOCH41	loss: 1.461152	Test Error: Accuracy: 98.7%, Average loss: 1.475477
EPOCH42	loss: 1.461153	Test Error: Accuracy: 98.7%, Average loss: 1.475124
EPOCH43	loss: 1.461153	Test Error: Accuracy: 98.9%, Average loss: 1.472885
EPOCH44	loss: 1.461151	Test Error: Accuracy: 99.1%, Average loss: 1.470957
EPOCH45	loss: 1.461156	Test Error: Accuracy: 99.1%, Average loss: 1.471141
EPOCH46	loss: 1.461152	Test Error: Accuracy: 99.1%, Average loss: 1.470793
EPOCH47	loss: 1.461151	Test Error: Accuracy: 98.8%, Average loss: 1.474548
EPOCH48	loss: 1.461151	Test Error: Accuracy: 99.1%, Average loss: 1.470666
EPOCH49	loss: 1.461151	Test Error: Accuracy: 99.1%, Average loss: 1.471546
EPOCH50	loss: 1.461151	Test Error: Accuracy: 99.0%, Average loss: 1.471407
EPOCH51	loss: 1.461151	Test Error: Accuracy: 98.8%, Average loss: 1.473795
EPOCH52	loss: 1.461164	Test Error: Accuracy: 98.2%, Average loss: 1.480009
EPOCH53	loss: 1.461151	Test Error: Accuracy: 99.2%, Average loss: 1.469931
EPOCH54	loss: 1.461152	Test Error: Accuracy: 99.2%, Average loss: 1.469916
EPOCH55	loss: 1.461151	Test Error: Accuracy: 98.9%, Average loss: 1.472574
EPOCH56	loss: 1.461151	Test Error: Accuracy: 98.6%, Average loss: 1.476035
EPOCH57	loss: 1.461151	Test Error: Accuracy: 98.2%, Average loss: 1.478933
EPOCH58	loss: 1.461150	Test Error: Accuracy: 99.4%, Average loss: 1.468186
EPOCH59	loss: 1.461151	Test Error: Accuracy: 99.4%, Average loss: 1.467602
EPOCH60	loss: 1.461151	Test Error: Accuracy: 99.1%, Average loss: 1.471206
EPOCH61	loss: 1.461151	Test Error: Accuracy: 98.8%, Average loss: 1.473356
EPOCH62	loss: 1.461151	Test Error: Accuracy: 99.2%, Average loss: 1.470242
EPOCH63	loss: 1.461150	Test Error: Accuracy: 99.1%, Average loss: 1.470826
EPOCH64	loss: 1.461151	Test Error: Accuracy: 98.7%, Average loss: 1.474476
EPOCH65	loss: 1.461150	Test Error: Accuracy: 99.3%, Average loss: 1.469116
EPOCH66	loss: 1.461150	Test Error: Accuracy: 99.4%, Average loss: 1.467823
EPOCH67	loss: 1.461150	Test Error: Accuracy: 99.5%, Average loss: 1.466486
EPOCH68	loss: 1.461152	Test Error: Accuracy: 99.3%, Average loss: 1.468688
EPOCH69	loss: 1.461150	Test Error: Accuracy: 99.5%, Average loss: 1.466256
EPOCH70	loss: 1.461150	Test Error: Accuracy: 99.5%, Average loss: 1.466588
EPOCH71	loss: 1.461150	Test Error: Accuracy: 99.6%, Average loss: 1.465280
EPOCH72	loss: 1.461150	Test Error: Accuracy: 99.4%, Average loss: 1.467110
EPOCH73	loss: 1.461151	Test Error: Accuracy: 99.6%, Average loss: 1.465245
EPOCH74	loss: 1.461150	Test Error: Accuracy: 99.5%, Average loss: 1.466551
EPOCH75	loss: 1.461150	Test Error: Accuracy: 99.5%, Average loss: 1.466001
EPOCH76	loss: 1.461150	Test Error: Accuracy: 99.3%, Average loss: 1.468074
EPOCH77	loss: 1.461151	Test Error: Accuracy: 99.6%, Average loss: 1.465709
EPOCH78	loss: 1.461150	Test Error: Accuracy: 99.5%, Average loss: 1.466567
EPOCH79	loss: 1.461150	Test Error: Accuracy: 99.6%, Average loss: 1.464922
EPOCH80	loss: 1.461150	Test Error: Accuracy: 99.6%, Average loss: 1.465109
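One detail of this log is worth explaining: the loss settles at about 1.4611 instead of falling toward zero. That is expected with this model, because forward() already applies a softmax and nn.CrossEntropyLoss applies log-softmax again internally, so even a perfect one-hot probability vector yields a loss of log(1 + 9/e) ≈ 1.4611. Accuracy is unaffected, since the extra softmax preserves the argmax, but passing raw logits to CrossEntropyLoss (i.e. dropping the softmax from forward()) would let the loss approach zero and usually converges faster.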

Step 5: Build the GUI
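The GUI code ships with the project download, so it is not reproduced here. As a rough illustration of how such an interface can be wired up, below is a minimal sketch using Tkinter and Pillow (both are assumptions; the bundled GUI may use a different toolkit). It loads model.pth, lets you draw on a 280×280 canvas, downscales the drawing to 28×28, and shows the predicted digit:

import tkinter as tk

import torch
from PIL import Image, ImageDraw, ImageOps
from torchvision.transforms import ToTensor

from model import LeNet5

device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = LeNet5().to(device)
model.load_state_dict(torch.load('model.pth', map_location=device))
model.eval()


class App:
    def __init__(self, root):
        self.canvas = tk.Canvas(root, width=280, height=280, bg='white')
        self.canvas.pack()
        # Mirror the canvas into a PIL image so the pixels can be fed to the model
        self.image = Image.new('L', (280, 280), color=255)
        self.draw = ImageDraw.Draw(self.image)
        self.canvas.bind('<B1-Motion>', self.paint)
        tk.Button(root, text='Predict', command=self.predict).pack(side=tk.LEFT)
        tk.Button(root, text='Clear', command=self.clear).pack(side=tk.LEFT)
        self.label = tk.Label(root, text='draw a digit')
        self.label.pack(side=tk.LEFT)

    def paint(self, event):
        r = 8  # brush radius
        self.canvas.create_oval(event.x - r, event.y - r, event.x + r, event.y + r,
                                fill='black', outline='black')
        self.draw.ellipse([event.x - r, event.y - r, event.x + r, event.y + r], fill=0)

    def clear(self):
        self.canvas.delete('all')
        self.draw.rectangle([0, 0, 280, 280], fill=255)

    def predict(self):
        # MNIST digits are white-on-black, so invert before resizing to 28x28
        img = ImageOps.invert(self.image).resize((28, 28))
        x = ToTensor()(img).unsqueeze(0).to(device)   # shape (1, 1, 28, 28)
        with torch.no_grad():
            prob = model(x)
        self.label.config(text=f'prediction: {prob.argmax(1).item()}')


if __name__ == '__main__':
    root = tk.Tk()
    root.title('MNIST Handwriting Recognition')
    App(root)
    root.mainloop()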

Step 6: Contents of the Full Project

The package includes the training code, the trained model, the training log, the dataset, and the GUI code. For usage instructions, see the bundled document "文檔說明_必看.docx".

Code download (link opens in a new window): PyTorch Deep Learning Neural Network MNIST Handwritten Digit Recognition System Source Code (with GUI and Handwriting Canvas)


If you run into problems, feel free to send a private message or leave a comment; every question will be answered.
