- 🍨 This article is a learning log for the 🔗 365-Day Deep Learning Training Camp
- 🍖 Original author: K同學啊 | tutoring and custom projects available
I. My Environment
1. Language: Python 3.8
2. IDE: PyCharm
3. Deep learning environment (a quick version check is sketched after this list):
- torch==1.12.1+cu113
- torchvision==0.13.1+cu113
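A small sanity check, added here as a suggestion rather than part of the original post, to confirm that the installed versions and CUDA are visible from Python:

import torch
import torchvision

print(torch.__version__)          # expect 1.12.1+cu113
print(torchvision.__version__)    # expect 0.13.1+cu113
print(torch.cuda.is_available())  # True if this CUDA build can see a GPU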
II. Import the Data
import torch
import torch.nn as nn
import torchvision
from torchvision import transforms, datasets
import os, PIL, pathlib, warnings

warnings.filterwarnings("ignore")  # suppress warning messages

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

from torchtext.datasets import AG_NEWS

train_iter = AG_NEWS(split='train')  # load the AG News dataset
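For reference, every element yielded by AG_NEWS is a (label, text) tuple, where the label is an integer from 1 to 4 (World, Sports, Business, Sci/Tech). A minimal sketch to peek at one raw sample; the exact text printed depends on the downloaded data:

sample_label, sample_text = next(iter(AG_NEWS(split='train')))
print(sample_label)       # an integer in 1..4
print(sample_text[:80])   # first characters of the news snippet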
III. Build the Vocabulary
from torchtext.data.utils import get_tokenizer
from torchtext.vocab import build_vocab_from_iteratortokenizer = get_tokenizer('basic_english') # 返回分詞器函數def yield_tokens(data_iter):for _, text in data_iter:yield tokenizer(text)vocab = build_vocab_from_iterator(yield_tokens(train_iter), specials=["<unk>"])
vocab.set_default_index(vocab["<unk>"]) # 設置默認索引,如果找不到單詞,則會選擇默認索引
print(vocab(['here', 'is', 'an', 'example']))
Output: [475, 21, 30, 5297]
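Because "<unk>" is the first (and only) special token, it gets index 0, and set_default_index makes any out-of-vocabulary word map to it. A quick check, using a made-up token purely as an illustration:

print(vocab(['here', 'is', 'an', 'qwertyzzz']))  # the unknown token falls back to index 0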
text_pipeline = lambda x: vocab(tokenizer(x))
label_pipeline = lambda x: int(x) - 1
print(text_pipeline('here is the an example'))
Output: [475, 21, 2, 30, 5297]
print(label_pipeline('10'))
Output: 9
IV. Generate Data Batches and an Iterator
from torch.utils.data import DataLoader

def collate_batch(batch):
    label_list, text_list, offsets = [], [], [0]
    for (_label, _text) in batch:
        # label list
        label_list.append(label_pipeline(_label))
        # text list
        processed_text = torch.tensor(text_pipeline(_text), dtype=torch.int64)
        text_list.append(processed_text)
        # offsets, i.e. the number of tokens in each sample
        offsets.append(processed_text.size(0))
    label_list = torch.tensor(label_list, dtype=torch.int64)
    text_list = torch.cat(text_list)
    offsets = torch.tensor(offsets[:-1]).cumsum(dim=0)  # cumulative sum along dim 0, giving each sample's start position
    return label_list.to(device), text_list.to(device), offsets.to(device)

# data loader
dataloader = DataLoader(train_iter, batch_size=8,
                        shuffle=False, collate_fn=collate_batch)
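A quick way to see what collate_batch produces (a sketch; the exact sizes depend on the sampled batch): text is all token ids of the 8 samples concatenated into a single 1-D tensor, and offsets marks where each sample starts inside it.

batch_labels, batch_text, batch_offsets = next(iter(dataloader))
print(batch_labels.shape)   # torch.Size([8]) -- one label per sample
print(batch_text.shape)     # 1-D tensor holding every token in the batch
print(batch_offsets)        # 8 start positions, the first is always 0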
V. Define the Model
from torch import nn

class TextClassificationModel(nn.Module):
    def __init__(self, vocab_size, embed_dim, num_class):
        super(TextClassificationModel, self).__init__()
        self.embedding = nn.EmbeddingBag(vocab_size,   # vocabulary size
                                         embed_dim,    # embedding dimension
                                         sparse=False)
        self.fc = nn.Linear(embed_dim, num_class)
        self.init_weights()

    def init_weights(self):
        initrange = 0.5
        self.embedding.weight.data.uniform_(-initrange, initrange)
        self.fc.weight.data.uniform_(-initrange, initrange)
        self.fc.bias.data.zero_()

    def forward(self, text, offsets):
        embedded = self.embedding(text, offsets)
        return self.fc(embedded)
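As a standalone illustration (toy sizes, not the model above): nn.EmbeddingBag with its default mode='mean' averages the embeddings inside each "bag", which is why variable-length sentences can be packed into one flat token tensor plus an offsets tensor.

bag = nn.EmbeddingBag(num_embeddings=10, embedding_dim=4)
tokens  = torch.tensor([1, 2, 3, 4, 5])   # two sentences packed back to back
offsets = torch.tensor([0, 3])            # sentence 1 = tokens[0:3], sentence 2 = tokens[3:5]
print(bag(tokens, offsets).shape)         # torch.Size([2, 4]): one pooled vector per sentence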
VI. Create a Model Instance
num_class = len(set([label for (label, text) in train_iter]))
vocab_size = len(vocab)
em_size = 64
model = TextClassificationModel(vocab_size, em_size, num_class).to(device)
VII. Define the Training and Evaluation Functions
import time

def train(dataloader):
    model.train()  # switch to training mode
    total_acc, train_loss, total_count = 0, 0, 0
    log_interval = 500
    start_time = time.time()

    for idx, (label, text, offsets) in enumerate(dataloader):
        predicted_label = model(text, offsets)

        optimizer.zero_grad()                     # reset gradients
        loss = criterion(predicted_label, label)  # loss between the network output and the ground-truth label
        loss.backward()                           # backpropagation
        optimizer.step()                          # update the parameters

        # record accuracy and loss
        total_acc += (predicted_label.argmax(1) == label).sum().item()
        train_loss += loss.item()
        total_count += label.size(0)

        if idx % log_interval == 0 and idx > 0:
            elapsed = time.time() - start_time
            print('| epoch {:1d} | {:4d}/{:4d} batches '
                  '| train_acc {:4.3f} train_loss {:4.5f}'.format(epoch, idx, len(dataloader),
                                                                  total_acc/total_count, train_loss/total_count))
            total_acc, train_loss, total_count = 0, 0, 0
            start_time = time.time()

def evaluate(dataloader):
    model.eval()  # switch to evaluation mode
    total_acc, train_loss, total_count = 0, 0, 0

    with torch.no_grad():
        for idx, (label, text, offsets) in enumerate(dataloader):
            predicted_label = model(text, offsets)
            loss = criterion(predicted_label, label)  # compute the loss

            # record the evaluation metrics
            total_acc += (predicted_label.argmax(1) == label).sum().item()
            train_loss += loss.item()
            total_count += label.size(0)

    return total_acc/total_count, train_loss/total_count
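The epoch log below comes from a driver loop that is not shown in this section. What follows is a minimal sketch of what such a loop could look like, assuming CrossEntropyLoss, SGD with a StepLR scheduler, and a 95/5 train/validation split; the names EPOCHS, LR, BATCH_SIZE, valid_dataloader and test_dataloader (used again in section VIII) are introduced here for illustration and are not taken from the original post:

from torch.utils.data import random_split
from torchtext.data.functional import to_map_style_dataset

# Hypothetical hyperparameters for the driver loop
EPOCHS     = 10
LR         = 5
BATCH_SIZE = 64

criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=LR)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.1)

# Rebuild the raw splits and wrap them as map-style datasets so they can be split and shuffled
train_iter, test_iter = AG_NEWS()
train_dataset = to_map_style_dataset(train_iter)
test_dataset  = to_map_style_dataset(test_iter)
num_train = int(len(train_dataset) * 0.95)
split_train_, split_valid_ = random_split(train_dataset,
                                          [num_train, len(train_dataset) - num_train])

train_dataloader = DataLoader(split_train_, batch_size=BATCH_SIZE,
                              shuffle=True, collate_fn=collate_batch)
valid_dataloader = DataLoader(split_valid_, batch_size=BATCH_SIZE,
                              shuffle=True, collate_fn=collate_batch)
test_dataloader  = DataLoader(test_dataset, batch_size=BATCH_SIZE,
                              shuffle=True, collate_fn=collate_batch)

for epoch in range(1, EPOCHS + 1):
    epoch_start_time = time.time()
    train(train_dataloader)
    val_acc, val_loss = evaluate(valid_dataloader)
    scheduler.step()
    print('-' * 69)
    print('| epoch {:1d} | time: {:4.2f}s | valid_acc {:4.3f} | valid_loss {:4.3f}'.format(
        epoch, time.time() - epoch_start_time, val_acc, val_loss))
    print('-' * 69)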
Output:
| epoch 1 | 500/1782 batches| train_acc 0.901 train_loss 0.00458
| epoch 1 | 1000/1782 batches| train_acc 0.905 train_loss 0.00438
| epoch 1 | 1500/1782 batches| train_acc 0.908 train_loss 0.00437
---------------------------------------------------------------------
| epoch 1 | time:6.30s |valid_acc 0.907 | valid_loss 0.004
---------------------------------------------------------------------
| epoch 2 | 500/1782 batches| train_acc 0.917 train_loss 0.00381
| epoch 2 | 1000/1782 batches| train_acc 0.917 train_loss 0.00383
| epoch 2 | 1500/1782 batches| train_acc 0.917 train_loss 0.00386
---------------------------------------------------------------------
| epoch 2 | time:6.26s |valid_acc 0.911 | valid_loss 0.004
---------------------------------------------------------------------
| epoch 3 | 500/1782 batches| train_acc 0.929 train_loss 0.00330
| epoch 3 | 1000/1782 batches| train_acc 0.927 train_loss 0.00340
| epoch 3 | 1500/1782 batches| train_acc 0.923 train_loss 0.00354
---------------------------------------------------------------------
| epoch 3 | time:6.21s |valid_acc 0.935 | valid_loss 0.003
---------------------------------------------------------------------
| epoch 4 | 500/1782 batches| train_acc 0.933 train_loss 0.00306
| epoch 4 | 1000/1782 batches| train_acc 0.932 train_loss 0.00311
| epoch 4 | 1500/1782 batches| train_acc 0.929 train_loss 0.00318
---------------------------------------------------------------------
| epoch 4 | time:6.22s |valid_acc 0.916 | valid_loss 0.003
---------------------------------------------------------------------
| epoch 5 | 500/1782 batches| train_acc 0.948 train_loss 0.00253
| epoch 5 | 1000/1782 batches| train_acc 0.949 train_loss 0.00242
| epoch 5 | 1500/1782 batches| train_acc 0.951 train_loss 0.00238
---------------------------------------------------------------------
| epoch 5 | time:6.23s |valid_acc 0.954 | valid_loss 0.002
---------------------------------------------------------------------
| epoch 6 | 500/1782 batches| train_acc 0.951 train_loss 0.00241
| epoch 6 | 1000/1782 batches| train_acc 0.952 train_loss 0.00236
| epoch 6 | 1500/1782 batches| train_acc 0.952 train_loss 0.00235
---------------------------------------------------------------------
| epoch 6 | time:6.26s |valid_acc 0.954 | valid_loss 0.002
---------------------------------------------------------------------
| epoch 7 | 500/1782 batches| train_acc 0.954 train_loss 0.00228
| epoch 7 | 1000/1782 batches| train_acc 0.951 train_loss 0.00238
| epoch 7 | 1500/1782 batches| train_acc 0.954 train_loss 0.00228
---------------------------------------------------------------------
| epoch 7 | time:6.26s |valid_acc 0.954 | valid_loss 0.002
---------------------------------------------------------------------
| epoch 8 | 500/1782 batches| train_acc 0.953 train_loss 0.00227
| epoch 8 | 1000/1782 batches| train_acc 0.955 train_loss 0.00224
| epoch 8 | 1500/1782 batches| train_acc 0.954 train_loss 0.00224
---------------------------------------------------------------------
| epoch 8 | time:6.32s |valid_acc 0.954 | valid_loss 0.002
---------------------------------------------------------------------
| epoch 9 | 500/1782 batches| train_acc 0.955 train_loss 0.00218
| epoch 9 | 1000/1782 batches| train_acc 0.953 train_loss 0.00227
| epoch 9 | 1500/1782 batches| train_acc 0.955 train_loss 0.00227
---------------------------------------------------------------------
| epoch 9 | time:6.24s |valid_acc 0.954 | valid_loss 0.002
---------------------------------------------------------------------
| epoch 10 | 500/1782 batches| train_acc 0.952 train_loss 0.00229
| epoch 10 | 1000/1782 batches| train_acc 0.955 train_loss 0.00220
| epoch 10 | 1500/1782 batches| train_acc 0.956 train_loss 0.00220
---------------------------------------------------------------------
| epoch 10 | time:6.29s |valid_acc 0.954 | valid_loss 0.002
---------------------------------------------------------------------
VIII. Evaluate the Model on the Test Set
print('Checking the results of test dataset.')
test_acc, test_loss = evaluate(test_dataloader)
print('test accuracy {:8.3f}'.format(test_acc))
Checking the results of test dataset.
test accuracy 0.905
Summary:
- Pre-trained word vectors: using pre-trained embeddings such as GloVe or FastText can significantly improve performance.
- Regularization: use dropout, weight decay, and similar techniques to guard against overfitting (see the sketch after this list).
- Hyperparameter tuning: the learning rate, batch size, and hidden dimensions all have a large impact on model performance.
- Transfer learning: for small datasets, consider fine-tuning a pre-trained model such as BERT.
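As a sketch of the regularization suggestion above (a hypothetical variant, not code from the original post): add a Dropout layer on top of the pooled embedding and pass weight_decay to the optimizer for L2 regularization.

class TextClassificationModelDropout(nn.Module):
    def __init__(self, vocab_size, embed_dim, num_class, dropout=0.5):
        super().__init__()
        self.embedding = nn.EmbeddingBag(vocab_size, embed_dim, sparse=False)
        self.dropout = nn.Dropout(dropout)        # randomly zeroes features during training
        self.fc = nn.Linear(embed_dim, num_class)

    def forward(self, text, offsets):
        embedded = self.dropout(self.embedding(text, offsets))
        return self.fc(embedded)

# weight_decay applies L2 regularization to the parameters
optimizer = torch.optim.SGD(model.parameters(), lr=5, weight_decay=1e-4)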