Abstract:
    This article records the process, steps, and methods of using the MindSpore AI framework to perform sentiment classification on natural language with an RNN network.
    It covers environment setup, dataset download, dataset loading and preprocessing, model construction, model training, and model testing.
I. Concepts
Sentiment classification with an RNN network model: judging whether a piece of text expresses a positive or a negative opinion. The intended effect:
    Input: This film is terrible
    Ground-truth label: Negative
    Predicted label: Negative
    Input: This film is great
    Ground-truth label: Positive
    Predicted label: Positive
II. Environment Setup
%%capture captured_output
# The experiment environment has mindspore==2.2.14 preinstalled; change the version number below to switch MindSpore versions
!pip uninstall mindspore -y
!pip install -i https://pypi.mirrors.ustc.edu.cn/simple mindspore==2.2.14
# Check the current mindspore version
!pip show mindspore
Output:
Name: mindspore
Version: 2.2.14
Summary: MindSpore is a new open source deep learning training/inference framework that could be used for mobile, edge and cloud scenarios.
Home-page: https://www.mindspore.cn
Author: The MindSpore Authors
Author-email: contact@mindspore.cn
License: Apache 2.0
Location: /home/nginx/miniconda/envs/jupyter/lib/python3.9/site-packages
Requires: asttokens, astunparse, numpy, packaging, pillow, protobuf, psutil, scipy
Required-by:
Manually install the tqdm and requests libraries:
!pip install tqdm requests
III. Loading the Dataset
The IMDB movie review dataset labels each review as either:
    Positive
    Negative
Pretrained word vectors are used to:
    encode natural-language words;
    capture the semantic features of the text.
Here the Glove word vectors are chosen as the Embedding.
1. Downloading the dataset and pretrained word vectors

The requests library issues the HTTP requests, and the tqdm library shows the download progress as a percentage. The data is first downloaded to a temporary file via IO, then saved to the specified path, which is returned.
import os
import shutil
import requests
import tempfile
from tqdm import tqdm
from typing import IO
from pathlib import Path
# Save path: `home_path/.mindspore_examples`
cache_dir = Path.home() / '.mindspore_examples'

def http_get(url: str, temp_file: IO):
    """Download data with the requests library, visualizing progress with tqdm."""
    req = requests.get(url, stream=True)
    content_length = req.headers.get('Content-Length')
    total = int(content_length) if content_length is not None else None
    progress = tqdm(unit='B', total=total)
    for chunk in req.iter_content(chunk_size=1024):
        if chunk:
            progress.update(len(chunk))
            temp_file.write(chunk)
    progress.close()

def download(file_name: str, url: str):
    """Download data and store it under the given file name."""
    if not os.path.exists(cache_dir):
        os.makedirs(cache_dir)
    cache_path = os.path.join(cache_dir, file_name)
    cache_exist = os.path.exists(cache_path)
    if not cache_exist:
        with tempfile.NamedTemporaryFile() as temp_file:
            http_get(url, temp_file)
            temp_file.flush()
            temp_file.seek(0)
            with open(cache_path, 'wb') as cache_file:
                shutil.copyfileobj(temp_file, cache_file)
    return cache_path
Download the IMDB dataset (using the Huawei Cloud mirror):
imdb_path = download('aclImdb_v1.tar.gz', 'https://mindspore-website.obs.myhuaweicloud.com/notebook/datasets/aclImdb_v1.tar.gz')
imdb_path
Output:
100%|██████████| 84125825/84125825 [00:02<00:00, 38051349.02B/s]
'/home/nginx/.mindspore_examples/aclImdb_v1.tar.gz'
2. The IMDB loading module

Python's tarfile library reads the IMDB dataset's tar.gz file and stores all data and labels separately. The IMDB dataset decompresses into the following directory structure:

├── aclImdb
│   ├── imdbEr.txt
│   ├── imdb.vocab
│   ├── README
│   ├── test
│   └── train
│       ├── neg
│       ├── pos
...

The dataset is loaded in two splits:
    train
        neg
        pos
    test
        neg
        pos
import re
import six
import string
import tarfile
class IMDBData():
    """IMDB dataset loader.

    Loads the IMDB dataset and processes it into a Python iterable object.
    """
    label_map = {
        "pos": 1,
        "neg": 0
    }

    def __init__(self, path, mode="train"):
        self.mode = mode
        self.path = path
        self.docs, self.labels = [], []

        self._load("pos")
        self._load("neg")

    def _load(self, label):
        pattern = re.compile(r"aclImdb/{}/{}/.*\.txt$".format(self.mode, label))
        # Load the data into memory
        with tarfile.open(self.path) as tarf:
            tf = tarf.next()
            while tf is not None:
                if bool(pattern.match(tf.name)):
                    # Tokenize the text, strip punctuation and special characters, lowercase
                    self.docs.append(str(tarf.extractfile(tf).read()
                                         .rstrip(six.b("\n\r"))
                                         .translate(None, six.b(string.punctuation))
                                         .lower()).split())
                    self.labels.append([self.label_map[label]])
                tf = tarf.next()

    def __getitem__(self, idx):
        return self.docs[idx], self.labels[idx]

    def __len__(self):
        return len(self.docs)
3. Loading the training dataset
imdb_train = IMDBData(imdb_path, 'train')
len(imdb_train)
Output:
25000
The mindspore.dataset.GeneratorDataset interface wraps the dataset iterable object:
    load the training set (train);
    load the test set (test);
    specify the names of the text and label columns.

import mindspore.dataset as ds

def load_imdb(imdb_path):
    imdb_train = ds.GeneratorDataset(IMDBData(imdb_path, "train"), column_names=["text", "label"], shuffle=True, num_samples=10000)
    imdb_test = ds.GeneratorDataset(IMDBData(imdb_path, "test"), column_names=["text", "label"], shuffle=False)
    return imdb_train, imdb_test

imdb_train, imdb_test = load_imdb(imdb_path)
imdb_train
Output:
<mindspore.dataset.engine.datasets_user_defined.GeneratorDataset at 0xfffed0a44fd0>
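The loaded object can be sanity-checked by pulling a single sample. This illustrative snippet is added here and is not part of the original walkthrough; the printed length differs per review, and the iterator draws from the shuffled dataset:

# Illustrative check: each element is a variable-length token sequence plus a label
# ([1] = pos, [0] = neg); create_tuple_iterator starts a fresh pass over the data.
text, label = next(imdb_train.create_tuple_iterator())
print(text.shape, label)  # e.g. (142,) followed by the label tensor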
4. Loading pretrained word vectors

Pretrained word vectors give a numerical representation of the input words and are used to build the word-vector table and vocabulary of the Embedding layer; here that is Glove (Global Vectors for Word Representation). The nn.Embedding layer works by table lookup: the input is a word's index in the vocabulary, and the output is the corresponding representation vector. The first rows of the pretrained word vector file look like this:

Word | Vector
the  | 0.418 0.24968 -0.41242 0.1217 0.34527 -0.044457 -0.49688 -0.17862 -0.00066023 ...
,    | 0.013441 0.23682 -0.16899 0.40951 0.63812 0.47709 -0.42852 -0.55641 -0.364 ...

The first column (the words) is used as the vocabulary, loaded in order with dataset.text.Vocab; each line's Vector is read and converted to a numpy.array; nn.Embedding then loads these arrays as its weights.
import zipfile
import numpy as np
def load_glove(glove_path):
    glove_100d_path = os.path.join(cache_dir, 'glove.6B.100d.txt')
    if not os.path.exists(glove_100d_path):
        glove_zip = zipfile.ZipFile(glove_path)
        glove_zip.extractall(cache_dir)

    embeddings = []
    tokens = []
    with open(glove_100d_path, encoding='utf-8') as gf:
        for glove in gf:
            word, embedding = glove.split(maxsplit=1)
            tokens.append(word)
            embeddings.append(np.fromstring(embedding, dtype=np.float32, sep=' '))
    # Add the embeddings for the two special placeholders <unk> and <pad>
    embeddings.append(np.random.rand(100))
    embeddings.append(np.zeros((100,), np.float32))

    vocab = ds.text.Vocab.from_list(tokens, special_tokens=["<unk>", "<pad>"], special_first=False)
    embeddings = np.array(embeddings).astype(np.float32)
    return vocab, embeddings
Words not covered by the vocabulary are mapped to the <unk> marker. Because input lengths differ, shorter texts must be padded with the <pad> marker when packing a batch. The final vocabulary length is therefore the original vocabulary length plus 2.

Now download the Glove word vectors and load them to generate the vocabulary and the embedding weight matrix:
glove_path = download('glove.6B.zip', 'https://mindspore-website.obs.myhuaweicloud.com/notebook/datasets/glove.6B.zip')
vocab, embeddings = load_glove(glove_path)
len(vocab.vocab())
Output:
100%|██████████| 862182613/862182613 [00:22<00:00, 38245398.49B/s]
400002
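Because special_first=False appended <unk> and <pad> after the 400,000 Glove words, they should occupy the last two ids; a quick illustrative check (added here, not in the original):

# <unk> and <pad> sit at the end of the 400002-entry vocabulary.
print(vocab.tokens_to_ids(['<unk>', '<pad>']))  # expected: [400000, 400001]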
Use the vocabulary to convert the word 'the' into its index id, then look up the corresponding word vector in the embedding matrix:
idx = vocab.tokens_to_ids('the')
embedding = embeddings[idx]
idx, embedding
Output:
(0,
 array([-0.038194, -0.24487 ,  0.72812 , -0.39961 ,  0.083172,  0.043953,
        -0.39141 ,  0.3344  , -0.57545 ,  0.087459,  0.28787 , -0.06731 ,
         0.30906 , -0.26384 , -0.13231 , -0.20757 ,  0.33395 , -0.33848 ,
        -0.31743 , -0.48336 ,  0.1464  , -0.37304 ,  0.34577 ,  0.052041,
         0.44946 , -0.46971 ,  0.02628 , -0.54155 , -0.15518 , -0.14107 ,
        -0.039722,  0.28277 ,  0.14393 ,  0.23464 , -0.31021 ,  0.086173,
         0.20397 ,  0.52624 ,  0.17164 , -0.082378, -0.71787 , -0.41531 ,
         0.20335 , -0.12763 ,  0.41367 ,  0.55187 ,  0.57908 , -0.33477 ,
        -0.36559 , -0.54857 , -0.062892,  0.26584 ,  0.30205 ,  0.99775 ,
        -0.80481 , -3.0243  ,  0.01254 , -0.36942 ,  2.2167  ,  0.72201 ,
        -0.24978 ,  0.92136 ,  0.034514,  0.46745 ,  1.1079  , -0.19358 ,
        -0.074575,  0.23353 , -0.052062, -0.22044 ,  0.057162, -0.15806 ,
        -0.30798 , -0.41625 ,  0.37972 ,  0.15006 , -0.53212 , -0.2055  ,
        -1.2526  ,  0.071624,  0.70565 ,  0.49744 , -0.42063 ,  0.26148 ,
        -1.538   , -0.30223 , -0.073438, -0.28312 ,  0.37104 , -0.25217 ,
         0.016215, -0.017099, -0.38984 ,  0.87424 , -0.72569 , -0.51058 ,
        -0.52028 , -0.1459  ,  0.8278  ,  0.27062 ], dtype=float32))
IV. Dataset Preprocessing

Preprocessing uses the mindspore.dataset interfaces:
    Vocab lookup: the text.Lookup interface loads the vocabulary and specifies unknown_token, mapping every token to its index id.
    Unifying sequence length: the PadEnd interface defines a maximum length (500 in this example) and a pad value (the index id of <pad> here); sequences that are too short are padded with <pad>, and longer ones are truncated.
    The label data is cast to the float32 type.
import mindspore as ms

lookup_op = ds.text.Lookup(vocab, unknown_token='<unk>')
pad_op = ds.transforms.PadEnd([500], pad_value=vocab.tokens_to_ids('<pad>'))
type_cast_op = ds.transforms.TypeCast(ms.float32)
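Before wiring these into the pipeline, the operations can be tried on a single sample. This is a minimal sketch assuming this MindSpore version supports eager execution of dataset transforms; the toy token list is made up:

# Eager illustration on a toy sample (illustrative; not part of the original flow).
sample = np.array(['this', 'film', 'is', 'great', 'unseenword123'])
ids = lookup_op(sample)       # tokens -> index ids; unknown tokens get the <unk> id
padded = pad_op(ids)          # padded with the <pad> id up to length 500
print(ids[-1], padded.shape)  # expected: 400000 (the <unk> id) and (500,)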
Define the dataset processing pipeline, adding each operation to the specified column through the map interface:
imdb_train = imdb_train.map(operations=[lookup_op, pad_op], input_columns=['text'])
imdb_train = imdb_train.map(operations=[type_cast_op], input_columns=['label'])

imdb_test = imdb_test.map(operations=[lookup_op, pad_op], input_columns=['text'])
imdb_test = imdb_test.map(operations=[type_cast_op], input_columns=['label'])
Split the training data into two parts with the split interface:
    training proportion: 0.7
    validation proportion: 0.3
imdb_train, imdb_valid = imdb_train.split([0.7, 0.3])
Output:
[WARNING] ME(281:281473514793264,MainProcess):2024-07-07-02:08:44.142.068 [mindspore/dataset/engine/datasets.py:1203] Dataset is shuffled before split.
Pack the dataset into batches with the batch interface, specifying:
    the batch size;
    whether to drop the remainder (the samples that cannot fill a whole batch).
imdb_train = imdb_train.batch(64, drop_remainder=True)
imdb_valid = imdb_valid.batch(64, drop_remainder=True)
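After batching, each dataset element should have a fixed shape; a quick illustrative check (added here, not in the original notebook):

# Illustrative: one batch of padded id sequences plus labels.
text_batch, label_batch = next(imdb_train.create_tuple_iterator())
print(text_batch.shape, label_batch.shape)  # expected: (64, 500) and (64, 1)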
V. Model Construction

The sentiment classification model has the structure nn.Embedding -> nn.RNN -> nn.Dense:
    the input text (the serialized list of index ids) is turned into vectors by table lookup, with the nn.Embedding layer loading the Glove word vectors;
    an RNN extracts features; to avoid the vanishing-gradient problem of a plain RNN, the LSTM (Long short-term memory) variant is used;
    the RNN is followed by the fully connected layer nn.Dense, which transforms the features to a size equal to the number of classes.
1. Embedding

The Embedding layer (EmbeddingLookup) uses an index id to look up the vector with that id in a weight matrix. The input is a sequence of index ids; the output is a matrix of vectors of the same sequence length. For example:

    # the vocabulary size (range of valid indices) is 1000, and each vector has size 100
    embedding = nn.Embedding(1000, 100)
    # for a sequence of length 16:
    input shape: (1, 16)
    output shape: (1, 16, 100)

Here the pretrained Glove word vector matrix is passed as nn.Embedding's embedding_table. The corresponding vocab_size is the vocabulary size, 400002, and embedding_size is the size of the chosen glove.6B.100d vectors, 100. A runnable sketch follows.
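A minimal runnable sketch of this lookup behaviour, using the toy sizes from the example above:

import mindspore as ms
import mindspore.nn as nn
import numpy as np

# Toy embedding: a vocabulary of 1000 indices, vectors of size 100.
embedding = nn.Embedding(1000, 100)
ids = ms.Tensor(np.random.randint(0, 1000, (1, 16)), ms.int32)  # a sequence of 16 ids
print(embedding(ids).shape)  # (1, 16, 100)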
2. RNN (recurrent neural network)

An RNN (Recurrent Neural Network) takes sequence data as input and recurses along the direction of the sequence, with all nodes (recurrent cells) connected in a chain.

General RNN structure diagram:
    on the left, the RNN cell in its recurrent form: there is only one set of cell parameters, updated across the recurrent computation;
    on the right, the same RNN unrolled into a chain.

RNNs are used extensively in natural language processing because their recurrent structure matches the sequential nature of natural-language text.
RNN structure breakdown diagram:

A single RNN cell is structurally simple, which causes the gradient vanishing problem: when the sequence is long, information from the head of the sequence is lost by its tail. To address this, LSTM (Long short-term memory) was proposed, whose gating mechanism controls how much information is kept or discarded at each recurrent step.
LSTM structure breakdown diagram:

mindspore.nn.LSTM corresponds to the formula:

    h_{0:t}, (h_t, c_t) = LSTM(x_{0:t}, (h_0, c_0))

nn.LSTM hides the loop over the sequence's time steps. Given the input sequence and an initial state, it outputs the matrix of hidden states for all time steps together with the hidden state of the last time step; the latter is passed to the next layer as the encoded feature of the sentence.

Time step: each iteration of the recurrent computation is called a time step. With a text sequence as input, one time step corresponds to one word. In the output, h_{0:t} is the set of hidden states for every word, and (h_t, c_t) is the hidden state for the last word. A shape sketch follows.
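A minimal sketch of nn.LSTM's input/output contract with toy shapes (when the initial state is omitted it defaults to zeros, which is how the model code below calls it):

import mindspore as ms
import mindspore.nn as nn
import numpy as np

lstm = nn.LSTM(input_size=100, hidden_size=256, num_layers=1, batch_first=True)
x = ms.Tensor(np.random.randn(1, 16, 100), ms.float32)  # (batch, time steps, features)
outputs, (h_n, c_n) = lstm(x)
print(outputs.shape)  # (1, 16, 256): the hidden states h_{0:t} of every time step
print(h_n.shape)      # (1, 1, 256): the last hidden state h_t, per layer and direction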
3. Dense

The sentence feature obtained from the LSTM encoding is fed into the fully connected layer nn.Dense, which transforms the feature dimension into the dimension required for binary classification, 1, and outputs the model's prediction.
import math
import mindspore as ms
import mindspore.nn as nn
import mindspore.ops as ops
from mindspore.common.initializer import Uniform, HeUniform
class RNN(nn.Cell):
    def __init__(self, embeddings, hidden_dim, output_dim, n_layers,
                 bidirectional, pad_idx):
        super().__init__()
        vocab_size, embedding_dim = embeddings.shape
        self.embedding = nn.Embedding(vocab_size, embedding_dim, embedding_table=ms.Tensor(embeddings), padding_idx=pad_idx)
        self.rnn = nn.LSTM(embedding_dim,
                           hidden_dim,
                           num_layers=n_layers,
                           bidirectional=bidirectional,
                           batch_first=True)
        weight_init = HeUniform(math.sqrt(5))
        bias_init = Uniform(1 / math.sqrt(hidden_dim * 2))
        self.fc = nn.Dense(hidden_dim * 2, output_dim, weight_init=weight_init, bias_init=bias_init)

    def construct(self, inputs):
        embedded = self.embedding(inputs)
        _, (hidden, _) = self.rnn(embedded)
        hidden = ops.concat((hidden[-2, :, :], hidden[-1, :, :]), axis=1)
        output = self.fc(hidden)
        return output
4. Loss function and optimizer

Instantiate the network with the chosen hyperparameters, then select a loss function and optimizer. Since predicting Positive or Negative is a binary classification problem, nn.BCEWithLogitsLoss (binary cross-entropy loss) is used.
hidden_size = 256
output_size = 1
num_layers = 2
bidirectional = True
lr = 0.001
pad_idx = vocab.tokens_to_ids('<pad>')
?
model = RNN(embeddings, hidden_size, output_size, num_layers, bidirectional, pad_idx)
loss_fn = nn.BCEWithLogitsLoss(reduction='mean')
optimizer = nn.Adam(model.trainable_params(), learning_rate=lr)
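nn.BCEWithLogitsLoss applies the sigmoid internally, so the network can output raw logits; a tiny illustration with made-up values:

# Toy values: logit 2.0 against a Positive label, -1.0 against a Negative label.
logits = ms.Tensor([[2.0], [-1.0]], ms.float32)
labels = ms.Tensor([[1.0], [0.0]], ms.float32)
print(loss_fn(logits, labels))  # a small loss: both logits are on the correct side of 0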
5. Training logic

The training logic is designed as follows:
    read one batch of data;
    feed it to the network, run the forward computation and backpropagation, and update the weights;
    return the loss.
The tqdm library visualizes the training progress and the loss.
def forward_fn(data, label):
    logits = model(data)
    loss = loss_fn(logits, label)
    return loss

grad_fn = ms.value_and_grad(forward_fn, None, optimizer.parameters)

def train_step(data, label):
    loss, grads = grad_fn(data, label)
    optimizer(grads)
    return loss

def train_one_epoch(model, train_dataset, epoch=0):
    model.set_train()
    total = train_dataset.get_dataset_size()
    loss_total = 0
    step_total = 0
    with tqdm(total=total) as t:
        t.set_description('Epoch %i' % epoch)
        for i in train_dataset.create_tuple_iterator():
            loss = train_step(*i)
            loss_total += loss.asnumpy()
            step_total += 1
            t.set_postfix(loss=loss_total/step_total)
            t.update(1)
6. Evaluation metric and logic

To evaluate the model, compare its predictions against the ground-truth labels of the test set and compute the prediction accuracy. The accuracy for IMDB binary sentiment classification is implemented as follows:
def binary_accuracy(preds, y):
    """Compute the accuracy for one batch."""
    # Round the predictions to 0 or 1
    rounded_preds = np.around(ops.sigmoid(preds).asnumpy())
    correct = (rounded_preds == y).astype(np.float32)
    acc = correct.sum() / len(correct)
    return acc
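A quick illustration of binary_accuracy with made-up logits, both on the correct side of zero:

# sigmoid(1.5) ~ 0.82 -> rounds to 1; sigmoid(-0.7) ~ 0.33 -> rounds to 0.
preds = ms.Tensor([[1.5], [-0.7]], ms.float32)
y = np.array([[1.0], [0.0]], dtype=np.float32)
print(binary_accuracy(preds, y))  # 1.0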
The evaluation steps are:
    read one batch of data;
    feed it to the network and run the forward computation to obtain the predictions;
    compute the accuracy;
    use tqdm to visualize the evaluation progress and loss;
    output the evaluation loss, which is used to judge model quality.
During evaluation the model body does not include the loss function and optimizer, and model.set_train(False) is called beforehand to put the model into evaluation mode.
def evaluate(model, test_dataset, criterion, epoch=0):
    total = test_dataset.get_dataset_size()
    epoch_loss = 0
    epoch_acc = 0
    step_total = 0
    model.set_train(False)

    with tqdm(total=total) as t:
        t.set_description('Epoch %i' % epoch)
        for i in test_dataset.create_tuple_iterator():
            predictions = model(i[0])
            loss = criterion(predictions, i[1])
            epoch_loss += loss.asnumpy()

            acc = binary_accuracy(predictions, i[1])
            epoch_acc += acc

            step_total += 1
            t.set_postfix(loss=epoch_loss/step_total, acc=epoch_acc/step_total)
            t.update(1)

    return epoch_loss / total
VI. Model Training and Saving

Model training:
    set the number of training epochs (2 in the code below);
    track the best model with the variable best_valid_loss;
    save a checkpoint for the epoch with the smallest validation loss.
num_epochs = 2
best_valid_loss = float('inf')
ckpt_file_name = os.path.join(cache_dir, 'sentiment-analysis.ckpt')

for epoch in range(num_epochs):
    train_one_epoch(model, imdb_train, epoch)
    valid_loss = evaluate(model, imdb_valid, loss_fn, epoch)

    if valid_loss < best_valid_loss:
        best_valid_loss = valid_loss
        ms.save_checkpoint(model, ckpt_file_name)
Output:
Epoch 0: 100%|██████████| 109/109 [13:38<00:00, 7.51s/it, loss=0.692]
Epoch 0: 100%|██████████| 46/46 [00:40<00:00, 1.15it/s, acc=0.524, loss=0.69]
Epoch 1: 100%|██████████| 109/109 [01:23<00:00, 1.31it/s, loss=0.668]
Epoch 1: 100%|██████████| 46/46 [00:13<00:00, 3.38it/s, acc=0.672, loss=0.615]
The loss decreases and the accuracy rises from epoch to epoch.
VII. Model Loading and Testing

To test the model, load the saved best checkpoint: first read it with the checkpoint loading interface, then load the weights into the network with the parameter loading interface:
param_dict = ms.load_checkpoint(ckpt_file_name)
ms.load_param_into_net(model, param_dict)
Output:
([], [])
The two empty lists mean that no parameters failed to load. Pack the test set into batches and run evaluate to obtain the model's performance on the test set:
imdb_test = imdb_test.batch(64)
evaluate(model, imdb_test, loss_fn)
Output:
Epoch 0: 100%|██████████| 391/391 [03:33<00:00, 1.83it/s, acc=0.666, loss=0.619]
0.6193520278881883
VIII. Testing with Custom Input

Finally, design a prediction function that takes a review sentence as input and returns its sentiment classification. The steps are:
    tokenize the input sentence;
    convert it to an index id sequence with the vocabulary;
    convert the index id sequence to a Tensor;
    feed it to the model to obtain the prediction;
    print the prediction.
The implementation:
score_map = {
    1: "Positive",
    0: "Negative"
}

def predict_sentiment(model, vocab, sentence):
    model.set_train(False)
    tokenized = sentence.lower().split()
    indexed = vocab.tokens_to_ids(tokenized)
    tensor = ms.Tensor(indexed, ms.int32)
    tensor = tensor.expand_dims(0)
    prediction = model(tensor)
    return score_map[int(np.round(ops.sigmoid(prediction).asnumpy()))]

predict_sentiment(model, vocab, "This film is terrible")
Output:
'Negative'
predict_sentiment(model, vocab, "This film is great")
Output:
'Positive'