Table of Contents
- Introduction to Sequence Models
- RNN: Recurrent Neural Networks
- LSTM: Long Short-Term Memory Networks
- The Transformer Architecture
- BERT in Detail
- Hands-On Projects
Introduction to Sequence Models
What Is Sequence Data?
Sequence data is data arranged in a specific order, where the ordering of the elements itself carries important information. Common kinds of sequence data include:
- Text: sequences of words or characters
- Time series: stock prices, weather data
- Audio: sequences of sampled sound signals
- Video: sequences of image frames
Why Do We Need Dedicated Sequence Models?
Traditional neural networks (such as fully connected networks) have the following limitations:
- Fixed input size: they cannot handle variable-length sequences
- No memory: they cannot exploit the temporal information in a sequence
- Too many parameters: for long sequences, the parameter count explodes
RNN: Recurrent Neural Networks
1. RNN Basics
1.1 Core Idea
The core idea of an RNN (Recurrent Neural Network) is to give the network "memory". While processing a sequence, it maintains a hidden state that carries information from earlier time steps into the current one.
The math of an RNN:
h_t = tanh(W_hh * h_{t-1} + W_xh * x_t + b_h)
y_t = W_hy * h_t + b_y
where:
- x_t: the input at time step t
- h_t: the hidden state at time step t
- y_t: the output at time step t
- W_hh, W_xh, W_hy: weight matrices
- b_h, b_y: bias terms
1.2 The Unrolled View of an RNN
Input:   x_1 →  x_2 →  x_3 → ... →  x_t
          ↓      ↓      ↓            ↓
RNN:    [h_1]→ [h_2]→ [h_3]→ ... → [h_t]
          ↓      ↓      ↓            ↓
Output:  y_1    y_2    y_3   ...    y_t
The RNN cell at every time step shares the same parameters (weights and biases).
1.3 Python Implementation Example
import numpy as np

class SimpleRNN:
    def __init__(self, input_size, hidden_size, output_size):
        # Initialize weights
        self.W_xh = np.random.randn(hidden_size, input_size) * 0.01
        self.W_hh = np.random.randn(hidden_size, hidden_size) * 0.01
        self.W_hy = np.random.randn(output_size, hidden_size) * 0.01
        self.b_h = np.zeros((hidden_size, 1))
        self.b_y = np.zeros((output_size, 1))

    def forward(self, inputs):
        """Forward pass.
        inputs: input sequence, shape=(input_size, seq_length)
        """
        h = np.zeros((self.W_hh.shape[0], 1))  # initial hidden state
        self.hidden_states = [h]
        outputs = []
        for t in range(inputs.shape[1]):
            x = inputs[:, t].reshape(-1, 1)
            # Compute the new hidden state
            h = np.tanh(np.dot(self.W_xh, x) + np.dot(self.W_hh, h) + self.b_h)
            # Compute the output
            y = np.dot(self.W_hy, h) + self.b_y
            self.hidden_states.append(h)
            outputs.append(y)
        return outputs, self.hidden_states
2. Backpropagation Through Time (BPTT)
2.1 What Is BPTT?
BPTT (Backpropagation Through Time) is the training algorithm for RNNs. It unrolls the RNN along the time axis and then backpropagates through it like an ordinary neural network.
2.2 The Core Steps of BPTT
- Forward pass: compute the outputs and hidden states for all time steps
- Compute the loss: accumulate the loss over all time steps
- Backward pass: starting from the last time step, work backwards computing gradients
- Gradient accumulation: because the parameters are shared, the gradients from all time steps must be summed
2.3 Gradient Formulas
For a loss function L, the gradients are computed as:
∂L/∂W_hy = Σ_t ∂L_t/∂W_hy
∂L/∂W_hh = Σ_t Σ_k ∂L_t/∂h_t * ∂h_t/∂h_k * ∂h_k/∂W_hh
∂L/∂W_xh = Σ_t Σ_k ∂L_t/∂h_t * ∂h_t/∂h_k * ∂h_k/∂W_xh
2.4 BPTT Implementation Example
# Continuation of the SimpleRNN class above
    def backward(self, inputs, outputs, targets, learning_rate=0.01):
        """Backward pass (BPTT).
        inputs: input sequence, shape=(input_size, seq_length)
        outputs, targets: lists of per-step output and target vectors
        """
        # Initialize gradients
        dW_xh = np.zeros_like(self.W_xh)
        dW_hh = np.zeros_like(self.W_hh)
        dW_hy = np.zeros_like(self.W_hy)
        db_h = np.zeros_like(self.b_h)
        db_y = np.zeros_like(self.b_y)
        dh_next = np.zeros_like(self.hidden_states[0])

        # Walk backwards through the time steps
        for t in reversed(range(len(outputs))):
            # Output-layer gradient
            dy = outputs[t] - targets[t]
            dW_hy += np.dot(dy, self.hidden_states[t + 1].T)
            db_y += dy
            # Hidden-layer gradient
            dh = np.dot(self.W_hy.T, dy) + dh_next
            dh_raw = (1 - self.hidden_states[t + 1] ** 2) * dh
            # Parameter gradients
            db_h += dh_raw
            x = inputs[:, t].reshape(-1, 1)
            dW_xh += np.dot(dh_raw, x.T)
            dW_hh += np.dot(dh_raw, self.hidden_states[t].T)
            # Pass the gradient to the previous time step
            dh_next = np.dot(self.W_hh.T, dh_raw)

        # Gradient clipping (prevents exploding gradients)
        for dparam in [dW_xh, dW_hh, dW_hy, db_h, db_y]:
            np.clip(dparam, -5, 5, out=dparam)

        # Parameter update
        self.W_xh -= learning_rate * dW_xh
        self.W_hh -= learning_rate * dW_hh
        self.W_hy -= learning_rate * dW_hy
        self.b_h -= learning_rate * db_h
        self.b_y -= learning_rate * db_y
3. Problems with RNNs
3.1 Vanishing and Exploding Gradients
- Vanishing gradients: for long sequences, the gradients of early time steps become vanishingly small, so the network cannot learn long-range dependencies effectively
- Exploding gradients: gradients accumulate during backpropagation and can grow extremely large
3.2 Remedies
- Gradient clipping (see the sketch below)
- ReLU activation functions
- Better weight initialization
- LSTM or GRU cells (the most effective remedy)
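For gradient clipping specifically, a common variant rescales the global L2 norm of all gradients instead of clipping each entry independently, as the BPTT code above does. A minimal NumPy sketch, assuming the gradients are collected in a list; the threshold of 5.0 and the helper name are illustrative choices:

import numpy as np

def clip_by_global_norm(grads, max_norm=5.0):
    """Rescale a list of gradient arrays so that their combined L2 norm
    does not exceed max_norm; gradients are left unchanged otherwise."""
    total_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    if total_norm > max_norm:
        scale = max_norm / (total_norm + 1e-6)
        grads = [g * scale for g in grads]
    return grads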
LSTM: Long Short-Term Memory Networks
1. The Motivation for LSTM
LSTM (Long Short-Term Memory) networks were designed to solve the RNN's long-term dependency problem. They control the flow of information through a set of gating mechanisms.
2. The Core Components of LSTM
2.1 The Cell State
The cell state C_t is the heart of the LSTM. Like a conveyor belt, it runs through the entire sequence with only a few minor linear interactions, so information can flow along it largely unchanged.
2.2 The Three Gates
An LSTM controls its cell state through three gates:
1. Forget gate
f_t = σ(W_f · [h_{t-1}, x_t] + b_f)
- Decides what information to discard from the cell state
- Outputs values between 0 and 1, where 0 means "forget completely" and 1 means "keep completely"
2. Input gate
i_t = σ(W_i · [h_{t-1}, x_t] + b_i)
C̃_t = tanh(W_C · [h_{t-1}, x_t] + b_C)
- Decides which new information to store in the cell state
- i_t decides which values to update; C̃_t creates the new candidate values
The cell state is then updated as C_t = f_t * C_{t-1} + i_t * C̃_t.
3. Output gate
o_t = σ(W_o · [h_{t-1}, x_t] + b_o)
h_t = o_t * tanh(C_t)
- Decides what information to output
- The output is based on the updated cell state, but filtered by the gate
2.3 The Complete LSTM Computation
import numpy as np

class LSTM:
    def __init__(self, input_size, hidden_size, output_size):
        # Initialize weight matrices
        self.hidden_size = hidden_size
        # Forget gate parameters
        self.W_f = np.random.randn(hidden_size, input_size + hidden_size) * 0.01
        self.b_f = np.zeros((hidden_size, 1))
        # Input gate parameters
        self.W_i = np.random.randn(hidden_size, input_size + hidden_size) * 0.01
        self.b_i = np.zeros((hidden_size, 1))
        # Candidate-value parameters
        self.W_C = np.random.randn(hidden_size, input_size + hidden_size) * 0.01
        self.b_C = np.zeros((hidden_size, 1))
        # Output gate parameters
        self.W_o = np.random.randn(hidden_size, input_size + hidden_size) * 0.01
        self.b_o = np.zeros((hidden_size, 1))
        # Output layer parameters
        self.W_y = np.random.randn(output_size, hidden_size) * 0.01
        self.b_y = np.zeros((output_size, 1))

    def sigmoid(self, x):
        return 1 / (1 + np.exp(-x))

    def lstm_cell(self, x_t, h_prev, C_prev):
        """Forward pass of a single LSTM cell."""
        # Concatenate the previous hidden state and the current input
        concat = np.vstack((h_prev, x_t))
        # Forget gate
        f_t = self.sigmoid(np.dot(self.W_f, concat) + self.b_f)
        # Input gate
        i_t = self.sigmoid(np.dot(self.W_i, concat) + self.b_i)
        C_tilde = np.tanh(np.dot(self.W_C, concat) + self.b_C)
        # Update the cell state
        C_t = f_t * C_prev + i_t * C_tilde
        # Output gate
        o_t = self.sigmoid(np.dot(self.W_o, concat) + self.b_o)
        h_t = o_t * np.tanh(C_t)
        # Compute the output
        y_t = np.dot(self.W_y, h_t) + self.b_y
        # Cache intermediate values for backpropagation
        cache = (x_t, h_prev, C_prev, f_t, i_t, C_tilde, o_t, C_t, h_t)
        return h_t, C_t, y_t, cache

    def forward(self, inputs):
        """Forward pass over a sequence.
        inputs: shape=(input_size, seq_length)
        """
        seq_length = inputs.shape[1]
        h_t = np.zeros((self.hidden_size, 1))
        C_t = np.zeros((self.hidden_size, 1))
        outputs = []
        caches = []
        for t in range(seq_length):
            x_t = inputs[:, t].reshape(-1, 1)
            h_t, C_t, y_t, cache = self.lstm_cell(x_t, h_t, C_t)
            outputs.append(y_t)
            caches.append(cache)
        return outputs, caches
3. The Strengths of LSTM
3.1 Long-Term Memory
- The cell state can carry information across very long sequences
- The gates precisely control what information is added and what is removed
3.2 Gradient Flow
- The largely linear cell-state path lets gradients flow more easily
- This greatly mitigates the vanishing gradient problem
3.3 Selective Memory
- The network learns when to remember, when to forget, and when to output
- It adapts well to many different kinds of sequential patterns
4. LSTM Variants
4.1 GRU (Gated Recurrent Unit)
The GRU is a simplified version of the LSTM with only two gates:
- Reset gate: decides how much of the past information to forget
- Update gate: decides how much of the past information to keep
# The core GRU computation
z_t = σ(W_z · [h_{t-1}, x_t])           # update gate
r_t = σ(W_r · [h_{t-1}, x_t])           # reset gate
h̃_t = tanh(W · [r_t * h_{t-1}, x_t])    # candidate hidden state
h_t = (1 - z_t) * h_{t-1} + z_t * h̃_t   # final hidden state
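Mirroring the equations above, a single GRU step can be sketched in NumPy as follows. The weight shapes and the omission of bias terms are simplifications of this sketch, in the spirit of the LSTM code earlier:

import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def gru_cell(x_t, h_prev, W_z, W_r, W_h):
    """One GRU step.
    x_t: (input_size, 1), h_prev: (hidden_size, 1).
    Each weight matrix has shape (hidden_size, hidden_size + input_size)."""
    concat = np.vstack((h_prev, x_t))
    z_t = sigmoid(np.dot(W_z, concat))                      # update gate
    r_t = sigmoid(np.dot(W_r, concat))                      # reset gate
    concat_reset = np.vstack((r_t * h_prev, x_t))
    h_tilde = np.tanh(np.dot(W_h, concat_reset))            # candidate hidden state
    h_t = (1 - z_t) * h_prev + z_t * h_tilde                # final hidden state
    return h_t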
4.2 Bidirectional LSTM
- Runs two LSTMs over the sequence, one forward and one backward (a PyTorch sketch follows below)
- Can therefore make use of future context as well as past context
- Performs well on many NLP tasks
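In practice a bidirectional LSTM is usually obtained with a library flag rather than written by hand. A minimal PyTorch sketch with illustrative dimensions:

import torch
import torch.nn as nn

# Two LSTMs read the same sequence, one forward and one backward;
# their hidden states are concatenated, so the output feature size doubles.
bilstm = nn.LSTM(input_size=128, hidden_size=256, num_layers=1,
                 batch_first=True, bidirectional=True)

x = torch.randn(4, 10, 128)          # (batch, seq_len, input_size)
output, (h_n, c_n) = bilstm(x)
print(output.shape)                  # torch.Size([4, 10, 512])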
The Transformer Architecture
1. The Transformer's Breakthrough
The Transformer architecture, introduced by Google in 2017, fundamentally changed how sequences are modeled. Its core innovation is relying entirely on attention, discarding the RNN's recurrent structure.
2. Self-Attention
2.1 Core Idea
Self-attention lets the model, while processing each position, attend directly to every other position in the sequence, with no recurrence involved.
2.2 The Math of Attention
Computing the Query, Key, and Value matrices:
Q = X · W_Q  # Query matrix
K = X · W_K  # Key matrix
V = X · W_V  # Value matrix
Attention(Q, K, V) = softmax(QK^T / √d_k) · V
where:
- X: the embedded representation of the input sequence
- W_Q, W_K, W_V: learned weight matrices
- d_k: the dimension of the Key vectors (used for scaling)
2.3 An Intuition for Self-Attention
Imagine reading the sentence "The cat sat on the mat":
- While processing "sat", the model needs to know that it is the "cat" doing the sitting
- Self-attention lets "sat" look directly at every word and learn its association with "cat"
- Every word computes a relevance score against every other word
2.4 Python Implementation
import numpy as np

class SelfAttention:
    def __init__(self, d_model, d_k, d_v):
        """
        d_model: input dimension
        d_k: Key dimension
        d_v: Value dimension
        """
        self.d_k = d_k
        self.W_Q = np.random.randn(d_model, d_k) * 0.01
        self.W_K = np.random.randn(d_model, d_k) * 0.01
        self.W_V = np.random.randn(d_model, d_v) * 0.01

    def forward(self, X):
        """
        X: input sequence, shape=(seq_len, d_model)
        """
        # Compute Q, K, V
        Q = np.dot(X, self.W_Q)
        K = np.dot(X, self.W_K)
        V = np.dot(X, self.W_V)
        # Compute attention scores
        scores = np.dot(Q, K.T) / np.sqrt(self.d_k)
        # Softmax normalization
        attention_weights = self.softmax(scores)
        # Weighted sum of the values
        output = np.dot(attention_weights, V)
        return output, attention_weights

    def softmax(self, x):
        exp_x = np.exp(x - np.max(x, axis=-1, keepdims=True))
        return exp_x / np.sum(exp_x, axis=-1, keepdims=True)
3. Multi-Head Attention
3.1 Why Multiple Heads?
A single attention head can usually capture only one kind of relationship. Multi-head attention lets the model attend to several different kinds of information at the same time.
3.2 Implementing Multi-Head Attention
class MultiHeadAttention:
    def __init__(self, d_model, num_heads):
        self.num_heads = num_heads
        self.d_model = d_model
        self.d_k = d_model // num_heads
        # Independent weight matrices for each head
        self.W_Q = np.random.randn(num_heads, d_model, self.d_k) * 0.01
        self.W_K = np.random.randn(num_heads, d_model, self.d_k) * 0.01
        self.W_V = np.random.randn(num_heads, d_model, self.d_k) * 0.01
        # Output projection
        self.W_O = np.random.randn(d_model, d_model) * 0.01

    def softmax(self, x):
        exp_x = np.exp(x - np.max(x, axis=-1, keepdims=True))
        return exp_x / np.sum(exp_x, axis=-1, keepdims=True)

    def forward(self, X, kv=None, mask=False):
        """X: shape=(seq_len, d_model).
        kv: optional key/value source (used by the decoder below for
            encoder-decoder attention); defaults to X itself (self-attention).
        mask: if True, apply a causal mask so a position cannot attend to later positions.
        """
        kv = X if kv is None else kv
        outputs = []
        for i in range(self.num_heads):
            # Each head computes attention independently
            Q = np.dot(X, self.W_Q[i])
            K = np.dot(kv, self.W_K[i])
            V = np.dot(kv, self.W_V[i])
            scores = np.dot(Q, K.T) / np.sqrt(self.d_k)
            if mask:
                # Block attention to future positions
                scores += np.triu(np.full_like(scores, -1e9), k=1)
            attention = self.softmax(scores)
            head_output = np.dot(attention, V)
            outputs.append(head_output)
        # Concatenate the outputs of all heads
        concat_output = np.concatenate(outputs, axis=-1)
        # Final projection
        final_output = np.dot(concat_output, self.W_O)
        return final_output
4. Positional Encoding
Because the Transformer has no recurrent structure, it needs explicit position information.
4.1 Sinusoidal Positional Encoding
def positional_encoding(seq_len, d_model):
    """Generate sinusoidal positional encodings (d_model assumed even)."""
    PE = np.zeros((seq_len, d_model))
    for pos in range(seq_len):
        for i in range(0, d_model, 2):
            # i already steps over the even indices, so the exponent is i / d_model
            PE[pos, i] = np.sin(pos / (10000 ** (i / d_model)))
            PE[pos, i + 1] = np.cos(pos / (10000 ** (i / d_model)))
    return PE
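A short usage sketch: the encoding is simply added to the token embeddings before the first encoder layer. The embedding values below are random placeholders:

import numpy as np

seq_len, d_model = 10, 16
embeddings = np.random.randn(seq_len, d_model)   # stand-in token embeddings
pe = positional_encoding(seq_len, d_model)
encoder_input = embeddings + pe                  # what the encoder consumes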
5. The Full Transformer Architecture
5.1 The Encoder
class TransformerEncoder:
    def __init__(self, d_model, num_heads, d_ff, num_layers):
        self.num_layers = num_layers
        self.layers = []
        for _ in range(num_layers):
            layer = {
                'attention': MultiHeadAttention(d_model, num_heads),
                'norm1': LayerNorm(d_model),
                'feedforward': FeedForward(d_model, d_ff),
                'norm2': LayerNorm(d_model)
            }
            self.layers.append(layer)

    def forward(self, X):
        for layer in self.layers:
            # Multi-head self-attention
            attn_output = layer['attention'].forward(X)
            X = layer['norm1'].forward(X + attn_output)  # residual connection
            # Position-wise feed-forward network
            ff_output = layer['feedforward'].forward(X)
            X = layer['norm2'].forward(X + ff_output)  # residual connection
        return X
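The encoder above (as well as the decoder and the BERT code later on) relies on LayerNorm and FeedForward helpers that are not defined in this post. A minimal NumPy sketch of what they might look like:

import numpy as np

class LayerNorm:
    def __init__(self, d_model, eps=1e-6):
        self.gamma = np.ones(d_model)   # learned scale
        self.beta = np.zeros(d_model)   # learned shift
        self.eps = eps

    def forward(self, x):
        # Normalize each position's feature vector to zero mean and unit variance
        mean = x.mean(axis=-1, keepdims=True)
        std = x.std(axis=-1, keepdims=True)
        return self.gamma * (x - mean) / (std + self.eps) + self.beta

class FeedForward:
    def __init__(self, d_model, d_ff):
        self.W1 = np.random.randn(d_model, d_ff) * 0.01
        self.b1 = np.zeros(d_ff)
        self.W2 = np.random.randn(d_ff, d_model) * 0.01
        self.b2 = np.zeros(d_model)

    def forward(self, x):
        # Position-wise two-layer MLP with a ReLU in between
        hidden = np.maximum(0, np.dot(x, self.W1) + self.b1)
        return np.dot(hidden, self.W2) + self.b2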
5.2 The Decoder
In addition to self-attention, the decoder contains encoder-decoder (cross) attention:
class TransformerDecoder:
    def __init__(self, d_model, num_heads, d_ff, num_layers):
        self.num_layers = num_layers
        self.layers = []
        for _ in range(num_layers):
            layer = {
                'self_attention': MultiHeadAttention(d_model, num_heads),
                'norm1': LayerNorm(d_model),
                'cross_attention': MultiHeadAttention(d_model, num_heads),
                'norm2': LayerNorm(d_model),
                'feedforward': FeedForward(d_model, d_ff),
                'norm3': LayerNorm(d_model)
            }
            self.layers.append(layer)

    def forward(self, X, encoder_output):
        for layer in self.layers:
            # Masked self-attention
            self_attn = layer['self_attention'].forward(X, mask=True)
            X = layer['norm1'].forward(X + self_attn)
            # Encoder-decoder attention: keys/values come from the encoder output
            cross_attn = layer['cross_attention'].forward(X, kv=encoder_output)
            X = layer['norm2'].forward(X + cross_attn)
            # Position-wise feed-forward network
            ff_output = layer['feedforward'].forward(X)
            X = layer['norm3'].forward(X + ff_output)
        return X
6. The Advantages of the Transformer
- Parallel computation: all positions can be processed simultaneously, which greatly speeds up training
- Long-range dependencies: relationships between any two positions are modeled directly
- Interpretability: attention weights provide a way to visualize the model's decisions
- Scalability: the architecture scales easily to much larger models
BERT in Detail
1. What Makes BERT Different
BERT (Bidirectional Encoder Representations from Transformers) is a pre-trained language model introduced by Google in 2018, and it fundamentally changed the NLP field.
1.1 Core Innovations
- Bidirectionality: uses the left and right context at the same time
- The pre-train/fine-tune paradigm: pre-train on large corpora, then fine-tune for specific tasks
- A unified architecture: the same model can be used for many different downstream tasks
2. The BERT Architecture
2.1 Base Architecture
BERT is built on the Transformer encoder:
- BERT-Base: 12 layers, hidden size 768, 12 attention heads, ~110M parameters
- BERT-Large: 24 layers, hidden size 1024, 16 attention heads, ~340M parameters
2.2 Input Representation
BERT's input is the sum of three embeddings:
class BERTEmbedding:
    def __init__(self, vocab_size, max_len, d_model):
        # Token embeddings
        self.token_embedding = np.random.randn(vocab_size, d_model) * 0.01
        # Segment embeddings (distinguish sentence A from sentence B)
        self.segment_embedding = np.random.randn(2, d_model) * 0.01
        # Position embeddings
        self.position_embedding = np.random.randn(max_len, d_model) * 0.01

    def forward(self, token_ids, segment_ids, position_ids):
        # Look up the three embeddings
        token_emb = self.token_embedding[token_ids]
        segment_emb = self.segment_embedding[segment_ids]
        position_emb = self.position_embedding[position_ids]
        # Sum them to obtain the final embedding
        embeddings = token_emb + segment_emb + position_emb
        return embeddings
3. BERT's Pre-Training Tasks
3.1 Masked Language Model (MLM)
Core idea: randomly mask 15% of the input tokens and have the model predict the masked tokens.
Implementation details:
- 80% of the time: replace the token with the [MASK] token
- 10% of the time: replace it with a random token
- 10% of the time: leave it unchanged
def create_mlm_data(token_ids, vocab_size, mask_token_id, mask_prob=0.15):
    """Create MLM training data.
    token_ids is assumed to be a list of integer token ids, and
    mask_token_id the id of the [MASK] token in the vocabulary.
    """
    output_labels = []
    masked_tokens = token_ids.copy()
    for i in range(len(token_ids)):
        if np.random.random() < mask_prob:
            output_labels.append(token_ids[i])
            # 80% chance: replace with [MASK]
            if np.random.random() < 0.8:
                masked_tokens[i] = mask_token_id
            # 10% chance: replace with a random token
            elif np.random.random() < 0.5:
                masked_tokens[i] = np.random.randint(0, vocab_size)
            # 10% chance: keep the original token
            # else: keep original
        else:
            output_labels.append(-1)  # this position does not need to be predicted
    return masked_tokens, output_labels
3.2 Next Sentence Prediction (NSP)
Core idea: given two sentences, predict whether the second sentence actually follows the first.
Constructing the training data:
- 50% of the time: sentence B really is the sentence that follows sentence A (label: IsNext)
- 50% of the time: sentence B is chosen at random (label: NotNext)
def create_nsp_data(sentence_pairs):
    """Create NSP training data."""
    nsp_data = []
    for i, (sent_a, sent_b) in enumerate(sentence_pairs):
        if np.random.random() < 0.5:
            # Positive sample: the real next sentence
            nsp_data.append({
                'sent_a': sent_a,
                'sent_b': sent_b,
                'label': 1  # IsNext
            })
        else:
            # Negative sample: a randomly chosen sentence
            random_idx = np.random.randint(len(sentence_pairs))
            while random_idx == i:
                random_idx = np.random.randint(len(sentence_pairs))
            nsp_data.append({
                'sent_a': sent_a,
                'sent_b': sentence_pairs[random_idx][0],  # random sentence
                'label': 0  # NotNext
            })
    return nsp_data
4. A Complete BERT Implementation
class BERT:
    def __init__(self, vocab_size, max_len=512, d_model=768,
                 num_layers=12, num_heads=12, d_ff=3072):
        # Embedding layer
        self.embedding = BERTEmbedding(vocab_size, max_len, d_model)
        # Transformer encoder
        self.encoder = TransformerEncoder(d_model, num_heads, d_ff, num_layers)
        # MLM prediction head
        self.mlm_head = MLMHead(d_model, vocab_size)
        # NSP prediction head
        self.nsp_head = NSPHead(d_model)

    def forward(self, token_ids, segment_ids, masked_positions=None):
        # Compute the input embeddings
        seq_len = len(token_ids)
        position_ids = np.arange(seq_len)
        embeddings = self.embedding.forward(token_ids, segment_ids, position_ids)
        # Run the Transformer encoder
        encoded = self.encoder.forward(embeddings)
        # MLM predictions
        mlm_predictions = None
        if masked_positions is not None:
            masked_encoded = encoded[masked_positions]
            mlm_predictions = self.mlm_head.forward(masked_encoded)
        # NSP prediction (uses the output at the [CLS] token)
        cls_output = encoded[0]  # the first position is [CLS]
        nsp_prediction = self.nsp_head.forward(cls_output)
        # Also return the encoder output so downstream tasks can reuse it
        return mlm_predictions, nsp_prediction, encoded


class MLMHead:
    def __init__(self, d_model, vocab_size):
        self.dense = np.random.randn(d_model, d_model) * 0.01
        self.layer_norm = LayerNorm(d_model)
        self.decoder = np.random.randn(d_model, vocab_size) * 0.01

    def forward(self, hidden_states):
        x = np.dot(hidden_states, self.dense)
        x = self.gelu(x)
        x = self.layer_norm.forward(x)
        predictions = np.dot(x, self.decoder)
        return predictions

    def gelu(self, x):
        # GELU activation function (tanh approximation)
        return 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x ** 3)))


class NSPHead:
    def __init__(self, d_model):
        self.classifier = np.random.randn(d_model, 2) * 0.01

    def forward(self, cls_output):
        return np.dot(cls_output, self.classifier)
5. Fine-Tuning BERT
5.1 Text Classification
class BERTForClassification:
    def __init__(self, bert_model, num_classes):
        self.bert = bert_model
        self.classifier = np.random.randn(768, num_classes) * 0.01
        self.dropout = 0.1
        self.training = True  # toggled off at inference time

    def forward(self, token_ids, segment_ids):
        # Run BERT and take the encoder output at the [CLS] position
        _, _, encoded = self.bert.forward(token_ids, segment_ids)
        cls_output = encoded[0]
        # Dropout
        if self.training:
            mask = np.random.binomial(1, 1 - self.dropout, cls_output.shape)
            cls_output = cls_output * mask / (1 - self.dropout)
        # Classification layer
        logits = np.dot(cls_output, self.classifier)
        return logits
5.2 Question Answering
class BERTForQuestionAnswering:
    def __init__(self, bert_model):
        self.bert = bert_model
        # Predict the start and end positions of the answer span
        self.qa_outputs = np.random.randn(768, 2) * 0.01

    def forward(self, token_ids, segment_ids):
        # Take the encoder output at every position
        _, _, sequence_output = self.bert.forward(token_ids, segment_ids)
        # Predict start and end positions
        logits = np.dot(sequence_output, self.qa_outputs)
        start_logits = logits[:, 0]
        end_logits = logits[:, 1]
        return start_logits, end_logits
6. Training Tips for BERT
6.1 Learning Rate Scheduling
def get_linear_schedule_with_warmup(optimizer, num_warmup_steps, num_training_steps):
    """Linear learning-rate schedule with warmup.
    Returns a multiplier function that can be passed to a scheduler
    such as torch.optim.lr_scheduler.LambdaLR.
    """
    def lr_lambda(current_step):
        if current_step < num_warmup_steps:
            # Warmup phase: increase linearly
            return float(current_step) / float(max(1, num_warmup_steps))
        # Linear decay afterwards
        return max(0.0, float(num_training_steps - current_step) /
                   float(max(1, num_training_steps - num_warmup_steps)))
    return lr_lambda
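A usage sketch, assuming a PyTorch optimizer and the LambdaLR scheduler; the model, learning rate, and step counts below are illustrative:

import torch

model = torch.nn.Linear(10, 2)  # placeholder model
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

lr_lambda = get_linear_schedule_with_warmup(optimizer,
                                            num_warmup_steps=100,
                                            num_training_steps=1000)
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)

for step in range(1000):
    # ... forward pass and loss.backward() would go here ...
    optimizer.step()
    scheduler.step()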
6.2 Gradient Accumulation
When the desired batch is too large to fit in memory, use gradient accumulation:
def train_with_gradient_accumulation(model, data_loader, optimizer, accumulation_steps=4):
    # compute_loss is assumed to be defined elsewhere for the task at hand
    optimizer.zero_grad()
    for i, batch in enumerate(data_loader):
        outputs = model.forward(batch)
        loss = compute_loss(outputs, batch['labels'])
        loss = loss / accumulation_steps  # normalize the loss
        loss.backward()
        if (i + 1) % accumulation_steps == 0:
            optimizer.step()
            optimizer.zero_grad()
7. BERT Variants and Improvements
7.1 RoBERTa
- Drops the NSP task
- Trains with larger batches and more data
- Uses dynamic masking
7.2 ALBERT
- Parameter sharing across layers
- Factorized embedding parameterization
- Sentence Order Prediction (SOP) instead of NSP
7.3 ELECTRA
- A generator-discriminator architecture
- A replaced-token detection objective
- More efficient pre-training
Hands-On Projects
Project 1: Text Generation with an RNN
Goal
Build a character-level RNN that generates Shakespeare-style text.
Implementation Steps
import numpy as np

class CharRNN:
    def __init__(self, vocab_size, hidden_size=128, seq_length=25):
        self.vocab_size = vocab_size
        self.hidden_size = hidden_size
        self.seq_length = seq_length
        # Initialize parameters
        self.Wxh = np.random.randn(hidden_size, vocab_size) * 0.01
        self.Whh = np.random.randn(hidden_size, hidden_size) * 0.01
        self.Why = np.random.randn(vocab_size, hidden_size) * 0.01
        self.bh = np.zeros((hidden_size, 1))
        self.by = np.zeros((vocab_size, 1))

    def train(self, data, epochs=100, learning_rate=0.1):
        """Train the model.
        train_step (one forward/backward pass over a chunk, as in the
        BPTT code earlier) is assumed to be implemented separately.
        """
        for epoch in range(epochs):
            h_prev = np.zeros((self.hidden_size, 1))
            for t in range(0, len(data) - self.seq_length, self.seq_length):
                # Prepare the input and target character indices
                inputs = [data[t + i] for i in range(self.seq_length)]
                targets = [data[t + i + 1] for i in range(self.seq_length)]
                # Forward and backward pass
                loss, h_prev = self.train_step(inputs, targets, h_prev, learning_rate)
                if t % 1000 == 0:
                    print(f'Epoch {epoch}, Step {t}, Loss: {loss:.4f}')

    def generate(self, seed_char, length=100, temperature=1.0):
        """Generate text, starting from the index of a seed character."""
        h = np.zeros((self.hidden_size, 1))
        x = np.zeros((self.vocab_size, 1))
        x[seed_char] = 1
        generated = []
        for _ in range(length):
            h = np.tanh(np.dot(self.Wxh, x) + np.dot(self.Whh, h) + self.bh)
            y = np.dot(self.Why, h) + self.by
            p = np.exp(y / temperature) / np.sum(np.exp(y / temperature))
            # Sample the next character index
            ix = np.random.choice(range(self.vocab_size), p=p.ravel())
            x = np.zeros((self.vocab_size, 1))
            x[ix] = 1
            generated.append(ix)
        return generated
Project 2: Sentiment Analysis with an LSTM
Goal
Build an LSTM model that classifies the sentiment of movie reviews.
import torch
import torch.nn as nn

class LSTMSentimentClassifier(nn.Module):
    def __init__(self, vocab_size, embedding_dim=128, hidden_dim=256,
                 num_layers=2, dropout=0.5):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embedding_dim)
        self.lstm = nn.LSTM(embedding_dim, hidden_dim, num_layers,
                            batch_first=True, dropout=dropout, bidirectional=True)
        self.dropout = nn.Dropout(dropout)
        self.fc = nn.Linear(hidden_dim * 2, 2)  # binary classification

    def forward(self, x):
        # x shape: (batch_size, seq_len)
        embedded = self.embedding(x)
        # embedded shape: (batch_size, seq_len, embedding_dim)
        lstm_out, (hidden, cell) = self.lstm(embedded)
        # lstm_out shape: (batch_size, seq_len, hidden_dim * 2)
        # Use the output of the last time step
        last_hidden = lstm_out[:, -1, :]
        dropped = self.dropout(last_hidden)
        output = self.fc(dropped)
        return output


# Training function
def train_sentiment_model(model, train_loader, val_loader, epochs=10):
    optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
    criterion = nn.CrossEntropyLoss()

    for epoch in range(epochs):
        model.train()
        total_loss = 0
        for batch in train_loader:
            texts, labels = batch
            optimizer.zero_grad()
            outputs = model(texts)
            loss = criterion(outputs, labels)
            loss.backward()
            # Gradient clipping
            torch.nn.utils.clip_grad_norm_(model.parameters(), 5)
            optimizer.step()
            total_loss += loss.item()

        # Validation
        model.eval()
        correct = 0
        total = 0
        with torch.no_grad():
            for batch in val_loader:
                texts, labels = batch
                outputs = model(texts)
                _, predicted = torch.max(outputs, 1)
                total += labels.size(0)
                correct += (predicted == labels).sum().item()
        accuracy = 100 * correct / total
        print(f'Epoch {epoch+1}, Loss: {total_loss:.4f}, Accuracy: {accuracy:.2f}%')
Project 3: Machine Translation with a Transformer
class TransformerTranslator(nn.Module):
    def __init__(self, src_vocab_size, tgt_vocab_size, d_model=512, num_heads=8,
                 num_layers=6, d_ff=2048, max_len=100):
        super().__init__()
        # Embeddings for the source and target languages
        self.src_embedding = nn.Embedding(src_vocab_size, d_model)
        self.tgt_embedding = nn.Embedding(tgt_vocab_size, d_model)
        # PositionalEncoding is assumed to be defined as in the section above
        self.positional_encoding = PositionalEncoding(d_model, max_len)
        # Transformer (batch_first so tensors are (batch, seq, d_model))
        self.transformer = nn.Transformer(d_model, num_heads, num_layers,
                                          num_layers, d_ff, batch_first=True)
        # Output layer
        self.output_layer = nn.Linear(d_model, tgt_vocab_size)

    def forward(self, src, tgt):
        # Embedding plus positional encoding
        src_emb = self.positional_encoding(self.src_embedding(src))
        tgt_emb = self.positional_encoding(self.tgt_embedding(tgt))
        # Build the target mask (prevents attending to future tokens)
        tgt_mask = self.generate_square_subsequent_mask(tgt.size(1))
        # Transformer forward pass
        output = self.transformer(src_emb, tgt_emb, tgt_mask=tgt_mask)
        # Predict the next token at every position
        output = self.output_layer(output)
        return output

    def generate_square_subsequent_mask(self, sz):
        mask = torch.triu(torch.ones(sz, sz) * float('-inf'), diagonal=1)
        return mask


# Inference function
def translate(model, src_sentence, src_vocab, tgt_vocab, max_len=50):
    model.eval()
    # Encode the source sentence
    src_tokens = [src_vocab[word] for word in src_sentence.split()]
    src_tensor = torch.tensor(src_tokens).unsqueeze(0)
    # Greedy decoding, one token at a time
    tgt_tokens = [tgt_vocab['<sos>']]
    for _ in range(max_len):
        tgt_tensor = torch.tensor(tgt_tokens).unsqueeze(0)
        with torch.no_grad():
            output = model(src_tensor, tgt_tensor)
        next_token = output[0, -1, :].argmax().item()
        tgt_tokens.append(next_token)
        if next_token == tgt_vocab['<eos>']:
            break
    # Map the token ids back to words (tgt_vocab.get_word is assumed to do this)
    translation = [tgt_vocab.get_word(token) for token in tgt_tokens]
    return ' '.join(translation[1:-1])  # strip <sos> and <eos>
Project 4: Named Entity Recognition with BERT
import torch
from transformers import BertForTokenClassification, BertTokenizer

class BERTNERModel:
    def __init__(self, num_labels):
        self.model = BertForTokenClassification.from_pretrained(
            'bert-base-uncased', num_labels=num_labels)
        self.tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

    def train(self, train_data, val_data, epochs=3):
        optimizer = torch.optim.AdamW(self.model.parameters(), lr=5e-5)
        for epoch in range(epochs):
            self.model.train()
            total_loss = 0
            for batch in train_data:
                # Prepare the inputs (labels are assumed to already be aligned
                # with the tokenized, padded input)
                inputs = self.tokenizer(batch['texts'], padding=True,
                                        truncation=True, return_tensors="pt")
                labels = batch['labels']
                # Forward pass
                outputs = self.model(**inputs, labels=labels)
                loss = outputs.loss
                # Backward pass
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
                total_loss += loss.item()
            # Validation (evaluate is assumed to be implemented separately)
            self.evaluate(val_data)

    def predict(self, text):
        self.model.eval()
        # Tokenize
        inputs = self.tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            outputs = self.model(**inputs)
        predictions = torch.argmax(outputs.logits, dim=-1)
        # Decode the predictions
        tokens = self.tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
        labels = predictions[0].cpu().numpy()
        # Group consecutive tokens with the same (non-O) label into entities
        entities = []
        current_entity = []
        current_label = None
        for token, label in zip(tokens, labels):
            if label != 0:  # non-O label
                if current_label == label:
                    current_entity.append(token)
                else:
                    if current_entity:
                        entities.append((current_entity, current_label))
                    current_entity = [token]
                    current_label = label
            else:
                if current_entity:
                    entities.append((current_entity, current_label))
                current_entity = []
                current_label = None
        return entities
Summary and Outlook
Suggested Learning Path
- Foundation stage (1-2 weeks)
  - Understand the basic concepts of RNNs
  - Implement a simple RNN
  - Understand the vanishing gradient problem
- Intermediate stage (2-3 weeks)
  - Gain a deep understanding of the LSTM gating mechanisms
  - Implement an LSTM and apply it to a real task
  - Learn variants such as the GRU
- Advanced stage (3-4 weeks)
  - Master the full Transformer architecture
  - Understand the self-attention mechanism
  - Implement a simple Transformer
- Application stage (ongoing)
  - Learn to use pre-trained models
  - Fine-tune BERT to solve real problems
  - Explore the latest model architectures
Key Resources
- Papers
  - "Attention Is All You Need" (Transformer)
  - "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding"
  - "Language Models are Few-Shot Learners" (GPT-3)
- Open-source frameworks
  - Hugging Face Transformers
  - PyTorch
  - TensorFlow
- Online courses
  - Stanford CS224N: Natural Language Processing with Deep Learning
  - Fast.ai Practical Deep Learning
  - Andrew Ng's Deep Learning Specialization
Looking ahead, some directions worth watching:
- Model scale: very large models such as GPT-4 and PaLM
- Efficiency: model compression and knowledge distillation
- Multimodal learning: combining text, images, and audio
- Continual learning: online updating and adaptation of models
- Interpretability: understanding how models reach their decisions