The hidden state vector
When we feed a sentence into the model, for example "Hello world":
token IDs: [15496, 995]
the Embedding and Transformer layers produce an intermediate representation for every token, with shape:
hidden_states: (batch_size, seq_len, hidden_dim), e.g. (1, 2, 768)
This is the output of the Transformer layers: one vector per token.
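To make this concrete, here is a minimal sketch (assuming the Hugging Face transformers library and the gpt2 checkpoint, the same setup as the full example code later in this article) that prints the hidden-state shape for "Hello world":

import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

inputs = tokenizer("Hello world", return_tensors="pt")  # token IDs [15496, 995]
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

hidden_states = outputs.hidden_states[-1]  # last Transformer layer
print(hidden_states.shape)                 # torch.Size([1, 2, 768]) = (batch_size, seq_len, hidden_dim)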
hidden state → logits: mapping into vocabulary space
🔹 Using the output projection matrix (usually the transpose of the embedding matrix)
To recover a word from a hidden state, we need a "score" for every token in the vocabulary; these scores are called logits. They are computed as follows:
logits = hidden_state @ W_out.T + b
where:
- W_out is the word embedding matrix, with shape (vocab_size, hidden_dim)
- @ is matrix multiplication, and hidden_state has shape (seq_len, hidden_dim)
- the resulting logits have shape (seq_len, vocab_size)
So the hidden state at every position is mapped to a vector of scores over the whole vocabulary.
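As a sanity check on these shapes, here is a small sketch using random tensors; the dimensions match the GPT-2 example above, and the bias term appears only because the formula above includes one (GPT-2 itself ties W_out to the input embedding and uses no output bias):

import torch

seq_len, hidden_dim, vocab_size = 2, 768, 50257

hidden_state = torch.randn(seq_len, hidden_dim)  # Transformer output for one sequence
W_out = torch.randn(vocab_size, hidden_dim)      # output projection / embedding matrix
b = torch.zeros(vocab_size)                      # optional bias; many models omit it

logits = hidden_state @ W_out.T + b
print(logits.shape)  # torch.Size([2, 50257]) = (seq_len, vocab_size)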
logits → token ID: picking the most likely token
Every position now has a logits vector, for example:
logits = [2.1, -0.5, 0.3, 6.9, ...]  # length = vocab_size
There are several ways to choose a token:
| Method | Description |
|---|---|
| argmax(logits) | take the highest-scoring token: greedy decoding |
| softmax → sample | convert to a probability distribution, then sample from it |
| top-k sampling | sample from the k most likely tokens, to control diversity |
| top-p (nucleus) | sample from the smallest set of tokens whose cumulative probability reaches p |
For example, greedy decoding:
probs = softmax(logits)
token_id = torch.argmax(probs).item()
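The table above only names top-k and top-p sampling; the following is a minimal sketch of both strategies (my own illustration, not a library API), operating on a single logits vector:

import torch

def sample_top_k(logits, k=50):
    # Keep only the k largest logits, renormalize, then sample one token ID
    topk_vals, topk_ids = torch.topk(logits, k)
    probs = torch.softmax(topk_vals, dim=-1)
    return topk_ids[torch.multinomial(probs, 1)].item()

def sample_top_p(logits, p=0.9):
    # Nucleus sampling: keep the smallest set of tokens whose cumulative probability reaches p
    probs = torch.softmax(logits, dim=-1)
    sorted_probs, sorted_ids = torch.sort(probs, descending=True)
    cumulative = torch.cumsum(sorted_probs, dim=-1)
    cutoff = int(torch.searchsorted(cumulative, torch.tensor(p)).item()) + 1  # at least one token survives
    kept = sorted_probs[:cutoff] / sorted_probs[:cutoff].sum()
    return sorted_ids[torch.multinomial(kept, 1)].item()

logits = torch.randn(50257)  # stand-in for a real vocabulary-sized logits vector
print(sample_top_k(logits, k=10), sample_top_p(logits, p=0.9))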
token ID → token string piece
A token ID is simply an index into the vocabulary, for example:
tokenizer.convert_ids_to_tokens(50256)  # output: '<|endoftext|>'
tokenizer.convert_ids_to_tokens(15496)  # output: 'Hello'
For a list of token IDs:
tokenizer.convert_ids_to_tokens([15496, 995])  # output: ['Hello', 'Ġworld']  (in GPT-2's BPE, "Ġ" marks a leading space)
tokens → joined back into text (decode)
Tokens are subwords or sub-character pieces, for example:
["Hel", "lo", " world", "!"]
tokenizer.decode() automatically merges them back into a single string:
tokenizer.decode([15496, 995])  # output: "Hello world"
It handles details such as whitespace and subword joining, restoring a human-readable sentence.
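A quick way to see what decode() adds on top of convert_ids_to_tokens() (assuming the GPT-2 tokenizer used throughout this article):

from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
ids = tokenizer("Hello world")["input_ids"]

print(tokenizer.convert_ids_to_tokens(ids))  # raw BPE pieces, e.g. ['Hello', 'Ġworld']
print(tokenizer.decode(ids))                 # merged and human-readable: "Hello world"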
Multi-step generation: feeding predictions back in as input
In generation tasks (e.g. GPT), the model produces text one token at a time.
The loop looks like this:
Input: "你好"
↓
tokenize → [token IDs]
↓
feed into the model → logits for the next token
↓
pick a token ID → decode it into text
↓
append it to the input and feed the model again → next round of generation
↓
...
until an EOS (end-of-sequence) token is produced or the maximum length is reached
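Below is a minimal sketch of this loop using greedy decoding; model.generate() (used in the example code later) does the same thing with more options and optimizations such as key/value caching:

import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

input_ids = tokenizer("Artificial intelligence is", return_tensors="pt")["input_ids"]

for _ in range(10):                               # generate at most 10 new tokens
    with torch.no_grad():
        logits = model(input_ids).logits          # (1, seq_len, vocab_size)
    next_id = torch.argmax(logits[0, -1, :])      # greedy: pick the most likely next token
    if next_id.item() == tokenizer.eos_token_id:  # stop at <|endoftext|>
        break
    input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)  # append and continue

print(tokenizer.decode(input_ids[0]))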
Summary of the whole pipeline:
(1) input text → tokenizer → token IDs
(2) token IDs → Embedding → hidden_states (intermediate vectors)
(3) hidden_states × W.T → logits (vocabulary scores)
(4) logits → sampling → token ID
(5) token ID → token → decode → text
(6) append the text and repeat (autoregressive generation)
Example code
"""
大語言模型解碼過程詳解
===========================
本示例展示了大語言模型如何將隱藏狀態向量解碼成文本輸出
使用GPT-2模型作為演示,展示從輸入文本到預測下一個token的完整流程
"""import torch
import numpy as np
import matplotlib.pyplot as plt
from transformers import GPT2LMHeadModel, GPT2Tokenizer# 設置隨機種子,確保結果可復現
torch.manual_seed(42)def display_token_probabilities(probabilities, tokens, top_k=5):"""可視化展示token的概率分布(僅展示top_k個)"""# 獲取前k個最大概率及其索引top_probs, top_indices = torch.topk(probabilities, top_k)top_probs = top_probs.detach().numpy()top_tokens = [tokens[idx] for idx in top_indices]print(f"\n前{top_k}個最可能的下一個token:")for token, prob in zip(top_tokens, top_probs):print(f" {token:15s}: {prob:.6f} ({prob * 100:.2f}%)")# 可視化概率分布plt.figure(figsize=(10, 6))plt.bar(top_tokens, top_probs)plt.title(f"Top {top_k} The probability distribution of the next token")plt.ylabel("probability")plt.xlabel("Token")plt.xticks(rotation=45)plt.tight_layout()plt.show()def main():print("Step 1: 加載預訓練模型和分詞器")# 從Hugging Face加載預訓練的GPT-2模型和分詞器tokenizer = GPT2Tokenizer.from_pretrained("gpt2")model = GPT2LMHeadModel.from_pretrained("gpt2")model.eval() # 將模型設置為評估模式print("\nStep 2: 準備輸入文本")input_text = "Artificial intelligence is"print(f"輸入文本: '{input_text}'")# 將輸入文本轉換為模型需要的格式inputs = tokenizer(input_text, return_tensors="pt")input_ids = inputs["input_ids"]attention_mask = inputs["attention_mask"]# 展示分詞結果tokens = tokenizer.convert_ids_to_tokens(input_ids[0])print(f"分詞結果: {tokens}")print(f"Token IDs: {input_ids[0].tolist()}")print("\nStep 3: 運行模型前向傳播")# 使用torch.no_grad()避免計算梯度,節省內存with torch.no_grad():# output_hidden_states=True 讓模型返回所有層的隱藏狀態outputs = model(input_ids=input_ids,attention_mask=attention_mask,output_hidden_states=True)# 獲取最后一層的隱藏狀態# hidden_states的形狀: [層數, batch_size, seq_len, hidden_dim]last_layer_hidden_states = outputs.hidden_states[-1]print(f"隱藏狀態形狀: {last_layer_hidden_states.shape}")# 獲取序列中最后一個token的隱藏狀態last_token_hidden_state = last_layer_hidden_states[0, -1, :]print(f"最后一個token的隱藏狀態形狀: {last_token_hidden_state.shape}")print(f"隱藏狀態前5個值: {last_token_hidden_state[:5].tolist()}")print("\nStep 4: 手動計算logits")# 從模型中獲取輸出嵌入矩陣的權重lm_head_weights = model.get_output_embeddings().weight # [vocab_size, hidden_dim]print(f"語言模型輸出嵌入矩陣形狀: {lm_head_weights.shape}")# 通過點積計算logits# logits代表每個詞匯表中token的分數logits = torch.matmul(last_token_hidden_state, lm_head_weights.T) # [vocab_size]print(f"Logits形狀: {logits.shape}")print(f"Logits值域: [{logits.min().item():.4f}, {logits.max().item():.4f}]")print("\nStep 5: 應用softmax轉換為概率")# 使用softmax將logits轉換為概率分布probabilities = torch.softmax(logits, dim=0)print(f"概率總和: {probabilities.sum().item():.4f}") # 應該接近1# 找出概率最高的tokennext_token_id = torch.argmax(probabilities).item()next_token = tokenizer.decode([next_token_id])print(f"預測的下一個token (ID: {next_token_id}): '{next_token}'")# 展示完整的句子complete_text = input_text + next_tokenprint(f"生成的文本: '{complete_text}'")# 展示top-k的概率分布display_token_probabilities(probabilities, tokenizer.convert_ids_to_tokens(range(len(probabilities))), top_k=10)print("\nStep 6: 比較與模型內置解碼結果")# 獲取模型內置的logits輸出model_outputs = model(input_ids=input_ids, attention_mask=attention_mask)model_logits = model_outputs.logitsprint(f"模型輸出的logits形狀: {model_logits.shape}")# 獲取最后一個token位置的logitslast_token_model_logits = model_logits[0, -1, :]# 驗證我們手動計算的logits與模型輸出的logits是否一致is_close = torch.allclose(logits, last_token_model_logits, rtol=1e-4)print(f"手動計算的logits與模型輸出的logits是否一致: {is_close}")# 如果不一致,計算差異if not is_close:diff = torch.abs(logits - last_token_model_logits)print(f"最大差異: {diff.max().item():.8f}")print(f"平均差異: {diff.mean().item():.8f}")print("\nStep 7: 使用模型進行文本生成")# 使用模型的generate方法生成更多文本# 生成時傳遞 attention_mask 和 pad_token_idgenerated_ids = model.generate(input_ids,max_length=input_ids.shape[1] + 10, # 
生成10個額外的tokentemperature=1.0,do_sample=True,top_k=50,top_p=0.95,num_return_sequences=1,attention_mask=attention_mask, # 添加 attention_maskpad_token_id=tokenizer.eos_token_id # 明確設置 pad_token_id 為 eos_token_id)generated_text = tokenizer.decode(generated_ids[0], skip_special_tokens=True)print(f"模型生成的文本:\n'{generated_text}'")if __name__ == "__main__":main()
RoBERTa code example
import torch
from transformers import RobertaTokenizer, RobertaForMaskedLM
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns

# Font settings for displaying Chinese characters in plots (only needed if labels contain Chinese)
plt.rcParams['font.sans-serif'] = ['SimHei']
plt.rcParams['axes.unicode_minus'] = False

# 1. Load the pre-trained RoBERTa model and tokenizer
tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
model = RobertaForMaskedLM.from_pretrained('roberta-base')

# 2. Define an example sentence containing the mask token
text = f"The capital of France is {tokenizer.mask_token}."
print(f"Original text: {text}")

# 3. Encode the text into the model's input format
inputs = tokenizer(text, return_tensors="pt")
print(f"\nTokenized input IDs: {inputs['input_ids'][0].tolist()}")
print(f"Corresponding tokens: {tokenizer.convert_ids_to_tokens(inputs['input_ids'][0])}")

# 4. Find the position of the mask token
mask_token_index = torch.where(inputs["input_ids"] == tokenizer.mask_token_id)[1]
if mask_token_index.numel() == 0:
    raise ValueError("No mask token found; please check the input text.")
print(f"\nPosition of the mask token: {mask_token_index.item()}")

# 5. Forward pass to get predictions (output_hidden_states=True so the hidden states are returned too)
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# 6. Get the prediction scores at the mask position
logits = outputs.logits
mask_token_logits = logits[0, mask_token_index, :]

# 7. Find the 5 most likely tokens
top_5_tokens = torch.topk(mask_token_logits, 5, dim=1)
top_5_token_indices = top_5_tokens.indices[0].tolist()
top_5_token_scores = top_5_tokens.values[0].tolist()

print("\nPredictions:")
for i, (index, score) in enumerate(zip(top_5_token_indices, top_5_token_scores)):
    token = tokenizer.decode([index])
    probability = torch.softmax(mask_token_logits, dim=1)[0, index].item()
    print(f"  {i + 1}. '{token}' - score: {score:.2f}, probability: {probability:.4f}")

# 8. Get the vector representation (available because output_hidden_states=True was set above)
last_hidden_states = outputs.hidden_states[-1]

# 9. Visualize the vector at the mask position
def visualize_vector(vector, title):
    plt.figure(figsize=(10, 6))
    plt.bar(range(len(vector)), vector)
    plt.title(title)
    plt.xlabel('Dimension')
    plt.ylabel('Activation')
    plt.tight_layout()
    plt.show()

# 10. Visualize the decoding process
def visualize_decoding_process():
    # Weight matrix of the model's output (LM head) decoder
    decoder_weights = model.lm_head.decoder.weight.detach()
    # Hidden state vector at the mask position
    mask_hidden_state = last_hidden_states[0, mask_token_index].squeeze()
    # Dot-product scores against the vocabulary
    # (note: this skips the LM head's dense + layer-norm transform, so the scores only approximate the model's logits)
    scores = torch.matmul(mask_hidden_state, decoder_weights.t())
    # Indices and surface forms of the 5 highest-scoring tokens
    top_indices = torch.topk(scores, 5).indices.tolist()
    top_tokens = [tokenizer.decode([idx]) for idx in top_indices]

    plt.figure(figsize=(12, 6))

    # Similarity between the hidden state and the vocabulary vectors
    plt.subplot(1, 2, 1)
    sns.heatmap(scores.reshape(1, -1)[:, top_indices].detach().numpy(),
                annot=True, fmt=".2f", cmap="YlGnBu",
                xticklabels=top_tokens)
    plt.title("Similarity scores between the hidden state and the vocabulary")
    plt.xlabel("Candidate tokens")
    plt.ylabel("Dot-product score")

    # Probability distribution after softmax
    plt.subplot(1, 2, 2)
    probabilities = torch.softmax(scores, dim=0)[top_indices].detach().numpy()
    plt.bar(top_tokens, probabilities)
    plt.title("Decoded probability distribution")
    plt.xlabel("Candidate tokens")
    plt.ylabel("Probability")
    plt.tight_layout()
    plt.show()

# Visualize the vector at the mask position
mask_vector = last_hidden_states[0, mask_token_index].squeeze().detach().numpy()
visualize_vector(mask_vector[:50], "Hidden state at the mask position (first 50 dimensions)")

# Show the decoding process
visualize_decoding_process()

# 12. Show the final prediction
predicted_token_id = top_5_token_indices[0]
predicted_token = tokenizer.decode([predicted_token_id])
print(f"\nFinal prediction: '{predicted_token}'")
print(f"Complete sentence: {text.replace(tokenizer.mask_token, predicted_token)}")
A more detailed code example
"""
大語言模型向量解碼過程詳解 - 使用BERT模型
=============================================
本示例展示了大語言模型如何將隱藏狀態向量解碼成文本輸出
"""
import time
from datetime import datetimeimport torch
import numpy as np
import matplotlib.pyplot as plt
from transformers import BertTokenizer, BertForMaskedLM
from typing import List, Tuple, Dict# 設置隨機種子,確保結果可復現
torch.manual_seed(42)class DecodingVisualizer:"""用于可視化大語言模型解碼過程的工具類"""def __init__(self, model_name: str = "bert-base-uncased", use_cuda: bool = True):"""初始化模型和分詞器"""print(f"正在加載 {model_name} 模型和分詞器...")# 添加設備自動檢測self.device = torch.device("cuda" if torch.cuda.is_available() and use_cuda else "cpu")# 注意:這里需要根據use_cuda參數決定使用哪個設備,而不是只檢查可用性# 添加低內存加載選項self.tokenizer = BertTokenizer.from_pretrained(model_name)print(f"正在加載 {model_name} (設備: {self.device})...")start_time = time.time()self.model = BertForMaskedLM.from_pretrained(model_name,low_cpu_mem_usage=True,torch_dtype=torch.float16 if self.device.type == "cuda" else torch.float32).to(self.device) # 確保這里沒有遺漏.to(self.device)self.model.eval() # 將模型設置為評估模式# 獲取模型配置self.hidden_size = self.model.config.hidden_sizeself.vocab_size = self.model.config.vocab_size# 獲取MASK token IDself.mask_token_id = self.tokenizer.mask_token_idself.mask_token = self.tokenizer.mask_tokenload_time = time.time() - start_timeprint(f"加載完成! 耗時: {load_time:.2f}s")print(f"模型加載完成! 隱藏層維度: {self.hidden_size}, 詞表大小: {self.vocab_size}")print(f"MASK token: '{self.mask_token}', ID: {self.mask_token_id}")def prepare_masked_input(self, text: str) -> Dict[str, torch.Tensor]:"""掩碼輸入準備函數"""# 更智能的掩碼位置選擇words = text.split()if not words:return {"inputs": self.tokenizer(text, return_tensors="pt").to(self.device), # 注意:這里需要將inputs張量移動到設備上"original_text": text,"masked_text": text,"original_word": "","masked_position": 0}# 選擇內容詞進行掩碼(避免掩碼停用詞)content_pos = []stopwords = {"the", "a", "an", "is", "are", "of", "to"}for i, word in enumerate(words):if word.lower() not in stopwords:content_pos.append(i)# 如果沒有內容詞,則選擇最后一個詞masked_pos = content_pos[-1] if content_pos else len(words) - 1original_word = words[masked_pos]words[masked_pos] = self.mask_tokenmasked_text = " ".join(words)inputs = self.tokenizer(masked_text,return_tensors="pt",max_length=512,truncation=True,padding="max_length" # 固定長度便于批處理).to(self.device)return {"inputs": inputs,"original_text": text,"masked_text": masked_text,"original_word": original_word,"masked_position": masked_pos + 1 # 考慮[CLS] token}def decode_step_by_step(self, text: str) -> None:"""詳細展示BERT模型解碼過程的每個步驟參數:text: 要處理的輸入文本verbose: 是否打印詳細過程返回:包含完整解碼信息的字典:{"input_text": str,"masked_text": str,"hidden_state": torch.Tensor,"logits": torch.Tensor,"predictions": List[Tuple[str, float]],"top_k_predictions": List[Tuple[str, float]]}"""print("\n" + "="*60)print("BERT模型解碼過程演示")print("="*60)# 準備帶掩碼的輸入masked_data = self.prepare_masked_input(text)inputs = masked_data["inputs"]original_text = masked_data["original_text"]masked_text = masked_data["masked_text"]original_word = masked_data["original_word"]print(f"原始文本: '{original_text}'")print(f"掩碼文本: '{masked_text}'")print(f"被掩碼的詞: '{original_word}'")# 分詞結果input_ids = inputs["input_ids"]token_type_ids = inputs["token_type_ids"]attention_mask = inputs["attention_mask"]tokens = self.tokenizer.convert_ids_to_tokens(input_ids[0])print(f"\n分詞結果: {tokens}")print(f"Token IDs: {input_ids[0].tolist()}")# 查找[MASK]的位置mask_positions = [i for i, id in enumerate(input_ids[0]) if id == self.mask_token_id]if mask_positions:mask_position = mask_positions[0]print(f"[MASK]的位置: {mask_position}, Token: '{tokens[mask_position]}'")else:print("未找到[MASK]標記,使用最后一個token作為示例")mask_position = len(tokens) - 2 # 避免[SEP]標記# Step 1: 運行模型前向傳播print("\n【Step 1: 運行模型前向傳播】")with torch.no_grad():outputs = self.model(input_ids=input_ids,token_type_ids=token_type_ids,attention_mask=attention_mask,output_hidden_states=True)# 獲取最后一層的隱藏狀態last_hidden_states = 
outputs.hidden_states[-1]print(f"隱藏狀態形狀: {last_hidden_states.shape}")# 獲取[MASK]位置的隱藏狀態mask_hidden_state = last_hidden_states[0, mask_position, :]print(f"[MASK]位置的隱藏狀態形狀: {mask_hidden_state.shape}")print(f"隱藏狀態前5個值: {mask_hidden_state[:5].tolist()}")# Step 2優化: 添加詳細解釋和性能優化print("\n【Step 2: 解碼向量生成logits(解碼過程的核心)】")print("解碼過程實質上是將隱藏狀態向量映射到詞表空間的一個線性變換")print(f"數學表達式: logits = hidden_state × W^T + b")# 使用更高效的矩陣運算with torch.no_grad():cls_weights = self.model.cls.predictions.decoder.weightcls_bias = self.model.cls.predictions.decoder.biasprint(f"解碼器權重矩陣形狀: {cls_weights.shape}")print(f"解碼器偏置向量形狀: {cls_bias.shape}")# 使用einsum進行高效矩陣乘法manual_logits = torch.einsum("d,vd->v",mask_hidden_state,cls_weights) + cls_bias# 添加溫度系數調節temperature = 1.0 # 可調節參數tempered_logits = manual_logits / temperature# 驗證一致性時添加容差說明model_logits = outputs.logits[0, mask_position, :]is_close = torch.allclose(manual_logits,model_logits,rtol=1e-3,atol=1e-5)print(f"\n手動計算的logits與模型輸出是否一致: {is_close}")if not is_close:diff = torch.abs(manual_logits - model_logits)print(f"最大差異: {diff.max().item():.8f}")print(f"平均差異: {diff.mean().item():.8f}")print("注: 小的數值差異可能是由于計算精度造成的")# Step 3: 從logits到概率print("\n【Step 3: 將logits轉換為概率】")with torch.no_grad():# 使用softmax轉換為概率分布probabilities = torch.softmax(manual_logits, dim=0)print(f"概率總和: {probabilities.sum().item():.4f}") # 應該接近1# 找出概率最高的tokentop_probs, top_indices = torch.topk(probabilities, 5)predicted_token_id = top_indices[0].item()predicted_token = self.tokenizer.convert_ids_to_tokens([predicted_token_id])[0]predicted_word = self.tokenizer.decode([predicted_token_id])print(f"\n預測的token (ID: {predicted_token_id}): '{predicted_token}'")print(f"解碼后的單詞: '{predicted_word}'")# 原始被遮蔽的詞的概率if original_word:original_word_ids = self.tokenizer.encode(original_word, add_special_tokens=False)if original_word_ids:original_id = original_word_ids[0]original_prob = probabilities[original_id].item()print(f"原始單詞 '{original_word}' (ID: {original_id}) 的概率: {original_prob:.6f} ({original_prob*100:.2f}%)")# 展示前10個最可能的tokensself._display_token_probabilities(probabilities, top_k=10)def _display_token_probabilities(self, probabilities: torch.Tensor, top_k: int = 5) -> None:# 獲取前k個最大概率及其索引top_probs, top_indices = torch.topk(probabilities, top_k)top_tokens = [self.tokenizer.convert_ids_to_tokens([idx.item()])[0] for idx in top_indices]top_words = [self.tokenizer.decode([idx.item()]) for idx in top_indices]# 創建使用更精確比例的圖形fig, ax = plt.subplots(figsize=(16, 9), constrained_layout=True)# 使用更適合數據對比的漸變色調colors = plt.cm.Blues(np.linspace(0.6, 0.9, top_k))# 繪制條形圖,適當增加條形寬度以提高可讀性bars = ax.barh(range(top_k), top_probs.tolist(), color=colors, height=0.6)# 自定義Y軸刻度,同時顯示token和對應的實際內容ax.set_yticks(range(top_k))labels = [f"{w} ({t})" if t != w else w for t, w in zip(top_tokens, top_words)]ax.set_yticklabels(labels, fontsize=12)# 添加更突出的標題與標簽ax.set_title("Token Prediction Probabilities", fontsize=18, fontweight='bold', pad=20)ax.set_xlabel("Probability", fontsize=15, fontweight='semibold', labelpad=12)# 去掉多余的Y軸標簽,因為標簽已經在刻度上顯示ax.set_ylabel("")# 動態設置X軸范圍,確保最高概率條形圖占據約80%的寬度max_prob = top_probs[0].item()ax.set_xlim(0, max(max_prob * 1.25, 0.05))# 添加更清晰的數據標簽for i, (bar, prob) in enumerate(zip(bars, top_probs)):width = bar.get_width()ax.text(width + 0.005,i,f"{prob:.4f} ({prob:.1%})",ha='left',va='center',fontsize=13,fontweight='bold',color='#333333')# 添加半透明的網格線以便于閱讀ax.grid(axis='x', linestyle='--', alpha=0.4, color='gray')# 反轉Y軸使最高概率在上方ax.invert_yaxis()# 
美化圖表邊框和背景ax.spines['top'].set_visible(False)ax.spines['right'].set_visible(False)ax.spines['left'].set_linewidth(0.5)ax.spines['bottom'].set_linewidth(0.5)# 設置淺色背景以提高對比度ax.set_facecolor('#f8f8f8')# 添加概率條形圖的圓角效果for bar in bars:bar.set_edgecolor('white')bar.set_linewidth(1)plt.show()def visualize_linear_transformation(self, text: str) -> None:"""可視化向量解碼的線性變換過程"""print("\n" + "=" * 60)print("可視化向量解碼的線性變換過程")print("=" * 60)# 準備帶掩碼的輸入masked_data = self.prepare_masked_input(text)inputs = masked_data["inputs"]# 尋找[MASK]位置input_ids = inputs["input_ids"]mask_positions = [i for i, id in enumerate(input_ids[0]) if id == self.mask_token_id]if mask_positions:mask_position = mask_positions[0]else:mask_position = len(input_ids[0]) - 2 # 避免[SEP]# 運行模型獲取隱藏狀態with torch.no_grad():outputs = self.model(**inputs,output_hidden_states=True)last_hidden_states = outputs.hidden_states[-1]mask_hidden_state = last_hidden_states[0, mask_position, :]# 獲取解碼器權重cls_weights = self.model.cls.predictions.decoder.weight# 為了可視化,我們只取前2維隱藏狀態和幾個樣本詞reduced_hidden = mask_hidden_state[:2].cpu().numpy() # 添加cpu()# 選取幾個常見詞的權重向量common_words = ["the", "is", "and", "of", "to", "a", "in", "for", "with"]word_ids = []for word in common_words:word_ids.extend(self.tokenizer.encode(word, add_special_tokens=False))# 確保我們有不重復的IDsword_ids = list(set(word_ids))[:8] # 取前8個word_tokens = [self.tokenizer.convert_ids_to_tokens([id])[0] for id in word_ids]# 獲取這些詞的權重向量word_vectors = cls_weights[word_ids, :2].cpu().numpy() # 添加cpu()# 可視化plt.figure(figsize=(10, 8))# 繪制隱藏狀態向量plt.scatter(reduced_hidden[0], reduced_hidden[1], c='red', s=100, marker='*',label='Hidden state vector')# 繪制詞向量plt.scatter(word_vectors[:, 0], word_vectors[:, 1], c='blue', s=50)# 添加詞標簽for i, token in enumerate(word_tokens):plt.annotate(token, (word_vectors[i, 0], word_vectors[i, 1]),fontsize=10, alpha=0.8)# 計算這些詞的logits(向量點積)logits = np.dot(reduced_hidden, word_vectors.T)# 繪制從隱藏狀態到各詞向量的連線,線寬表示logit值max_logit = np.max(np.abs(logits))for i, token in enumerate(word_tokens):# 歸一化logit值作為線寬width = 0.5 + 3.0 * (logits[i] + max_logit) / (2 * max_logit)# 用顏色表示logit的正負color = 'green' if logits[i] > 0 else 'red'alpha = abs(logits[i]) / max_logitplt.plot([reduced_hidden[0], word_vectors[i, 0]],[reduced_hidden[1], word_vectors[i, 1]],linewidth=width, alpha=alpha, color=color)plt.title("The relationship between the hidden state vector and the word vector (2D projection)")plt.xlabel("dimension1")plt.ylabel("dimension2")plt.grid(True, alpha=0.3)plt.legend()plt.tight_layout()plt.show()# 展示與這些詞的點積(logits)print("\n隱藏狀態與詞向量的點積(logits):")for i, token in enumerate(word_tokens):print(f" 與 '{token}' 的點積: {logits[i]:.4f}")# 將logits轉換為概率probs = np.exp(logits) / np.sum(np.exp(logits))print("\n轉換為概率后:")for i, token in enumerate(word_tokens):print(f" '{token}' 的概率: {probs[i]:.4f} ({probs[i]*100:.2f}%)")def demonstrate_bert_mlm(self, text: str, positions_to_mask=None) -> None:"""演示BERT掩碼語言模型的完整預測過程"""print("\n" + "=" * 60)print("BERT掩碼語言模型演示")print("=" * 60)# 分詞inputs = self.tokenizer(text,return_tensors="pt",padding=True,truncation=True).to(self.device) # 添加.to(self.device)將輸入移動到正確的設備input_ids = inputs["input_ids"]tokens = self.tokenizer.convert_ids_to_tokens(input_ids[0])print(f"原始文本: '{text}'")print(f"分詞結果: {tokens}")# 如果沒有指定要掩碼的位置,則隨機選擇if positions_to_mask is None:# 排除[CLS]和[SEP]標記valid_positions = list(range(1, len(tokens) - 1))# 隨機選擇15%的token進行掩碼num_to_mask = max(1, int(len(valid_positions) * 0.15))positions_to_mask = np.random.choice(valid_positions, num_to_mask, replace=False)# 應用掩碼masked_input_ids = 
input_ids.clone()for pos in positions_to_mask:if pos < len(tokens):original_token = tokens[pos]original_id = input_ids[0, pos].item()masked_input_ids[0, pos] = self.mask_token_idprint(f"位置 {pos}: 將 '{original_token}' (ID: {original_id}) 替換為 '{self.mask_token}'")# 運行模型with torch.no_grad():outputs = self.model(input_ids=masked_input_ids,token_type_ids=inputs["token_type_ids"],attention_mask=inputs["attention_mask"])predictions = outputs.logits# 對每個掩碼位置進行預測print("\n預測結果:")for pos in positions_to_mask:if pos < len(tokens):# 獲取該位置的logitslogits = predictions[0, pos, :]# 應用softmax獲取概率probs = torch.softmax(logits, dim=0)# 獲取概率最高的tokentop_probs, top_indices = torch.topk(probs, 5)# 顯示預測結果original_token = tokens[pos]original_id = input_ids[0, pos].item()print(f"\n位置 {pos} 原始token: '{original_token}' (ID: {original_id})")print("Top 5預測:")for i, (index, prob) in enumerate(zip(top_indices, top_probs)):predicted_token = self.tokenizer.convert_ids_to_tokens([index])[0]print(f" {i + 1}. '{predicted_token}': {prob:.6f} ({prob * 100:.2f}%)")# 檢查原始token的排名和概率original_prob = probs[original_id].item()original_rank = torch.where(torch.argsort(probs, descending=True) == original_id)[0].item() + 1print(f" 原始token '{original_token}' 排名: #{original_rank}, 概率: {original_prob:.6f} ({original_prob * 100:.2f}%)")# 恢復掩碼后的文本predicted_ids = torch.argmax(outputs.logits, dim=-1)predicted_tokens = []for i in range(len(tokens)):if i in positions_to_mask:# 使用預測的tokenpredicted_token = self.tokenizer.convert_ids_to_tokens([predicted_ids[0, i].item()])[0]predicted_tokens.append(predicted_token)else:predicted_tokens.append(tokens[i])# 解碼回文本predicted_text = self.tokenizer.convert_tokens_to_string(predicted_tokens)print(f"\n恢復后的文本: '{predicted_text}'")def _nucleus_sampling(self, logits: torch.Tensor, p: float = 0.9) -> torch.Tensor:"""實現nucleus sampling (也稱為top-p sampling)Args:logits: 模型輸出的logitsp: 概率質量閾值(默認0.9)Returns:采樣得到的token ID"""# 計算softmax概率probs = torch.softmax(logits, dim=-1)# 按概率從大到小排序sorted_probs, sorted_indices = torch.sort(probs, descending=True)# 計算累積概率cumulative_probs = torch.cumsum(sorted_probs, dim=-1)# 找到累積概率超過p的位置nucleus = cumulative_probs < p# 確保至少選擇一個token(如果所有nucleus都是False)if not nucleus.any():nucleus[0] = True# 將概率低于閾值的token概率設為0nucleus_probs = torch.zeros_like(probs)nucleus_probs[sorted_indices[nucleus]] = sorted_probs[nucleus]# 重新歸一化概率if nucleus_probs.sum() > 0:nucleus_probs = nucleus_probs / nucleus_probs.sum()else:# 如果所有概率都為0,則使用原始概率的top-1nucleus_probs[sorted_indices[0]] = 1.0# 采樣return torch.multinomial(nucleus_probs, num_samples=1)def compare_decoding_strategies(self, text: str = None):"""比較不同解碼策略的結果"""if text is None:# 使用更具歧義性的例子,讓不同策略有可能生成不同結果text = "The scientist made a [MASK] discovery that changed the field."print(f"使用示例文本: '{text}'")# 擴展策略集合,使用多種參數strategies = {"貪婪解碼": lambda logits: torch.argmax(logits, dim=-1),"Top-K=3": lambda logits: torch.multinomial(self._top_k_sampling(logits, k=3),num_samples=1),"Top-K=10": lambda logits: torch.multinomial(self._top_k_sampling(logits, k=10),num_samples=1),"Top-P=0.5": lambda logits: self._nucleus_sampling(logits, 0.5),"Top-P=0.9": lambda logits: self._nucleus_sampling(logits, 0.9)}# Top-K采樣函數def _top_k_sampling(self, logits, k=5):values, _ = torch.topk(logits, k)min_value = values[-1]# 創建一個掩碼,保留top-k的值mask = logits >= min_valuefiltered_logits = logits.clone()# 將非top-k的logits設為負無窮filtered_logits[~mask] = float('-inf')# 應用softmax獲取概率分布probs = torch.softmax(filtered_logits, dim=-1)return probs# 為類添加輔助方法self._top_k_sampling = _top_k_sampling.__get__(self, 
type(self))masked_data = self.prepare_masked_input(text)inputs = masked_data["inputs"]# 查找[MASK]的位置input_ids = inputs["input_ids"]mask_positions = [i for i, id in enumerate(input_ids[0]) if id == self.mask_token_id]if mask_positions:mask_position = mask_positions[0]else:# 如果沒有找到MASK標記,使用默認位置mask_position = masked_data["masked_position"]print(f"\n掩碼詞: '{self.mask_token}'")print(f"掩碼文本: '{masked_data['masked_text']}'")with torch.no_grad():outputs = self.model(**inputs)logits = outputs.logits[0, mask_position]probs = torch.softmax(logits, dim=-1)# 獲取原始單詞的概率和排名if masked_data["original_word"]:original_word_tokens = self.tokenizer.encode(masked_data["original_word"],add_special_tokens=False)if original_word_tokens:original_id = original_word_tokens[0]original_prob = probs[original_id].item()original_rank = torch.where(torch.argsort(probs, descending=True) == original_id)[0].item() + 1print(f"原始詞: '{masked_data['original_word']}', 概率: {original_prob:.4f}, 在詞表中排名: #{original_rank}")# 獲取總體詞匯表預測的Top 10top_probs, top_indices = torch.topk(probs, 10)print("\n模型Top-10預測詞:")for i, (index, prob) in enumerate(zip(top_indices, top_probs)):token = self.tokenizer.decode([index.item()])print(f" {i + 1}. '{token}': {prob:.4f}")print("\n不同解碼策略比較結果:")results = {}for name, strategy in strategies.items():# 對每個策略運行5次,查看隨機性效果strategy_results = []for i in range(5 if "Top" in name else 1): # 貪婪解碼是確定性的,只需運行一次pred_id = strategy(logits).item()pred_token = self.tokenizer.decode([pred_id])pred_prob = probs[pred_id].item()strategy_results.append((pred_token, pred_prob))results[name] = strategy_results# 顯示結果for name, strategy_results in results.items():print(f"\n{name}:")if len(strategy_results) == 1:token, prob = strategy_results[0]print(f" 預測詞: '{token}', 概率: {prob:.4f}")else:# 對于隨機策略,分析多次運行的結果tokens = [r[0] for r in strategy_results]# 顯示唯一詞及其出現次數unique_tokens = {}for token in tokens:if token not in unique_tokens:unique_tokens[token] = 0unique_tokens[token] += 1# 顯示結果print(f" 5次采樣結果:")for token, count in unique_tokens.items():prob = next(p for t, p in strategy_results if t == token)print(f" '{token}': {count}/5次, 概率: {prob:.4f}")# 額外嘗試幾個更具歧義的文本if text == "The scientist made a [MASK] discovery that changed the field.":extra_examples = ["The weather forecast for tomorrow is [MASK].","She felt [MASK] after hearing the unexpected news.","The movie was both entertaining and [MASK]."]print("\n\n更多測試示例:")for example in extra_examples:print(f"\n文本: '{example}'")# 使用top-p和貪婪解碼做對比masked_data = self.prepare_masked_input(example)inputs = masked_data["inputs"]# 查找[MASK]的位置input_ids = inputs["input_ids"]mask_positions = [i for i, id in enumerate(input_ids[0]) if id == self.mask_token_id]if mask_positions:mask_position = mask_positions[0]else:mask_position = masked_data["masked_position"]with torch.no_grad():outputs = self.model(**inputs)logits = outputs.logits[0, mask_position]# 使用貪婪解碼greedy_id = torch.argmax(logits, dim=-1).item()greedy_token = self.tokenizer.decode([greedy_id])# 使用top-p解碼topp_results = []for _ in range(5):topp_id = self._nucleus_sampling(logits, 0.7).item()topp_token = self.tokenizer.decode([topp_id])topp_results.append(topp_token)print(f" 貪婪解碼: '{greedy_token}'")print(f" Top-P=0.7 采樣 (5次): {', '.join([f'\'{r}\'' for r in topp_results])}")def main():"""主函數"""print("初始化BERT模型解碼可視化器...")# 您可以指定不同的預訓練模型,如"bert-large-uncased"或"bert-base-chinese"等visualizer = DecodingVisualizer("bert-base-uncased")# 示例1: 基本解碼過程# 展示模型如何從隱藏狀態向量解碼出詞匯表中的tokeninput_text = "The neural network transforms vectors into tokens through 
[MASK]."visualizer.decode_step_by_step(input_text)# 示例2: 可視化線性變換過程# 展示隱藏狀態和詞向量之間的關系input_text2 = "Language models convert hidden states to vocabulary logits by [MASK]."visualizer.visualize_linear_transformation(input_text2)# 示例3: 完整BERT掩碼語言模型演示# 展示BERT如何預測被掩碼的多個位置input_text3 = "Deep learning models perform vector transformations to process natural language."# 指定具體位置進行掩碼演示,這些索引對應分詞后的位置positions_to_mask = [4, 7, 11] # 對應某些單詞位置visualizer.demonstrate_bert_mlm(input_text3, positions_to_mask)# 示例4: 比較不同解碼策略# 使用更具歧義性的文本能更好地展示不同解碼策略的差異print("\n" + "=" * 60)print("不同解碼策略比較")print("=" * 60)# 不傳入參數,讓函數使用默認的歧義性更強的文本visualizer.compare_decoding_strategies()# 可選:嘗試其他歧義性文本來比較解碼策略# ambiguous_texts = [# "The weather forecast for tomorrow is [MASK].",# "She felt [MASK] after hearing the unexpected news."# ]# for text in ambiguous_texts:# visualizer.compare_decoding_strategies(text)if __name__ == "__main__":main()