Below is a complete word-embedding example. It uses `modelscope` to download the `tiansz/bert-base-chinese` model, loads it with `transformers`, and extracts word embeddings for a Chinese sentence.
```python
from modelscope.hub.snapshot_download import snapshot_download
from transformers import BertTokenizer, BertModel
import torch

# Download the model to a local directory
model_dir = snapshot_download('tiansz/bert-base-chinese', cache_dir='./bert-base-chinese')
print(f"Model downloaded to: {model_dir}")

# Load the tokenizer and model from the downloaded local path
tokenizer = BertTokenizer.from_pretrained(model_dir)
model = BertModel.from_pretrained(model_dir)

# Put the model in evaluation mode
model.eval()

# Input sentence
sentence = "你好,今天天氣怎么樣?"

# Tokenize and convert to the model's input format
inputs = tokenizer(sentence, return_tensors='pt')

# Run the model without tracking gradients
with torch.no_grad():
    outputs = model(**inputs)

# The last hidden state holds the contextual word embeddings
last_hidden_states = outputs.last_hidden_state

# Print the embedding shape: [batch_size, sequence_length, hidden_size]
print("Embeddings shape:", last_hidden_states.shape)

# Recover the text form of every token
tokens = tokenizer.convert_ids_to_tokens(inputs['input_ids'][0])

# Print each token and its embedding (first 10 dimensions only)
for i, (token, embedding) in enumerate(zip(tokens, last_hidden_states[0])):
    print(f"Token {i}: {token}")
    print(f"Embedding: {embedding[:10]}...")
```
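Each row of `last_hidden_states` is the contextual embedding of one token. If a single sentence-level vector is needed, a common choice is attention-mask-weighted mean pooling over the token embeddings. The snippet below is a minimal sketch of that idea, reusing `inputs` and `last_hidden_states` from the example above; the pooling strategy itself is an assumption, not something the model prescribes.

```python
# Minimal sketch (assumption): mean-pool the token embeddings, using the
# attention mask so that padding positions do not contribute.
mask = inputs['attention_mask'].unsqueeze(-1).float()   # [1, seq_len, 1]
summed = (last_hidden_states * mask).sum(dim=1)         # [1, hidden_size]
counts = mask.sum(dim=1).clamp(min=1e-9)                # guard against division by zero
sentence_embedding = summed / counts                    # [1, 768]
print("Sentence embedding shape:", sentence_embedding.shape)
```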
- Download the model: use `modelscope`'s `snapshot_download` to fetch `tiansz/bert-base-chinese` into the local directory `./bert-base-chinese`.
- Load the model: use `transformers`' `BertTokenizer` and `BertModel` to load the tokenizer and model from the local path.
- Input sentence: define a Chinese sentence, `"你好,今天天氣怎么樣?"`.
- Tokenize and encode: use the tokenizer to convert the sentence into the model's input format (including `input_ids` and `attention_mask`).
- Get the embeddings: pass the inputs to the model and take the last hidden state, i.e. the contextual word embeddings.
- Output the results: print each token together with its embedding vector (first 10 dimensions only); a short similarity sketch follows this list.
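To see the embeddings doing something useful, the hedged sketch below encodes two sentences with the same tokenizer and model and compares their mean-pooled vectors by cosine similarity. The `embed` helper and the two example sentences are illustrative assumptions, not part of the original example.

```python
import torch
import torch.nn.functional as F

def embed(text):
    # Hypothetical helper (assumption): mean-pooled sentence embedding,
    # built on the tokenizer and model loaded above.
    enc = tokenizer(text, return_tensors='pt')
    with torch.no_grad():
        out = model(**enc)
    mask = enc['attention_mask'].unsqueeze(-1).float()
    return (out.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)

a = embed("今天天氣怎么樣?")  # example sentences (assumptions)
b = embed("今天天气好吗?")
print("cosine similarity:", F.cosine_similarity(a, b).item())
```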
Running the script produces output like the following:

```
Downloading Model to directory: ./bert-base-chinese/tiansz/bert-base-chinese
Model downloaded to: ./bert-base-chinese/tiansz/bert-base-chinese
Embeddings shape: torch.Size([1, 13, 768])
Token 0: [CLS]
Embedding: tensor([ 1.0592,  0.1071,  0.4324,  0.0860,  0.9301, -0.6972,  0.7214, -0.0408, -0.1321, -0.1840])...
Token 1: 你
Embedding: tensor([ 0.2686,  0.1246,  0.4344,  0.5293,  0.7844, -0.7398,  0.4845, -0.3669, -0.6001,  0.8876])...
Token 2: 好
Embedding: tensor([ 0.9697,  0.3952,  0.6012, -0.0386,  0.6996, -0.4031,  1.0839,  0.0119,  0.0551,  0.2817])...
Token 3: ,
Embedding: tensor([ 0.8255,  0.6987,  0.0310,  0.4167, -0.0159, -0.5835,  1.4922,  0.3883,  0.9030, -0.1529])...
Token 4: 今
Embedding: tensor([ 0.1640,  0.2744,  0.6168,  0.0693,  1.0125, -0.4001, -0.2779,  0.6306, -0.1302, -0.0534])...
Token 5: 天
Embedding: tensor([ 0.5449, -0.1022,  0.0316, -0.4571,  0.6967,  0.0789,  0.6432,  0.0501,  0.3832, -0.3269])...
Token 6: 天
Embedding: tensor([ 1.0107, -0.3673, -1.0272, -0.1893,  0.3766,  0.2341,  0.3552,  0.0228, -0.2411, -0.2227])...
Token 7: 氣
Embedding: tensor([ 0.9320, -0.8562, -0.9696,  0.2202,  0.1046,  0.3335, -0.2725, -0.3014, -0.0057, -0.2503])...
Token 8: 怎
Embedding: tensor([ 0.7004, -0.3408,  0.1803, -0.0093, -0.0996,  0.9946,  0.0251,  0.0321,  0.1867, -0.6998])...
Token 9: 么
Embedding: tensor([ 0.7296,  0.0704,  0.2153, -0.2680, -0.4890,  0.8920,  0.0324, -0.0820,  0.5248, -0.6742])...
Token 10: 樣
Embedding: tensor([ 0.2482,  0.0567,  0.2574,  0.1359,  0.4210,  0.9753,  0.2528, -0.2645,  0.3426, -0.4405])...
Token 11: ?
Embedding: tensor([ 1.4162,  0.4149,  0.1098, -0.7175,  0.9875, -0.4366,  0.8482,  0.2046,  0.2398, -0.1031])...
Token 12: [SEP]
Embedding: tensor([ 0.2140,  0.1362,  0.3720,  0.5722,  0.3005, -0.1858,  1.1392,  0.2413, -0.1240,  0.0177])...
```
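Note that `bert-base-chinese` tokenizes Chinese text character by character, which is why 天氣 appears as two separate tokens (6 and 7) above. To get one vector for the whole word, a common heuristic (an assumption here, not part of the original example) is to average the embeddings of its constituent tokens:

```python
# Sketch (assumption): average the character-token embeddings that form the
# word 天氣 (tokens 6 and 7 in the output above) into a single word vector.
word_embedding = last_hidden_states[0, 6:8].mean(dim=0)  # [768]
print("天氣 embedding (first 10 dims):", word_embedding[:10])
```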