Table of Contents
1.1 Distillation Goals
2 Environment Setup
2.1 Installing Dependencies
2.2 Hardware Requirements
2.3 Downloading the Models and Dataset
2.3.1 Downloading the Teacher Model
2.3.2 Downloading the Student Model
2.3.3 Preparing or Downloading the Dataset
3. Process Logging
4. Model Loading and Configuration
4.1 Loading the Teacher Model
4.2 Loading the Student Model
4.3 Data Preprocessing Function
4.4 Data Collator
4.5 Defining Training Arguments
4.6 Defining the Distillation Config
4.7 Defining the Training Config
4.8 Creating the Distiller
4.9 Starting Distillation
5. Complete Code
6. Reorganized Code Combining the Above with TextBrewer (for reference only)
1.1 Distillation Goals
Transfer DeepSeek's reasoning ability to Qwen-2.5.
Keep the student model compatible with Qwen-2.5's original capabilities (e.g., dialogue, multilingual support).
2 Environment Setup
2.1 Installing Dependencies
pip install torch torchvision transformers datasets
pip install accelerate   # speeds up distributed training
pip install evaluate     # evaluation metrics
2.2 Hardware Requirements
GPU: one or more NVIDIA GPUs (e.g., V100, A100) are recommended, with at least 24 GB of VRAM.
CUDA: install a CUDA version compatible with your PyTorch build. A quick environment check is sketched below.
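The check below is a minimal sketch that only assumes PyTorch is installed; it confirms that CUDA is visible and reports each GPU's memory before you commit to a training run.

# Quick environment check: PyTorch version, CUDA availability, and GPU memory.
import torch

print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        print(f"GPU {i}: {props.name}, {props.total_memory / 1024**3:.1f} GB")  # total_memory is in bytes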
2.3 Downloading the Models and Dataset
2.3.1 Downloading the Teacher Model
The teacher model is DeepSeek-R1-Distill-Qwen-1.5B, downloaded from Hugging Face. Offline download (via the mirror https://hf-mirror.com):
$env:HF_ENDPOINT = "https://hf-mirror.com"
huggingface-cli download deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B --local-dir ./models/DeepSeek-R1-Distill-Qwen-1.5B --local-dir-use-symlinks False
2.3.2 Downloading the Student Model
The student model is Qwen-2.5-1.5B:
$env:HF_ENDPOINT = "https://hf-mirror.com"
huggingface-cli download Qwen/Qwen2.5-1.5B --local-dir ./models/qwen2.5-1.5B --local-dir-use-symlinks False
2.3.3 Preparing or Downloading the Dataset
A large-scale text corpus is recommended (e.g., WikiText, Wikipedia, BooksCorpus, OpenWebText). Offline download: https://www.kaggle.com/datasets/jayanthbontha/wikitext. A short loading check follows.
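To confirm the downloaded corpus is usable, the raw files can be loaded with the datasets library. This is a minimal sketch; the paths assume the files were extracted to ./models/Dataset/wikitext-2-raw/ as in the complete code in Section 5.

# Load the local WikiText raw files and inspect them (adjust the paths to your layout).
from datasets import load_dataset

data_files = {
    "train": "./models/Dataset/wikitext-2-raw/wiki.train.raw",
    "test": "./models/Dataset/wikitext-2-raw/wiki.test.raw",
}
dataset = load_dataset("text", data_files=data_files)
print(dataset)                       # number of text lines per split
print(dataset["train"][10]["text"])  # a sample raw line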
3. Process Logging
# Configure logging
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
logger = logging.getLogger(__name__)

# Absolute path of the current script file
current_script_path = os.path.abspath(__file__)
logger.info(f"Current script path: {current_script_path}")

# Directory containing the current script file
current_script_dir = os.path.dirname(current_script_path)
logger.info(f"Current script directory: {current_script_dir}")
4. Model Loading and Configuration
4.1 Loading the Teacher Model
# Load the teacher model (DeepSeek-R1-Distill-Qwen-1.5B)
teacher_model_name = os.path.join(current_script_dir, "../models/DeepSeek-R1-Distill-Qwen-1.5B")
logger.info(f"Loading teacher model: {teacher_model_name}")
teacher_tokenizer = AutoTokenizer.from_pretrained(
    teacher_model_name,
    local_files_only=True
)
teacher_model = AutoModelForCausalLM.from_pretrained(
    teacher_model_name,
    local_files_only=True
)
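Optionally, a short generation confirms that the teacher weights load and run before committing to a full distillation job. A minimal sketch; the prompt is arbitrary.

# Sanity-check the teacher with a short generation (illustrative only).
inputs = teacher_tokenizer("Explain knowledge distillation in one sentence:", return_tensors="pt")
with torch.no_grad():
    output_ids = teacher_model.generate(**inputs, max_new_tokens=30)
print(teacher_tokenizer.decode(output_ids[0], skip_special_tokens=True))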
4.2 Loading the Student Model
# Load the student model (Qwen2.5-1.5B)
student_model_name = os.path.join(current_script_dir, "../models/qwen2.5-1.5B")  # make sure the model path is correct
logger.info(f"Loading student model: {student_model_name}")
student_tokenizer = AutoTokenizer.from_pretrained(
    student_model_name,
    local_files_only=True
)
student_model = AutoModelForCausalLM.from_pretrained(
    student_model_name,
    local_files_only=True
)
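Because the preprocessing below tokenizes with the teacher tokenizer while the Trainer updates the student, it is worth checking that the two tokenizers encode text the same way. The distilled DeepSeek teacher is based on Qwen2.5, so the vocabularies should largely match, but that is an assumption worth verifying; a rough check:

# Rough tokenizer compatibility check (matching sizes alone do not guarantee identical ids).
sample = "Knowledge distillation transfers the teacher's behaviour to the student."
teacher_ids = teacher_tokenizer(sample)["input_ids"]
student_ids = student_tokenizer(sample)["input_ids"]
print("teacher vocab size:", len(teacher_tokenizer))
print("student vocab size:", len(student_tokenizer))
print("identical ids on the sample:", teacher_ids == student_ids)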
4.3 Data Preprocessing Function
dataset.map() is the core method in the Hugging Face datasets library for batch preprocessing. With batched=True, the dataset is passed to preprocess_function in batches rather than sample by sample, which is far more efficient, especially on large datasets.
# Data preprocessing
logger.info("Preprocess_function")
def preprocess_function(examples):
    return teacher_tokenizer(examples["text"], truncation=True, padding="max_length", max_length=512)

logger.info("Preprocessing train dataset")
train_dataset = train_dataset.map(preprocess_function, batched=True)
logger.info("Preprocessing eval dataset")
eval_dataset = eval_dataset.map(preprocess_function, batched=True)
4.4 Data Collator
DataCollatorForLanguageModeling is a data collator class in the Hugging Face transformers library that dynamically assembles training batches for language models (such as BERT or GPT). Depending on the task, it prepares the inputs for either masked language modeling (MLM) or causal language modeling (CLM).
# Data collator
logger.info("DataCollatorForLanguageModeling")
data_collator = DataCollatorForLanguageModeling(tokenizer=teacher_tokenizer, mlm=False)
mlm (the key parameter) controls whether masked-language-modeling (MLM) mode is enabled:
mlm=True: randomly masks some input tokens (as in BERT training), producing [MASK] tokens.
mlm=False: disables masking, for causal language modeling (CLM, as in GPT training); the inputs and labels are the original token sequence. A small illustration of the collator's output follows this list.
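The example below is a hedged illustration of what the collator produces with mlm=False: labels are a copy of input_ids, and positions holding the padding token are set to -100 so the loss ignores them.

# Illustrate the collator output for CLM (a sketch; exact values depend on the tokenizer).
sample_features = [
    teacher_tokenizer(text, truncation=True, padding="max_length", max_length=8)
    for text in ["Hello world", "Knowledge distillation"]
]
batch = data_collator(sample_features)
print(batch["input_ids"].shape)  # e.g. torch.Size([2, 8])
print(batch["labels"][0])        # input_ids[0] with padding positions replaced by -100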
4.5 Defining Training Arguments
# Define training arguments
logger.info("Creating trainer")
training_args = TrainingArguments(
    output_dir="./results",            # where training results are saved
    eval_strategy="epoch",             # evaluate at the end of each epoch
    learning_rate=5e-5,                # learning rate (5e-5 is a common default)
    per_device_train_batch_size=2,     # training batch size per device (single GPU)
    per_device_eval_batch_size=2,      # evaluation batch size per device
    num_train_epochs=3,                # number of epochs (3 may be short; adjust per task)
    weight_decay=0.01,                 # weight decay (L2 regularization)
    logging_dir="./logs",              # log directory
    logging_steps=100,                 # log every 100 steps
    fp16=False,                        # mixed-precision training (enabling it is recommended)
    gradient_accumulation_steps=4,     # gradient accumulation (effective batch size = 8)
    report_to="tensorboard",           # log training to TensorBoard
    # tensorboard_dir="./tensorboard"  # optional: TensorBoard log directory
)
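With report_to="tensorboard" and logging_dir="./logs", the loss curves can be monitored while the job runs: launch TensorBoard in another terminal with "tensorboard --logdir ./logs" and open the URL it prints (typically http://localhost:6006).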
4.6 Defining the Distillation Config
# Define the distillation config
logger.info("Creating distillation config")
distill_config = DistillationConfig(
    temperature=2.0,              # temperature; controls how smooth the soft labels are
    hard_label_weight=0.5,        # weight of the hard-label (ground-truth) loss
    kd_loss_type="ce",            # knowledge-distillation loss type (cross-entropy)
    intermediate_matches=[        # intermediate-layer matching
        {
            "layer_T": 6,         # teacher layer 6
            "layer_S": 6,         # student layer 6
            "feature": "hidden",  # match hidden-state features
            "weight": 1.0,        # weight of the intermediate-layer loss
            "loss": "mse"         # mean-squared-error loss
        }
    ]
)
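To build intuition for temperature=2.0: the logits are divided by the temperature before the softmax, which flattens the distribution the student learns from. A tiny illustration with made-up logit values:

# Effect of temperature scaling on soft labels (logit values are made up).
import torch
import torch.nn.functional as F

logits = torch.tensor([4.0, 2.0, 0.5])
print(F.softmax(logits, dim=-1))        # sharp: roughly [0.86, 0.12, 0.03]
print(F.softmax(logits / 2.0, dim=-1))  # temperature 2.0, softer: roughly [0.65, 0.24, 0.11]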
4.7 Defining the Training Config
# Define the training config
logger.info("Creating training config")
train_config = TrainingConfig(
    device="cuda" if torch.cuda.is_available() else "cpu",  # device selection
    log_dir="./logs",                  # log directory
    output_dir="./outputs"             # model output directory
    # save_best_model=True,            # save the best model (commented out)
    # save_last_model=True,            # save the last model (commented out)
    # save_model_every_epoch=True,     # save the model every epoch (commented out)
    # tensorboard_dir="./tensorboard"  # TensorBoard log directory (commented out)
)
4.8 Creating the Distiller
# Create the distiller
logger.info("Creating distiller")
distiller = GeneralDistiller(
    train_config=train_config,      # training config (device, paths, ...)
    distill_config=distill_config,  # distillation config (temperature, loss weights, ...)
    model_T=teacher_model,          # teacher model
    model_S=student_model,          # student model
    adaptor_T=None,                 # teacher adaptor (not configured)
    adaptor_S=None                  # student adaptor (not configured)
)
4.9 Starting Distillation
# Start distillation
with distiller:  # the distiller context manager makes sure resources are set up and released properly
    logger.info("Starting training")
    # Initialize the Trainer for the student model
    trainer = Trainer(
        model=student_model,          # student model (the small model being trained)
        args=training_args,           # training arguments (learning rate, batch size, device, ...)
        train_dataset=train_dataset,  # training dataset (inputs and labels)
        eval_dataset=eval_dataset,    # evaluation dataset (used to evaluate model quality)
        data_collator=data_collator,  # combines individual samples into batches
        # processing_class=teacher_tokenizer  # note: this is questionable (see the note after this block)
        # the adaptor / data-handling logic should really be handled in the distillation setup
    )
    # Start training
    trainer.train()  # runs the training loop: forward pass, loss computation, backward pass, ...
    logger.info("Training finished")
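A note on the commented-out processing_class line above: as written, the Hugging Face Trainer only optimizes the student's standard language-modeling loss, and the soft-label and intermediate-layer losses configured in DistillationConfig are not applied by the Trainer's loop. In TextBrewer's intended usage, training is driven by the distiller itself (as in the reorganized code in Section 6) with adaptors that tell it where to find logits and hidden states in each model's output. Below is a minimal adaptor sketch, under the assumption that the models are called with output_hidden_states=True; check the TextBrewer documentation for the exact keys it expects.

# Minimal TextBrewer adaptor sketch (an assumption-based example, not the original post's code).
# An adaptor maps (batch, model_outputs) to the features the distiller matches on.
def simple_adaptor(batch, model_outputs):
    return {
        "logits": model_outputs.logits,          # used by the kd_loss_type="ce" soft-label loss
        "hidden": model_outputs.hidden_states,   # used by the "hidden" intermediate matches
        "inputs_mask": batch["attention_mask"],  # so padded positions are ignored
    }

# With adaptors defined, they replace the None arguments above, e.g.:
# distiller = GeneralDistiller(..., adaptor_T=simple_adaptor, adaptor_S=simple_adaptor)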
5. Complete Code
import os
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, DataCollatorForLanguageModeling, Trainer, \
    TrainingArguments
from textbrewer import GeneralDistiller, TrainingConfig, DistillationConfig
from datasets import load_dataset
import logging

# Configure logging
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
logger = logging.getLogger(__name__)

# Absolute path of the current script file
current_script_path = os.path.abspath(__file__)
logger.info(f"Current script path: {current_script_path}")

# Directory containing the current script file
current_script_dir = os.path.dirname(current_script_path)
logger.info(f"Current script directory: {current_script_dir}")

# Load the teacher model (DeepSeek-R1-Distill-Qwen-1.5B)
teacher_model_name = os.path.join(current_script_dir, "../models/DeepSeek-R1-Distill-Qwen-1.5B")
logger.info(f"Loading teacher model: {teacher_model_name}")
teacher_tokenizer = AutoTokenizer.from_pretrained(
    teacher_model_name,
    local_files_only=True
)
teacher_model = AutoModelForCausalLM.from_pretrained(
    teacher_model_name,
    local_files_only=True
)

# Load the student model (Qwen2.5-1.5B)
student_model_name = os.path.join(current_script_dir, "../models/qwen2.5-1.5B")  # make sure the model path is correct
logger.info(f"Loading student model: {student_model_name}")
student_tokenizer = AutoTokenizer.from_pretrained(
    student_model_name,
    local_files_only=True
)
student_model = AutoModelForCausalLM.from_pretrained(
    student_model_name,
    local_files_only=True
)

# Prepare the dataset
datasets_name = os.path.join(current_script_dir, "../models/Dataset/wikitext-2-raw/")  # make sure the dataset path is correct
data_files = {
    "train": datasets_name + "wiki.train.raw",
    "test": datasets_name + "wiki.test.raw"
}
logger.info(f"Loading dataset from local files: {data_files}")
dataset = load_dataset("text", data_files=data_files)
train_dataset = dataset["train"]
eval_dataset = dataset["test"]

# Data preprocessing
logger.info("Preprocess_function")
def preprocess_function(examples):
    return teacher_tokenizer(examples["text"], truncation=True, padding="max_length", max_length=512)

logger.info("Preprocessing train dataset")
train_dataset = train_dataset.map(preprocess_function, batched=True)
logger.info("Preprocessing eval dataset")
eval_dataset = eval_dataset.map(preprocess_function, batched=True)

# Data collator
logger.info("DataCollatorForLanguageModeling")
data_collator = DataCollatorForLanguageModeling(tokenizer=teacher_tokenizer, mlm=False)

# Define training arguments
logger.info("Creating trainer")
training_args = TrainingArguments(
    output_dir="./results",            # where training results are saved
    eval_strategy="epoch",             # evaluate at the end of each epoch
    learning_rate=5e-5,                # learning rate (5e-5 is a common default)
    per_device_train_batch_size=2,     # training batch size per device
    per_device_eval_batch_size=2,      # evaluation batch size per device
    num_train_epochs=3,                # number of epochs (3 may be short; adjust per task)
    weight_decay=0.01,                 # weight decay (L2 regularization)
    logging_dir="./logs",              # log directory
    logging_steps=100,                 # log every 100 steps
    fp16=False,                        # mixed-precision training (enabling it is recommended)
    gradient_accumulation_steps=4,     # gradient accumulation (effective batch size = 8)
    report_to="tensorboard",           # log training to TensorBoard
    # tensorboard_dir="./tensorboard"  # optional: TensorBoard log directory
)

# Define the distillation config
logger.info("Creating distillation config")
distill_config = DistillationConfig(
    temperature=2.0,              # temperature; controls how smooth the soft labels are
    hard_label_weight=0.5,        # weight of the hard-label (ground-truth) loss
    kd_loss_type="ce",            # knowledge-distillation loss type (cross-entropy)
    intermediate_matches=[        # intermediate-layer matching
        {
            "layer_T": 6,         # teacher layer 6
            "layer_S": 6,         # student layer 6
            "feature": "hidden",  # match hidden-state features
            "weight": 1.0,        # weight of the intermediate-layer loss
            "loss": "mse"         # mean-squared-error loss
        }
    ]
)

# Define the training config
logger.info("Creating training config")
train_config = TrainingConfig(
    device="cuda" if torch.cuda.is_available() else "cpu",  # device selection
    log_dir="./logs",                  # log directory
    output_dir="./outputs"             # model output directory
    # save_best_model=True,            # save the best model (commented out)
    # save_last_model=True,            # save the last model (commented out)
    # save_model_every_epoch=True,     # save the model every epoch (commented out)
    # tensorboard_dir="./tensorboard"  # TensorBoard log directory (commented out)
)

# Create the distiller
logger.info("Creating distiller")
distiller = GeneralDistiller(
    train_config=train_config,      # training config (device, paths, ...)
    distill_config=distill_config,  # distillation config (temperature, loss weights, ...)
    model_T=teacher_model,          # teacher model
    model_S=student_model,          # student model
    adaptor_T=None,                 # teacher adaptor (not configured)
    adaptor_S=None                  # student adaptor (not configured)
)

# Start distillation
with distiller:  # context manager: makes sure resources are set up and released properly
    logger.info("Starting training")
    trainer = Trainer(
        model=student_model,          # student model (the small model being trained)
        args=training_args,           # training arguments
        train_dataset=train_dataset,  # training dataset
        eval_dataset=eval_dataset,    # evaluation dataset
        data_collator=data_collator,  # combines individual samples into batches
        # processing_class=teacher_tokenizer  # note: questionable (see the note in Section 4.9)
    )
    trainer.train()
    trainer.save_model()
    logger.info("Training finished")
References:
Model Distillation Case Study: from DeepSeek-R1-1.5B to Qwen-2.5-1.5B - InProsperity - cnblogs
Model Distillation Case Study: from DeepSeek-R1-1.5B to Qwen-2.5-1.5B - CSDN blog
6. Reorganized Code Combining the Above with TextBrewer (for reference only):
import os
import torch
import logging
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    get_linear_schedule_with_warmup
)
from textbrewer import GeneralDistiller, TrainingConfig, DistillationConfig
from datasets import load_dataset
from torch.optim import AdamW

# Configure logging
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(levelname)s - %(message)s',
    handlers=[
        logging.FileHandler("distillation.log"),
        logging.StreamHandler()
    ]
)
logger = logging.getLogger(__name__)

# Device setup
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
logger.info(f"Using device: {device}")

# ======================
# 1. Load models and tokenizers
# ======================
def load_models_and_tokenizers():
    """Load the teacher and student models."""
    # Teacher model (DeepSeek-R1 1.5B)
    teacher_model_name = "deepseek-ai/deepseek-r1-1.5b"
    logger.info(f"Loading teacher model: {teacher_model_name}")
    teacher_tokenizer = AutoTokenizer.from_pretrained(teacher_model_name)
    teacher_model = AutoModelForCausalLM.from_pretrained(
        teacher_model_name,
        torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32
    ).to(device)

    # Student model (Qwen1.5 1.8B)
    student_model_name = "Qwen/Qwen1.5-1.8B"
    logger.info(f"Loading student model: {student_model_name}")
    student_tokenizer = AutoTokenizer.from_pretrained(student_model_name)
    student_model = AutoModelForCausalLM.from_pretrained(
        student_model_name,
        torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32
    ).to(device)

    # Align the tokenizers (a key step!)
    if teacher_tokenizer.vocab != student_tokenizer.vocab:
        logger.warning("Tokenizers not aligned, adding special tokens...")
        student_tokenizer.add_special_tokens({'pad_token': '[PAD]'})
        student_model.resize_token_embeddings(len(student_tokenizer))

    return teacher_model, student_model, teacher_tokenizer, student_tokenizer

# ======================
# 2. Data preparation
# ======================
def prepare_data(student_tokenizer):
    """Load and preprocess the data."""
    # Load the dataset (wikitext as an example)
    dataset = load_dataset("wikitext", "wikitext-2-raw-v1")

    # Preprocessing function
    def preprocess_function(examples):
        return student_tokenizer(
            examples["text"],
            truncation=True,
            padding="max_length",
            max_length=512,
            return_tensors="pt"
        )

    # Process the dataset
    train_dataset = dataset["train"].map(
        preprocess_function,
        batched=True,
        remove_columns=["text"]
    )
    eval_dataset = dataset["validation"].map(
        preprocess_function,
        batched=True,
        remove_columns=["text"]
    )

    # Data collator
    data_collator = DataCollatorForLanguageModeling(
        tokenizer=student_tokenizer,
        mlm=False
    )

    return train_dataset, eval_dataset, data_collator

# ======================
# 3. Distillation config
# ======================
def get_distillation_config():
    """Configure the distillation parameters."""
    return DistillationConfig(
        temperature=2.0,                                          # initial temperature
        temperature_scheduler=lambda x: max(0.5, 2.0 - 0.1 * x),  # temperature decay
        hard_label_weight=0.3,                                    # hard-label weight
        kd_loss_weight=0.7,                                       # distillation-loss weight
        kd_loss_type="ce",                                        # cross-entropy loss
        intermediate_matches=[
            {
                "layer_T": [6, 12, 18],         # teacher layers
                "layer_S": [3, 6, 9],           # student layers
                "feature": "hidden",            # hidden states
                "loss": "cosine",               # cosine-similarity loss
                "weight": 0.5,
                "proj": ["linear", 1536, 1024]  # dimension projection
            },
            {
                "layer_T": [9, 15],
                "layer_S": [4, 7],
                "feature": "attention",         # attention matrices
                "loss": "mse",
                "weight": 0.3
            }
        ]
    )

# ======================
# 4. Training config
# ======================
def get_training_config():
    """Configure the training parameters."""
    return TrainingConfig(
        output_dir="./distill_output",
        device=device,
        fp16=torch.cuda.is_available(),
        gradient_accumulation_steps=4,
        ckpt_frequency=500,     # save a checkpoint every 500 steps
        log_steps=100,
        max_grad_norm=1.0,      # gradient clipping
        save_optimizer=False    # do not save the optimizer, to save disk space
    )

# ======================
# 5. Optimizer setup
# ======================
def get_optimizer(model):
    """Configure the optimizer and learning-rate schedule."""
    optimizer = AdamW(
        model.parameters(),
        lr=5e-5,
        weight_decay=0.01
    )
    scheduler = get_linear_schedule_with_warmup(
        optimizer,
        num_warmup_steps=500,
        num_training_steps=3000
    )
    return optimizer, scheduler

# ======================
# Main function
# ======================
def main():
    # 1. Load models and data
    teacher_model, student_model, teacher_tokenizer, student_tokenizer = load_models_and_tokenizers()
    train_dataset, eval_dataset, data_collator = prepare_data(student_tokenizer)

    # 2. Configure distillation
    distill_config = get_distillation_config()
    train_config = get_training_config()

    # 3. Initialize the distiller
    distiller = GeneralDistiller(
        train_config=train_config,
        distill_config=distill_config,
        model_T=teacher_model,
        model_S=student_model,
        adaptor_T=None,  # use the default adaptor
        adaptor_S=None
    )

    # 4. Prepare the optimizer
    optimizer, scheduler = get_optimizer(student_model)

    # 5. Start distillation
    logger.info("Starting distillation...")
    with distiller:
        distiller.train(
            optimizer=optimizer,
            scheduler=scheduler,
            train_dataset=train_dataset,
            eval_dataset=eval_dataset,
            batch_size=2,
            num_epochs=3,
            data_collator=data_collator,
            callback=None
        )

    # 6. Save the final model
    student_model.save_pretrained("./final_student_model")
    student_tokenizer.save_pretrained("./final_student_model")
    logger.info("Distillation completed!")


if __name__ == "__main__":
    main()
In addition, it is worth looking into Text Generation WebUI, which integrates different large models for inference and testing.
https://github.com/oobabooga/text-generation-webui