Large Model Fine-Tuning (PEFT)
- PEFT (Parameter-Efficient Fine-Tuning)
- I. Core PEFT Methods
  - 1. LoRA (Low-Rank Adaptation)
  - 2. Adapter
  - 3. Prefix Tuning
  - 4. Prompt Tuning
  - 5. QLoRA (Quantized LoRA)
- II. PEFT vs Full-Parameter Fine-Tuning
- III. Example Code for Fine-Tuning a Large Model
- IV. Loading the Fine-Tuned Model
  - 1. LoRA
  - 2. Prefix Tuning
Overview of Large Model Fine-Tuning Methods
PEFT (Parameter-Efficient Fine-Tuning)
PEFT (parameter-efficient fine-tuning) is a family of techniques that sharply reduces the cost of fine-tuning large models. The core idea is to train only a small number of parameters rather than the entire model. A systematic breakdown follows.
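As a minimal sketch of this core idea (hypothetical code, not tied to any particular PEFT method): freeze every base-model weight, attach a small trainable module, and compare parameter counts.

```python
import torch.nn as nn
from transformers import AutoModelForCausalLM

# Load a small base model (gpt2 used here purely as a stand-in)
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Freeze all original weights
for param in model.parameters():
    param.requires_grad = False

# Attach a small trainable module (illustrative only)
extra = nn.Linear(model.config.hidden_size, model.config.hidden_size)

trainable = sum(p.numel() for p in extra.parameters())
total = sum(p.numel() for p in model.parameters()) + trainable
print(f"trainable: {trainable} / {total} ({100 * trainable / total:.2f}%)")
```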
I. Core PEFT Methods
1. LoRA (Low-Rank Adaptation)
- Principle:
  - Add a low-rank update alongside the original weights (W = W₀ + BA) and train only B and A; the pretrained W₀ stays frozen (a numeric sketch of this update follows the code example below).
- Typical use cases: text generation, dialogue systems
- Code example: the rank r is usually 4~64, cutting trainable parameters by 90%+
```python
from peft import LoraConfig, get_peft_model

config = LoraConfig(
    r=8,                                  # rank of the low-rank update
    lora_alpha=32,                        # scaling factor
    target_modules=["q_proj", "v_proj"],  # modules LoRA is applied to
    lora_dropout=0.05,
    bias="none",
)
model = get_peft_model(model, config)  # original model + LoRA adapters
```
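To make the W = W₀ + BA update concrete, here is a minimal, hypothetical LoRA-style linear layer; it is a sketch for intuition, not the actual peft implementation.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base weight W0 plus a trainable low-rank update B @ A (sketch only)."""
    def __init__(self, base_linear: nn.Linear, r: int = 8, alpha: int = 32):
        super().__init__()
        self.base = base_linear
        for p in self.base.parameters():          # W0 (and its bias) stay frozen
            p.requires_grad = False
        self.A = nn.Parameter(torch.randn(r, base_linear.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base_linear.out_features, r))  # starts at zero update
        self.scaling = alpha / r

    def forward(self, x):
        # y = x @ W0^T + scaling * x @ (B @ A)^T
        return self.base(x) + self.scaling * (x @ self.A.T @ self.B.T)
```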
2. Adapter
- Principle:
  - Insert small bottleneck feed-forward networks between Transformer layers and train only these Adapter layers.
  - Trainable parameters amount to roughly 0.5%~5% of the model.
- Typical use case: multi-task learning
- Structure example (see the sketch below):
  Transformer Layer → Adapter (Down → ReLU → Up) → Residual → LayerNorm
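A minimal PyTorch sketch of such a bottleneck Adapter block (hypothetical, for illustration only; production Adapter libraries differ in details):

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual, LayerNorm."""
    def __init__(self, hidden_size: int, bottleneck_size: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck_size)
        self.up = nn.Linear(bottleneck_size, hidden_size)
        self.norm = nn.LayerNorm(hidden_size)

    def forward(self, hidden_states):
        # Down -> ReLU -> Up, added back to the input (residual), then LayerNorm
        residual = hidden_states
        x = self.up(torch.relu(self.down(hidden_states)))
        return self.norm(x + residual)
```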
3. Prefix Tuning
- Principle:
  - Prepend learnable "virtual tokens" (a prefix) to the input to steer the model's generation.
  - The original model parameters are not modified at all.
- Typical use cases: generation tasks (e.g., GPT-style models)
- Code example:
```python
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the pretrained model and tokenizer
model_name = "gpt2"  # replace with the model you want to use
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Define a simple Prefix Tuning module
class PrefixTuning(nn.Module):
    def __init__(self, num_virtual_tokens, hidden_size):
        super().__init__()
        # Learnable embeddings for the virtual prefix tokens
        self.prefix_embeddings = nn.Embedding(num_virtual_tokens, hidden_size)
        nn.init.normal_(self.prefix_embeddings.weight, mean=0, std=0.02)

    def forward(self, input_ids, attention_mask, embedding_layer):
        batch_size = input_ids.shape[0]
        # Expand the learned prefix to the batch dimension
        prefix = self.prefix_embeddings.weight.unsqueeze(0).repeat(batch_size, 1, 1)
        # Prepend the prefix embeddings to the token embeddings
        token_embeds = embedding_layer(input_ids)
        inputs_embeds = torch.cat([prefix, token_embeds], dim=1)
        # Extend the attention mask to cover the prefix positions
        prefix_mask = torch.ones((batch_size, prefix.shape[1]),
                                 dtype=attention_mask.dtype, device=attention_mask.device)
        new_attention_mask = torch.cat([prefix_mask, attention_mask], dim=1)
        return inputs_embeds, new_attention_mask
```
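With this interface the prefix is injected at the embedding level, so the frozen model is called with inputs_embeds rather than input_ids. A brief usage sketch (assuming the model and tokenizer loaded above):

```python
# Hypothetical usage: only prefix_tuning's parameters would be trained
prefix_tuning = PrefixTuning(num_virtual_tokens=10, hidden_size=model.config.hidden_size)
enc = tokenizer("Once upon a time", return_tensors="pt")
inputs_embeds, attn = prefix_tuning(enc.input_ids, enc.attention_mask, model.get_input_embeddings())
outputs = model(inputs_embeds=inputs_embeds, attention_mask=attn)
```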
4. Prompt Tuning
- Principle:
  - Add learnable prompt tokens at the input layer.
  - The original parameters are not modified. It is a simplified form of Prefix Tuning (no MLP reparameterization), and as model scale grows its quality approaches full fine-tuning.
- Code example:
```python
# Illustration: prepend prompt tokens to the input
# (in soft prompt tuning the prompt embeddings themselves are the trainable parameters;
#  input_ids and batch_size are assumed to come from an existing batch)
prompt = "Please answer the following question: "
prompt_ids = tokenizer.encode(prompt, return_tensors="pt").to(input_ids.device)
new_input_ids = torch.cat([prompt_ids.repeat(batch_size, 1), input_ids], dim=1)
```
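For actual soft prompt tuning, the peft library provides PromptTuningConfig. A hedged sketch, reusing the model and tokenizer loaded in the Prefix Tuning example above (option names as in recent peft releases; the init text is just an example):

```python
from peft import PromptTuningConfig, PromptTuningInit, TaskType, get_peft_model

peft_config = PromptTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    prompt_tuning_init=PromptTuningInit.TEXT,    # initialize the soft prompt from real text
    prompt_tuning_init_text="Please answer the following question:",
    num_virtual_tokens=8,
    tokenizer_name_or_path=model_name,
)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()  # only the virtual-token embeddings are trainable
```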
5. QLoRA (Quantized LoRA)
- Principle: load the base model with 4-bit quantization and fine-tune LoRA adapters on top, reducing GPU memory requirements by roughly 70%.
- Code example:
```python
import torch
from transformers import AutoModel, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModel.from_pretrained("Llama-3-8B", quantization_config=bnb_config)
```
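The 4-bit model is then combined with a LoRA adapter as usual; a hedged sketch using peft's prepare_model_for_kbit_training helper:

```python
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model = prepare_model_for_kbit_training(model)  # prepares the quantized model for training
lora_config = LoraConfig(r=8, lora_alpha=32,
                         target_modules=["q_proj", "v_proj"],
                         task_type="CAUSAL_LM")
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```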
II. PEFT vs Full-Parameter Fine-Tuning

| Metric | PEFT | Full-Parameter Fine-Tuning |
|---|---|---|
| GPU memory | Very low (a 70B model can be fine-tuned on a single GPU) | Very high (multiple GPUs required) |
| Training speed | Fast (only a small number of parameters updated) | Slow |
| Quality | Close to full fine-tuning | Best, but the gap is typically <5% |
| Deployment convenience | Adapter must be merged (see the example below) or loaded alongside the base model | Deployed directly |
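On the deployment row: a LoRA adapter can be merged back into the base weights before serving, e.g. with peft's merge_and_unload (paths below are placeholders):

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("base_model_path")      # placeholder path
model = PeftModel.from_pretrained(base, "fine_tuned_adapter_path")  # placeholder path
merged = model.merge_and_unload()        # folds B·A into W0, leaving a plain transformers model
merged.save_pretrained("merged_model")   # can now be deployed without peft
```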
III. Example Code for Fine-Tuning a Large Model
Note: when you call model.save_pretrained("fine_tuned_internvl3") on a PEFT model (e.g., LoRA or another adapter method), the saved checkpoint normally contains only the trainable adapter weights, not the original weights of the base model.
```python
import math
import pandas as pd
import numpy as np
import torch
import torchvision.transforms as T
from decord import VideoReader, cpu
from PIL import Image
from torchvision.transforms.functional import InterpolationMode
from transformers import AutoModel, AutoTokenizer, AutoConfig, TrainingArguments, Trainer
from peft import LoraConfig, get_peft_model
import os

# Load the model
# split_model() and CustomDataset are assumed to be defined elsewhere
# (e.g., the multi-GPU device-map helper and dataset class from the InternVL examples).
path = 'InternVL3'
device_map = split_model(path)
model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,
    load_in_8bit=True,
    low_cpu_mem_usage=True,
    use_flash_attn=True,
    trust_remote_code=True,
    device_map=device_map
).eval()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True, use_fast=False)

# Configure LoRA
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.1,
    bias="none",
    task_type="CAUSAL_LM"
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# Load the dataset
data_path = 'data'
df = pd.read_parquet(data_path)
dataset = CustomDataset(df, tokenizer)

# Training arguments
training_args = TrainingArguments(
    output_dir='./results',
    num_train_epochs=3,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    save_steps=10_000,
    save_total_limit=2,
    evaluation_strategy="no",
    logging_steps=10,
    fp16=True
)

# Create the Trainer
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=dataset
)

# Start training
trainer.train()

# Save the fine-tuned adapter weights
model.save_pretrained("fine_tuned_internvl3")
```
IV. Loading the Fine-Tuned Model
1. LoRA
- Example code:
```python
from transformers import AutoModel
from peft import PeftModel

# Load the base pretrained model
base_model_path = "base_model_path"  # replace with the path of the base pretrained model
base_model = AutoModel.from_pretrained(base_model_path)

# Load the fine-tuned adapter on top of it
adapter_path = "fine_tuned_adapter_path"  # replace with the path where the adapter was saved
model = PeftModel.from_pretrained(base_model, adapter_path)
```
2. Prefix Tuning
- Example code:
```python
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM, AutoTokenizer

# Define the Prefix Tuning module (same interface as in section I.3)
class PrefixTuning(nn.Module):
    def __init__(self, num_virtual_tokens, hidden_size):
        super().__init__()
        self.prefix_embeddings = nn.Embedding(num_virtual_tokens, hidden_size)

    def forward(self, input_ids, attention_mask, embedding_layer):
        batch_size = input_ids.shape[0]
        # Expand the learned prefix to the batch dimension
        prefix = self.prefix_embeddings.weight.unsqueeze(0).repeat(batch_size, 1, 1)
        # Prepend the prefix embeddings to the token embeddings
        inputs_embeds = torch.cat([prefix, embedding_layer(input_ids)], dim=1)
        # Extend the attention mask to cover the prefix positions
        prefix_mask = torch.ones((batch_size, prefix.shape[1]),
                                 dtype=attention_mask.dtype, device=attention_mask.device)
        new_attention_mask = torch.cat([prefix_mask, attention_mask], dim=1)
        return inputs_embeds, new_attention_mask

# Load the base pretrained model and tokenizer
model_name = "gpt2"  # replace with the actual model name
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Initialize the Prefix Tuning module
num_virtual_tokens = 10  # replace with the actual number of virtual tokens
hidden_size = model.config.hidden_size
prefix_tuning = PrefixTuning(num_virtual_tokens, hidden_size)

# Load the trained Prefix Tuning parameters
try:
    prefix_tuning.load_state_dict(torch.load("path/to/prefix_tuning_weights.pth"))
except FileNotFoundError:
    print("Error: Prefix Tuning weight file not found, please check the path.")
    exit(1)

# Move the model and the Prefix Tuning module to GPU if available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
prefix_tuning.to(device)

# Input text
input_text = "Once upon a time"
input_ids = tokenizer.encode(input_text, return_tensors='pt').to(device)
attention_mask = torch.ones_like(input_ids).to(device)

# Inject the prefix at the embedding level
inputs_embeds, new_attention_mask = prefix_tuning(input_ids, attention_mask, model.get_input_embeddings())

# Run inference
with torch.no_grad():
    outputs = model(inputs_embeds=inputs_embeds, attention_mask=new_attention_mask)
    logits = outputs.logits

# Greedy next-token prediction at each position
# (for illustration only; use model.generate for proper autoregressive decoding)
generated_ids = torch.argmax(logits, dim=-1)
generated_text = tokenizer.decode(generated_ids[0], skip_special_tokens=True)
print(generated_text)
```