Llama-Factory Fine-Tuning of Qwen2.5-VL: Notes from Dataset Creation to Deployment
Machine environment:
1. Ubuntu 24
2. RTX 3090 (24 GB)
3. CUDA 12.9
I. Dataset Creation
My dataset mainly consists of descriptions of image content.
1. Creating a dataset with Label Studio
This is the most basic, fully manual way to build a dataset from scratch, and I do not recommend it!
After installing Label Studio, start it with:
label-studio start
Then open the web interface in your browser.
Create a project (Create Project), import your images, and select the image-description template (Image Captioning).
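If you do go this route, the Label Studio JSON export can be flattened into the same {"image": ..., "text": ...} JSONL format used in the rest of this post. A rough sketch, assuming the default Image Captioning template; the "image" data key, the annotation layout, and the exported paths all depend on your labeling config and storage setup, so treat the field names below as placeholders:

import json

export_file = "label_studio_export.json"   # placeholder: your Label Studio JSON export
output_file = "captions.jsonl"             # placeholder output name

with open(export_file, "r", encoding="utf-8") as f:
    tasks = json.load(f)

with open(output_file, "w", encoding="utf-8") as out:
    for task in tasks:
        image_path = task["data"]["image"]          # data key depends on your labeling config
        result = task["annotations"][0]["result"]   # use the first annotation only
        caption = result[0]["value"]["text"][0]     # textarea results store text as a list
        out.write(json.dumps({"image": image_path, "text": caption}, ensure_ascii=False) + "\n")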
2. Semi-automatic dataset creation with Qwen2.5-VL
Since Qwen itself is already quite good at describing images, we can first have Qwen describe each image and then review and correct its output, which saves a lot of manual labeling effort.
The script I wrote for this is as follows:
import torch
from modelscope import Qwen2_5_VLForConditionalGeneration, AutoTokenizer, AutoProcessor
from qwen_vl_utils import process_vision_info
import time
import os
from pathlib import Path
import json


def process_single_image(model, processor, image_path, prompt):
    messages = [
        {
            "role": "user",
            "content": [
                {"type": "image", "image": image_path},
                {"type": "text", "text": prompt},
            ],
        }
    ]
    # Preparation for inference
    text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    image_inputs, video_inputs = process_vision_info(messages)
    inputs = processor(
        text=[text],
        images=image_inputs,
        videos=video_inputs,
        padding=True,
        return_tensors="pt",
    )
    inputs = inputs.to("cuda")

    time_start = time.time()
    # Inference: generation of the output
    generated_ids = model.generate(**inputs, max_new_tokens=256, do_sample=False)
    time_end = time.time()
    print(f"Inference time for {Path(image_path).name}: {time_end - time_start:.2f}s")

    generated_ids_trimmed = [
        out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
    ]
    output_text = processor.batch_decode(
        generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
    )
    return output_text[0]


def process_images_in_folder(model, processor, image_folder, prompt, output_file=None):
    # Supported image formats
    image_extensions = {'.jpg', '.jpeg', '.png', '.bmp', '.tiff', '.tif'}

    # Collect all image files in the folder
    image_files = []
    for file in Path(image_folder).iterdir():
        if file.suffix.lower() in image_extensions:
            image_files.append(file)
    image_files.sort()

    if not image_files:
        print(f"No image files found in {image_folder}")
        return []  # return an empty list so the summary print below does not fail
    print(f"Found {len(image_files)} image files")

    # Store the results
    results = []

    # Process each image in turn
    for image_file in image_files:
        print(f"\nProcessing: {image_file.name}")
        try:
            result = process_single_image(model, processor, str(image_file), prompt)
            print(f"Result: {result}")
            # Record the result
            results.append({
                'image': image_file.name,
                'path': str(image_file),
                'result': result
            })
        except Exception as e:
            print(f"Error processing {image_file.name}: {e}")
            results.append({
                'image': image_file.name,
                'path': str(image_file),
                'result': f"Error: {e}",
                'error': True
            })

    # If an output file is given, save the results in JSONL format
    if output_file:
        with open(output_file, 'w', encoding='utf-8') as f:
            for item in results:
                # Build the JSONL record
                json_line = {
                    "image": item['path'],
                    "text": item['result']
                }
                # Write one JSON object per line
                f.write(json.dumps(json_line, ensure_ascii=False) + '\n')
        print(f"\nResults saved to {output_file}")

    return results


if __name__ == '__main__':
    # default: load the model on the available device(s)
    model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
        "/home/ct/work/BigModel/Qwen2.5-VL/models/Qwen2.5-VL-7B-Instruct",
        torch_dtype="auto",
        device_map="auto"
    )

    # The default range for the number of visual tokens per image in the model is 4-16384.
    # You can set min_pixels and max_pixels according to your needs, such as a
    # token range of 256-1280, to balance performance and cost.
    min_pixels = 256 * 28 * 28
    max_pixels = 1280 * 28 * 28
    processor = AutoProcessor.from_pretrained(
        "/home/ct/work/BigModel/Qwen2.5-VL/models/Qwen2.5-VL-7B-Instruct",
        min_pixels=min_pixels,
        max_pixels=max_pixels
    )

    # Image folder and prompt
    image_folder = "/home/ct/work/Label_tools/PICS/Flame/"
    prompt = ("Check whether there is smoke or fire inside the red rectangle in the image. "
              "To decide that fire is present you must see clear smoke and flames; "
              "be careful to distinguish lamps, sunlight and other interference.")
    output_file = "inference_results.jsonl"  # output file for the results

    # Process every image in the folder
    results = process_images_in_folder(model, processor, image_folder, prompt, output_file)

    # Print a summary
    print(f"\nProcessing completed. Total images processed: {len(results)}")
After configuring the paths and running the script, it produces a JSONL file of inference results, containing mainly each image path and its corresponding description. For other tasks you mostly just need to tweak the prompt.
The next step is to go through the images and check whether each one matches the description Qwen2.5-VL produced.
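To make that review pass quicker, a small helper can step through the JSONL file, open each image, and let you type a corrected description when the model is wrong. A minimal sketch (reviewed_results.jsonl is just a name I picked):

import json
from PIL import Image

input_file = "inference_results.jsonl"    # produced by the script above
output_file = "reviewed_results.jsonl"    # placeholder name for the corrected file

with open(input_file, "r", encoding="utf-8") as f:
    records = [json.loads(line) for line in f]

reviewed = []
for rec in records:
    Image.open(rec["image"]).show()       # opens the image in the system viewer
    print(f"\n{rec['image']}\nModel: {rec['text']}")
    fixed = input("Enter to accept, or type a corrected description: ").strip()
    if fixed:
        rec["text"] = fixed
    reviewed.append(rec)

with open(output_file, "w", encoding="utf-8") as f:
    for rec in reviewed:
        f.write(json.dumps(rec, ensure_ascii=False) + "\n")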
II. Fine-Tuning with LLaMA-Factory
1. Setting up the LLaMA-Factory environment
Since I was testing as I wrote these notes, to stay on the safe side I recommend creating a dedicated virtual environment for LLaMA-Factory with Anaconda.
(1) Clone the LLaMA-Factory repository
git clone https://github.com/hiyouga/LLaMA-Factory.git
cd LLaMA-Factory
(2) Create a virtual environment
# Using conda (recommended)
conda create -n llama-factory python=3.10
conda activate llama-factory

# Or using venv
python -m venv venv
source venv/bin/activate
(3) Install the dependencies
pip install -r requirements.txt
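Before moving on, it is worth a quick check that PyTorch inside the new environment can actually see the GPU:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"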
2. Converting the label data format
At this point our data format looks roughly like this:
{"image": "/path/to/image.jpg", "text": "a description of the image"}
whereas the data format LLaMA-Factory recommends for multimodal models is:
{"images": ["/home/ct/work/Label_tools/PICS/Smoke/Smoke001.png"], "conversations": [{"content": "<image>\nPlease analyze whether there is smoking behavior inside the red rectangle in the image, and explain your reasoning.", "from": "user"}, {"content": "The person inside the red rectangle is smoking.", "from": "assistant"}]}
The conversion script is as follows:
import json

# Read the original file
input_file = "/home/ct/work/LLaMA-Factory/inference_results_Smoke.jsonl"
output_file = "/home/ct/work/LLaMA-Factory/smoke_dataset.jsonl"

with open(input_file, 'r', encoding='utf-8') as infile:
    lines = infile.readlines()

# Convert the format
converted_lines = []
for line in lines:
    data = json.loads(line.strip())
    # Build the new record
    new_data = {
        "images": [data["image"]],
        "conversations": [
            {
                "content": "<image>\nPlease analyze whether there is smoking behavior inside the red rectangle in the image, and explain your reasoning.",
                "from": "user"
            },
            {
                "content": data["text"],
                "from": "assistant"
            }
        ]
    }
    converted_lines.append(json.dumps(new_data, ensure_ascii=False) + '\n')

# Write the new file
with open(output_file, 'w', encoding='utf-8') as outfile:
    outfile.writelines(converted_lines)

print(f"Done! Saved to {output_file}")
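Before training, a quick sanity check over the converted file is cheap insurance, e.g. confirming that every referenced image actually exists on disk. A minimal sketch:

import json
from pathlib import Path

missing = []
with open("smoke_dataset.jsonl", "r", encoding="utf-8") as f:
    for line_no, line in enumerate(f, 1):
        record = json.loads(line)
        for img in record["images"]:
            if not Path(img).exists():
                missing.append((line_no, img))

print(f"{len(missing)} missing images")
for line_no, img in missing[:10]:
    print(f"  line {line_no}: {img}")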
3. Launching the fine-tuning run
(1) Download the model
Hugging Face is hosted overseas and can be hard to download from, so I recommend downloading the model from the ModelScope community instead. After downloading, create a models folder under the LLaMA-Factory root directory and place the weights there.
(2) Create dataset_info.json
Create this file in the LLaMA-Factory root directory with the following contents:
{"smoke_dataset": {"file_name": "smoke_dataset.jsonl","formatting": "sharegpt","columns": {"messages": "conversations","images": "images"},"tags": {"role_tag": "from","content_tag": "content","user_tag": "user","assistant_tag": "assistant"}}
}
Note that the dataset key smoke_dataset and the file smoke_dataset.jsonl must correspond: the key is what you pass to --dataset, and file_name must point to the actual JSONL file.
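A small check that catches mismatches here is to load dataset_info.json and confirm that every file_name it references exists:

import json
from pathlib import Path

with open("dataset_info.json", "r", encoding="utf-8") as f:
    info = json.load(f)

for name, cfg in info.items():
    status = "ok" if Path(cfg["file_name"]).exists() else "MISSING"
    print(f"{name}: {cfg['file_name']} -> {status}")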
(3) Start the fine-tuning run
cd /home/ct/work/LLaMA-Factory
python src/train.py \
    --stage sft \
    --do_train \
    --model_name_or_path /home/ct/work/LLaMA-Factory/models/Qwen2.5-VL-7B-Instruct \
    --dataset smoke_dataset \
    --dataset_dir . \
    --template qwen2_vl \
    --finetuning_type lora \
    --lora_target all \
    --output_dir saves/Qwen2.5-VL-7B-Instruct-lora \
    --per_device_train_batch_size 1 \
    --gradient_accumulation_steps 8 \
    --lr_scheduler_type cosine \
    --logging_steps 10 \
    --save_steps 100 \
    --learning_rate 5e-5 \
    --num_train_epochs 3.0 \
    --plot_loss \
    --fp16
The resulting LoRA weight files are written to the saves folder under the LLaMA-Factory root directory (the --output_dir above).
GPU memory usage is roughly 22 GB.
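To keep an eye on VRAM while the run is in progress, you can watch nvidia-smi in a second terminal:

watch -n 1 nvidia-smi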
III. Model Merging
After fine-tuning, a series of LoRA weight files is generated under the saves folder (I used LoRA fine-tuning). They are roughly 100-300 MB each and need to be merged back into the original weights.
The merge can be done in several ways, e.g. with LLaMA-Factory's export function or with PyTorch + Transformers/PEFT. My script is as follows:
# merge_lora_weights.py
import os
import torch
from transformers import AutoModelForVision2Seq, AutoTokenizer, AutoProcessor
from peft import PeftModel


def merge_lora_weights():
    # Paths
    base_model_path = "models/Qwen2.5-VL-7B-Instruct"                        # original model path
    lora_weights_path = "saves/Qwen2.5-VL-7B-Instruct-lora/checkpoint-3520"  # LoRA weights path
    output_path = "./merged_qwen2.5-vl-finetuned"                            # where to save the merged model

    print("Loading base model...")
    base_model = AutoModelForVision2Seq.from_pretrained(
        base_model_path,
        torch_dtype=torch.float16,
        low_cpu_mem_usage=True,
        trust_remote_code=True
    )

    print("Loading LoRA adapter...")
    lora_model = PeftModel.from_pretrained(base_model, lora_weights_path)

    print("Merging weights...")
    merged_model = lora_model.merge_and_unload()

    print("Saving merged model...")
    # Create the output directory
    os.makedirs(output_path, exist_ok=True)

    # Save the model
    merged_model.save_pretrained(output_path, safe_serialization=True, max_shard_size="5GB")

    # Save the tokenizer
    tokenizer = AutoTokenizer.from_pretrained(base_model_path, trust_remote_code=True)
    tokenizer.save_pretrained(output_path)

    # Save the processor (important for VL models)
    processor = AutoProcessor.from_pretrained(base_model_path, trust_remote_code=True)
    processor.save_pretrained(output_path)

    print(f"Merged model saved to {output_path}")


if __name__ == "__main__":
    merge_lora_weights()
The merged model weights end up roughly the same size as the original model, since merge_and_unload folds the LoRA deltas back into the base weights.
Next comes testing.
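As a first smoke test, the merged checkpoint can be loaded just like the original model and run on a single image. A minimal sketch that mirrors the inference script from the dataset section (the test image and prompt here are placeholders, adjust them to your data):

import torch
from transformers import AutoModelForVision2Seq, AutoProcessor
from qwen_vl_utils import process_vision_info

# Assumed paths; adjust to your setup
merged_path = "./merged_qwen2.5-vl-finetuned"
test_image = "/home/ct/work/Label_tools/PICS/Smoke/Smoke001.png"
prompt = "Please analyze whether there is smoking behavior inside the red rectangle in the image, and explain your reasoning."

model = AutoModelForVision2Seq.from_pretrained(
    merged_path, torch_dtype=torch.float16, device_map="auto", trust_remote_code=True
)
processor = AutoProcessor.from_pretrained(merged_path, trust_remote_code=True)

messages = [{"role": "user", "content": [
    {"type": "image", "image": test_image},
    {"type": "text", "text": prompt},
]}]
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(text=[text], images=image_inputs, videos=video_inputs,
                   padding=True, return_tensors="pt").to(model.device)

generated = model.generate(**inputs, max_new_tokens=256, do_sample=False)
trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, generated)]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])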