PEFT QLoRA DeepSpeed ZeRO Stage 3 Offload Training

Automatic device mapping for SFT training with accelerate + DeepSpeed ZeRO Stage 3 offload: training compute stays on the GPUs while parameters and optimizer states are offloaded to CPU memory.

run_peft_qlora_deepspeed_stage3.sh

#!/bin/bash

export MAX_JOBS=4
export OMP_NUM_THREADS=4
export disable_exllama=True
export CUDA_VISIBLE_DEVICES=0,1
export TORCH_CUDA_ARCH_LIST="8.6"
export TOKENIZERS_PARALLELISM=false
export CUDA_DEVICE_ORDER=PCI_BUS_ID
export TORCH_DISTRIBUTED_DEBUG=DETAIL
export PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True

# NCCL tuning
export NCCL_DEBUG=INFO
export NCCL_P2P_DISABLE=0
export NCCL_IGNORE_CPU_AFFINITY=1
export MASTER_ADDR=localhost
export MASTER_PORT=2022
export NCCL_SOCKET_IFNAME=ens34
export NCCL_MIN_NRINGS=8
export NCCL_MAX_NCHANNELS=8
export NCCL_ALGO=Ring
export NCCL_PROTO=Simple

accelerate launch --config_file "deepspeed_config_z3_qlora.yaml" train.py \
--seed 100 \
--model_name_or_path "models/Qwen3-Coder-30B-A3B-Instruct" \
--dataset_name "xxx.json" \
--chat_template_format "qwen3" \
--add_special_tokens False \
--append_concat_token False \
--splits "train,test" \
--max_seq_length 1024 \
--num_train_epochs 3 \
--logging_steps 5 \
--log_level "info" \
--logging_strategy "steps" \
--eval_strategy "epoch" \
--save_strategy "epoch" \
--bf16 True \
--packing False \
--learning_rate 1e-4 \
--lr_scheduler_type "cosine" \
--weight_decay 0.01 \
--warmup_ratio 0.1 \
--max_grad_norm 1.0 \
--output_dir "xxx_adapter" \
--per_device_train_batch_size 1 \
--per_device_eval_batch_size 1 \
--gradient_accumulation_steps 4 \
--gradient_checkpointing True \
--use_reentrant True \
--dataset_text_field "content" \
--use_flash_attn True \
--use_peft_lora True \
--lora_r 8 \
--lora_alpha 16 \
--lora_dropout 0.1 \
--lora_target_modules "all-linear" \
--use_4bit_quantization True \
--use_nested_quant True \
--bnb_4bit_compute_dtype "bfloat16" \
--bnb_4bit_quant_storage_dtype "bfloat16"
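
Before launching, it is worth sanity-checking that the host matches the flags above: two visible GPUs (CUDA_VISIBLE_DEVICES=0,1 must agree with num_processes: 2 in the accelerate config), bfloat16 support for --bf16, and an importable flash-attn build for --use_flash_attn. Also note that NCCL_SOCKET_IFNAME=ens34 must name a real network interface on your machine. A minimal pre-flight sketch (the assertions are illustrative, not part of the training code):

import importlib.util

import torch

# Two GPUs are expected, matching CUDA_VISIBLE_DEVICES=0,1 and num_processes: 2
assert torch.cuda.device_count() == 2, f"expected 2 GPUs, got {torch.cuda.device_count()}"

for i in range(torch.cuda.device_count()):
    major, minor = torch.cuda.get_device_capability(i)
    print(f"cuda:{i} compute capability {major}.{minor}")  # 8.6 matches TORCH_CUDA_ARCH_LIST

# --bf16 True and bnb_4bit_compute_dtype=bfloat16 need bf16-capable GPUs (Ampere or newer)
assert torch.cuda.is_bf16_supported(), "bf16 not supported; fall back to fp16"

# --use_flash_attn True requires the flash-attn package to be importable
assert importlib.util.find_spec("flash_attn") is not None, "flash-attn not installed"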

deepspeed_config_z3_qlora.yaml (generated with accelerate config)

compute_environment: LOCAL_MACHINE
debug: false
deepspeed_config:
  deepspeed_config_file: ds_config.json
  deepspeed_moe_layer_cls_names: Qwen3MoeSparseMoeBlock
  zero3_init_flag: true
distributed_type: DEEPSPEED
downcast_bf16: "no"
dynamo_config:
  dynamo_backend: INDUCTOR
  # dynamo_mode: reduce-overhead
  dynamo_mode: default
  dynamo_use_dynamic: true
  dynamo_use_fullgraph: true
  dynamo_use_regional_compilation: true
enable_cpu_affinity: true
machine_rank: 0
main_training_function: main
num_machines: 1
num_processes: 2
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
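
Two fields here carry most of the weight for this setup. zero3_init_flag: true has accelerate construct the model under DeepSpeed's zero.Init context, so the 30B parameters are partitioned across ranks as they are created instead of being materialized whole on every process. deepspeed_moe_layer_cls_names: Qwen3MoeSparseMoeBlock registers the Qwen3 MoE block as a ZeRO-3 leaf module (via DeepSpeed's set_z3_leaf_modules), which keeps the per-parameter gather hooks from breaking on sparse expert routing. The dynamo_config block additionally enables torch.compile with the Inductor backend; it is optional and can be dropped if compilation causes trouble.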

ds_config.json

{"fp16": {"enabled": "auto","loss_scale": 0,"loss_scale_window": 1000,"initial_scale_power": 16,"hysteresis": 2,"min_loss_scale": 1},"bf16": {"enabled": "auto"},"zero_optimization": {"stage": 3,"offload_optimizer": {"device": "cpu","pin_memory": true},"offload_param": {"device": "cpu","pin_memory": true},"overlap_comm": true,"contiguous_gradients": true,"reduce_bucket_size": "auto","stage3_prefetch_bucket_size": "auto","stage3_param_persistence_threshold": "auto","sub_group_size": 1e9,"stage3_max_live_parameters": 1e9,"stage3_max_reuse_distance": 1e9,"stage3_gather_16bit_weights_on_model_save": "auto"},"gradient_accumulation_steps": "auto","gradient_clipping": "auto","steps_per_print": 2000,"train_batch_size": "auto","train_micro_batch_size_per_gpu": "auto","wall_clock_breakdown": false,"aio": {"enabled": true,"block_size": 256,"queue_depth": 8,"thread_count": 1,"single_submit": false,"overlap_events": true}
}
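
The aio block only takes effect when offload targets NVMe; with device: cpu it is inert but harmless. To gauge whether the CPU offload will fit in host RAM before committing to a run, DeepSpeed ships a memory estimator. A sketch, assuming the model path from the launch script; building the model on the meta device keeps the estimate cheap, since the estimator only needs parameter counts:

import torch
from transformers import AutoConfig, AutoModelForCausalLM
from deepspeed.runtime.zero.stage3 import estimate_zero3_model_states_mem_needs_all_live

config = AutoConfig.from_pretrained("models/Qwen3-Coder-30B-A3B-Instruct", trust_remote_code=True)
with torch.device("meta"):  # shape-only parameters, no memory allocated
    model = AutoModelForCausalLM.from_config(config)

# Prints per-GPU and per-CPU memory needs for ZeRO-3 with and without offload,
# for the 2-GPU single-node topology used above
estimate_zero3_model_states_mem_needs_all_live(model, num_gpus_per_node=2, num_nodes=1)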

utils.py

import os
import json
import torch
import transformers
import packaging.version
from enum import Enum
from peft import LoraConfig
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
)
from datasets.builder import DatasetGenerationError
from datasets import DatasetDict, load_dataset, load_from_disk

DEFAULT_CHATML_CHAT_TEMPLATE = "{% for message in messages %}\n{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% if loop.last and add_generation_prompt %}{{'<|im_start|>assistant\n' }}{% endif %}{% endfor %}"
DEFAULT_ZEPHYR_CHAT_TEMPLATE = "{% for message in messages %}\n{% if message['role'] == 'user' %}\n{{ '<|user|>\n' + message['content'] + eos_token }}\n{% elif message['role'] == 'system' %}\n{{ '<|system|>\n' + message['content'] + eos_token }}\n{% elif message['role'] == 'assistant' %}\n{{ '<|assistant|>\n'  + message['content'] + eos_token }}\n{% endif %}\n{% if loop.last and add_generation_prompt %}\n{{ '<|assistant|>' }}\n{% endif %}\n{% endfor %}"


class ZephyrSpecialTokens(str, Enum):
    user = "<|user|>"
    assistant = "<|assistant|>"
    system = "<|system|>"
    eos_token = "</s>"
    bos_token = "<s>"
    pad_token = "<pad>"

    @classmethod
    def list(cls):
        return [c.value for c in cls]


class ChatmlSpecialTokens(str, Enum):
    user = "<|im_start|>user"
    assistant = "<|im_start|>assistant"
    system = "<|im_start|>system"
    eos_token = "<|im_end|>"
    bos_token = "<s>"
    pad_token = "<pad>"

    @classmethod
    def list(cls):
        return [c.value for c in cls]


def format_alpaca_to_chatml(sample):
    """Convert Alpaca format to ChatML format"""
    messages = []
    # Add system message
    messages.append({"role": "system", "content": "You are a helpful assistant."})
    # Add user message
    if sample["input"].strip():
        content = f"{sample['instruction']}\n{sample['input']}"
    else:
        content = sample["instruction"]
    messages.append({"role": "user", "content": content})
    # Add assistant message
    messages.append({"role": "assistant", "content": sample["output"]})
    return {"messages": messages}


def create_datasets(tokenizer, data_args, training_args, apply_chat_template=False):
    def preprocess(samples):
        batch = []
        for conversation in samples["messages"]:
            batch.append(tokenizer.apply_chat_template(conversation, tokenize=False))
        return {"content": batch}

    def load_alpaca_dataset(data_path):
        """Load an Alpaca-format dataset and convert it to ChatML format"""
        with open(data_path, "r", encoding="utf-8") as f:
            data = json.load(f)
        # Convert Alpaca format to ChatML format
        chatml_data = [format_alpaca_to_chatml(item) for item in data]
        # Convert to a Hugging Face Dataset
        from datasets import Dataset

        return Dataset.from_list(chatml_data)

    raw_datasets = DatasetDict()
    # Check whether the dataset is a local JSON file
    if data_args.dataset_name.endswith(".json") and os.path.exists(data_args.dataset_name):
        # Load the entire dataset
        full_dataset = load_alpaca_dataset(data_args.dataset_name)
        # Split the dataset according to the `splits` argument
        if "train" in data_args.splits and "test" in data_args.splits:
            # 90% train, 10% test
            split_dataset = full_dataset.train_test_split(test_size=0.1, seed=training_args.seed)
            raw_datasets["train"] = split_dataset["train"]
            raw_datasets["test"] = split_dataset["test"]
        elif "train" in data_args.splits:
            raw_datasets["train"] = full_dataset
        elif "test" in data_args.splits:
            raw_datasets["test"] = full_dataset
        else:
            raise ValueError(f"Split type {data_args.splits} not recognized")
    else:
        # Handle Hub datasets or on-disk directory layouts
        for split in data_args.splits.split(","):
            try:
                # Try loading from the Hub
                dataset = load_dataset(data_args.dataset_name, split=split)
                raw_datasets[split] = dataset
            except Exception:
                # Fall back to a local dataset on disk
                try:
                    dataset = load_from_disk(os.path.join(data_args.dataset_name, split))
                    raw_datasets[split] = dataset
                except Exception as e:
                    raise ValueError(f"Could not load dataset split {split}: {str(e)}")

    if apply_chat_template:
        raw_datasets = raw_datasets.map(
            preprocess,
            batched=True,
            remove_columns=raw_datasets["train"].column_names,
        )

    train_data = raw_datasets["train"]
    valid_data = raw_datasets["test"] if "test" in raw_datasets else None
    print(
        f"Size of the train set: {len(train_data)}. "
        f"Size of the validation set: {len(valid_data) if valid_data else 0}"
    )
    print(f"A sample of train dataset: {train_data[0]}")
    return train_data, valid_data


def create_and_prepare_model(args, data_args, training_args):
    if args.use_unsloth:
        from unsloth import FastLanguageModel

    bnb_config = None
    quant_storage_dtype = None

    if (
        torch.distributed.is_available()
        and torch.distributed.is_initialized()
        and torch.distributed.get_world_size() > 1
        and args.use_unsloth
    ):
        raise NotImplementedError("Unsloth is not supported in distributed training")

    if args.use_4bit_quantization:
        compute_dtype = getattr(torch, args.bnb_4bit_compute_dtype)
        quant_storage_dtype = getattr(torch, args.bnb_4bit_quant_storage_dtype)
        bnb_config = BitsAndBytesConfig(
            load_in_4bit=args.use_4bit_quantization,
            bnb_4bit_quant_type=args.bnb_4bit_quant_type,
            bnb_4bit_compute_dtype=compute_dtype,
            bnb_4bit_use_double_quant=args.use_nested_quant,
            bnb_4bit_quant_storage=quant_storage_dtype,
        )

        if compute_dtype == torch.float16 and args.use_4bit_quantization:
            major, _ = torch.cuda.get_device_capability()
            if major >= 8:
                print("=" * 80)
                print("Your GPU supports bfloat16, you can accelerate training with the argument --bf16")
                print("=" * 80)
    elif args.use_8bit_quantization:
        bnb_config = BitsAndBytesConfig(load_in_8bit=args.use_8bit_quantization)

    if args.use_unsloth:
        if torch.xpu.is_available():
            raise NotImplementedError("XPU hasn't supported unsloth yet")
        # Load model
        model, _ = FastLanguageModel.from_pretrained(
            model_name=args.model_name_or_path,
            max_seq_length=training_args.max_seq_length,
            dtype=None,
            load_in_4bit=args.use_4bit_quantization,
        )
    else:
        torch_dtype = (
            quant_storage_dtype
            if quant_storage_dtype and quant_storage_dtype.is_floating_point
            else torch.float32
        )
        # Prepare model loading arguments
        model_kwargs = {
            "trust_remote_code": True,
            "torch_dtype": torch_dtype,
        }
        if args.use_flash_attn:
            # Optionally force float16:
            # model_kwargs["torch_dtype"] = torch.float16
            if torch.xpu.is_available():
                print("XPU hasn't supported flash_attn yet, use eager implementation instead.")
                model_kwargs["attn_implementation"] = "eager"
            else:
                model_kwargs["attn_implementation"] = "flash_attention_2"
        # Only add quantization_config if bnb_config is not None
        if bnb_config is not None:
            model_kwargs["quantization_config"] = bnb_config
        model = AutoModelForCausalLM.from_pretrained(args.model_name_or_path, **model_kwargs)

    peft_config = None
    chat_template = None
    if args.use_peft_lora and not args.use_unsloth:
        peft_config = LoraConfig(
            lora_alpha=args.lora_alpha,
            lora_dropout=args.lora_dropout,
            r=args.lora_r,
            bias="none",
            task_type="CAUSAL_LM",
            target_modules=args.lora_target_modules.split(",")
            if args.lora_target_modules != "all-linear"
            else args.lora_target_modules,
        )

    special_tokens = None
    chat_template = None
    if args.chat_template_format == "chatml":
        special_tokens = ChatmlSpecialTokens
        chat_template = DEFAULT_CHATML_CHAT_TEMPLATE
    elif args.chat_template_format == "zephyr":
        special_tokens = ZephyrSpecialTokens
        chat_template = DEFAULT_ZEPHYR_CHAT_TEMPLATE

    if special_tokens is not None:
        tokenizer = AutoTokenizer.from_pretrained(
            args.model_name_or_path,
            pad_token=special_tokens.pad_token.value,
            bos_token=special_tokens.bos_token.value,
            eos_token=special_tokens.eos_token.value,
            additional_special_tokens=special_tokens.list(),
            trust_remote_code=True,
        )
        tokenizer.chat_template = chat_template
        # make embedding resizing configurable?
        # Transformers 4.46.0+ defaults to mean_resizing, which fails with QLoRA + FSDP because the
        # embedding could be on meta device; therefore, we set mean_resizing=False in that case
        # (i.e. the status quo ante). See https://github.com/huggingface/accelerate/issues/1620.
        uses_transformers_4_46 = packaging.version.parse(transformers.__version__) >= packaging.version.parse("4.46.0")
        uses_fsdp = os.environ.get("ACCELERATE_USE_FSDP", "false").lower() == "true"
        # Check if the model is quantized
        is_quantized = (bnb_config is not None) or (getattr(model, "hf_quantizer", None) is not None)
        if is_quantized and uses_fsdp and uses_transformers_4_46:
            model.resize_token_embeddings(len(tokenizer), pad_to_multiple_of=8, mean_resizing=False)
        else:
            model.resize_token_embeddings(len(tokenizer), pad_to_multiple_of=8)
    else:
        tokenizer = AutoTokenizer.from_pretrained(args.model_name_or_path, trust_remote_code=True)
        tokenizer.pad_token = tokenizer.eos_token

    if args.use_unsloth:
        # Do model patching and add fast LoRA weights
        model = FastLanguageModel.get_peft_model(
            model,
            lora_alpha=args.lora_alpha,
            lora_dropout=args.lora_dropout,
            r=args.lora_r,
            target_modules=args.lora_target_modules.split(",")
            if args.lora_target_modules != "all-linear"
            else args.lora_target_modules,
            use_gradient_checkpointing=training_args.gradient_checkpointing,
            random_state=training_args.seed,
            max_seq_length=training_args.max_seq_length,
        )

    return model, peft_config, tokenizer
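
For reference, format_alpaca_to_chatml turns one Alpaca record (instruction/input/output) into the messages list that tokenizer.apply_chat_template consumes inside preprocess. A quick check with a made-up record:

sample = {
    "instruction": "Summarize the following text.",
    "input": "DeepSpeed ZeRO stage 3 partitions parameters, gradients and optimizer states across ranks.",
    "output": "ZeRO-3 shards all model states across data-parallel workers.",
}
print(format_alpaca_to_chatml(sample))
# {'messages': [{'role': 'system', 'content': 'You are a helpful assistant.'},
#               {'role': 'user', 'content': 'Summarize the following text.\nDeepSpeed ZeRO stage 3 ...'},
#               {'role': 'assistant', 'content': 'ZeRO-3 shards all model states ...'}]}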

train.py

import os
import gc
import sys
import torch
from typing import Optional
from trl import SFTConfig, SFTTrainer
from dataclasses import dataclass, field
from transformers import HfArgumentParser, set_seed
from utils import create_and_prepare_model, create_datasets

# Free cached memory left over from previous runs
gc.collect()
torch.cuda.empty_cache()


# Define and parse arguments.
@dataclass
class ModelArguments:
    """Arguments pertaining to which model/config/tokenizer we are going to fine-tune from."""

    model_name_or_path: str = field(
        metadata={"help": "Path to pretrained model or model identifier from huggingface.co/models"}
    )
    max_seq_length: Optional[int] = field(
        default=512,
        metadata={"help": "The maximum total input sequence length after tokenization."},
    )
    chat_template_format: Optional[str] = field(
        default="none",
        metadata={"help": "chatml|zephyr|none. Pass `none` if the dataset is already formatted with the chat template."},
    )
    lora_alpha: Optional[int] = field(default=16)
    lora_dropout: Optional[float] = field(default=0.1)
    lora_r: Optional[int] = field(default=64)
    lora_target_modules: Optional[str] = field(
        default="q_proj,k_proj,v_proj,o_proj,down_proj,up_proj,gate_proj",
        metadata={"help": "comma separated list of target modules to apply LoRA layers to"},
    )
    use_nested_quant: Optional[bool] = field(
        default=False,
        metadata={"help": "Activate nested quantization for 4bit base models"},
    )
    bnb_4bit_compute_dtype: Optional[str] = field(
        default="float16",
        metadata={"help": "Compute dtype for 4bit base models"},
    )
    bnb_4bit_quant_storage_dtype: Optional[str] = field(
        default="uint8",
        metadata={"help": "Quantization storage dtype for 4bit base models"},
    )
    bnb_4bit_quant_type: Optional[str] = field(
        default="nf4",
        metadata={"help": "Quantization type fp4 or nf4"},
    )
    use_flash_attn: Optional[bool] = field(
        default=False,
        metadata={"help": "Enables Flash attention for training."},
    )
    use_peft_lora: Optional[bool] = field(
        default=False,
        metadata={"help": "Enables PEFT LoRA for training."},
    )
    use_8bit_quantization: Optional[bool] = field(
        default=False,
        metadata={"help": "Enables loading model in 8bit."},
    )
    use_4bit_quantization: Optional[bool] = field(
        default=False,
        metadata={"help": "Enables loading model in 4bit."},
    )
    use_reentrant: Optional[bool] = field(
        default=False,
        metadata={"help": "Gradient Checkpointing param. Refer the related docs"},
    )
    use_unsloth: Optional[bool] = field(
        default=False,
        metadata={"help": "Enables UnSloth for training."},
    )


@dataclass
class DataTrainingArguments:
    dataset_name: Optional[str] = field(
        default="timdettmers/openassistant-guanaco",
        metadata={"help": "The preference dataset to use."},
    )
    append_concat_token: Optional[bool] = field(
        default=False,
        metadata={"help": "If True, appends `eos_token_id` at the end of each sample being packed."},
    )
    add_special_tokens: Optional[bool] = field(
        default=False,
        metadata={"help": "If True, tokenizers adds special tokens to each sample being packed."},
    )
    splits: Optional[str] = field(
        default="train,test",
        metadata={"help": "Comma separate list of the splits to use from the dataset."},
    )


def main(model_args, data_args, training_args):
    # Set seed for reproducibility
    set_seed(training_args.seed)

    # model
    model, peft_config, tokenizer = create_and_prepare_model(model_args, data_args, training_args)

    # gradient ckpt
    model.config.use_cache = not training_args.gradient_checkpointing
    training_args.gradient_checkpointing = training_args.gradient_checkpointing and not model_args.use_unsloth
    if training_args.gradient_checkpointing:
        training_args.gradient_checkpointing_kwargs = {"use_reentrant": model_args.use_reentrant}
    training_args.dataset_kwargs = {
        "append_concat_token": data_args.append_concat_token,
        "add_special_tokens": data_args.add_special_tokens,
    }

    # datasets
    train_dataset, eval_dataset = create_datasets(
        tokenizer,
        data_args,
        training_args,
        apply_chat_template=model_args.chat_template_format != "none",
    )

    # trainer
    trainer = SFTTrainer(
        model=model,
        processing_class=tokenizer,
        args=training_args,
        train_dataset=train_dataset,
        eval_dataset=eval_dataset,
        peft_config=peft_config,
    )
    trainer.accelerator.print(f"{trainer.model}")
    if hasattr(trainer.model, "print_trainable_parameters"):
        trainer.model.print_trainable_parameters()

    # train
    checkpoint = None
    if training_args.resume_from_checkpoint is not None:
        checkpoint = training_args.resume_from_checkpoint
    trainer.train(resume_from_checkpoint=checkpoint)

    # saving final model
    if trainer.is_fsdp_enabled:
        trainer.accelerator.state.fsdp_plugin.set_state_dict_type("FULL_STATE_DICT")
    trainer.save_model()


if __name__ == "__main__":
    parser = HfArgumentParser((ModelArguments, DataTrainingArguments, SFTConfig))
    if len(sys.argv) == 2 and sys.argv[1].endswith(".json"):
        # If we pass only one argument to the script and it's the path to a json file,
        # let's parse it to get our arguments.
        model_args, data_args, training_args = parser.parse_json_file(json_file=os.path.abspath(sys.argv[1]))
    else:
        model_args, data_args, training_args = parser.parse_args_into_dataclasses()
    try:
        main(model_args, data_args, training_args)
        # Free cached memory after training
        gc.collect()
        torch.cuda.empty_cache()
    finally:
        import torch.distributed as dist

        # Clean up distributed resources whether training succeeded or failed
        if dist.is_initialized():
            dist.destroy_process_group()
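
Training saves only the LoRA adapter into --output_dir. A minimal post-training sketch for producing a deployable checkpoint, assuming the adapter directory from the launch script ("xxx_adapter") and that the base model path recorded in adapter_config.json is still reachable; the base is reloaded in bf16 rather than 4-bit so the LoRA deltas can be folded into the weights:

import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Reload base model + adapter (the base path comes from adapter_config.json)
model = AutoPeftModelForCausalLM.from_pretrained(
    "xxx_adapter", torch_dtype=torch.bfloat16, trust_remote_code=True
)
merged = model.merge_and_unload()  # fold LoRA deltas into the base weights
merged.save_pretrained("xxx_merged")

# Assumes trainer.save_model() saved the tokenizer alongside the adapter
AutoTokenizer.from_pretrained("xxx_adapter").save_pretrained("xxx_merged")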
