How to Train a Simple Transformer Model (with Source Code): Hung-yi Lee's 2025 Large Model Course, Homework 4

Abstract:

1. Homework goal: use a GPT-2 with only 2 Transformer layers to generate complete Pokémon images.

2. Source code & walkthrough: train the provided Transformer model (GPT-2); FID Score: 96.3425.

Part 1: Homework Goal

1) Goal

  • Use a decoder-only Transformer model for next-token prediction

  • The training objective is to generate a complete Pokémon image by predicting the remaining part (the last 40%) from a partial image (the first 60%).

2) Evaluation Metric

FID (Fréchet Inception Distance): measures the distributional gap between generated and real images; lower is better (the standard formula is given below).
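For reference, FID fits a Gaussian to Inception-v3 features of the real and the generated image sets and takes the Fréchet distance between the two Gaussians; this is the standard definition and is exactly what the FID code at the end of this post computes:

\mathrm{FID} = \lVert \mu_r - \mu_g \rVert_2^2 + \operatorname{Tr}\left( \Sigma_r + \Sigma_g - 2\,(\Sigma_r \Sigma_g)^{1/2} \right)

where (μ_r, Σ_r) and (μ_g, Σ_g) are the feature means and covariances of the real and generated images.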

3) Dataset

  • Number of images: 792 Pokémon images in total.

  • Image size: 20×20 pixels, i.e. 400 pixels per image.

  • Pixel color classes: 167.

  • Data split:

    • Training set: 632 images

    • Validation set: 80 images

    • Test set: 80 images

(Figure: training set samples)

(Figure: test set samples)

Part 2: Source Code & Walkthrough

1. Overall Project Structure

1) Data processing (source lines 56-144)

The PixelSequenceDataset class

  • Inherits from the PyTorch Dataset class and is dedicated to handling pixel-sequence data

  • Supports three modes: train (autoregressive training), dev (validate the last 160 pixels), and test (inference); see the slicing sketch after this list

  • Converts pixel color indices into PyTorch tensors

Data loading

  • Loads the Pokemon dataset and the colormap from the Hugging Face Hub

  • Builds three DataLoaders: train, validation, and test

  • Batch size is set to 16
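As a minimal sketch (assuming one image is a 400-element list of color indices; the full PixelSequenceDataset class is in the listing below), the three modes slice a sequence as follows:

# One image = 400 color indices
sequence = pokemon_dataset["train"]["pixel_color"][0]

# train: shift-by-one next-token pairs over the whole image
train_input, train_labels = sequence[:-1], sequence[1:]

# dev: condition on the first 240 pixels (60%), predict the last 160 pixels (40%)
dev_input, dev_labels = sequence[:-160], sequence[-160:]

# test: the stored prompt is returned as-is and completed later with model.generate()
test_input = sequence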

2) Image visualization utilities

The pixel_to_image function

  • Converts a sequence of pixel indices into a 20x20 RGB image (the core mapping is sketched after this list)

  • Uses the colormap to map each index to its actual RGB value

  • Returns a PIL Image object

The show_images function

  • Displays up to 96 images in a 6x16 grid

  • Uses matplotlib for visualization
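A minimal sketch of the core index-to-RGB mapping (the full pixel_to_image function, which also pads short sequences with zeros, appears in the listing below); it assumes colormap[i] is an [R, G, B] triple:

import numpy as np
from PIL import Image

# pixel_color: list of 400 color indices; colormap[i] -> [R, G, B]
rgb = np.array([colormap[i] for i in pixel_color], dtype=np.uint8).reshape(20, 20, 3)
img = Image.fromarray(rgb)  # 20x20 RGB PIL image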

3) Model configuration and creation

GPT-2 configuration

gpt2_config = {
    "n_layer": 2,        # 2 Transformer layers
    "n_head": 2,         # 2 attention heads
    "n_embd": 64,        # 64-dimensional embeddings
    "vocab_size": 167,   # 167 color classes
    "n_ctx": 128,        # context length 128
    "n_positions": 400,  # up to 400 positions
    # ... other settings
}

Model creation result

  • Total parameters: 136,384

  • A lightweight design, suitable for fast training and experimentation (see the sketch after this list)
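A minimal sketch of building the model from this configuration and counting its parameters (it mirrors the full listing below; with the settings above the count works out to 136,384 because GPT-2 ties the output head to the token embedding):

from transformers import AutoModelForCausalLM, GPT2Config

config = GPT2Config.from_dict(gpt2_config)
model = AutoModelForCausalLM.from_config(config)   # randomly initialized, not pretrained
n_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"Trainable Parameters: {n_params:,}")       # 136,384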

4) Training process

Training configuration

  • 50 epochs

  • Learning rate: 1e-3

  • AdamW optimizer with weight decay 0.1

  • CrossEntropyLoss as the loss function

Training loop

  • Each epoch runs training followed by validation

  • Saves a checkpoint whenever the training loss reaches a new best

  • Evaluates reconstruction accuracy during validation (a single optimization step is sketched after this list)
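A minimal sketch of one optimization step under this configuration (the full 50-epoch loop, including validation and checkpointing, is in the listing below):

import torch
from torch import nn

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=0.1)

model.train()
for input_ids, labels in train_dataloader:                  # shapes (B, 399)
    input_ids, labels = input_ids.to(device), labels.to(device)
    logits = model(input_ids=input_ids).logits              # (B, 399, 167)
    loss = criterion(logits.view(-1, config.vocab_size), labels.view(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()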

5) Inference and result generation

Inference procedure

  • Loads the best model checkpoint

  • Runs batched inference over the test set

  • Generates complete 400-pixel sequences

  • Saves the results to a file and visualizes them (the generation step is sketched after this list)
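A minimal sketch of the batched generation step (checkpoint loading and file output are in the listing below):

results = []
model.eval()
with torch.no_grad():
    for inputs in test_dataloader:                    # partial pixel sequences
        inputs = inputs.to(device)
        attention_mask = torch.ones_like(inputs)
        generated = model.generate(inputs, attention_mask=attention_mask, max_length=400)
        results.extend(generated.cpu().numpy().tolist())  # each row: 400 color indices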

6) Training behavior

Loss trend

  • Epoch 47: Loss 1.4920

  • Epoch 48: Loss 1.4855

  • Epoch 49: Loss 1.4820

  • Epoch 50: Loss 1.4746

Reconstruction accuracy

  • Reconstruction accuracy on the validation set is about 0.31-0.32

  • That is, the model correctly predicts roughly 31% of the held-out pixels (computed as sketched below)
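For reference, the validation loop in the listing below measures this by generating the full 400-pixel sequence and comparing only the last 160 predicted pixels against the ground truth:

generated = model.generate(inputs, attention_mask=torch.ones_like(inputs), max_length=400)
accuracy = (generated[:, -160:] == labels).float().mean().item()  # fraction of correctly predicted pixels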

7) Quality of generated results

First screenshot: the generated test images

  • Shows the Pokemon pixel images generated by the model

  • The images have recognizable Pokemon-like features and color patterns

  • Although not very detailed, the basic shapes and color distributions are identifiable

8) FID score

Interpreting the metric:
- Lower FID is better: it means the distribution of generated images is closer to that of the real images
- Typical FID scores range from 0 to 500+; strong generative models usually reach FID < 50
- This implementation prints detailed evaluation results and saves them to a file (usage is sketched below)
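A minimal usage sketch of the FIDCalculator defined in the listing below (presumably how the 96.3425 reported in the abstract was obtained):

fid_calculator = FIDCalculator(device=device)
real_images = [pixel_to_image(d["pixel_color"], colormap) for d in pokemon_dataset["test"]]
fid_score = fid_calculator.compute_fid(real_images, predicted_images)
print(f"FID Score: {fid_score:.4f}")

2. Full source code (Colab-exported script)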

# -*- coding: utf-8 -*-
"""Automatically generated by Colab.

Original file is located at
    https://colab.research.google.com/drive/1EIggOm6u7Giu3RiT5Bhyke1OjsiV-kCX

# Training Transformer

# Utilities

### Download packages
"""

# Jupyter users need to install this package
# !pip install datasets==3.3.2

"""### Import Packages"""

import os
import matplotlib.pyplot as plt
import numpy as np
import torch
import torch.optim as optim
from PIL import Image
from torch import nn
from torch.utils.data import DataLoader, Dataset
from tqdm import tqdm
from transformers import AutoModelForCausalLM, GPT2Config, set_seed
from datasets import load_dataset
from typing import Dict, Any, Optional

"""### Check Devices"""

# Jupyter users: check whether a GPU is available
# !nvidia-smi

"""### Set Random Seed"""

set_seed(0)

"""# Prepare Data

### Define Dataset
"""

from typing import List, Tuple, Union

import torch
from torch.utils.data import Dataset


class PixelSequenceDataset(Dataset):
    def __init__(self, data: List[List[int]], mode: str = "train"):
        """
        A dataset class for handling pixel sequences.

        Args:
            data (List[List[int]]): A list of sequences, where each sequence is a list of integers.
            mode (str): The mode of operation, either "train", "dev", or "test".
                - "train": Returns (input_ids, labels) where input_ids are sequence[:-1] and labels are sequence[1:].
                - "dev": Returns (input_ids, labels) where input_ids are sequence[:-160] and labels are sequence[-160:].
                - "test": Returns only input_ids, as labels are not available.
        """
        self.data = data
        self.mode = mode

    def __len__(self) -> int:
        """Returns the total number of sequences in the dataset."""
        return len(self.data)

    def __getitem__(self, idx: int) -> Union[Tuple[torch.Tensor, torch.Tensor], torch.Tensor]:
        """
        Fetches a sequence from the dataset and processes it based on the mode.

        Args:
            idx (int): The index of the sequence.

        Returns:
            - If mode == "train": Tuple[torch.Tensor, torch.Tensor] -> (input_ids, labels)
            - If mode == "dev": Tuple[torch.Tensor, torch.Tensor] -> (input_ids, labels)
            - If mode == "test": torch.Tensor -> input_ids
        """
        sequence = self.data[idx]

        if self.mode == "train":
            input_ids = torch.tensor(sequence[:-1], dtype=torch.long)
            labels = torch.tensor(sequence[1:], dtype=torch.long)
            return input_ids, labels
        elif self.mode == "dev":
            input_ids = torch.tensor(sequence[:-160], dtype=torch.long)
            labels = torch.tensor(sequence[-160:], dtype=torch.long)
            return input_ids, labels
        elif self.mode == "test":
            input_ids = torch.tensor(sequence, dtype=torch.long)
            return input_ids

        raise ValueError(f"Invalid mode: {self.mode}. Choose from 'train', 'dev', or 'test'.")


"""### Download Dataset & Prepare Dataloader"""

# Load the pokemon dataset from Hugging Face Hub
pokemon_dataset = load_dataset("lca0503/ml2025-hw4-pokemon")

# Load the colormap from Hugging Face Hub
colormap = list(load_dataset("lca0503/ml2025-hw4-colormap")["train"]["color"])

# Define number of classes
num_classes = len(colormap)

# Define batch size
batch_size = 16

# === Prepare Dataset and DataLoader for Training ===
train_dataset: PixelSequenceDataset = PixelSequenceDataset(
    pokemon_dataset["train"]["pixel_color"], mode="train"
)
train_dataloader: DataLoader = DataLoader(
    train_dataset, batch_size=batch_size, shuffle=True
)

# === Prepare Dataset and DataLoader for Validation ===
dev_dataset: PixelSequenceDataset = PixelSequenceDataset(
    pokemon_dataset["dev"]["pixel_color"], mode="dev"
)
dev_dataloader: DataLoader = DataLoader(
    dev_dataset, batch_size=batch_size, shuffle=False
)

# === Prepare Dataset and DataLoader for Testing ===
test_dataset: PixelSequenceDataset = PixelSequenceDataset(
    pokemon_dataset["test"]["pixel_color"], mode="test"
)
test_dataloader: DataLoader = DataLoader(
    test_dataset, batch_size=batch_size, shuffle=False
)

pokemon_dataset

len(pokemon_dataset['train']['pixel_color'][0])

len(colormap)

"""### Visualization"""

def pixel_to_image(pixel_color: List[int], colormap: List[List[int]]) -> Image.Image:
    """
    Converts a list of pixel indices into a 20x20 RGB image using a colormap.

    Args:
        pixel_color (List[int]): A list of pixel indices representing colors.
        colormap (List[List[int]]): A list where each index maps to an RGB color [R, G, B].

    Returns:
        Image.Image: A PIL Image object representing the reconstructed image.
    """
    # Ensure the pixel_color list has at least 400 elements (pad with 0s if needed)
    while len(pixel_color) < 400:
        pixel_color.append(0)

    # Map pixel indices to actual RGB colors using the colormap
    pixel_data = [colormap[pixel] for pixel in pixel_color]

    # Convert to numpy array and reshape to 20x20x3 (RGB image)
    image_array = np.array(pixel_data, dtype=np.uint8).reshape(20, 20, 3)

    # Create a PIL Image from the array
    image = Image.fromarray(image_array)

    return image


def show_images(images: List[Image.Image]) -> None:
    """
    Displays a grid of up to 96 images using Matplotlib.

    Args:
        images (List[Image.Image]): A list of PIL Image objects to display.

    Returns:
        None
    """
    num_images = min(96, len(images))  # Limit to 96 images

    # Set up the figure size and grid layout (6 rows, 16 columns)
    fig, axes = plt.subplots(6, 16, figsize=(16, 6))
    axes = axes.flatten()  # Flatten to make iteration easier

    # Loop through images and display each one in the grid
    for i, ax in enumerate(axes):
        if i < num_images:
            ax.imshow(images[i])
            ax.axis('off')  # Hide axis
        else:
            ax.axis('off')  # Hide unused subplots

    plt.tight_layout()  # Adjust layout to prevent overlap
    plt.show()

# Visualize train images
train_images = [pixel_to_image(data["pixel_color"], colormap) for data in pokemon_dataset["train"]]
show_images(train_images)

# Visualize test images
test_images = [pixel_to_image(data["pixel_color"], colormap) for data in pokemon_dataset["test"]]
show_images(test_images)

"""# Prepare Model

### Model Configuration
Here, we define the model configuration, including the architecture and key hyperparameters such as the number of attention heads, layers, embedding size, and more.

*   Hint 1: Adjust hyperparameters here for improved performance.
*   Hint 2: Experiment with different model architectures, such as Llama, Mistral, or Qwen, to enhance performance.
    * [LlamaConfig](https://huggingface.co/docs/transformers/model_doc/llama#transformers.LlamaConfig)
    * [MistralConfig](https://huggingface.co/docs/transformers/model_doc/mistral#transformers.MistralConfig)
    * [Qwen2Config](https://huggingface.co/docs/transformers/model_doc/qwen2#transformers.Qwen2Config)
"""

# Define GPT-2 model configuration as a dictionary
gpt2_config = {
    "activation_function": "gelu_new",     # Activation function used in the model
    "architectures": ["GPT2LMHeadModel"],  # Specifies the model type
    "attn_pdrop": 0.1,                     # Dropout rate for attention layers
    "embd_pdrop": 0.1,                     # Dropout rate for embeddings
    "initializer_range": 0.02,             # Standard deviation for weight initialization
    "layer_norm_epsilon": 1e-05,           # Small constant to improve numerical stability in layer norm
    "model_type": "gpt2",                  # Type of model
    "n_ctx": 128,                          # Context size (maximum sequence length)
    "n_embd": 64,                          # Embedding size
    "n_head": 2,                           # Number of attention heads
    "n_layer": 2,                          # Number of transformer layers
    "n_positions": 400,                    # Maximum number of token positions
    "resid_pdrop": 0.1,                    # Dropout rate for residual connections
    "vocab_size": num_classes,             # Number of unique tokens in vocabulary
    "pad_token_id": None,                  # Padding token ID (None means no padding token)
    "eos_token_id": None,                  # End-of-sequence token ID (None means not explicitly defined)
}

# Load GPT-2 model configuration from dictionary
config = GPT2Config.from_dict(gpt2_config)

"""### Load Model"""

# Load the model using the configuration defined above
model = AutoModelForCausalLM.from_config(config)
print(model)

# Count trainable parameters
trainable_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"Trainable Parameters: {trainable_params:,}")

"""# Train and Inference

### Training Arguments
Here, we define the number of epochs for training, the learning rate, the optimizer, and the loss function.

*   Hint 3: Adjust the number of epochs and learning rate here to improve performance.
"""

# Training Parameters
epochs = 50                                                                # Number of training epochs
learning_rate = 1e-3                                                       # Learning rate for optimizer
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")      # Check if CUDA is available for GPU
save_dir = "checkpoints"                                                   # Directory to save model checkpoints

# Loss function and optimizer
criterion = nn.CrossEntropyLoss()                                          # Loss function for classification tasks
optimizer = optim.AdamW(model.parameters(), lr=learning_rate, weight_decay=0.1)  # AdamW optimizer with weight decay

"""### Save Model Function"""

def save_model(model: torch.nn.Module, optimizer: torch.optim.Optimizer, epoch: int, loss: float, save_dir: str, filename: str = "best_model.pth") -> None:
    """
    Saves the model state, optimizer state, current epoch, and loss to a specified directory.

    Args:
        model (torch.nn.Module): The PyTorch model to be saved.
        optimizer (torch.optim.Optimizer): The optimizer whose state will be saved.
        epoch (int): The current epoch number (used for checkpointing).
        loss (float): The current loss value to track model performance.
        save_dir (str): The directory where the model checkpoint will be saved.
        filename (str, optional): The name of the file to save the model. Defaults to "best_model.pth".

    Returns:
        None
    """
    # Construct the full path for saving the model checkpoint
    save_path = os.path.join(save_dir, filename)

    # Save the model, optimizer state, and additional metadata (epoch and loss)
    torch.save({
        'epoch': epoch + 1,                               # Save epoch + 1 for easier tracking
        'model_state_dict': model.state_dict(),           # Save model weights
        'optimizer_state_dict': optimizer.state_dict(),   # Save optimizer state (important for resuming training)
        'loss': loss                                      # Save the current loss value
    }, save_path)

    # Print a confirmation message indicating the model has been saved
    print(f"Model saved at {save_path} (Loss: {loss:.4f}, Epoch: {epoch + 1})")

"""### Train

We save the checkpoint with the lowest training loss since validation set reconstruction accuracy doesn't directly reflect the model's image generation quality.
*   Hint 4: Train a classifier to check if an image looks like a Pokémon or not. (Optional)
"""

# Create save directory if it doesn't exist
os.makedirs(save_dir, exist_ok=True)

# Initialize best loss as positive infinity for comparison during model checkpointing
best_loss: float = float('inf')

# Move model to the appropriate device (GPU or CPU)
model.to(device)

# Training Loop
for epoch in range(epochs):
    model.train()   # Set the model to training mode
    epoch_loss = 0  # Initialize the epoch loss

    # Iterate over training data batches
    for input_ids, labels in tqdm(train_dataloader, desc=f"Training Epoch {epoch + 1}/{epochs}"):
        input_ids, labels = input_ids.to(device), labels.to(device)  # Move data to the same device as the model

        # Forward pass through the model to get logits (output probabilities)
        outputs = model(input_ids=input_ids).logits.view(-1, config.vocab_size)
        labels = labels.view(-1)  # Flatten labels to match logits shape

        # Calculate loss using CrossEntropyLoss
        loss = criterion(outputs, labels)

        # Backpropagation and optimizer step
        optimizer.zero_grad()  # Reset gradients to zero
        loss.backward()        # Compute gradients
        optimizer.step()       # Update model weights

        # Accumulate the loss for the epoch
        epoch_loss += loss.item()

    # Compute average epoch loss
    avg_epoch_loss = epoch_loss / len(train_dataloader)
    print(f"Epoch {epoch + 1}/{epochs}, Loss: {avg_epoch_loss:.4f}")

    # Evaluation Loop (Validation)
    model.eval()        # Set the model to evaluation mode (disables dropout, etc.)
    total_accuracy = 0  # Initialize total accuracy
    num_batches = 0     # Initialize batch counter

    with torch.no_grad():  # Disable gradient calculation for validation
        # Iterate over validation data batches
        for inputs, labels in tqdm(dev_dataloader, desc="Evaluating"):
            inputs, labels = inputs.to(device), labels.to(device)  # Move validation data to device
            attention_mask = torch.ones_like(inputs)               # Attention mask to ensure valid token positions

            # Perform batch inference using the model
            generated_outputs = model.generate(inputs, attention_mask=attention_mask, max_length=400)

            # Extract the last 160 tokens from generated outputs and labels
            generated_outputs = generated_outputs[:, -160:]

            # Calculate accuracy for the batch
            accuracy = (generated_outputs == labels).float().mean().item()
            total_accuracy += accuracy
            num_batches += 1

    # Compute average reconstruction accuracy for the epoch
    avg_accuracy = total_accuracy / num_batches
    print(f"Epoch {epoch + 1}/{epochs}, Reconstruction Accuracy: {avg_accuracy:.4f}")

    # If the current epoch loss is better (lower) than the best loss, save the model
    if avg_epoch_loss < best_loss:
        best_loss = avg_epoch_loss                                # Update best loss
        save_model(model, optimizer, epoch, best_loss, save_dir)  # Save the model with the best loss

"""### Inference"""

# Load the best model from the saved checkpoint
best_model_path = os.path.join(save_dir, "best_model.pth")              # Path to the best model checkpoint
checkpoint = torch.load(best_model_path, weights_only=True, map_location=device)  # Load checkpoint from the file
model.load_state_dict(checkpoint["model_state_dict"])                  # Load the model weights from checkpoint
model.eval()                                                                        # Set the model to evaluation mode (disables dropout, etc.)

# Testing Loop with Batch Inference
results: list = []  # List to store the generated sequences from the model

with torch.no_grad():  # Disable gradient calculations for inference
    # Iterate over test data in batches
    for inputs in tqdm(test_dataloader, desc="Generating Outputs"):
        inputs = inputs.to(device)                # Move inputs to the appropriate device (GPU or CPU)
        attention_mask = torch.ones_like(inputs)  # Attention mask (ensure valid token positions)

        # Generate predictions for the entire batch
        generated_outputs = model.generate(inputs, attention_mask=attention_mask, max_length=400)

        # Convert batch outputs to a list and append to results
        batch_results = generated_outputs.cpu().numpy().tolist()
        results.extend(batch_results)  # Extend the results list with batch results

# Save the results to a file
output_file: str = "reconstructed_results.txt"  # File to save the output sequences

with open(output_file, "w") as f:
    # Write each sequence to the file
    for seq in results:
        f.write(" ".join(map(str, seq)) + "\n")

print(f"Reconstructed results saved to {output_file}")  # Confirmation message

# Visualize generated test images
predicted_images = [pixel_to_image(sequence, colormap) for sequence in results]
show_images(predicted_images)

"""# FID"""

# === FID Evaluation Implementation ===
import numpy as np
from scipy import linalg
import torchvision.transforms as transforms
from torchvision.models import inception_v3
import torch.nn.functional as F
from PIL import Image
import warnings
warnings.filterwarnings('ignore')


class FIDCalculator:
    def __init__(self, device='cuda' if torch.cuda.is_available() else 'cpu'):
        """
        Initialize FID calculator with Inception v3 model.

        Args:
            device: Device to run calculations on
        """
        self.device = device

        # Load pre-trained Inception v3 model
        self.inception_model = inception_v3(pretrained=True, transform_input=False)
        self.inception_model.fc = torch.nn.Identity()  # Remove final classification layer
        self.inception_model.eval()
        self.inception_model.to(device)

        # Define image preprocessing transforms
        self.transform = transforms.Compose([
            transforms.Resize((299, 299)),  # Inception v3 expects 299x299 images
            transforms.ToTensor(),
            transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
        ])

    def preprocess_images(self, images):
        """
        Preprocess images for Inception v3.

        Args:
            images: List of PIL Images

        Returns:
            Preprocessed tensor batch
        """
        processed_images = []
        for img in images:
            # Convert to RGB if needed
            if img.mode != 'RGB':
                img = img.convert('RGB')
            # Apply transforms
            processed_img = self.transform(img)
            processed_images.append(processed_img)
        return torch.stack(processed_images)

    def extract_features(self, images, batch_size=32):
        """
        Extract features from images using Inception v3.

        Args:
            images: List of PIL Images
            batch_size: Batch size for processing

        Returns:
            Feature vectors as numpy array
        """
        features = []
        with torch.no_grad():
            for i in range(0, len(images), batch_size):
                batch_images = images[i:i + batch_size]
                # Preprocess batch
                batch_tensor = self.preprocess_images(batch_images).to(self.device)
                # Extract features
                batch_features = self.inception_model(batch_tensor)
                features.append(batch_features.cpu().numpy())
        return np.concatenate(features, axis=0)

    def calculate_statistics(self, features):
        """
        Calculate mean and covariance matrix of features.

        Args:
            features: Feature vectors as numpy array

        Returns:
            Tuple of (mean, covariance matrix)
        """
        mu = np.mean(features, axis=0)
        sigma = np.cov(features, rowvar=False)
        return mu, sigma

    def calculate_frechet_distance(self, mu1, sigma1, mu2, sigma2, eps=1e-6):
        """
        Calculate Fréchet distance between two multivariate Gaussians.

        Args:
            mu1, mu2: Mean vectors
            sigma1, sigma2: Covariance matrices
            eps: Small value for numerical stability

        Returns:
            Fréchet distance
        """
        mu1 = np.atleast_1d(mu1)
        mu2 = np.atleast_1d(mu2)
        sigma1 = np.atleast_2d(sigma1)
        sigma2 = np.atleast_2d(sigma2)

        assert mu1.shape == mu2.shape, "Mean vectors have different lengths"
        assert sigma1.shape == sigma2.shape, "Covariance matrices have different dimensions"

        diff = mu1 - mu2

        # Calculate sqrt of product of covariance matrices
        covmean, _ = linalg.sqrtm(sigma1.dot(sigma2), disp=False)

        # Handle numerical errors
        if not np.isfinite(covmean).all():
            msg = ('fid calculation produces singular product; '
                   'adding %s to diagonal of cov estimates') % eps
            print(msg)
            offset = np.eye(sigma1.shape[0]) * eps
            covmean = linalg.sqrtm((sigma1 + offset).dot(sigma2 + offset))

        # Numerical error might give slight imaginary component
        if np.iscomplexobj(covmean):
            if not np.allclose(np.diagonal(covmean).imag, 0, atol=1e-3):
                m = np.max(np.abs(covmean.imag))
                raise ValueError('Imaginary component {}'.format(m))
            covmean = covmean.real

        tr_covmean = np.trace(covmean)

        return (diff.dot(diff) + np.trace(sigma1) + np.trace(sigma2) - 2 * tr_covmean)

    def compute_fid(self, real_images, generated_images):
        """
        Compute FID score between real and generated images.

        Args:
            real_images: List of PIL Images (real images)
            generated_images: List of PIL Images (generated images)

        Returns:
            FID score
        """
        print("Extracting features from real images...")
        real_features = self.extract_features(real_images)

        print("Extracting features from generated images...")
        generated_features = self.extract_features(generated_images)

        print("Calculating statistics...")
        mu_real, sigma_real = self.calculate_statistics(real_features)
        mu_generated, sigma_generated = self.calculate_statistics(generated_features)

        print("Computing FID score...")
        fid_score = self.calculate_frechet_distance(mu_real, sigma_real, mu_generated, sigma_generated)

        return fid_score

# === FID Evaluation Usage ===

# Initialize FID calculator
fid_calculator = FIDCalculator(device=device)

# Get real test images for comparison
real_test_images = [pixel_to_image(data["pixel_color"], colormap) for data in pokemon_dataset["test"]]

# Use the generated images from your model
generated_test_images = predicted_images  # These are already generated from your model

# Ensure we have the same number of images for fair comparison
min_images = min(len(real_test_images), len(generated_test_images))
real_test_images = real_test_images[:min_images]
generated_test_images = generated_test_images[:min_images]

print(f"Evaluating FID with {min_images} images...")

# Calculate FID score
fid_score = fid_calculator.compute_fid(real_test_images, generated_test_images)

print(f"\n=== FID Evaluation Results ===")
print(f"FID Score: {fid_score:.4f}")
print(f"Number of images evaluated: {min_images}")
print(f"Lower FID scores indicate better image quality and diversity.")

# Optional: Save FID results to file
with open("fid_results.txt", "w") as f:
    f.write(f"FID Score: {fid_score:.4f}\n")
    f.write(f"Number of images: {min_images}\n")
    f.write(f"Model configuration: {gpt2_config}\n")

print(f"FID results saved to fid_results.txt")
