FastVLM-0.5B Model Analysis

Model Introduction

FastVLM (Fast Vision-Language Model) is an efficient vision-language model that Apple presented at CVPR 2025. It is optimized for mobile hardware (iPhone, iPad, Mac), and its core innovation is the newly designed FastViTHD hybrid vision encoder, which tackles the classic pain points of vision-language models (VLMs) on high-resolution images (high encoding latency and redundant visual tokens), delivering gains in both speed and accuracy.

FastViTHD combines a convolutional (CNN) stage with a Transformer stage, pairing local feature extraction with global modeling, and cuts computational cost through three key designs:

  1. Dynamic resolution adjustment: compute is allocated by the information entropy of the feature map, so key regions (text, objects) get high resolution while background regions get low resolution, reducing computation by 47% on ImageNet-1K;
  2. Hierarchical token compression: the 1536 visual tokens of a conventional VLM are compressed to 576 (a 62.5% reduction), greatly lightening the language model's load (a toy sketch follows this list);
  3. Lightweight convolutional embedding: a lightweight convolutional stem (only 0.3% extra parameters) replaces the standard ViT patch embedding, extracting local features faster.
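To make the token-compression idea concrete, here is a self-contained toy sketch. It is our own illustration under assumed shapes, not FastViTHD's actual mechanism: one strided depthwise convolution downsamples the visual feature map, cutting the token count 4x per stage (the real 1536-to-576 reduction uses a different ratio).

import torch
import torch.nn as nn

class ToyTokenCompressor(nn.Module):
    """Toy illustration of hierarchical token compression (not FastViTHD's exact design)."""
    def __init__(self, dim: int):
        super().__init__()
        # strided depthwise conv halves each spatial side -> 4x fewer tokens per stage
        self.down = nn.Conv2d(dim, dim, kernel_size=3, stride=2, padding=1, groups=dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.down(x)                      # [B, C, H, W] -> [B, C, H/2, W/2]
        return x.flatten(2).transpose(1, 2)   # -> [B, (H/2)*(W/2), C] token sequence

feat = torch.randn(1, 96, 48, 48)             # hypothetical 48x48 feature map
tokens = ToyTokenCompressor(96)(feat)
print(tokens.shape)                            # torch.Size([1, 576, 96])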

Model Performance

FastVLM stands out on time-to-first-token (TTFT) and model footprint:

  • Speed
    • The smallest variant (FastVLM-0.5B) achieves a TTFT 85x faster than LLaVA-OneVision-0.5B, and the Qwen2-7B-based variant is 7.9x faster than Cambrian-1-8B
    • On 1152x1152 high-resolution images, overall quality matches competing models while the vision encoder is 3.4x smaller
  • Hardware fit
    • Matrix operations are tuned for Apple's A18 chip and M2/M4 processors, with CoreML integration; an iPad Pro (M2) sustains 60 FPS continuous dialogue
    • Dynamic INT8 quantization (sketched below) cuts memory usage by 40% while retaining 98% accuracy; the 0.5B model's app occupies only 1.8 GB of memory.
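The on-device INT8 path ships through CoreML, but the idea can be previewed with a minimal PyTorch-only sketch using the built-in torch.ao.quantization.quantize_dynamic. This is a rough stand-in, assuming `model` has been loaded as in the next section (CPU inference only):

import torch
from torch.ao.quantization import quantize_dynamic

# Dynamically quantize every nn.Linear weight to INT8; activations stay
# float and are quantized on the fly at inference time.
quantized_model = quantize_dynamic(model.cpu(), {torch.nn.Linear}, dtype=torch.qint8)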

Model Loading

import torch
from PIL import Image
from modelscope import AutoTokenizer, AutoModelForCausalLM

MID = "apple/FastVLM-0.5B"
IMAGE_TOKEN_INDEX = -200  # what the model code looks for

# Load
tok = AutoTokenizer.from_pretrained(MID, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    MID,
    torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,
    device_map="auto",
    trust_remote_code=True,
)

Model Configuration

tok
Qwen2TokenizerFast(
    name_or_path='/home/six/.cache/modelscope/hub/models/apple/FastVLM-0___5B',
    vocab_size=151643,
    model_max_length=8192,
    is_fast=True,
    padding_side='right',
    truncation_side='right',
    special_tokens={
        'eos_token': '<|im_end|>',
        'pad_token': '<|endoftext|>',
        'additional_special_tokens': ['<|im_start|>', '<|im_end|>'],
    },
    clean_up_tokenization_spaces=False,
    added_tokens_decoder={
        151643: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
        151644: AddedToken("<|im_start|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
        151645: AddedToken("<|im_end|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
    }
)
model.config
LlavaConfig {
  "architectures": ["LlavaQwen2ForCausalLM"],
  "attention_dropout": 0.0,
  "auto_map": {
    "AutoConfig": "llava_qwen.LlavaConfig",
    "AutoModelForCausalLM": "llava_qwen.LlavaQwen2ForCausalLM"
  },
  "bos_token_id": 151643,
  "dtype": "float16",
  "eos_token_id": 151645,
  "freeze_mm_mlp_adapter": false,
  "hidden_act": "silu",
  "hidden_size": 896,
  "image_aspect_ratio": "pad",
  "image_grid_pinpoints": null,
  "initializer_range": 0.02,
  "intermediate_size": 4864,
  "layer_types": [
    "full_attention", "full_attention", "full_attention", "full_attention",
    "full_attention", "full_attention", "full_attention", "full_attention",
    "full_attention", "full_attention", "full_attention", "full_attention",
    "full_attention", "full_attention", "full_attention", "full_attention",
    "full_attention", "full_attention", "full_attention", "full_attention",
    "full_attention", "full_attention", "full_attention", "full_attention"
  ],
  "max_position_embeddings": 32768,
  "max_window_layers": 24,
  "mm_hidden_size": 3072,
  "mm_patch_merge_type": "flat",
  "mm_projector_lr": null,
  "mm_projector_type": "mlp2x_gelu",
  "mm_use_im_patch_token": false,
  "mm_use_im_start_end": false,
  "mm_vision_select_feature": "patch",
  "mm_vision_select_layer": -2,
  "mm_vision_tower": "mobileclip_l_1024",
  "model_type": "llava_qwen2",
  "num_attention_heads": 14,
  "num_hidden_layers": 24,
  "num_key_value_heads": 2,
  "rms_norm_eps": 1e-06,
  "rope_scaling": null,
  "rope_theta": 1000000.0,
  "sliding_window": null,
  "tie_word_embeddings": true,
  "tokenizer_model_max_length": 8192,
  "tokenizer_padding_side": "right",
  "transformers_version": "4.56.0",
  "tune_mm_mlp_adapter": false,
  "unfreeze_mm_vision_tower": true,
  "use_cache": true,
  "use_mm_proj": true,
  "use_sliding_window": false,
  "vocab_size": 151936
}
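Two fields are worth connecting: mm_hidden_size=3072 is the vision tower's output width, and hidden_size=896 is the Qwen2 embedding width; the "mlp2x_gelu" projector bridges them. A minimal reconstruction of that projector, matching the mm_projector shown in the structure dump below:

import torch.nn as nn

# "mlp2x_gelu": two linear layers with a GELU in between, mapping
# vision features (3072) into the LLM embedding space (896)
mm_projector = nn.Sequential(
    nn.Linear(3072, 896),
    nn.GELU(),
    nn.Linear(896, 896),
)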

Model Structure

model
LlavaQwen2ForCausalLM(
  (model): LlavaQwen2Model(
    (embed_tokens): Embedding(151936, 896)
    (layers): ModuleList(
      (0-23): 24 x Qwen2DecoderLayer(
        (self_attn): Qwen2Attention(
          (q_proj): Linear(in_features=896, out_features=896, bias=True)
          (k_proj): Linear(in_features=896, out_features=128, bias=True)
          (v_proj): Linear(in_features=896, out_features=128, bias=True)
          (o_proj): Linear(in_features=896, out_features=896, bias=False)
        )
        (mlp): Qwen2MLP(
          (gate_proj): Linear(in_features=896, out_features=4864, bias=False)
          (up_proj): Linear(in_features=896, out_features=4864, bias=False)
          (down_proj): Linear(in_features=4864, out_features=896, bias=False)
          (act_fn): SiLU()
        )
        (input_layernorm): Qwen2RMSNorm((896,), eps=1e-06)
        (post_attention_layernorm): Qwen2RMSNorm((896,), eps=1e-06)
      )
    )
    (norm): Qwen2RMSNorm((896,), eps=1e-06)
    (rotary_emb): Qwen2RotaryEmbedding()
    (vision_tower): MobileCLIPVisionTower(
      (vision_tower): MCi(
        (model): FastViT(
          (patch_embed): Sequential(
            (0): MobileOneBlock((se): Identity() (activation): GELU(approximate='none') (reparam_conv): Conv2d(3, 96, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1)))
            (1): MobileOneBlock((se): Identity() (activation): GELU(approximate='none') (reparam_conv): Conv2d(96, 96, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=96))
            (2): MobileOneBlock((se): Identity() (activation): GELU(approximate='none') (reparam_conv): Conv2d(96, 96, kernel_size=(1, 1), stride=(1, 1)))
          )
          (network): ModuleList(
            (0): Sequential(
              (0-1): RepMixerBlock(
                (token_mixer): RepMixer((reparam_conv): Conv2d(96, 96, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=96))
                (convffn): ConvFFN(
                  (conv): Sequential((conv): Conv2d(96, 96, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), groups=96, bias=False) (bn): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True))
                  (fc1): Conv2d(96, 384, kernel_size=(1, 1), stride=(1, 1))
                  (act): GELU(approximate='none')
                  (fc2): Conv2d(384, 96, kernel_size=(1, 1), stride=(1, 1))
                  (drop): Dropout(p=0.0, inplace=False)
                )
                (drop_path): Identity()
              )
            )
            (1): PatchEmbed(
              (proj): Sequential(
                (0): ReparamLargeKernelConv((activation): GELU(approximate='none') (se): Identity() (lkb_reparam): Conv2d(96, 192, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), groups=96))
                (1): MobileOneBlock((se): Identity() (activation): GELU(approximate='none') (reparam_conv): Conv2d(192, 192, kernel_size=(1, 1), stride=(1, 1)))
              )
            )
            (2): Sequential(
              (0-11): RepMixerBlock(
                (token_mixer): RepMixer((reparam_conv): Conv2d(192, 192, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=192))
                (convffn): ConvFFN(
                  (conv): Sequential((conv): Conv2d(192, 192, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), groups=192, bias=False) (bn): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True))
                  (fc1): Conv2d(192, 768, kernel_size=(1, 1), stride=(1, 1))
                  (act): GELU(approximate='none')
                  (fc2): Conv2d(768, 192, kernel_size=(1, 1), stride=(1, 1))
                  (drop): Dropout(p=0.0, inplace=False)
                )
                (drop_path): Identity()
              )
            )
            (3): PatchEmbed(
              (proj): Sequential(
                (0): ReparamLargeKernelConv((activation): GELU(approximate='none') (se): Identity() (lkb_reparam): Conv2d(192, 384, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), groups=192))
                (1): MobileOneBlock((se): Identity() (activation): GELU(approximate='none') (reparam_conv): Conv2d(384, 384, kernel_size=(1, 1), stride=(1, 1)))
              )
            )
            (4): Sequential(
              (0-23): RepMixerBlock(
                (token_mixer): RepMixer((reparam_conv): Conv2d(384, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=384))
                (convffn): ConvFFN(
                  (conv): Sequential((conv): Conv2d(384, 384, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), groups=384, bias=False) (bn): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True))
                  (fc1): Conv2d(384, 1536, kernel_size=(1, 1), stride=(1, 1))
                  (act): GELU(approximate='none')
                  (fc2): Conv2d(1536, 384, kernel_size=(1, 1), stride=(1, 1))
                  (drop): Dropout(p=0.0, inplace=False)
                )
                (drop_path): Identity()
              )
            )
            (5): PatchEmbed(
              (proj): Sequential(
                (0): ReparamLargeKernelConv((activation): GELU(approximate='none') (se): Identity() (lkb_reparam): Conv2d(384, 768, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), groups=384))
                (1): MobileOneBlock((se): Identity() (activation): GELU(approximate='none') (reparam_conv): Conv2d(768, 768, kernel_size=(1, 1), stride=(1, 1)))
              )
            )
            (6): RepCPE((reparam_conv): Conv2d(768, 768, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), groups=768))
            (7): Sequential(
              (0-3): AttentionBlock(
                (norm): LayerNormChannel()
                (token_mixer): MHSA(
                  (qkv): Linear(in_features=768, out_features=2304, bias=False)
                  (attn_drop): Dropout(p=0.0, inplace=False)
                  (proj): Linear(in_features=768, out_features=768, bias=True)
                  (proj_drop): Dropout(p=0.0, inplace=False)
                )
                (convffn): ConvFFN(
                  (conv): Sequential((conv): Conv2d(768, 768, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), groups=768, bias=False) (bn): BatchNorm2d(768, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True))
                  (fc1): Conv2d(768, 3072, kernel_size=(1, 1), stride=(1, 1))
                  (act): GELU(approximate='none')
                  (fc2): Conv2d(3072, 768, kernel_size=(1, 1), stride=(1, 1))
                  (drop): Dropout(p=0.0, inplace=False)
                )
                (drop_path): Identity()
              )
            )
            (8): PatchEmbed(
              (proj): Sequential(
                (0): ReparamLargeKernelConv((activation): GELU(approximate='none') (se): Identity() (lkb_reparam): Conv2d(768, 1536, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), groups=768))
                (1): MobileOneBlock((se): Identity() (activation): GELU(approximate='none') (reparam_conv): Conv2d(1536, 1536, kernel_size=(1, 1), stride=(1, 1)))
              )
            )
            (9): RepCPE((reparam_conv): Conv2d(1536, 1536, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), groups=1536))
            (10): Sequential(
              (0-1): AttentionBlock(
                (norm): LayerNormChannel()
                (token_mixer): MHSA(
                  (qkv): Linear(in_features=1536, out_features=4608, bias=False)
                  (attn_drop): Dropout(p=0.0, inplace=False)
                  (proj): Linear(in_features=1536, out_features=1536, bias=True)
                  (proj_drop): Dropout(p=0.0, inplace=False)
                )
                (convffn): ConvFFN(
                  (conv): Sequential((conv): Conv2d(1536, 1536, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), groups=1536, bias=False) (bn): BatchNorm2d(1536, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True))
                  (fc1): Conv2d(1536, 6144, kernel_size=(1, 1), stride=(1, 1))
                  (act): GELU(approximate='none')
                  (fc2): Conv2d(6144, 1536, kernel_size=(1, 1), stride=(1, 1))
                  (drop): Dropout(p=0.0, inplace=False)
                )
                (drop_path): Identity()
              )
            )
          )
          (conv_exp): MobileOneBlock(
            (se): SEBlock((reduce): Conv2d(3072, 192, kernel_size=(1, 1), stride=(1, 1)) (expand): Conv2d(192, 3072, kernel_size=(1, 1), stride=(1, 1)))
            (activation): GELU(approximate='none')
            (reparam_conv): Conv2d(1536, 3072, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=1536)
          )
          (head): GlobalPool2D()
        )
      )
    )
    (mm_projector): Sequential(
      (0): Linear(in_features=3072, out_features=896, bias=True)
      (1): GELU(approximate='none')
      (2): Linear(in_features=896, out_features=896, bias=True)
    )
  )
  (lm_head): Linear(in_features=896, out_features=151936, bias=False)
)
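One easy-to-miss detail in this dump: q_proj outputs 896 features while k_proj and v_proj output only 128. That is Qwen2's grouped-query attention, with 14 query heads sharing 2 key/value heads. A quick arithmetic check against the config values above:

hidden_size = 896
num_attention_heads = 14   # from model.config
num_key_value_heads = 2    # from model.config

head_dim = hidden_size // num_attention_heads   # 64
print(num_attention_heads * head_dim)           # 896 -> q_proj out_features
print(num_key_value_heads * head_dim)           # 128 -> k_proj / v_proj out_features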

Model Invocation


# Build chat -> render to string (not tokens) so we can place <image> exactly
messages = [
    {"role": "user", "content": "<image>\nDescribe this image in detail."}
]
rendered = tok.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=False
)
pre, post = rendered.split("<image>", 1)

# Tokenize the text *around* the image token (no extra specials!)
pre_ids  = tok(pre,  return_tensors="pt", add_special_tokens=False).input_ids
post_ids = tok(post, return_tensors="pt", add_special_tokens=False).input_ids

# Splice in the IMAGE token id (-200) at the placeholder position
img_tok = torch.tensor([[IMAGE_TOKEN_INDEX]], dtype=pre_ids.dtype)
input_ids = torch.cat([pre_ids, img_tok, post_ids], dim=1).to(model.device)
attention_mask = torch.ones_like(input_ids, device=model.device)

# Preprocess image via the model's own processor
img = Image.open("image.png").convert("RGB")
px = model.get_vision_tower().image_processor(images=img, return_tensors="pt")["pixel_values"]
px = px.to(model.device, dtype=model.dtype)

# Generate
with torch.no_grad():
    out = model.generate(
        inputs=input_ids,
        attention_mask=attention_mask,
        images=px,
        max_new_tokens=1024,
    )

print(tok.decode(out[0], skip_special_tokens=True))
### Image Description

The image is a photograph of handwritten notes. It is formatted in a columnar, portrait mode. The notes are written in a somewhat cursive and formal style with regular spacing between lines. The content of the notes is not followed by a specific topic or question, but rather appears to be a detailed narrative or reflection.

#### Breakdown of the Content:

1. **Title or Notation**:
   - The first line reads "Remind the both part of realistic history and interpret...". The precise context or terms suggest it might be a summary or introduction to a theoretical discussion, possibly related to historical real-world interpretations or an analytical piece on it.
2. **Paragraph Structure**:
   - The text proceeds sequentially down the page, which looks like a detailed narrative or argument. Each paragraph begins with a header, followed by an initial statement or heading.
3. **Content Analysis**:
   - **First Paragraph:**
     - There appears to be an initial statement emphasizing the comparison between realism, perhaps discussing historical periods such as "the dark" and the "internet buying and Internet buying of things". Parts of the heading might indicate a topic related to real-world analysis or comparison.
   - **Second Paragraph:**
     - The language becomes more descriptive, discussing the growth of "internet buying and Internet buying of things". Timeframes, statistical data, and percentages hint at a trend or progression being discussed, which indicates it could be a case study or comparative study.
   - **Third Paragraph:**
     - This part of the document mentions "four years," suggesting it is about a four-year period of observation or change within the context it refers to.
   - **Final Paragraph:**
     - It concludes with a concise conclusion or observation, indicating that the results of the previous analysis provided are valid or noteworthy.

### Knowledge Integration:

1. **Historical Realism**: Historically, realism is a philosophical approach that posits that we have all knowledge and the nature of reality. This perspective often frames history as an objective recounting of past events without subjective interpretation. Reputations and perceptions have naturally developed over time, often evolving in different ways due to various influences.
2. **Internet Buying of Things**: The term "internet buying of things" suggests a reference to purchasing trends using computer systems, which are pivotal in today's digital economy. The reference to "2019" could be indicating a specific year's perspective, possibly within a historical context for analysis.

### Chain of Thought:

Given the structured format and the reference to "four years," it is plausible that the notes might be part of an analytical and reflective discussion, perhaps comparing old historical realist perspectives of the same historical period with contemporary digital trends, such as internet buying practices.

This comprehensive description should enable a pure text model to effectively parse and answer questions related to the content or structure of the handwritten notes captured in the image.

---

### Analysis

The handwritten notes appear to be an analytical and reflective piece addressing historical realist interpretations and predictions in the context of online buying behaviors. The notes discuss the comparative development of historical realist views about historical periods and their evolution over time. They reference significant dates and percentages, likely from 2019. The notes conclude by noting that there is a direct comparison with current trends, specifically regarding "internet buying" as noted in the 2019 context. The narrative suggests a methodical approach, reflective of a theoretical or analytical examination of past and present trends, possibly using historical realist techniques to contextualize contemporary practices.

The text you provided can be directly converted into a markdown table for better clarity and readability:

| Column  | Content |
|---------|---------|
| 1       | Remind the both part of realistic history and interpret... |
| 2       | A comparison between historical periods such as "the dark" and the "internet buying and Internet buying of things". |
| 3       | Timeframes of statistical data showing 2019. |
| 4       | An example of a year with an increase of 24.5%. |
| 5       | An increase of 233 with the year 2023 and a trend of 4303 of years. |
| 6       | An additional detail suggesting the possibility of an observer's friends change. |
| 7       | Likely a conclusion that the results of previous analysis of realist are valid. |

The markdown format simplifies the content and makes it formatted for further reading and
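For the curious, this is roughly what the model does internally with the -200 placeholder before the forward pass. The real logic lives in the repo's llava_qwen code, so treat the helper below as a hedged, illustrative sketch: its name and the feature shapes are our assumptions; only embed_tokens, mm_projector, and get_vision_tower come from the dumps above.

def splice_image_features(model, input_ids, px):
    """Illustrative only: swap the -200 placeholder for projected vision features."""
    # Embed the text tokens (clamp first: the placeholder id is negative)
    text_embeds = model.model.embed_tokens(input_ids.clamp(min=0))
    # Vision tower -> patch features (e.g. [1, N, 3072]) -> project to LLM width (896)
    img_feats = model.model.mm_projector(model.get_vision_tower()(px))
    # Splice the image embeddings in at the placeholder position
    pos = (input_ids[0] == IMAGE_TOKEN_INDEX).nonzero()[0].item()
    return torch.cat(
        [text_embeds[:, :pos], img_feats, text_embeds[:, pos + 1:]], dim=1
    )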
