Setting up a Kafka 4.0 cluster on Windows

References

Apache Kafka

Starting Kafka 4.0 on Windows (ZooKeeper no longer required) - CSDN blog

Kafka 4.0 KRaft Cluster Deployment - CSDN blog

Walkthrough

Note that JDK 17 or later is required.
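
The steps below assume the Kafka 4.0 archive (kafka_2.13-4.0.0) has been extracted into three separate copies, one per node, in a layout like this (these are the example paths used throughout this post; adjust them for your machine):

D:\software\kafka_2.13-4.0.0\
    node1\   (bin\, config\, ...)
    node2\
    node3\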

Edit the configuration file D:\software\kafka_2.13-4.0.0\node1\config\server.properties:

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

############################# Server Basics #############################

# The role of this server. Setting this puts us in KRaft mode
process.roles=broker,controller

# The node id associated with this instance's roles
node.id=1

# List of controller endpoints used connect to the controller cluster
controller.quorum.bootstrap.servers=localhost:19093

############################# Socket Server Settings #############################

# The address the socket server listens on.
# Combined nodes (i.e. those with `process.roles=broker,controller`) must list the controller listener here at a minimum.
# If the broker listener is not defined, the default listener will use a host name that is equal to the value of java.net.InetAddress.getCanonicalHostName(),
# with PLAINTEXT listener name, and port 9092.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
listeners=PLAINTEXT://localhost:19092,CONTROLLER://localhost:19093

# Name of listener used for communication between brokers.
inter.broker.listener.name=PLAINTEXT

# Listener name, hostname and port the broker or the controller will advertise to clients.
# If not set, it uses the value for "listeners".
advertised.listeners=PLAINTEXT://localhost:19092,CONTROLLER://localhost:19093

# A comma-separated list of the names of the listeners used by the controller.
# If no explicit mapping set in `listener.security.protocol.map`, default will be using PLAINTEXT protocol
# This is required if running in KRaft mode.
controller.listener.names=CONTROLLER

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
listener.security.protocol.map=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600

############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=D:\\software\\kafka_2.13-4.0.0\\node1\\kraft-combined-logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings #############################

# The replication factor for the group metadata internal topics "__consumer_offsets", "__share_group_state" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3.
offsets.topic.replication.factor=1
share.coordinator.state.topic.replication.factor=1
share.coordinator.state.topic.min.isr=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

controller.quorum.voters=1@127.0.0.1:19093,2@127.0.0.1:29093,3@127.0.0.1:39093
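
The line appended at the end is easy to overlook: controller.quorum.voters statically defines the controller quorum. Each entry has the form node.id@host:controller_port, and all three nodes must carry exactly the same value:

controller.quorum.voters=1@127.0.0.1:19093,2@127.0.0.1:29093,3@127.0.0.1:39093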

Next, edit D:\software\kafka_2.13-4.0.0\node2\config\server.properties. It is identical to node1's file except for the node ID, the ports, and the log directory; only the lines that differ are shown below:

node.id=2
controller.quorum.bootstrap.servers=localhost:29093
listeners=PLAINTEXT://localhost:29092,CONTROLLER://localhost:29093
advertised.listeners=PLAINTEXT://localhost:29092,CONTROLLER://localhost:29093
log.dirs=D:\\software\\kafka_2.13-4.0.0\\node2\\kraft-combined-logs

Then edit D:\software\kafka_2.13-4.0.0\node3\config\server.properties the same way, again changing only the following lines:

node.id=3
controller.quorum.bootstrap.servers=localhost:39093
listeners=PLAINTEXT://localhost:39092,CONTROLLER://localhost:39093
advertised.listeners=PLAINTEXT://localhost:39092,CONTROLLER://localhost:39093
log.dirs=D:\\software\\kafka_2.13-4.0.0\\node3\\kraft-combined-logs

Generate a random cluster.id (cluster ID). From any one of the node directories, run:

.\bin\windows\kafka-storage.bat random-uuid

An error message may be printed alongside the output, but it does not affect the result. The UUID generated here is xDd-c_vwSCOda91tgPLX2g.

Now use this UUID as the cluster.id to format the log directories:

.\bin\windows\kafka-storage.bat format -t xDd-c_vwSCOda91tgPLX2g -c .\config\server.properties
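
The format step must be repeated for every node, each time with the same cluster ID but that node's own server.properties, so that all three log directories belong to the same cluster. With the layout used in this post, that is:

D:\software\kafka_2.13-4.0.0\node1\bin\windows\kafka-storage.bat format -t xDd-c_vwSCOda91tgPLX2g -c D:\software\kafka_2.13-4.0.0\node1\config\server.properties
D:\software\kafka_2.13-4.0.0\node2\bin\windows\kafka-storage.bat format -t xDd-c_vwSCOda91tgPLX2g -c D:\software\kafka_2.13-4.0.0\node2\config\server.properties
D:\software\kafka_2.13-4.0.0\node3\bin\windows\kafka-storage.bat format -t xDd-c_vwSCOda91tgPLX2g -c D:\software\kafka_2.13-4.0.0\node3\config\server.properties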


Start Kafka. Each node must be started with its own start script and its own server.properties (the batch script at the end of this post launches all three at once):

.\bin\windows\kafka-server-start.bat .\config\server.properties

Create a topic to store your events

.\bin\windows\kafka-topics.bat --create --topic quickstart-events --bootstrap-server localhost:19092,localhost:29092,localhost:39092
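
The command above creates the topic with the broker defaults from server.properties (num.partitions=1 and a replication factor of 1). To confirm that the cluster actually replicates across all three brokers, you can create a second topic with explicit settings and describe it (the topic name replicated-events is just an example):

.\bin\windows\kafka-topics.bat --create --topic replicated-events --partitions 3 --replication-factor 3 --bootstrap-server localhost:19092,localhost:29092,localhost:39092
.\bin\windows\kafka-topics.bat --describe --topic replicated-events --bootstrap-server localhost:19092,localhost:29092,localhost:39092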

Write some events into the topic

.\bin\windows\kafka-console-producer.bat --topic quickstart-events --bootstrap-server localhost:19092,localhost:29092,localhost:39092
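
The console producer waits for input at a > prompt; each line you type becomes one event (Ctrl-C to exit). For example:

>This is my first event
>This is my second event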


Read the events

.\bin\windows\kafka-console-consumer.bat --topic quickstart-events --from-beginning --bootstrap-server localhost:19092,localhost:29092,localhost:39092
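
If you typed the two example events above, the consumer should print them back:

This is my first event
This is my second event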

Connect to the cluster with Offset Explorer, a Kafka GUI client for browsing messages.

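A minimal connection sketch, assuming a recent Offset Explorer release: add a new connection, give the cluster any name, and on the Advanced tab set the Bootstrap servers field to the three brokers:

localhost:19092,localhost:29092,localhost:39092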

Finally, write a .bat script that starts the whole Kafka cluster:

start.bat

@echo off
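REM Launch each node in its own console window, passing that node's own server.properties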
start cmd /k "D:\software\kafka_2.13-4.0.0\node1\bin\windows\kafka-server-start.bat D:\software\kafka_2.13-4.0.0\node1\config\server.properties"
start cmd /k "D:\software\kafka_2.13-4.0.0\node2\bin\windows\kafka-server-start.bat D:\software\kafka_2.13-4.0.0\node2\config\server.properties"
start cmd /k "D:\software\kafka_2.13-4.0.0\node3\bin\windows\kafka-server-start.bat D:\software\kafka_2.13-4.0.0\node3\config\server.properties"
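
A companion stop script can be saved alongside it (a sketch, not from the original post; stop.bat is a hypothetical name, and this assumes kafka-server-stop.bat behaves like its Linux counterpart and terminates every running Kafka broker process on the machine, so one call stops all three nodes):

@echo off
REM Assumption: kafka-server-stop.bat signals all local Kafka broker processes
D:\software\kafka_2.13-4.0.0\node1\bin\windows\kafka-server-stop.bat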
