Installing Kafka v1.1.1 on Kunpeng/Kylin

A project required installing Kafka v1.1.1 on a Kunpeng server running Kylin Linux, so the installation and configuration process is recorded here.

Environment

# Show kernel and system details
[root@test kafka_2.12-1.1.1]# uname -a
Linux test.novalocal 4.19.148+ #1 SMP Mon Oct 5 22:04:46 EDT 2020 aarch64 aarch64 aarch64 GNU/Linux
# Show OS release information
[root@test kafka_2.12-1.1.1]# cat /etc/kylin-release 
Kylin Linux Advanced Server release V10 (Tercel)
# Count logical CPUs
[root@test kafka_2.12-1.1.1]# cat /proc/cpuinfo| grep "processor"| wc -l
32
# Show CPU information
[root@test kafka_2.12-1.1.1]# lscpu
Architecture:                    aarch64
CPU op-mode(s):                  64-bit
Byte Order:                      Little Endian
CPU(s):                          32
On-line CPU(s) list:             0-31
Thread(s) per core:              1
Core(s) per socket:              16
Socket(s):                       2
NUMA node(s):                    2
Vendor ID:                       HiSilicon
Model:                           0
Model name:                      Kunpeng-920
Stepping:                        0x1
CPU max MHz:                     2400.0000
CPU min MHz:                     2400.0000
BogoMIPS:                        200.00
L1d cache:                       2 MiB
L1i cache:                       2 MiB
L2 cache:                        16 MiB
L3 cache:                        64 MiB
NUMA node0 CPU(s):               0-15
NUMA node1 CPU(s):               16-31
Vulnerability Itlb multihit:     Not affected
Vulnerability L1tf:              Not affected
Vulnerability Mds:               Not affected
Vulnerability Meltdown:          Not affected
Vulnerability Spec store bypass: Not affected
Vulnerability Spectre v1:        Mitigation; __user pointer sanitization
Vulnerability Spectre v2:        Not affected
Vulnerability Srbds:             Not affected
Vulnerability Tsx async abort:   Not affected
Flags:                           fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma dcpop asimddp asimdfhm
# Show the Java version
[root@test kafka_2.12-1.1.1]# java -version
openjdk version "1.8.0_242"
OpenJDK Runtime Environment (build 1.8.0_242-b08)
OpenJDK 64-Bit Server VM (build 25.242-b08, mixed mode)
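
Kafka 1.1.1 runs on JDK 8, which is already present here. If the JDK were missing, Kylin V10 is RPM-based, so it can typically be installed from the system repositories; a minimal sketch (the package name is an assumption, adjust to what your repository actually provides):

yum install -y java-1.8.0-openjdk java-1.8.0-openjdk-devel
# Confirm the runtime is on the PATH
java -version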

Download

Download the appropriate release from the Apache site at https://kafka.apache.org/downloads and pick version 1.1.1. The archive used here is kafka_2.12-1.1.1.tgz, available at https://archive.apache.org/dist/kafka/1.1.1/kafka_2.12-1.1.1.tgz
The version number has two parts: the first is the Scala version the build was compiled with, the second is the Kafka version itself. The Kafka line is already at 3.x; 1.1.1 is used here only because this project specifically requires it, and a newer release is recommended otherwise.
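
If the server has outbound network access, the archive can be fetched directly on the machine. A minimal sketch, assuming the /data/public/kafka directory used in the deployment step below:

cd /data/public/kafka
wget https://archive.apache.org/dist/kafka/1.1.1/kafka_2.12-1.1.1.tgz
# If a .sha512 checksum file is published alongside this release, it can be used to verify the download:
# wget https://archive.apache.org/dist/kafka/1.1.1/kafka_2.12-1.1.1.tgz.sha512 && sha512sum kafka_2.12-1.1.1.tgz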

Scala 2.11 and Scala 2.12 are two major releases of the Scala language, and the differences that matter when picking a Kafka build are roughly these:
1. Java target: Scala 2.12 requires Java 8 and emits Java 8 bytecode, while 2.11 also runs on Java 6/7.
2. Performance and bytecode: 2.12 compiles lambdas via invokedynamic and traits to interfaces with default methods, which shrinks the generated bytecode and generally improves performance; 2.11 predates these optimizations.
3. Binary compatibility: 2.11 and 2.12 are not binary compatible, so any Scala library (including the Kafka broker itself) must be built against a specific Scala version, which is exactly why the Kafka artifact name carries the Scala version as its first component.
4. Community support: 2.12 and newer are the mainstream versions with active library support; support for 2.11 has largely wound down.
In short, for a broker-only installation the Scala version mostly does not matter to the operator; kafka_2.12-1.1.1 is used here. It only becomes relevant if your own Scala code shares a classpath with Kafka, in which case the Scala versions must match.

Deployment

1. Extract the archive:
tar -zxvf kafka_2.12-1.1.1.tgz
2. The extracted path is:
[root@test kafka_2.12-1.1.1]# pwd
/data/public/kafka/kafka_2.12-1.1.1
3. Edit config/server.properties; only the listeners and advertised.listeners entries need to be changed:

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
listeners=PLAINTEXT://192.168.31.100:9092

# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured.  Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
advertised.listeners=PLAINTEXT://192.168.31.100:9092

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=409600000

############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/tmp/kafka-logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings #############################

# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability, such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=localhost:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000

############################# Group Coordinator Settings #############################

# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0
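
Only the two listener keys differ from the stock file, so they can also be patched in place instead of editing by hand. A small sketch, assuming GNU sed and that 192.168.31.100 is this broker's address:

cd /data/public/kafka/kafka_2.12-1.1.1
# Set both listener entries to this host's address (uncommenting them if needed)
sed -i 's|^#\?listeners=.*|listeners=PLAINTEXT://192.168.31.100:9092|' config/server.properties
sed -i 's|^#\?advertised\.listeners=.*|advertised.listeners=PLAINTEXT://192.168.31.100:9092|' config/server.properties
# Confirm the result
grep -E '^(listeners|advertised\.listeners)=' config/server.properties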

Startup

1. Start Zookeeper first:
nohup /data/public/kafka/kafka_2.12-1.1.1/bin/zookeeper-server-start.sh /data/public/kafka/kafka_2.12-1.1.1/config/zookeeper.properties > /dev/null 2>&1 &
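
Before starting the broker, it is worth confirming that Zookeeper is answering. A quick check with the "ruok" four-letter command (assuming nc is installed and four-letter commands are not disabled):

echo ruok | nc localhost 2181
# A healthy Zookeeper replies: imok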

2. Then start Kafka:
nohup /data/public/kafka/kafka_2.12-1.1.1/bin/kafka-server-start.sh /data/public/kafka/kafka_2.12-1.1.1/config/server.properties > /dev/null 2>&1 &
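
Because both processes are started with their console output discarded, the broker log is the place to look if anything fails. By default it is written under the installation's logs directory:

tail -n 50 /data/public/kafka/kafka_2.12-1.1.1/logs/server.log
# A successful startup ends with a line like:
# INFO [KafkaServer id=0] started (kafka.server.KafkaServer)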

3. Check that both services are listening:

[root@test kafka_2.12-1.1.1]# netstat -nltp | grep -E '(2181|9092)'
tcp6       0      0 192.168.31.100:9092     :::*                    LISTEN      970022/java         
tcp6       0      0 :::2181                 :::*                    LISTEN      966577/java

This confirms Kafka is up: 2181 is the Zookeeper port and 9092 is the Kafka port.
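
A quick end-to-end smoke test can be run with the console tools shipped in bin/ (the topic name "test" is arbitrary; adjust the broker address to your own):

cd /data/public/kafka/kafka_2.12-1.1.1
# Create a single-partition topic (Kafka 1.1.1 still manages topics via Zookeeper)
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
# Produce a few messages (type lines, then Ctrl+C to exit)
bin/kafka-console-producer.sh --broker-list 192.168.31.100:9092 --topic test
# Read them back from the beginning (Ctrl+C to exit)
bin/kafka-console-consumer.sh --bootstrap-server 192.168.31.100:9092 --topic test --from-beginning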

Opening client access

firewall-cmd --zone=public --add-port=2181/tcp --permanent
firewall-cmd --zone=public --add-port=9092/tcp --permanent
firewall-cmd --reload
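
The opened ports can then be confirmed with:

firewall-cmd --zone=public --list-ports
# Expected to include 2181/tcp and 9092/tcp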

Accessing with Offset Explorer 2.0

Once the connection is configured, connect to the server; Kafka can then be used to produce and consume messages.
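
If Offset Explorer keeps spinning while connecting, first confirm that both ports are reachable from the client machine (assuming nc is available there):

# Run these on the client machine, not on the server
nc -vz 192.168.31.100 2181
nc -vz 192.168.31.100 9092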
