zookeeper+kafka+logstash+elasticsearch+kibana

Background

1. Kafka is used as a buffer because a single Logstash instance copes poorly once log volume grows.

2. To make the whole call chain searchable across multiple Feign hops, a single ID is carried through every layer: Logback's MDC stores the unique identifier, and the Feign call chain passes it along in a request header, named TID here.

Download links:

ZK+Kafka

https://mirrors.bfsu.edu.cn/apache/kafka/2.7.0/kafka_2.13-2.7.0.tgz

https://mirrors.bfsu.edu.cn/apache/zookeeper/zookeeper-3.7.0/apache-zookeeper-3.7.0-bin.tar.gz

ELK

https://artifacts.elastic.co/downloads/kibana/kibana-7.12.0-windows-x86_64.zip

https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.12.0-windows-x86_64.zip

https://artifacts.elastic.co/downloads/logstash/logstash-7.12.0-windows-x86_64.zip


Add the corresponding handling code in an interceptor:

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.slf4j.MDC;
import org.springframework.stereotype.Component;
import org.springframework.util.StringUtils;
import org.springframework.web.servlet.HandlerInterceptor;

import lombok.extern.slf4j.Slf4j;

// RequestContext and UUIDUtil are project-specific helper classes.
@Component
@Slf4j
public class ContextInterceptor implements HandlerInterceptor {

    @Override
    public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) {
        RequestContext context = RequestContext.getCurrentContext();
        context.reset();
        log.debug("traceId:" + MDC.get("traceId"));
        // Take the trace id from the MDC, then from the header, then from a request
        // parameter, and finally generate a new one if none was found.
        String requestId = MDC.get("traceId");
        requestId = StringUtils.isEmpty(requestId) ? request.getHeader(RequestContext.REQUEST_ID) : requestId;
        requestId = StringUtils.isEmpty(requestId) ? request.getParameter(RequestContext.REQUEST_ID) : requestId;
        requestId = StringUtils.isEmpty(requestId) ? UUIDUtil.uuid() : requestId;
        MDC.put("TID", requestId);
        return true;
    }
}
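The interceptor above restores or generates the TID on the server side. On the calling side, a Feign request interceptor can copy the TID from the MDC into the outgoing header so the next service picks it up. The following is a minimal sketch, not from the original article, assuming the standard feign.RequestInterceptor API and a header literally named TID; adjust the key if your project uses RequestContext.REQUEST_ID instead.

import feign.RequestInterceptor;
import feign.RequestTemplate;
import org.slf4j.MDC;
import org.springframework.stereotype.Component;

// Copies the trace id stored in the MDC into every outgoing Feign request,
// so downstream services can read it in their own ContextInterceptor.
@Component
public class TidFeignInterceptor implements RequestInterceptor {

    @Override
    public void apply(RequestTemplate template) {
        String tid = MDC.get("TID"); // hypothetical key, matching the MDC.put above
        if (tid != null && !tid.isEmpty()) {
            template.header("TID", tid);
        }
    }
}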

Configure the logging configuration file logback-spring.xml:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <!-- springProfile selects the configuration for the active environment: the block whose
         name matches spring.profiles.active is enabled. This block applies to the "local" profile. -->
    <springProfile name="local">
        <!-- springProperty reads values from the Spring Environment, here from application.yml -->
        <springProperty scope="context" name="module" source="spring.application.name"
                        defaultValue="undefined"/>
        <springProperty scope="context" name="bootstrapServers" source="spring.kafka.bootstrap-servers"
                        defaultValue="127.0.0.1:9092"/>

        <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
            <!-- encoders are assigned the type
                 ch.qos.logback.classic.encoder.PatternLayoutEncoder by default -->
            <encoder>
                <pattern>%boldYellow(${module})|%d|%highlight(%-5level)|%X{TID}|%cyan(%logger{15}) - %msg %n</pattern>
            </encoder>
        </appender>

        <!-- Kafka appender configuration -->
        <appender name="kafka" class="com.github.danielwegener.logback.kafka.KafkaAppender">
            <encoder>
                <pattern>${module}|%d|%-5level|%X{TID}|%logger{15} - %msg</pattern>
            </encoder>
            <topic>test</topic>
            <keyingStrategy class="com.github.danielwegener.logback.kafka.keying.NoKeyKeyingStrategy"/>
            <deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.AsynchronousDeliveryStrategy"/>
            <!-- Optional parameter to use a fixed partition -->
            <!-- <partition>0</partition> -->
            <!-- Optional parameter to include log timestamps into the kafka message -->
            <!-- <appendTimestamp>true</appendTimestamp> -->
            <!-- each <producerConfig> translates to regular kafka-client config (format: key=value) -->
            <!-- producer configs are documented here: https://kafka.apache.org/documentation.html#newproducerconfigs -->
            <!-- bootstrap.servers is the only mandatory producerConfig -->
            <producerConfig>bootstrap.servers=${bootstrapServers}</producerConfig>
            <!-- Fall back to the console if Kafka is unavailable -->
            <appender-ref ref="STDOUT"/>
        </appender>

        <!-- Loggers for this project -->
        <!--
        <logger name="org.springframework.test" level="INFO">
            <appender-ref ref="kafka"/>
        </logger>
        -->
        <logger name="com.springcloudsite" level="INFO">
            <appender-ref ref="kafka"/>
        </logger>

        <root level="info">
            <appender-ref ref="STDOUT"/>
        </root>
    </springProfile>
</configuration>

Pattern configuration notes

    pattern: the logback output pattern

    %boldYellow(${module}): module name, in bold yellow

    %d: date and time

    %highlight(%-5level): highlighted log level, e.g. INFO, ERROR, TRACE

    %X{TID}: the trace ID used for request tracking

    %cyan(%logger{15}): abbreviated logger (class name) path, in cyan

    %msg %n: the log message itself, followed by a newline

The printed output looks like this:
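Ignoring the ANSI color codes, a hypothetical line produced by the STDOUT pattern above (module, timestamp, level, TID, logger, message) would look roughly like:

order-service|2021-04-12 10:15:30,123|INFO |9f8e7d6c5b4a43219f8e7d6c5b4a4321|c.s.order.OrderService - create order ok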


Configuring ZK + Kafka

1. Install the JDK

1.1 Download the JDK from http://www.oracle.com/technetwork/java/javase/downloads/index.html
1.2 After installation, add the following environment variables (right-click "My Computer" -> "Advanced system settings" -> "Environment Variables"):

  • JAVA_HOME: C:\Program Files\Java\jdk1.8.0_171 (the JDK installation path)
  • Path: append ";%JAVA_HOME%\bin" to the existing value

1.3 Open cmd and run "java -version" to check the installed Java version.

2. Install ZooKeeper

Kafka depends on ZooKeeper, so ZooKeeper must be installed and running before Kafka is started.

2.1 Download the archive: http://zookeeper.apache.org/releases.html

2.2 Extract the archive

2.3 Open zookeeper-3.4.13\conf and rename zoo_sample.cfg to zoo.cfg

2.4 Open zoo.cfg in a text editor

2.5 Change the dataDir value to "./zookeeper-3.4.13/data"

2.6 Add the following system variables:

  • ZOOKEEPER_HOME: C:\Users\localadmin\CODE\zookeeper-3.4.13 (the ZooKeeper directory)
  • Path: append ";%ZOOKEEPER_HOME%\bin;" to the existing value

2.7 Run ZooKeeper: open cmd and execute zkserver

Do not close this cmd window.

3. Install and run Kafka

3.1 Download the archive: http://kafka.apache.org/downloads.html

3.2 Extract the archive

3.3 Open kafka_2.11-2.0.0\config

3.4 Open server.properties in a text editor

3.5 Change the log.dirs value to "./logs"

3.6 Open cmd

3.7 Change into the Kafka directory: cd C:\Users\localadmin\CODE\kafka_2.11-2.0.0

3.8 Run: .\bin\windows\kafka-server-start.bat .\config\server.properties

Do not close this cmd window.

4. Create a topic

4.1 Open cmd and cd into C:\Users\localadmin\CODE\kafka_2.11-2.0.0\bin\windows

4.2 Create a topic: kafka-topics.bat --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
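Note that the topic name test created here must match the <topic> element in logback-spring.xml above and the topics list in logstash.conf below; otherwise the log messages will never reach Logstash.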

5. Start a producer:

cd C:\Users\localadmin\CODE\kafka_2.11-2.0.0\bin\windows
kafka-console-producer.bat --broker-list localhost:9092 --topic test

6. Start a consumer:

cd C:\Users\localadmin\CODE\kafka_2.11-2.0.0\bin\windows
kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic test --from-beginning

7. Test:
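For example, typing a line such as hello elk into the producer window should make the same line appear in the consumer window, confirming that the broker and the test topic work before Logstash is wired in.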

Configuring the ELK stack

kibana.yml

# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "localhost"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""

# Specifies whether Kibana should rewrite requests that are prefixed with
# `server.basePath` or require that they are rewritten by your reverse proxy.
# This setting was effectively always `false` before Kibana 6.3 and will
# default to `true` starting in Kibana 7.0.
#server.rewriteBasePath: false

# Specifies the public URL at which Kibana is available for end users. If
# `server.basePath` is configured this URL should end with the same basePath.
#server.publicBaseUrl: ""

# The maximum payload size in bytes for incoming server requests.
#server.maxPayloadBytes: 1048576

# The Kibana server's name.  This is used for display purposes.
#server.name: "your-hostname"

# The URLs of the Elasticsearch instances to use for all your queries.
elasticsearch.hosts: ["http://localhost:9200"]

# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
#kibana.index: ".kibana"

# The default application to load.
#kibana.defaultAppId: "home"

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
#elasticsearch.username: "kibana_system"
#elasticsearch.password: "pass"

# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key

# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files are used to verify the identity of Kibana to Elasticsearch and are required when
# xpack.security.http.ssl.client_authentication in Elasticsearch is set to required.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key

# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]

# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
elasticsearch.requestTimeout: 30000

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 30000

# Logs queries sent to Elasticsearch. Requires logging.verbose set to true.
#elasticsearch.logQueries: false

# Specifies the path where Kibana creates the process ID file.
#pid.file: /run/kibana/kibana.pid

# Enables you to specify a file where Kibana stores log output.
#logging.dest: stdout

# Set the value of this setting to true to suppress all logging output.
#logging.silent: false

# Set the value of this setting to true to suppress all logging output other than error messages.
#logging.quiet: false

# Set the value of this setting to true to log all events, including system usage information
# and all requests.
#logging.verbose: false

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000.
#ops.interval: 5000

# Specifies locale to be used for all localizable strings, dates and number formats.
# Supported languages are the following: English - en , by default , Chinese - zh-CN .
#i18n.locale: "en"
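The only values changed from the defaults here are server.port, server.host, elasticsearch.hosts and elasticsearch.requestTimeout; everything else is left commented out.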

Then start Kibana from its bin directory, either by double-clicking kibana.bat or by launching it from a cmd window.

Kibana's startup log then appears in the window.

Configure elasticsearch.yml

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
cluster.name: "docker-cluster"
node.name: "node-1"
node.master: true
network.host: 0.0.0.0
#xpack.license.self_generated.type: trial
#xpack.security.enabled: true
#xpack.monitoring.collection.enabled: true
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
cluster.initial_master_nodes: ["node-1"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
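Only a handful of values are set here: cluster.name and node.name identify the single node, network.host: 0.0.0.0 makes it reachable on all interfaces, http.port: 9200 is the HTTP port, and cluster.initial_master_nodes must list the same name as node.name ("node-1") so the single-node cluster can bootstrap itself.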

Start elasticsearch.bat from the bin directory.

Elasticsearch's startup log then appears in the window.

Configure logstash.conf

# Sample Logstash configuration for creating a simple
# Beats -> Logstash -> Elasticsearch pipeline.

input {
  kafka {
    bootstrap_servers => "localhost:9092"
    topics => ["test"]
    group_id => "test"
  }
}

filter {
  mutate {
    split => { "message" => "|" }
  }
  if [message][0] {
    mutate {
      add_field => { "apiname" => "%{[message][0]}" }
    }
  }
  if [message][1] {
    mutate {
      add_field => { "current_time" => "%{[message][1]}" }
    }
  }
  if [message][2] {
    mutate {
      add_field => { "current_level" => "%{[message][2]}" }
    }
  }
  if [message][3] {
    mutate {
      add_field => { "traceid" => "%{[message][3]}" }
    }
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    #index => "local-purchase-order | %{+YYYY-MM-dd}"
    index => "logstash-%{+YYYY-MM-dd}"
    #template_name => "logstash"
    #template_overwrite => true
    #index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    #user => "elastic"
    #password => "changeme"
  }
  stdout {
    codec => rubydebug
  }
}
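The mutate split on "|" mirrors the Kafka appender pattern ${module}|%d|%-5level|%X{TID}|%logger{15} - %msg from logback-spring.xml: [message][0] carries the module name, [message][1] the timestamp, [message][2] the level and [message][3] the TID, and each is copied into its own field (apiname, current_time, current_level, traceid) so that Kibana can filter on them.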

Configure logstash.yml

#/usr/share/logstash/config/logstash.yml
#jvm.options  log4j2.properties  logstash-sample.conf  logstash.yml  pipelines.yml  startup.options
http.host: "0.0.0.0"
# [ "http://elasticsearch:9200" ]
xpack.monitoring.elasticsearch.hosts: ${ELASTICSEARCH_URL}
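Note that ${ELASTICSEARCH_URL} is an environment-variable reference; for a purely local run, either set ELASTICSEARCH_URL to http://localhost:9200 or comment out the xpack.monitoring line, since Logstash may refuse to start if the variable is undefined.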

Start Logstash from the command line.

Change into the bin directory:

D:\app\elk\logstash\bin

Run: logstash -f D:\app\elk\logstash\config\logstash.conf

Finally, open the following addresses:

http://localhost:9600/

http://localhost:9200/

http://localhost:5601/

and verify that each component responds.
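Roughly what to expect: http://localhost:9600/ returns Logstash node information as JSON, http://localhost:9200/ returns an Elasticsearch banner with the cluster name, and http://localhost:5601/ opens the Kibana UI, where an index pattern such as logstash-* can be created to browse the collected log entries and filter them by the traceid field.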


