Contents
1. Component overview
2. Project environment
2.1 Component versions
2.2 Docker-Compose variable configuration
2.3 Docker-Compose service configuration
3. The four services declared in services
3.1 The Elasticsearch service
3.2 The Logstash service
3.3 The Kibana service
3.4 The Filebeat service
4. Usage
4.1 Method 1
4.2 Method 2
5. Startup
1. Component overview
The ELK Stack used here consists of Elasticsearch, Logstash, Kibana, and Filebeat.
Each component plays the following role:
- Filebeat: collects log data from files and other sources;
- Logstash: filters and transforms the log data;
- Elasticsearch: stores and indexes the logs;
- Kibana: the user interface.
The relationship between the components is shown in the figure below:
2. Project environment
Because Elasticsearch is written in Java, a JDK must be installed, and it must be JDK 1.8 or later.
# install
sudo yum install java-11-openjdk -y
# check the Java version after installation
java -version
Output:
[root@VM-0-5-centos config]# java --version
openjdk 11.0.16.1 2022-08-12 LTS
OpenJDK Runtime Environment (Red_Hat-11.0.16.1.1-1.el7_9) (build 11.0.16.1+1-LTS)
OpenJDK 64-Bit Server VM (Red_Hat-11.0.16.1.1-1.el7_9) (build 11.0.16.1+1-LTS, mixed mode, sharing)
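If you script this environment check, the major version can be parsed out of the banner line. A minimal sketch, fed a canned banner string so it runs even on a machine without a JDK:

```shell
# Parse the major version from a `java -version`-style banner line.
# The banner below is canned input, so this runs without a JDK installed.
banner='openjdk version "11.0.16.1" 2022-08-12 LTS'
major=$(printf '%s\n' "$banner" | awk -F'"' '{print $2}' | cut -d. -f1)
echo "$major"   # prints 11
```

Replacing the canned banner with `java -version 2>&1 | grep version` turns this into a live check.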
2.1 Component versions
- OS: CentOS 7
- Docker: 20.10.18
- Docker-Compose: 2.4.1
- ELK version: 7.4.2
- Filebeat: 7.4.2
- Java: 11.0.16.1
2.2 Docker-Compose variable configuration
First, the ES version used by all components is declared centrally in the .env configuration file:
.env
ES_VERSION=7.4.2
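docker-compose reads the .env file next to the compose file automatically and substitutes `${ES_VERSION}` into the image references. The substitution can be rehearsed in plain shell (7.4.2, the ELK version from section 2.1, is used as an example; the scratch directory stands in for the project directory):

```shell
# Recreate the .env in a scratch directory and expand the image tag the
# same way docker-compose does for `image: ...:${ES_VERSION}`.
tmpdir=$(mktemp -d)
echo 'ES_VERSION=7.4.2' > "$tmpdir/.env"
. "$tmpdir/.env"
image="docker.elastic.co/elasticsearch/elasticsearch:${ES_VERSION}"
echo "$image"   # prints docker.elastic.co/elasticsearch/elasticsearch:7.4.2
```

Keeping the version in one place means all four images stay on the same release when you upgrade.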
2.3 Docker-Compose service configuration
Create the Docker-Compose configuration file:
version: '3.4'

services:
  elasticsearch:
    image: "docker.elastic.co/elasticsearch/elasticsearch:${ES_VERSION}"
    environment:
      - discovery.type=single-node
    volumes:
      - /etc/localtime:/etc/localtime
      - /elk/elasticsearch/data:/usr/share/elasticsearch/data
      - /elk/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - /elk/elasticsearch/plugins:/usr/share/elasticsearch/plugins
    ports:
      - "9200:9200"
      - "9300:9300"
  logstash:
    depends_on:
      - elasticsearch
    image: "docker.elastic.co/logstash/logstash:${ES_VERSION}"
    volumes:
      - /elk/logstash/config/conf.d/logstash.conf:/usr/share/logstash/pipeline/logstash.conf
    ports:
      - "5044:5044"
    links:
      - elasticsearch
  kibana:
    depends_on:
      - elasticsearch
    image: "docker.elastic.co/kibana/kibana:${ES_VERSION}"
    volumes:
      - /etc/localtime:/etc/localtime
      # keep kibana.yml on the host so it can be customized later (e.g. switching the UI language)
      - /elk/kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml
    ports:
      - "5601:5601"
    links:
      - elasticsearch
  filebeat:
    depends_on:
      - elasticsearch
      - logstash
    image: "docker.elastic.co/beats/filebeat:${ES_VERSION}"
    user: root  # must run as root
    environment:
      - strict.perms=false
    volumes:
      - /elk/filebeat/config/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
      # mapped into the container (as the data source)
      - /elk/filebeat/logs:/usr/share/filebeat/logs:rw
      - /elk/filebeat/data:/usr/share/filebeat/data:rw
    # link by service name so a container restart (and the IP change that
    # comes with it) does not break connectivity
    links:
      - logstash
3. The four services declared in services
- elasticsearch
- logstash
- kibana
- filebeat
3.1 The Elasticsearch service
Create the directories that will be mounted into the Docker container.
Note: run chmod -R 777 /elk/elasticsearch so the container has access.
mkdir -p /elk/elasticsearch/config/
mkdir -p /elk/elasticsearch/data/
mkdir -p /elk/elasticsearch/plugins/
echo "http.host: 0.0.0.0">>/elk/elasticsearch/config/elasticsearch.yml
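The same steps can be rehearsed under a scratch root before touching /elk, which also confirms what the generated elasticsearch.yml contains (only the scratch root differs from the commands above):

```shell
# Rehearse the directory layout and config generation under a temp root.
root=$(mktemp -d)
mkdir -p "$root/elasticsearch/config" "$root/elasticsearch/data" "$root/elasticsearch/plugins"
echo "http.host: 0.0.0.0" >> "$root/elasticsearch/config/elasticsearch.yml"
chmod -R 777 "$root/elasticsearch"
cat "$root/elasticsearch/config/elasticsearch.yml"   # prints http.host: 0.0.0.0
```

The http.host: 0.0.0.0 setting makes ES listen on all interfaces inside the container, which is what allows the published 9200/9300 ports to work.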
A few points in the elasticsearch service configuration deserve attention:
- discovery.type=single-node: runs ES cluster discovery in single-node mode;
- /etc/localtime:/etc/localtime: keeps the container clock in sync with the host;
- /elk/elasticsearch/data:/usr/share/elasticsearch/data: maps the ES data onto the host so it is persisted;
- /elk/elasticsearch/plugins:/usr/share/elasticsearch/plugins: mounts the plugins directory from the host;
- /elk/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml: mounts the configuration file from the host.
3.2 The Logstash service
Create the directory that will be mounted into the Docker container.
Note: run chmod -R 777 /elk/logstash so the container has access.
mkdir -p /elk/logstash/config/conf.d
One point in the logstash service configuration deserves attention:
- /elk/logstash/config/conf.d/logstash.conf:/usr/share/logstash/pipeline/logstash.conf: maps the logstash configuration on the host into the logstash container.
Below is the Logstash configuration; logstash.conf can be customized as needed:
input {
  # input from Beats
  beats {
    # listening port
    port => "5044"
  }
}

output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    index => "test"
  }
  stdout { codec => rubydebug }
}
Here the original TCP input has been replaced by events shipped from Filebeat, and the index is fixed to test.
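If a single fixed test index becomes limiting, the output block can instead derive the index name from event metadata. A sketch (the %{[@metadata][beat]} and %{[@metadata][version]} fields are standard Beats metadata; verify them against your Filebeat version):

```
output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    # one index per beat and per day, e.g. filebeat-7.4.2-2022.09.01
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}
```

Daily indices make retention easy: old logs can be dropped by deleting whole indices.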
3.3 The Kibana service
Create the directory that will be mounted into the Docker container.
Note: run chmod -R 777 /elk/kibana so the container has access.
mkdir -p /elk/kibana/config
A few points in the kibana service configuration deserve attention:
- /elk/kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml: configures the ES address;
- /etc/localtime:/etc/localtime: keeps the container clock in sync with the host.
Edit the kibana.yml configuration file and add (or change) the setting i18n.locale: "zh-CN" to localize the UI:
[root@VM-0-5-centos ~]# cd /elk/kibana/config

[root@VM-0-5-centos config]# cat kibana.yml
# Default Kibana configuration for docker target
server.name: kibana
server.host: "0"
elasticsearch.hosts: [ "http://elasticsearch:9200" ]
xpack.monitoring.ui.container.elasticsearch.enabled: true
i18n.locale: "zh-CN" # switch the UI to Chinese
3.4 The Filebeat service
Create the directories that will be mounted into the Docker container.
Note: run chmod -R 777 /elk/filebeat so the container has access.
mkdir -p /elk/filebeat/config
mkdir -p /elk/filebeat/logs
mkdir -p /elk/filebeat/data
A few points in the Filebeat service configuration deserve attention:
- user: root together with the environment variable strict.perms=false: without these, the container may fail to start due to permission problems.
volumes:
- - /elk/filebeat/config/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
+ - <your_log_path>/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
- - /elk/filebeat/logs:/usr/share/filebeat/logs:rw
+ - <your_log_path>:/usr/share/filebeat/logs:rw
- - /elk/filebeat/data:/usr/share/filebeat/data:rw
+ - <your_data_path>:/usr/share/filebeat/data:rw
You also need to create the Filebeat configuration file:
filebeat.yml
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      # all .log files under this directory inside the container
      - /usr/share/filebeat/logs/*.log
    multiline.pattern: ^\[
    multiline.negate: true
    multiline.match: after

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

setup.template.settings:
  index.number_of_shards: 1

setup.dashboards.enabled: false

setup.kibana:
  host: "http://kibana:5601"

# ship directly to ES
#output.elasticsearch:
#  hosts: ["http://es-master:9200"]
#  index: "filebeat-%{[beat.version]}-%{+yyyy.MM.dd}"

# ship to Logstash
output.logstash:
  hosts: ["logstash:5044"]

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
The above is a sample filebeat configuration; adapt it to your needs in practice.
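The multiline settings above treat every line beginning with [ as the start of a new event and append non-matching lines (e.g. Java stack-trace continuations) to the previous one. A quick way to see how many events a sample would collapse into is to count matches of the same ^\[ pattern:

```shell
# Count how many of the sample lines match multiline.pattern (^\[),
# i.e. how many log events these three lines would become.
printf '%s\n' \
  '[2022-09-01 12:00:00] ERROR something failed' \
  '  at com.example.Main.run(Main.java:42)' \
  '[2022-09-01 12:00:01] INFO recovered' \
| grep -c '^\['   # prints 2
```

Here the stack-trace line is folded into the first event, so three input lines yield two events.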
4. Usage
4.1 Method 1
Read this before use:
① Changing the ELK version
Edit the ES_VERSION field in .env to select the ELK version you want to use.
② Logstash configuration
Edit logstash.conf to match the logs you need to process.
③ Changing the ES volume mapping
In the elasticsearch service of the docker-compose file, edit volumes and replace the host path with your actual path:
volumes:
  - /etc/localtime:/etc/localtime
- - /elk/elasticsearch/data:/usr/share/elasticsearch/data
+ - [your_path]:/usr/share/elasticsearch/data
Then change the ownership of the host directory (uid 1000 is the elasticsearch user inside the official image):
sudo chown -R 1000:1000 [your_path]
④ Changing the filebeat volume mapping
In the filebeat service of the docker-compose file, edit volumes and replace the host paths with your actual paths:
volumes:
  - ./filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
- - /elk/filebeat/logs:/usr/share/filebeat/logs:rw
+ - <your_log_path>:/usr/share/filebeat/logs:rw
- - /elk/filebeat/data:/usr/share/filebeat/data:rw
+ - <your_data_path>:/usr/share/filebeat/data:rw
⑤ Changing the Filebeat configuration
Edit filebeat.yml to match your needs.
A full, annotated Filebeat configuration reference follows:
[vagrant@localhost filebeat-7.7.1]$ vi filebeat.yml
###################### Filebeat Configuration Example #########################

#=========================== Filebeat inputs =============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /home/vagrant/apache-tomcat-9.0.20/logs/catalina.*.out
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^INFO','^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  multiline.pattern: ^\[

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  multiline.negate: true

  # Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
  # that was (not) matched before or after or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to to next in Logstash
  multiline.match: after

#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "192.168.0.140:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

#============================= Elastic Cloud ==================================

# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["192.168.0.140:9200"]

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["192.168.0.140:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

#============================== X-Pack Monitoring ===============================
# filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster. This requires xpack monitoring to be enabled in Elasticsearch. The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#monitoring.enabled: false

# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:

#================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true
4.2 Method 2
cd ELK
# edit ES_HOST, LOG_HOST and KB_HOST in run.sh
chmod +x ./run.sh   # make the script executable
./run.sh            # run the script
5. Startup
Then start everything with docker-compose:
docker-compose up -d
Creating network "docker_repo_default" with the default driver
Creating docker_repo_elasticsearch_1 ... done
Creating docker_repo_kibana_1        ... done
Creating docker_repo_logstash_1      ... done
Creating docker_repo_filebeat_1      ... done