Installing ELK 6.7.1 with Docker - collecting Java logs
If you are interested in operations (O&M) courses, you can search for my account 運維實戰課程 on Bilibili, AcFun, or CSDN and follow me for more free hands-on O&M video tutorials.
0. Plan
192.168.171.130    tomcat logs + filebeat
192.168.171.131    tomcat logs + filebeat
192.168.171.128    redis
192.168.171.129    logstash
192.168.171.128    es1
192.168.171.129    es2
192.168.171.132    kibana
1. Install the ES 6.7.1 cluster and the head plugin with Docker (on 192.168.171.128 - es1 and 192.168.171.129 - es2)
Install ES 6.7.1 and the es6.7.1-head plugin on 192.168.171.128:
1) Install Docker 19.03.2:
[root@localhost ~]# docker info
.......
Server Version: 19.03.2
[root@localhost ~]# sysctl -w vm.max_map_count=262144   #the default mmap count available to the elasticsearch user is too small; at least 262144 is required
[root@localhost ~]# sysctl -a |grep vm.max_map_count    #verify
vm.max_map_count = 262144
[root@localhost ~]# vim /etc/sysctl.conf
vm.max_map_count=262144
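The entry in /etc/sysctl.conf only takes effect after it is reloaded; a quick way to apply and confirm it (a minimal check using the standard sysctl tool):
[root@localhost ~]# sysctl -p                     #reload /etc/sysctl.conf so the setting also survives a reboot
[root@localhost ~]# sysctl vm.max_map_count       #should print: vm.max_map_count = 262144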
2) Install ES 6.7.1:
Upload the ES package to the /data directory:
[root@localhost ~]# cd /data/
[root@localhost data]# ls es-6.7.1.tar.gz
es-6.7.1.tar.gz
[root@localhost data]# tar -zxf es-6.7.1.tar.gz
[root@localhost data]# cd es-6.7.1
[root@localhost es-6.7.1]# ls
config  image  scripts
[root@localhost es-6.7.1]# ls config/
es.yml
[root@localhost es-6.7.1]# ls image/
elasticsearch_6.7.1.tar
[root@localhost es-6.7.1]# ls scripts/
run_es_6.7.1.sh
[root@localhost es-6.7.1]# docker load -i image/elasticsearch_6.7.1.tar
[root@localhost es-6.7.1]# docker images |grep elasticsearch
elasticsearch        6.7.1               e2667f5db289        11 months ago       812MB
[root@localhost es-6.7.1]# cat config/es.yml
cluster.name: elasticsearch-cluster
node.name: es-node1
network.host: 0.0.0.0
network.publish_host: 192.168.171.128
http.port: 9200
transport.tcp.port: 9300
http.cors.enabled: true
http.cors.allow-origin: "*"
node.master: true
node.data: true
discovery.zen.ping.unicast.hosts: ["192.168.171.128:9300","192.168.171.129:9300"]
discovery.zen.minimum_master_nodes: 1
#cluster.name: the cluster name; it can be customized, but it must be identical on both ES nodes - nodes with the same cluster name are treated as one cluster
#node.name: the local node name; it can be customized and does not have to resolve via hosts or match the hostname
#the following two lines are added on top of the defaults to allow cross-origin access:
#http.cors.enabled: true
#http.cors.allow-origin: '*'
##Note: the container uses two ports - 9200 for communication between ES nodes and external clients, 9300 for communication between ES nodes
[root@localhost es-6.7.1]# cat scripts/run_es_6.7.1.sh
#!/bin/bash
docker run -e ES_JAVA_OPTS="-Xms1024m -Xmx1024m" -d --net=host --restart=always -v /data/es-6.7.1/config/es.yml:/usr/share/elasticsearch/config/elasticsearch.yml -v /data/es6.7.1_data:/usr/share/elasticsearch/data -v /data/es6.7.1_logs:/usr/share/elasticsearch/logs --name es6.7.1 elasticsearch:6.7.1
#Note: the container uses two ports - 9200 for communication between ES and external clients, 9300 for communication between ES nodes
[root@localhost es-6.7.1]# mkdir /data/es6.7.1_data
[root@localhost es-6.7.1]# mkdir /data/es6.7.1_logs
[root@localhost es-6.7.1]# chmod -R 777 /data/es6.7.1_data/     #the es user must be able to write here, otherwise the bind mount cannot be used
[root@localhost es-6.7.1]# chmod -R 777 /data/es6.7.1_logs/     #the es user must be able to write here, otherwise the bind mount cannot be used
[root@localhost es-6.7.1]# sh scripts/run_es_6.7.1.sh
[root@localhost es-6.7.1]# docker ps
CONTAINER ID        IMAGE                 COMMAND                  CREATED             STATUS              PORTS               NAMES
988abe7eedac        elasticsearch:6.7.1   "/usr/local/bin/dock…"   23 seconds ago      Up 19 seconds                           es6.7.1
[root@localhost es-6.7.1]# netstat -anput |grep 9200
tcp6       0      0 :::9200                 :::*                    LISTEN      16196/java
[root@localhost es-6.7.1]# netstat -anput |grep 9300
tcp6       0      0 :::9300                 :::*                    LISTEN      16196/java
[root@localhost es-6.7.1]# cd
Access the ES service in a browser: http://192.168.171.128:9200/
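Besides the browser, the node can be checked from the command line with curl (a minimal check; at this point only one node is running, so the cluster will usually stay yellow until es2 on 192.168.171.129 joins later):
[root@localhost ~]# curl http://192.168.171.128:9200/                          #basic node info: name, cluster_name, version
[root@localhost ~]# curl "http://192.168.171.128:9200/_cluster/health?pretty"  #cluster status and number_of_nodes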
3) Install the es6.7.1-head plugin:
Upload the es-head plugin package to the /data directory
[root@localhost ~]# cd /data/
[root@localhost data]# ls es-6.7.1-head.tar.gz
es-6.7.1-head.tar.gz
[root@localhost data]# tar -zxf es-6.7.1-head.tar.gz
[root@localhost data]# cd es-6.7.1-head
[root@localhost es-6.7.1-head]# ls
conf  image  scripts
[root@localhost es-6.7.1-head]# ls conf/
app.js ?Gruntfile.js
[root@localhost es-6.7.1-head]# ls image/
elasticsearch-head_6.7.1.tar
[root@localhost es-6.7.1-head]# ls scripts/
run_es-head.sh
[root@localhost es-6.7.1-head]# docker load -i image/elasticsearch-head_6.7.1.tar
[root@localhost es-6.7.1-head]# docker images
REPOSITORY           TAG                 IMAGE ID            CREATED             SIZE
elasticsearch        6.7.1               e2667f5db289        11 months ago       812MB
elasticsearch-head   6.7.1               b19a5c98e43b        3 years ago         824MB
[root@localhost es-6.7.1-head]# vim conf/app.js
.....
this.base_uri = this.config.base_uri || this.prefs.get("app-base_uri") || "http://192.168.171.128:9200"; #change this to the local host's IP
....
[root@localhost es-6.7.1-head]# vim conf/Gruntfile.js
....
                connect: {
                        server: {
                                options: {
                                        hostname: '*',    #add this line
                                        port: 9100,
                                        base: '.',
                                        keepalive: true
                                }
                        }
....
[root@localhost es-6.7.1-head]# cat scripts/run_es-head.sh
#!/bin/bash
docker run -d --name es-head-6.7.1 --net=host --restart=always -v /data/es-6.7.1-head/conf/Gruntfile.js:/usr/src/app/Gruntfile.js -v /data/es-6.7.1-head/conf/app.js:/usr/src/app/_site/app.js elasticsearch-head:6.7.1
#the container port is 9100, which is the es management (head) UI port
[root@localhost es-6.7.1-head]# sh scripts/run_es-head.sh
[root@localhost es-6.7.1-head]# docker ps
CONTAINER ID        IMAGE                      COMMAND                  CREATED             STATUS              PORTS               NAMES
c46189c3338b        elasticsearch-head:6.7.1   "/bin/sh -c 'grunt s…"   42 seconds ago      Up 37 seconds                           es-head-6.7.1
988abe7eedac        elasticsearch:6.7.1        "/usr/local/bin/dock…"   9 minutes ago       Up 9 minutes                            es6.7.1
[root@localhost es-6.7.1-head]# netstat -anput |grep 9100
tcp6       0      0 :::9100                 :::*                    LISTEN      16840/grunt
Access the es-head plugin in a browser: http://192.168.171.128:9100/
Install ES 6.7.1 and the es6.7.1-head plugin on 192.168.171.129:
1) Install Docker 19.03.2:
[root@localhost ~]# docker info
Client:
?Debug Mode: false
Server:
?Containers: 2
??Running: 2
??Paused: 0
??Stopped: 0
?Images: 2
?Server Version: 19.03.2
[root@localhost ~]# sysctl -w vm.max_map_count=262144   #the default mmap count available to the elasticsearch user is too small; at least 262144 is required
[root@localhost ~]# sysctl -a |grep vm.max_map_count    #verify
vm.max_map_count = 262144
[root@localhost ~]# vim /etc/sysctl.conf
vm.max_map_count=262144
2) Install ES 6.7.1:
Upload the ES package to the /data directory:
[root@localhost ~]# cd /data/
[root@localhost data]# ls es-6.7.1.tar.gz
es-6.7.1.tar.gz
[root@localhost data]# tar -zxf es-6.7.1.tar.gz
[root@localhost data]# cd es-6.7.1
[root@localhost es-6.7.1]# ls
config  image  scripts
[root@localhost es-6.7.1]# ls config/
es.yml
[root@localhost es-6.7.1]# ls image/
elasticsearch_6.7.1.tar
[root@localhost es-6.7.1]# ls scripts/
run_es_6.7.1.sh
[root@localhost es-6.7.1]# docker load -i image/elasticsearch_6.7.1.tar
[root@localhost es-6.7.1]# docker images |grep elasticsearch
elasticsearch        6.7.1               e2667f5db289        11 months ago       812MB
[root@localhost es-6.7.1]# vim config/es.yml
cluster.name: elasticsearch-cluster
node.name: es-node2
network.host: 0.0.0.0
network.publish_host: 192.168.171.129
http.port: 9200
transport.tcp.port: 9300
http.cors.enabled: true
http.cors.allow-origin: "*"
node.master: true
node.data: true
discovery.zen.ping.unicast.hosts: ["192.168.171.128:9300","192.168.171.129:9300"]
discovery.zen.minimum_master_nodes: 1
#cluster.name: the cluster name; it can be customized, but it must be identical on both ES nodes - nodes with the same cluster name are treated as one cluster
#node.name: the local node name; it can be customized and does not have to resolve via hosts or match the hostname
#the following two lines are added on top of the defaults to allow cross-origin access:
#http.cors.enabled: true
#http.cors.allow-origin: '*'
##Note: the container uses two ports - 9200 for communication between ES nodes and external clients, 9300 for communication between ES nodes
[root@localhost es-6.7.1]# cat scripts/run_es_6.7.1.sh
#!/bin/bash
docker run -e ES_JAVA_OPTS="-Xms1024m -Xmx1024m" -d --net=host --restart=always -v /data/es-6.7.1/config/es.yml:/usr/share/elasticsearch/config/elasticsearch.yml -v /data/es6.7.1_data:/usr/share/elasticsearch/data -v /data/es6.7.1_logs:/usr/share/elasticsearch/logs --name es6.7.1 elasticsearch:6.7.1
#Note: the container uses two ports - 9200 for communication between ES and external clients, 9300 for communication between ES nodes
[root@localhost es-6.7.1]# mkdir /data/es6.7.1_data
[root@localhost es-6.7.1]# mkdir /data/es6.7.1_logs
[root@localhost es-6.7.1]# chmod -R 777 /data/es6.7.1_data/     #the es user must be able to write here, otherwise the bind mount cannot be used
[root@localhost es-6.7.1]# chmod -R 777 /data/es6.7.1_logs/     #the es user must be able to write here, otherwise the bind mount cannot be used
[root@localhost es-6.7.1]# sh scripts/run_es_6.7.1.sh
[root@localhost es-6.7.1]# docker ps
CONTAINER ID        IMAGE                 COMMAND                  CREATED             STATUS              PORTS               NAMES
a3b0a0187db8        elasticsearch:6.7.1   "/usr/local/bin/dock…"   9 seconds ago       Up 7 seconds                            es6.7.1
[root@localhost es-6.7.1]# netstat -anput |grep 9200
tcp6       0      0 :::9200                 :::*                    LISTEN      14171/java
[root@localhost es-6.7.1]# netstat -anput |grep 9300
tcp6       0      0 :::9300                 :::*                    LISTEN      14171/java
[root@localhost es-6.7.1]# cd
Access the ES service in a browser: http://192.168.171.129:9200/
3) Install the es6.7.1-head plugin:
Upload the es-head plugin package to the /data directory
[root@localhost ~]# cd /data/
[root@localhost data]# ls es-6.7.1-head.tar.gz
es-6.7.1-head.tar.gz
[root@localhost data]# tar -zxf es-6.7.1-head.tar.gz
[root@localhost data]# cd es-6.7.1-head
[root@localhost es-6.7.1-head]# ls
conf  image  scripts
[root@localhost es-6.7.1-head]# ls conf/
app.js ?Gruntfile.js
[root@localhost es-6.7.1-head]# ls image/
elasticsearch-head_6.7.1.tar
[root@localhost es-6.7.1-head]# ls scripts/
run_es-head.sh
[root@localhost es-6.7.1-head]# docker load -i image/elasticsearch-head_6.7.1.tar
[root@localhost es-6.7.1-head]# docker images
REPOSITORY           TAG                 IMAGE ID            CREATED             SIZE
elasticsearch        6.7.1               e2667f5db289        11 months ago       812MB
elasticsearch-head   6.7.1               b19a5c98e43b        3 years ago         824MB
[root@localhost es-6.7.1-head]# vim conf/app.js
.....
this.base_uri = this.config.base_uri || this.prefs.get("app-base_uri") || "http://192.168.171.129:9200"; #change this to the local host's IP
....
[root@localhost es-6.7.1-head]# vim conf/Gruntfile.js
....
                connect: {
                        server: {
                                options: {
                                        hostname: '*',    #add this line
                                        port: 9100,
                                        base: '.',
                                        keepalive: true
                                }
                        }
....
[root@localhost es-6.7.1-head]# cat scripts/run_es-head.sh
#!/bin/bash
docker run -d --name es-head-6.7.1 --net=host --restart=always -v /data/es-6.7.1-head/conf/Gruntfile.js:/usr/src/app/Gruntfile.js -v /data/es-6.7.1-head/conf/app.js:/usr/src/app/_site/app.js elasticsearch-head:6.7.1
#the container port is 9100, which is the es management (head) UI port
[root@localhost es-6.7.1-head]# sh scripts/run_es-head.sh
[root@localhost es-6.7.1-head]# docker ps
CONTAINER ID        IMAGE                      COMMAND                  CREATED             STATUS              PORTS               NAMES
f4f5c967754b        elasticsearch-head:6.7.1   "/bin/sh -c 'grunt s…"   12 seconds ago      Up 7 seconds                            es-head-6.7.1
a3b0a0187db8        elasticsearch:6.7.1        "/usr/local/bin/dock…"   7 minutes ago       Up 7 minutes                            es6.7.1
[root@localhost es-6.7.1-head]# netstat -anput |grep 9100
tcp6       0      0 :::9100                 :::*                    LISTEN      14838/grunt
Access the es-head plugin in a browser: http://192.168.171.129:9100/
The head plugin on 192.168.171.128 shows the same cluster status, since both plugin instances manage the same cluster:
http://192.168.171.128:9100/
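The cluster membership can also be confirmed with curl against either node (a minimal check; it should list es-node1 and es-node2 and report "number_of_nodes" : 2):
[root@localhost ~]# curl "http://192.168.171.128:9200/_cat/nodes?v"
[root@localhost ~]# curl "http://192.168.171.128:9200/_cluster/health?pretty"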
2. Install Redis 4.0.10 with Docker (on 192.168.171.128)
Upload the redis 4.0.10 image:
[root@localhost ~]# ls redis_4.0.10.tar
redis_4.0.10.tar
[root@localhost ~]# docker load -i redis_4.0.10.tar
[root@localhost ~]# docker images |grep redis
gmprd.baiwang-inner.com/redis   4.0.10              f713a14c7f9b        13 months ago       425MB
[root@localhost ~]# mkdir -p /data/redis/conf         #create the configuration directory
[root@localhost ~]# vim /data/redis/conf/redis.conf   #custom configuration file
protected-mode no
port 6379
bind 0.0.0.0
tcp-backlog 511
timeout 0
tcp-keepalive 300
supervised no
pidfile "/usr/local/redis/redis_6379.pid"
loglevel notice
logfile "/opt/redis/logs/redis.log"
databases 16
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename "dump.rdb"
dir "/"
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100
requirepass 123456
appendonly yes
dir "/opt/redis/data"
logfile "/opt/redis/logs/redis.log"
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes
maxclients 4064
#appendonly yes enables data persistence (AOF)
#dir "/opt/redis/data"  #the directory inside the container that persistence data is written to
#logfile "/opt/redis/logs/redis.log"  #the log path inside the container; this must be a file path, a directory path will not work
[root@localhost ~]# docker run -d --net=host --restart=always --name=redis4.0.10 -v /data/redis/conf/redis.conf:/opt/redis/conf/redis.conf -v /data/redis_data:/opt/redis/data -v /data/redis_logs:/opt/redis/logs gmprd.baiwang-inner.com/redis:4.0.10
[root@localhost ~]# docker ps |grep redis
735fb213ee41        gmprd.baiwang-inner.com/redis:4.0.10   "redis-server /opt/r…"   9 seconds ago       Up 8 seconds                            redis4.0.10
[root@localhost ~]# netstat -anput |grep 6379
tcp        0      0 0.0.0.0:6379            0.0.0.0:*               LISTEN      16988/redis-server
[root@localhost ~]# ls /data/redis_data/
appendonly.aof
[root@localhost ~]# ls /data/redis_logs/
redis.log
[root@localhost ~]# docker exec -it redis4.0.10 bash
[root@localhost /]# redis-cli -a 123456
127.0.0.1:6379> set k1 v1
OK
127.0.0.1:6379> keys *
1) "k1"
127.0.0.1:6379> get k1
"v1"
127.0.0.1:6379> quit
[root@localhost /]# exit
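Since filebeat on 192.168.171.130/131 and logstash on 192.168.171.129 will connect to this Redis over the network, it is worth checking remote access with the password once. A minimal sketch (the first form assumes redis-cli is installed on the remote host; the second reuses the container on 192.168.171.128):
redis-cli -h 192.168.171.128 -p 6379 -a 123456 ping                                            #run on a remote host; expected reply: PONG
[root@localhost ~]# docker exec -it redis4.0.10 redis-cli -h 192.168.171.128 -a 123456 ping    #same check through the container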
3. Install Tomcat (not actually installed - only mock tomcat and other Java logs are created) and filebeat 6.7.1 with Docker (192.168.171.130 and 192.168.171.131)
On 192.168.171.130:
Create mock Java logs of several types, ship them to Redis with filebeat, then have logstash read them in multiline mode and write them into ES:
Note: the logs below must not be created in advance; start filebeat first and then write the log files with vim, otherwise filebeat will not pick up logs that already existed.
a) Create a mock tomcat log:
[root@localhost ~]# mkdir /data/java-logs
[root@localhost ~]# mkdir /data/java-logs/{tomcat_logs,es_logs,message_logs}
[root@localhost ~]# vim /data/java-logs/tomcat_logs/catalina.out
2020-03-09 13:07:48|ERROR|org.springframework.web.context.ContextLoader:351|Context initialization failed
org.springframework.beans.factory.parsing.BeanDefinitionParsingException: Configuration problem: Unable to locate Spring NamespaceHandler for XML schema namespace [http://www.springframework.org/schema/aop]
Offending resource: URL [file:/usr/local/apache-tomcat-8.0.32/webapps/ROOT/WEB-INF/classes/applicationContext.xml]
at org.springframework.beans.factory.parsing.FailFastProblemReporter.error(FailFastProblemReporter.java:70) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.beans.factory.parsing.ReaderContext.error(ReaderContext.java:85) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.beans.factory.parsing.ReaderContext.error(ReaderContext.java:80) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.beans.factory.xml.BeanDefinitionParserDelegate.error(BeanDefinitionParserDelegate.java:301) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.beans.factory.xml.BeanDefinitionParserDelegate.parseCustomElement(BeanDefinitionParserDelegate.java:1408) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.beans.factory.xml.BeanDefinitionParserDelegate.parseCustomElement(BeanDefinitionParserDelegate.java:1401) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.beans.factory.xml.DefaultBeanDefinitionDocumentReader.parseBeanDefinitions(DefaultBeanDefinitionDocumentReader.java:168) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.beans.factory.xml.DefaultBeanDefinitionDocumentReader.doRegisterBeanDefinitions(DefaultBeanDefinitionDocumentReader.java:138) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.beans.factory.xml.DefaultBeanDefinitionDocumentReader.registerBeanDefinitions(DefaultBeanDefinitionDocumentReader.java:94) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.registerBeanDefinitions(XmlBeanDefinitionReader.java:508) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:392) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:336) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:304) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:181) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:217) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:188) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.web.context.support.XmlWebApplicationContext.loadBeanDefinitions(XmlWebApplicationContext.java:125) ~[spring-web-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.web.context.support.XmlWebApplicationContext.loadBeanDefinitions(XmlWebApplicationContext.java:94) ~[spring-web-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.context.support.AbstractRefreshableApplicationContext.refreshBeanFactory(AbstractRefreshableApplicationContext.java:129) ~[spring-context-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.obtainFreshBeanFactory(AbstractApplicationContext.java:609) ~[spring-context-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:510) ~[spring-context-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.web.context.ContextLoader.configureAndRefreshWebApplicationContext(ContextLoader.java:444) ~[spring-web-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:326) ~[spring-web-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:107) [spring-web-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4812) [catalina.jar:8.0.32]
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5255) [catalina.jar:8.0.32]
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:147) [catalina.jar:8.0.32]
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:725) [catalina.jar:8.0.32]
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:701) [catalina.jar:8.0.32]
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:717) [catalina.jar:8.0.32]
at org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1091) [catalina.jar:8.0.32]
at org.apache.catalina.startup.HostConfig$DeployDirectory.run(HostConfig.java:1830) [catalina.jar:8.0.32]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_144]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_144]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_144]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_144]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_144]
13-Oct-2020 13:07:48.990 SEVERE [localhost-startStop-3] org.apache.catalina.core.StandardContext.startInternal One or more listeners failed to start. Full details will be found in the appropriate container log file
13-Oct-2020 13:07:48.991 SEVERE [localhost-startStop-3] org.apache.catalina.core.StandardContext.startInternal Context [] startup failed due to previous errors
2020-03-09 13:07:48|INFO|org.springframework.context.support.AbstractApplicationContext:960|Closing Root WebApplicationContext: startup date [Sun Oct 13 13:07:43 CST 2020]; root of context hierarchy
2020-03-09 13:09:41|INFO|org.springframework.context.support.AbstractApplicationContext:960|Closing Root WebApplicationContext: startup date [Sun Oct 13 13:07:43 CST 2020]; root of context hierarchy error test1
2020-03-09 13:10:41|INFO|org.springframework.context.support.AbstractApplicationContext:960|Closing Root WebApplicationContext: startup date [Sun Oct 13 13:07:43 CST 2020]; root of context hierarchy error test2
2020-03-09 13:11:41|INFO|org.springframework.context.support.AbstractApplicationContext:960|Closing Root WebApplicationContext: startup date [Sun Oct 13 13:07:43 CST 2020]; root of context hierarchy error test3
b) Create a mock system log (taken from part of /var/log/messages)
[root@localhost ~]# vim /data/java-logs/message_logs/messages
Mar 09 14:19:06 localhost systemd: Removed slice system-selinux\x2dpolicy\x2dmigrate\x2dlocal\x2dchanges.slice.
Mar 09 14:19:06 localhost systemd: Stopping system-selinux\x2dpolicy\x2dmigrate\x2dlocal\x2dchanges.slice.
Mar 09 14:19:06 localhost systemd: Stopped target Network is Online.
Mar 09 14:19:06 localhost systemd: Stopping Network is Online.
Mar 09 14:19:06 localhost systemd: Stopping Authorization Manager...
Mar 09 14:20:38 localhost kernel: Initializing cgroup subsys cpuset
Mar 09 14:20:38 localhost kernel: Initializing cgroup subsys cpu
Mar 09 14:20:38 localhost kernel: Initializing cgroup subsys cpuacct
Mar 09 14:20:38 localhost kernel: Linux version 3.10.0-693.el7.x86_64 (builder@kbuilder.dev.centos.org) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-16) (GCC) ) #1 SMP Tue Aug 22 21:09:27 UTC 2017
Mar 09 14:20:38 localhost kernel: Command line: BOOT_IMAGE=/vmlinuz-3.10.0-693.el7.x86_64 root=/dev/mapper/centos-root ro crashkernel=auto rd.lvm.lv=centos/root rd.lvm.lv=centos/swap rhgb quiet LANG=en_US.UTF-8
c) Create a mock ES log:
[root@localhost ~]# vim /data/java-logs/es_logs/es_log
[2020-03-09T21:44:58,440][ERROR][o.e.b.Bootstrap          ] Exception
java.lang.RuntimeException: can not run elasticsearch as root
        at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:035) ~[elasticsearch-6.2.4.jar:6.2.4]
        at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:172) ~[elasticsearch-6.2.4.jar:6.2.4]
        at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:323) [elasticsearch-6.2.4.jar:6.2.4]
        at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:091) [elasticsearch-6.2.4.jar:6.2.4]
        at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:109) [elasticsearch-6.2.4.jar:6.2.4]
        at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) [elasticsearch-6.2.4.jar:6.2.4]
        at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:094) [elasticsearch-cli-6.2.4.jar:6.2.4]
        at org.elasticsearch.cli.Command.main(Command.java:90) [elasticsearch-cli-6.2.4.jar:6.2.4]
        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92) [elasticsearch-6.2.4.jar:6.2.4]
        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:85) [elasticsearch-6.2.4.jar:6.2.4]
[2020-03-09T21:44:58,549][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: java.lang.RuntimeException: can not run elasticsearch as root
        at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:095) ~[elasticsearch-6.2.4.jar:6.2.4]
        at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:109) ~[elasticsearch-6.2.4.jar:6.2.4]
        at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) ~[elasticsearch-6.2.4.jar:6.2.4]
        at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:094) ~[elasticsearch-cli-6.2.4.jar:6.2.4]
        at org.elasticsearch.cli.Command.main(Command.java:90) ~[elasticsearch-cli-6.2.4.jar:6.2.4]
        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92) ~[elasticsearch-6.2.4.jar:6.2.4]
        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:85) ~[elasticsearch-6.2.4.jar:6.2.4]
Caused by: java.lang.RuntimeException: can not run elasticsearch as root
        at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:035) ~[elasticsearch-6.2.4.jar:6.2.4]
        at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:172) ~[elasticsearch-6.2.4.jar:6.2.4]
        at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:323) ~[elasticsearch-6.2.4.jar:6.2.4]
        at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:091) ~[elasticsearch-6.2.4.jar:6.2.4]
        ... 6 more
[2020-03-09T21:46:32,174][INFO ][o.e.n.Node               ] [] initializing ...
[2020-03-09T21:46:32,467][INFO ][o.e.e.NodeEnvironment    ] [koccs5f] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [48gb], net total_space [49.9gb], types [rootfs]
[2020-03-09T21:46:32,468][INFO ][o.e.e.NodeEnvironment    ] [koccs5f] heap size [0315.6mb], compressed ordinary object pointers [true]
d) Create a mock tomcat access log
[root@localhost ~]# vim /data/java-logs/tomcat_logs/localhost_access_log.2020-03-09.txt
192.168.171.1 - - [09/Mar/2020:09:07:59 +0800] "GET /favicon.ico HTTP/1.1" 404 -
Caused by: java.lang.RuntimeException: can not run elasticsearch as root
        at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:105) ~[elasticsearch-6.2.4.jar:6.2.4]
        at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:172) ~[elasticsearch-6.2.4.jar:6.2.4]
192.168.171.2 - - [09/Mar/2020:09:07:59 +0800] "GET / HTTP/1.1" 404 -
192.168.171.1 - - [09/Mar/2020:15:09:12 +0800] "GET / HTTP/1.1" 200 11250
Caused by: java.lang.RuntimeException: can not run elasticsearch as root
        at org.elasticsearch.bootstrap.Bootstrap.initializeNatives
192.168.171.2 - - [09/Mar/2020:15:09:12 +0800] "GET /tomcat.png HTTP/1.1" 200 5103
192.168.171.3 - - [09/Mar/2020:15:09:12 +0800] "GET /tomcat.css HTTP/1.1" 200 5576
192.168.171.5 - - [09/Mar/2020:15:09:09 +0800] "GET /bg-nav.png HTTP/1.1" 200 1401
192.168.171.1 - - [09/Mar/2020:15:09:09 +0800] "GET /bg-upper.png HTTP/1.1" 200 3103
Install filebeat 6.7.1:
[root@localhost ~]# cd /data/
[root@localhost data]# ls filebeat6.7.1.tar.gz
filebeat6.7.1.tar.gz
[root@localhost data]# tar -zxf filebeat6.7.1.tar.gz
[root@localhost data]# cd filebeat6.7.1
[root@localhost filebeat6.7.1]# ls
conf  image  scripts
[root@localhost filebeat6.7.1]# ls conf/
filebeat.yml  filebeat.yml.bak
[root@localhost filebeat6.7.1]# ls image/
filebeat_6.7.1.tar
[root@localhost filebeat6.7.1]# ls scripts/
run_filebeat6.7.1.sh
[root@localhost filebeat6.7.1]# docker load -i image/filebeat_6.7.1.tar
[root@localhost filebeat6.7.1]# docker images |grep filebeat
docker.elastic.co/beats/filebeat   6.7.1               04fcff75b160        11 months ago       279MB
[root@localhost filebeat6.7.1]# cat conf/filebeat.yml
filebeat.inputs:
#the part below was added: ——————————————
#system log:
- type: log
  enabled: true
  paths:
    - /usr/share/filebeat/logs/message_logs/messages
  fields:
    log_source: system-171.130
#tomcat catalina log:
- type: log
  enabled: true
  paths:
    - /usr/share/filebeat/logs/tomcat_logs/catalina.out
  fields:
    log_source: catalina-log-171.130
  multiline.pattern: '^[0-9]{4}-(((0[13578]|(10|12))-(0[1-9]|[1-2][0-9]|3[0-1]))|(02-(0[1-9]|[1-2][0-9]))|((0[469]|11)-(0[1-9]|[1-2][0-9]|30)))'
  multiline.negate: true
  multiline.match: after
# the regex above matches lines that start with a date, e.g. 2004-02-29
# log_source: xxx means: since everything is stored in redis under a single key, logstash cannot tell the different log types apart on its own; this field lets logstash identify where each log came from, so that each type of log is written into es under its own index name
#es log:
- type: log
  enabled: true
  paths:
    - /usr/share/filebeat/logs/es_logs/es_log
  fields:
    log_source: es-log-171.130
  multiline.pattern: '^\['
  multiline.negate: true
  multiline.match: after
#the regex above matches lines starting with [, where \ is the escape character
#tomcat access log:
- type: log
  enabled: true
  paths:
    - /usr/share/filebeat/logs/tomcat_logs/localhost_access_log.2020-03-09.txt
  fields:
    log_source: tomcat-access-log-171.130
  multiline.pattern: '^((2(5[0-5]|[0-4]\d))|[0-1]?\d{1,2})(\.((2(5[0-5]|[0-4]\d))|[0-1]?\d{1,2})){3}'
  multiline.negate: true
  multiline.match: after
#the part above was added: —————————————————————
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 3
setup.kibana:
#the section below writes directly into es:
#output.elasticsearch:
#  hosts: ["192.168.171.128:9200"]
#the section below writes into redis instead:
#filebeat-common below is a self-defined key; it must match the key that logstash reads from redis. Multiple nodes (filebeat instances) can all write with this same key, but each must define log_source as a marker so that logstash can split them into separate indices in es when reading
output.redis:
  hosts: ["192.168.171.128"]
  port: 6379
  password: "123456"
  key: "filebeat-common"
  db: 0
  datatype: list
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
#Note: by default the log path on the host and the log path inside the container differ, so if the host path were configured here, the container would not find it
##The fix: configure the in-container log path here and bind-mount the host log directory onto the container log directory
#/usr/share/filebeat/logs/*.log is the path inside the container
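The multiline patterns above decide which lines start a new event (everything else is appended to the previous event). Once the mock logs exist they can be sanity-checked on the host with grep -P; these are illustrative one-liners, not part of the original setup, and the first regex is a simplified date-prefix check:
[root@localhost ~]# grep -cP '^[0-9]{4}-' /data/java-logs/tomcat_logs/catalina.out     #number of catalina event-start lines
[root@localhost ~]# grep -cP '^\[' /data/java-logs/es_logs/es_log                      #number of es-log event-start lines
[root@localhost ~]# grep -cP '^((2(5[0-5]|[0-4]\d))|[0-1]?\d{1,2})(\.((2(5[0-5]|[0-4]\d))|[0-1]?\d{1,2})){3}' /data/java-logs/tomcat_logs/localhost_access_log.2020-03-09.txt   #access-log event-start lines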
[root@localhost filebeat6.7.1]# cat scripts/run_filebeat6.7.1.sh
#!/bin/bash
docker run -d --name filebeat6.7.1 --net=host --restart=always --user=root -v /data/filebeat6.7.1/conf/filebeat.yml:/usr/share/filebeat/filebeat.yml -v /data/java-logs:/usr/share/filebeat/logs docker.elastic.co/beats/filebeat:6.7.1
#Note: by default the host log path and the in-container log path differ, so a host path in the config would not be found inside the container
#The fix: configure the in-container log path in filebeat.yml and bind-mount the host log directory onto the container log directory
[root@localhost filebeat6.7.1]# sh scripts/run_filebeat6.7.1.sh   #once running, it starts shipping logs to redis
[root@localhost filebeat6.7.1]# docker ps |grep filebeat
1f2bbd450e7e        docker.elastic.co/beats/filebeat:6.7.1   "/usr/local/bin/dock…"   8 seconds ago       Up 7 seconds                            filebeat6.7.1
[root@localhost filebeat6.7.1]# cd
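As the note above says, filebeat is expected to pick up lines written after it starts, so new events can be generated at any time by appending to the mock logs, for example (an illustrative append; the log line itself is made up):
[root@localhost ~]# echo "2020-03-09 14:00:00|INFO|test.Logger:1|extra test line written after filebeat startup" >> /data/java-logs/tomcat_logs/catalina.out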
On 192.168.171.131:
Create mock Java logs of several types, ship them to Redis with filebeat, then have logstash read them in multiline mode and write them into ES:
Note: the logs below must not be created in advance; start filebeat first and then write the log files with vim, otherwise filebeat will not pick up logs that already existed.
a) Create a mock tomcat log:
[root@localhost ~]# mkdir /data/java-logs
[root@localhost ~]# mkdir /data/java-logs/{tomcat_logs,es_logs,message_logs}
[root@localhost ~]# vim /data/java-logs/tomcat_logs/catalina.out
2050-05-09 13:07:48|ERROR|org.springframework.web.context.ContextLoader:351|Context initialization failed
org.springframework.beans.factory.parsing.BeanDefinitionParsingException: Configuration problem: Unable to locate Spring NamespaceHandler for XML schema namespace [http://www.springframework.org/schema/aop]
Offending resource: URL [file:/usr/local/apache-tomcat-8.0.32/webapps/ROOT/WEB-INF/classes/applicationContext.xml]
at org.springframework.beans.factory.parsing.FailFastProblemReporter.error(FailFastProblemReporter.java:70) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.beans.factory.parsing.ReaderContext.error(ReaderContext.java:85) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.beans.factory.parsing.ReaderContext.error(ReaderContext.java:80) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.beans.factory.xml.BeanDefinitionParserDelegate.error(BeanDefinitionParserDelegate.java:301) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.beans.factory.xml.BeanDefinitionParserDelegate.parseCustomElement(BeanDefinitionParserDelegate.java:1408) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.beans.factory.xml.BeanDefinitionParserDelegate.parseCustomElement(BeanDefinitionParserDelegate.java:1401) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.beans.factory.xml.DefaultBeanDefinitionDocumentReader.parseBeanDefinitions(DefaultBeanDefinitionDocumentReader.java:168) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.beans.factory.xml.DefaultBeanDefinitionDocumentReader.doRegisterBeanDefinitions(DefaultBeanDefinitionDocumentReader.java:138) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.beans.factory.xml.DefaultBeanDefinitionDocumentReader.registerBeanDefinitions(DefaultBeanDefinitionDocumentReader.java:94) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.registerBeanDefinitions(XmlBeanDefinitionReader.java:508) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:392) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:336) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:304) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:181) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:217) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:188) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.web.context.support.XmlWebApplicationContext.loadBeanDefinitions(XmlWebApplicationContext.java:125) ~[spring-web-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.web.context.support.XmlWebApplicationContext.loadBeanDefinitions(XmlWebApplicationContext.java:94) ~[spring-web-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.context.support.AbstractRefreshableApplicationContext.refreshBeanFactory(AbstractRefreshableApplicationContext.java:129) ~[spring-context-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.obtainFreshBeanFactory(AbstractApplicationContext.java:609) ~[spring-context-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:510) ~[spring-context-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.web.context.ContextLoader.configureAndRefreshWebApplicationContext(ContextLoader.java:444) ~[spring-web-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:326) ~[spring-web-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:107) [spring-web-4.2.6.RELEASE.jar:4.2.6.RELEASE]
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4812) [catalina.jar:8.0.32]
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5255) [catalina.jar:8.0.32]
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:147) [catalina.jar:8.0.32]
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:725) [catalina.jar:8.0.32]
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:701) [catalina.jar:8.0.32]
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:717) [catalina.jar:8.0.32]
at org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1091) [catalina.jar:8.0.32]
at org.apache.catalina.startup.HostConfig$DeployDirectory.run(HostConfig.java:1830) [catalina.jar:8.0.32]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_144]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_144]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_144]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_144]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_144]
13-Oct-2050 13:07:48.990 SEVERE [localhost-startStop-3] org.apache.catalina.core.StandardContext.startInternal One or more listeners failed to start. Full details will be found in the appropriate container log file
13-Oct-2050 13:07:48.991 SEVERE [localhost-startStop-3] org.apache.catalina.core.StandardContext.startInternal Context [] startup failed due to previous errors
2050-05-09 13:07:48|INFO|org.springframework.context.support.AbstractApplicationContext:960|Closing Root WebApplicationContext: startup date [Sun Oct 13 13:07:43 CST 2050]; root of context hierarchy
2050-05-09 13:09:41|INFO|org.springframework.context.support.AbstractApplicationContext:960|Closing Root WebApplicationContext: startup date [Sun Oct 13 13:07:43 CST 2050]; root of context hierarchy error test1
2050-05-09 13:10:41|INFO|org.springframework.context.support.AbstractApplicationContext:960|Closing Root WebApplicationContext: startup date [Sun Oct 13 13:07:43 CST 2050]; root of context hierarchy error test2
2050-05-09 13:11:41|INFO|org.springframework.context.support.AbstractApplicationContext:960|Closing Root WebApplicationContext: startup date [Sun Oct 13 13:07:43 CST 2050]; root of context hierarchy error test3
b) Create a mock system log (taken from part of /var/log/messages)
[root@localhost ~]# vim /data/java-logs/message_logs/messages
Mar 50 50:50:06 localhost systemd: Removed slice system-selinux\x2dpolicy\x2dmigrate\x2dlocal\x2dchanges.slice.
Mar 50 50:50:06 localhost systemd: Stopping system-selinux\x2dpolicy\x2dmigrate\x2dlocal\x2dchanges.slice.
Mar 50 50:50:06 localhost systemd: Stopped target Network is Online.
Mar 50 50:50:06 localhost systemd: Stopping Network is Online.
Mar 50 50:50:06 localhost systemd: Stopping Authorization Manager...
Mar 50 50:20:38 localhost kernel: Initializing cgroup subsys cpuset
Mar 50 50:20:38 localhost kernel: Initializing cgroup subsys cpu
Mar 50 50:20:38 localhost kernel: Initializing cgroup subsys cpuacct
Mar 50 50:20:38 localhost kernel: Linux version 3.10.0-693.el7.x86_64 (builder@kbuilder.dev.centos.org) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-16) (GCC) ) #1 SMP Tue Aug 22 21:50:27 UTC 2050
Mar 50 50:20:38 localhost kernel: Command line: BOOT_IMAGE=/vmlinuz-3.10.0-693.el7.x86_64 root=/dev/mapper/centos-root ro crashkernel=auto rd.lvm.lv=centos/root rd.lvm.lv=centos/swap rhgb quiet LANG=en_US.UTF-8
c) Create a mock ES log:
[root@localhost ~]# vim /data/java-logs/es_logs/es_log
[2050-50-09T21:44:58,440][ERROR][o.e.b.Bootstrap          ] Exception
java.lang.RuntimeException: can not run elasticsearch as root
        at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:505) ~[elasticsearch-6.2.4.jar:6.2.4]
        at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:172) ~[elasticsearch-6.2.4.jar:6.2.4]
        at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:323) [elasticsearch-6.2.4.jar:6.2.4]
        at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:091) [elasticsearch-6.2.4.jar:6.2.4]
        at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:109) [elasticsearch-6.2.4.jar:6.2.4]
        at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) [elasticsearch-6.2.4.jar:6.2.4]
        at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:094) [elasticsearch-cli-6.2.4.jar:6.2.4]
        at org.elasticsearch.cli.Command.main(Command.java:90) [elasticsearch-cli-6.2.4.jar:6.2.4]
        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92) [elasticsearch-6.2.4.jar:6.2.4]
        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:85) [elasticsearch-6.2.4.jar:6.2.4]
[2050-50-09T21:44:58,549][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: java.lang.RuntimeException: can not run elasticsearch as root
        at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:095) ~[elasticsearch-6.2.4.jar:6.2.4]
        at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:109) ~[elasticsearch-6.2.4.jar:6.2.4]
        at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) ~[elasticsearch-6.2.4.jar:6.2.4]
        at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:094) ~[elasticsearch-cli-6.2.4.jar:6.2.4]
        at org.elasticsearch.cli.Command.main(Command.java:90) ~[elasticsearch-cli-6.2.4.jar:6.2.4]
        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92) ~[elasticsearch-6.2.4.jar:6.2.4]
        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:85) ~[elasticsearch-6.2.4.jar:6.2.4]
Caused by: java.lang.RuntimeException: can not run elasticsearch as root
        at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:505) ~[elasticsearch-6.2.4.jar:6.2.4]
        at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:172) ~[elasticsearch-6.2.4.jar:6.2.4]
        at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:323) ~[elasticsearch-6.2.4.jar:6.2.4]
        at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:091) ~[elasticsearch-6.2.4.jar:6.2.4]
        ... 6 more
[2050-50-09T21:46:32,174][INFO ][o.e.n.Node               ] [] initializing ...
[2050-50-09T21:46:32,467][INFO ][o.e.e.NodeEnvironment    ] [koccs5f] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [48gb], net total_space [49.9gb], types [rootfs]
[2050-50-09T21:46:32,468][INFO ][o.e.e.NodeEnvironment    ] [koccs5f] heap size [5015.6mb], compressed ordinary object pointers [true]
d) Create a mock tomcat access log
[root@localhost ~]# vim /data/java-logs/tomcat_logs/localhost_access_log.2050-50-09.txt
192.168.150.1 - - [09/Mar/2050:09:07:59 +0800] "GET /favicon.ico HTTP/1.1" 404 -
Caused by: java.lang.RuntimeException: can not run elasticsearch as root
        at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:105) ~[elasticsearch-6.2.4.jar:6.2.4]
        at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:172) ~[elasticsearch-6.2.4.jar:6.2.4]
192.168.150.2 - - [09/Mar/2050:09:07:59 +0800] "GET / HTTP/1.1" 404 -
192.168.150.1 - - [09/Mar/2050:15:09:12 +0800] "GET / HTTP/1.1" 200 11250
Caused by: java.lang.RuntimeException: can not run elasticsearch as root
        at org.elasticsearch.bootstrap.Bootstrap.initializeNatives
192.168.150.2 - - [09/Mar/2050:15:09:12 +0800] "GET /tomcat.png HTTP/1.1" 200 5103
192.168.150.3 - - [09/Mar/2050:15:09:12 +0800] "GET /tomcat.css HTTP/1.1" 200 5576
192.168.150.5 - - [09/Mar/2050:15:09:09 +0800] "GET /bg-nav.png HTTP/1.1" 200 1401
192.168.150.1 - - [09/Mar/2050:15:09:09 +0800] "GET /bg-upper.png HTTP/1.1" 200 3103
Install filebeat 6.7.1:
[root@localhost ~]# cd /data/
[root@localhost data]# ls filebeat6.7.1.tar.gz
filebeat6.7.1.tar.gz
[root@localhost data]# tar -zxf filebeat6.7.1.tar.gz
[root@localhost data]# cd filebeat6.7.1
[root@localhost filebeat6.7.1]# ls
conf  image  scripts
[root@localhost filebeat6.7.1]# ls conf/
filebeat.yml  filebeat.yml.bak
[root@localhost filebeat6.7.1]# ls image/
filebeat_6.7.1.tar
[root@localhost filebeat6.7.1]# ls scripts/
run_filebeat6.7.1.sh
[root@localhost filebeat6.7.1]# docker load -i image/filebeat_6.7.1.tar
[root@localhost filebeat6.7.1]# docker images |grep filebeat
docker.elastic.co/beats/filebeat   6.7.1               04fcff75b160        11 months ago       279MB
[root@localhost filebeat6.7.1]# cat conf/filebeat.yml
filebeat.inputs:
#the part below was added: ——————————————
#system log:
- type: log
  enabled: true
  paths:
    - /usr/share/filebeat/logs/message_logs/messages
  fields:
    log_source: system-171.131
#tomcat catalina log:
- type: log
  enabled: true
  paths:
    - /usr/share/filebeat/logs/tomcat_logs/catalina.out
  fields:
    log_source: catalina-log-171.131
  multiline.pattern: '^[0-9]{4}-(((0[13578]|(10|12))-(0[1-9]|[1-2][0-9]|3[0-1]))|(02-(0[1-9]|[1-2][0-9]))|((0[469]|11)-(0[1-9]|[1-2][0-9]|30)))'
  multiline.negate: true
  multiline.match: after
# the regex above matches lines that start with a date, e.g. 2004-02-29
# log_source: xxx means: since everything is stored in redis under a single key, logstash cannot tell the different log types apart on its own; this field lets logstash identify where each log came from, so that each type of log is written into es under its own index name
#es log:
- type: log
  enabled: true
  paths:
    - /usr/share/filebeat/logs/es_logs/es_log
  fields:
    log_source: es-log-171.131
  multiline.pattern: '^\['
  multiline.negate: true
  multiline.match: after
#the regex above matches lines starting with [, where \ is the escape character
#tomcat access log:
- type: log
  enabled: true
  paths:
    - /usr/share/filebeat/logs/tomcat_logs/localhost_access_log.2050-50-09.txt
  fields:
    log_source: tomcat-access-log-171.131
  multiline.pattern: '^((2(5[0-5]|[0-4]\d))|[0-1]?\d{1,2})(\.((2(5[0-5]|[0-4]\d))|[0-1]?\d{1,2})){3}'
  multiline.negate: true
  multiline.match: after
#the part above was added: —————————————————————
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 3
setup.kibana:
#the section below writes directly into es:
#output.elasticsearch:
#  hosts: ["192.168.171.128:9200"]
#the section below writes into redis instead:
#filebeat-common below is a self-defined key; it must match the key that logstash reads from redis. Multiple nodes (filebeat instances) can all write with this same key, but each must define log_source as a marker so that logstash can split them into separate indices in es when reading
output.redis:
  hosts: ["192.168.171.128"]
  port: 6379
  password: "123456"
  key: "filebeat-common"
  db: 0
  datatype: list
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
#Note: by default the log path on the host and the log path inside the container differ, so if the host path were configured here, the container would not find it
##The fix: configure the in-container log path here and bind-mount the host log directory onto the container log directory
#/usr/share/filebeat/logs/*.log is the path inside the container
[root@localhost filebeat6.7.1]# cat scripts/run_filebeat6.7.1.sh
#!/bin/bash
docker run -d --name filebeat6.7.1 --net=host --restart=always --user=root -v /data/filebeat6.7.1/conf/filebeat.yml:/usr/share/filebeat/filebeat.yml -v /data/java-logs:/usr/share/filebeat/logs docker.elastic.co/beats/filebeat:6.7.1
#Note: by default the host log path and the in-container log path differ, so a host path in the config would not be found inside the container
#The fix: configure the in-container log path in filebeat.yml and bind-mount the host log directory onto the container log directory
[root@localhost filebeat6.7.1]# sh scripts/run_filebeat6.7.1.sh   #once running, it starts shipping logs to redis
[root@localhost filebeat6.7.1]# docker ps |grep filebeat
3cc559a84904        docker.elastic.co/beats/filebeat:6.7.1   "/usr/local/bin/dock…"   8 seconds ago       Up 7 seconds                            filebeat6.7.1
[root@localhost filebeat6.7.1]# cd
Check in Redis whether the logs have been written (on 192.168.171.128; both hosts write to the same key, so there is only one key name - the logs are separated by their log_source field when they are filtered into ES):
[root@localhost ~]# docker exec -it redis4.0.10 bash
[root@localhost /]# redis-cli -a 123456
127.0.0.1:6379> KEYS *
1)"filebeat-common"
127.0.0.1:6379> quit
[root@localhost /]# exit
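Optionally, the number of events currently queued in the list (before logstash starts consuming it) can be checked without entering the container shell:
[root@localhost ~]# docker exec -it redis4.0.10 redis-cli -a 123456 LLEN filebeat-common   #number of queued filebeat events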
4. Install logstash 6.7.1 with Docker (on 192.168.171.129) - read the logs from redis and write them into the ES cluster
[root@localhost ~]# cd /data/
[root@localhost data]# ls logstash6.7.1.tar.gz
logstash6.7.1.tar.gz
[root@localhost data]# tar -zxf logstash6.7.1.tar.gz
[root@localhost data]# cd logstash6.7.1
[root@localhost logstash6.7.1]# ls
config  image  scripts
[root@localhost logstash6.7.1]# ls config/
GeoLite2-City.mmdb  log4j2.properties     logstash.yml   pipelines.yml_bak     startup.options
jvm.options         logstash-sample.conf  pipelines.yml  redis_out_es_in.conf
[root@localhost logstash6.7.1]# ls image/
logstash_6.7.1.tar
[root@localhost logstash6.7.1]# ls scripts/
run_logstash6.7.1.sh
[root@localhost logstash6.7.1]# docker load -i image/logstash_6.7.1.tar
[root@localhost logstash6.7.1]# docker images |grep logstash
logstash             6.7.1               1f5e249719fc        11 months ago       778MB
[root@localhost logstash6.7.1]# cat config/pipelines.yml   #confirm the configuration and the conf directory it references
# This file is where you define your pipelines. You can define multiple.
# For more information on multiple pipelines, see the documentation:
# ??https://www.elastic.co/guide/en/logstash/current/multiple-pipelines.html
- pipeline.id: main
  path.config: "/usr/share/logstash/config/*.conf"   #path inside the container
  pipeline.workers: 3
[root@localhost logstash6.7.1]# cat config/redis_out_es_in.conf   #review and confirm the configuration
input {
    redis {
        host => "192.168.171.128"
        port => "6379"
        password => "123456"
        db => "0"
        data_type => "list"
        key => "filebeat-common"
    }
}
#By default the date filter's target is @timestamp, so time_local will update @timestamp. Purpose of the date plugin in the filter below: when logs are collected for the first time or arrive through a cache/backlog, the indexing time lags behind the actual log time, making timestamps inaccurate; the date plugin keeps the indexing time consistent with the actual log time.
filter {
    date {
        locale => "en"
        match => ["time_local", "dd/MMM/yyyy:HH:mm:ss Z"]
    }
}
output {
    if [fields][log_source] == 'system-171.130' {
        elasticsearch {
            hosts => ["192.168.171.128:9200"]
            index => "logstash-system-171.130-log-%{+YYYY.MM.dd}"
        }
    }
    if [fields][log_source] == 'system-171.131' {
        elasticsearch {
            hosts => ["192.168.171.128:9200"]
            index => "logstash-system-171.131-log-%{+YYYY.MM.dd}"
        }
    }
    if [fields][log_source] == 'catalina-log-171.130' {
        elasticsearch {
            hosts => ["192.168.171.128:9200"]
            index => "logstash-catalina-171.130-log-%{+YYYY.MM.dd}"
        }
    }
    if [fields][log_source] == 'catalina-log-171.131' {
        elasticsearch {
            hosts => ["192.168.171.128:9200"]
            index => "logstash-catalina-171.131-log-%{+YYYY.MM.dd}"
        }
    }
    if [fields][log_source] == 'es-log-171.130' {
        elasticsearch {
            hosts => ["192.168.171.128:9200"]
            index => "logstash-es-log-171.130-%{+YYYY.MM.dd}"
        }
    }
    if [fields][log_source] == 'es-log-171.131' {
        elasticsearch {
            hosts => ["192.168.171.128:9200"]
            index => "logstash-es-log-171.131-%{+YYYY.MM.dd}"
        }
    }
    if [fields][log_source] == 'tomcat-access-log-171.130' {
        elasticsearch {
            hosts => ["192.168.171.128:9200"]
            index => "logstash-tomcat-access-171.130-log-%{+YYYY.MM.dd}"
        }
    }
    if [fields][log_source] == 'tomcat-access-log-171.131' {
        elasticsearch {
            hosts => ["192.168.171.128:9200"]
            index => "logstash-tomcat-access-171.131-log-%{+YYYY.MM.dd}"
        }
    }
    stdout { codec => rubydebug }
    #codec => rubydebug is for debugging; it prints every event to the console (the container's stdout)
}
[root@localhost logstash6.7.1]# cat scripts/run_logstash6.7.1.sh
#!/bin/bash
docker run -d --name logstash6.7.1 --net=host --restart=always -v /data/logstash6.7.1/config:/usr/share/logstash/config logstash:6.7.1
[root@localhost logstash6.7.1]# sh scripts/run_logstash6.7.1.sh   #reads the logs from redis and writes them into es
[root@localhost logstash6.7.1]# docker ps |grep logstash
980aefbc077e        logstash:6.7.1             "/usr/local/bin/dock…"   9 seconds ago       Up 7 seconds                            logstash6.7.1
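Because the output section also prints every event with the rubydebug codec, the pipeline can be watched live through the container's stdout (optional check):
[root@localhost logstash6.7.1]# docker logs --tail 20 -f logstash6.7.1   #shows pipeline startup messages and the rubydebug event dumps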
Check in the ES cluster, as follows:
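If the head UI is not handy, the newly created indices can also be listed with curl; the exact index names depend on which log types were shipped (see the logstash output section above):
[root@localhost ~]# curl "http://192.168.171.128:9200/_cat/indices?v" | grep logstash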
Check in Redis - the data has been consumed and the list is now empty:
[root@localhost ~]# docker exec -it redis4.0.10 bash
[root@localhost /]# redis-cli -a 123456
127.0.0.1:6379> KEYS *
(empty list or set)
127.0.0.1:6379> quit
5. Install Kibana 6.7.1 with Docker (on 192.168.171.132) - read the logs from ES and display them
[root@localhost ~]# cd /data/
[root@localhost data]# ls kibana6.7.1.tar.gz
kibana6.7.1.tar.gz
[root@localhost data]# tar -zxf kibana6.7.1.tar.gz
[root@localhost data]# cd kibana6.7.1
[root@localhost kibana6.7.1]# ls
config  image  scripts
[root@localhost kibana6.7.1]# ls config/
kibana.yml
[root@localhost kibana6.7.1]# ls image/
kibana_6.7.1.tar
[root@localhost kibana6.7.1]# ls scripts/
run_kibana6.7.1.sh
[root@localhost kibana6.7.1]# docker load -i image/kibana_6.7.1.tar
[root@localhost kibana6.7.1]# docker images |grep kibana
kibana              6.7.1               860831fbf9e7        11 months ago       677MB
[root@localhost kibana6.7.1]# cat config/kibana.yml
#
# ** THIS IS AN AUTO-GENERATED FILE **
#
# Default Kibana configuration for docker target
server.name: kibana
server.host: "0"
elasticsearch.hosts: [ "http://192.168.171.128:9200" ]
xpack.monitoring.ui.container.elasticsearch.enabled: true
[root@localhost kibana6.7.1]# cat scripts/run_kibana6.7.1.sh
#!/bin/bash
docker run -d --name kibana6.7.1 --net=host --restart=always -v /data/kibana6.7.1/config/kibana.yml:/usr/share/kibana/config/kibana.yml kibana:6.7.1
[root@localhost kibana6.7.1]# sh scripts/run_kibana6.7.1.sh   #run; kibana reads from es and displays the data
[root@localhost kibana6.7.1]# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
bf16aaeaf4d9        kibana:6.7.1        "/usr/local/bin/kiba…"   16 seconds ago      Up 15 seconds                           kibana6.7.1
[root@localhost kibana6.7.1]# netstat -anput |grep 5601    #kibana port
tcp        0      0 0.0.0.0:5601            0.0.0.0:*               LISTEN      2418/node
Access Kibana in a browser: http://192.168.171.132:5601
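Before opening the UI, the service can also be probed with Kibana's status endpoint (optional check):
[root@localhost ~]# curl -s http://192.168.171.132:5601/api/status | head -c 300   #the overall state should report "green" once Kibana has connected to ES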
Create the index patterns in Kibana one by one (keep them matching the index names in ES so they are easy to find) - this is how the data in ES is queried and displayed
(1) First create the index pattern logstash-catalina-*: click Management, as follows:
Enter the index pattern name logstash-catalina-*, click Next step, as follows:
Select the time field @timestamp and click Create index pattern, as follows:
(2) Create the index pattern logstash-es-log-*
Click Next step, as follows:
Select the time field and click Create index pattern, as follows:
(3) Create the index pattern logstash-system-*
Click Next step, as follows:
Select the time field and click Create index pattern, as follows:
(4) Create the index pattern logstash-tomcat-access-*
Click Next step, as follows:
Click Create index pattern, as follows:
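If the UI steps become tedious, index patterns can usually also be created through Kibana's saved objects HTTP API; treating this as an assumption for this Kibana build (endpoint and behaviour may differ between versions), a sketch would look like:
# hypothetical scripted alternative, not the method used above
[root@localhost ~]# curl -X POST "http://192.168.171.132:5601/api/saved_objects/index-pattern/logstash-catalina" -H "kbn-xsrf: true" -H "Content-Type: application/json" -d '{"attributes":{"title":"logstash-catalina-*","timeFieldName":"@timestamp"}}'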
To view the logs, click Discover, as follows:   #Note: the earlier test produced few access-log entries, so more log lines were written afterwards to make testing easier.
Pick a few entries and click the arrow to expand them, as follows:
If you are interested in operations (O&M) courses, you can search for my account 運維實戰課程 on Bilibili, AcFun, or CSDN and follow me for more free hands-on O&M video tutorials.