Installing ELK 6.7.1 with Docker to Collect Java Logs


If you are interested in ops courses, you can search for my account 運維實戰課程 on Bilibili, AcFun, or CSDN and follow me to learn more free hands-on ops video tutorials.

0. Planning

192.168.171.130    tomcat logs + filebeat

192.168.171.131    tomcat logs + filebeat

192.168.171.128    redis

192.168.171.129    logstash

192.168.171.128    es1

192.168.171.129    es2

192.168.171.132    kibana

1. Install the ES 6.7.1 cluster and the head plugin with Docker (on 192.168.171.128 - es1 and 192.168.171.129 - es2)

Install ES 6.7.1 and the es6.7.1-head plugin on 192.168.171.128:

1) Install Docker 19.03.2:

[root@localhost ~]# docker info

.......

Server Version: 19.03.2

[root@localhost ~]# sysctl -w vm.max_map_count=262144   #the default number of memory map areas allowed for the elasticsearch user is too small; at least 262144 is required

[root@localhost ~]# sysctl -a |grep vm.max_map_count    #verify

vm.max_map_count = 262144

[root@localhost ~]# vim /etc/sysctl.conf

vm.max_map_count=262144
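The entry in /etc/sysctl.conf only takes effect on the next boot; a quick way to reload it immediately (a standard sysctl usage, not part of the original transcript) is:

sysctl -p   #reload /etc/sysctl.conf so the setting also applies to the running system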

2) Install ES 6.7.1:

Upload the ES package to the /data directory:

[root@localhost ~]# cd /data/

[root@localhost data]# ls es-6.7.1.tar.gz

es-6.7.1.tar.gz

[root@localhost data]# tar -zxf es-6.7.1.tar.gz

[root@localhost data]# cd es-6.7.1

[root@localhost es-6.7.1]# ls

config  image  scripts

[root@localhost es-6.7.1]# ls config/

es.yml

[root@localhost es-6.7.1]# ls image/

elasticsearch_6.7.1.tar

[root@localhost es-6.7.1]# ls scripts/

run_es_6.7.1.sh

[root@localhost es-6.7.1]# docker load -i image/elasticsearch_6.7.1.tar

[root@localhost es-6.7.1]# docker images |grep elasticsearch

elasticsearch        6.7.1               e2667f5db289        11 months ago       812MB

[root@localhost es-6.7.1]# cat config/es.yml

cluster.name: elasticsearch-cluster

node.name: es-node1

network.host: 0.0.0.0

network.publish_host: 192.168.171.128

http.port: 9200

transport.tcp.port: 9300

http.cors.enabled: true

http.cors.allow-origin: "*"

node.master: true

node.data: true

discovery.zen.ping.unicast.hosts: ["192.168.171.128:9300","192.168.171.129:9300"]

discovery.zen.minimum_master_nodes: 1

#cluster.name: the cluster name; it can be customized, but it must be identical on both ES nodes, because nodes use this name to decide whether they belong to the same cluster

#node.name: the name of the local node; it can be customized and does not need to resolve via hosts or match the hostname

#the two settings below are added on top of the defaults to allow cross-origin access

#http.cors.enabled: true

#http.cors.allow-origin: '*'

##note: the container uses two ports: 9200 for communication between ES and external clients, 9300 for communication between ES nodes

[root@localhost es-6.7.1]# cat scripts/run_es_6.7.1.sh

#!/bin/bash

docker run -e ES_JAVA_OPTS="-Xms1024m -Xmx1024m" -d --net=host --restart=always -v /data/es-6.7.1/config/es.yml:/usr/share/elasticsearch/config/elasticsearch.yml -v /data/es6.7.1_data:/usr/share/elasticsearch/data -v /data/es6.7.1_logs:/usr/share/elasticsearch/logs --name es6.7.1 elasticsearch:6.7.1

#note: the container uses two ports: 9200 for communication between ES and external clients, 9300 for communication between ES nodes

[root@localhost es-6.7.1]# mkdir /data/es6.7.1_data

[root@localhost es-6.7.1]# mkdir /data/es6.7.1_logs

[root@localhost es-6.7.1]# chmod -R 777 /data/es6.7.1_data/     #the es user in the container must be able to write here, otherwise the bind mount will not work

[root@localhost es-6.7.1]# chmod -R 777 /data/es6.7.1_logs/     #the es user in the container must be able to write here, otherwise the bind mount will not work

[root@localhost es-6.7.1]# sh scripts/run_es_6.7.1.sh

[root@localhost es-6.7.1]# docker ps

CONTAINER ID        IMAGE                 COMMAND                  CREATED             STATUS              PORTS               NAMES

988abe7eedac        elasticsearch:6.7.1   "/usr/local/bin/dock…"   23 seconds ago      Up 19 seconds                           es6.7.1

[root@localhost es-6.7.1]# netstat -anput |grep 9200

tcp6       0      0 :::9200                 :::*                    LISTEN      16196/java

[root@localhost es-6.7.1]# netstat -anput |grep 9300

tcp6       0      0 :::9300                 :::*                    LISTEN      16196/java

[root@localhost es-6.7.1]# cd

Access the ES service in a browser: http://192.168.171.128:9200/
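If the host has no browser, the same check can be done with curl; these are standard Elasticsearch APIs added here only for verification, and the node name in the response should match node.name from es.yml:

curl http://192.168.171.128:9200/              #basic node info: name, cluster_name, version

curl http://192.168.171.128:9200/_cat/nodes?v  #list the nodes currently in the cluster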

3) Install the es6.7.1-head plugin:

Upload the es-head plugin package to the /data directory:

[root@localhost ~]# cd /data/

[root@localhost data]# ls es-6.7.1-head.tar.gz

es-6.7.1-head.tar.gz

[root@localhost data]# tar -zxf es-6.7.1-head.tar.gz

[root@localhost data]# cd es-6.7.1-head

[root@localhost es-6.7.1-head]# ls

conf  image  scripts

[root@localhost es-6.7.1-head]# ls conf/

app.js ?Gruntfile.js

[root@localhost es-6.7.1-head]# ls image/

elasticsearch-head_6.7.1.tar

[root@localhost es-6.7.1-head]# ls scripts/

run_es-head.sh

[root@localhost es-6.7.1-head]# docker load -i image/elasticsearch-head_6.7.1.tar

[root@localhost es-6.7.1-head]# docker images

REPOSITORY           TAG                 IMAGE ID            CREATED             SIZE

elasticsearch        6.7.1               e2667f5db289        11 months ago       812MB

elasticsearch-head   6.7.1               b19a5c98e43b        3 years ago         824MB

[root@localhost es-6.7.1-head]# vim conf/app.js

.....

this.base_uri = this.config.base_uri || this.prefs.get("app-base_uri") || "http://192.168.171.128:9200"; #change this to the local host's IP

....

[root@localhost es-6.7.1-head]# vim conf/Gruntfile.js

....

                connect: {

                        server: {

                                options: {

                                        hostname: '*',    #add this line

                                        port: 9100,

                                        base: '.',

                                        keepalive: true

                                }

                        }

....

[root@localhost es-6.7.1-head]# cat scripts/run_es-head.sh

#!/bin/bash

docker run -d --name es-head-6.7.1 --net=host --restart=always -v /data/es-6.7.1-head/conf/Gruntfile.js:/usr/src/app/Gruntfile.js -v /data/es-6.7.1-head/conf/app.js:/usr/src/app/_site/app.js elasticsearch-head:6.7.1

#the container port is 9100, the es-head management port

[root@localhost es-6.7.1-head]# sh scripts/run_es-head.sh

[root@localhost es-6.7.1-head]# docker ps

CONTAINER ID        IMAGE                      COMMAND                  CREATED             STATUS              PORTS               NAMES

c46189c3338b        elasticsearch-head:6.7.1   "/bin/sh -c 'grunt s…"   42 seconds ago      Up 37 seconds                           es-head-6.7.1

988abe7eedac        elasticsearch:6.7.1        "/usr/local/bin/dock…"   9 minutes ago       Up 9 minutes                            es6.7.1

[root@localhost es-6.7.1-head]# netstat -anput |grep 9100

tcp6       0      0 :::9100                 :::*                    LISTEN      16840/grunt

Access the es-head plugin in a browser: http://192.168.171.128:9100/

Install ES 6.7.1 and the es6.7.1-head plugin on 192.168.171.129:

1) Install Docker 19.03.2:

[root@localhost ~]# docker info

Client:

 Debug Mode: false

Server:

 Containers: 2

  Running: 2

  Paused: 0

  Stopped: 0

 Images: 2

 Server Version: 19.03.2

[root@localhost ~]# sysctl -w vm.max_map_count=262144   #the default number of memory map areas allowed for the elasticsearch user is too small; at least 262144 is required

[root@localhost ~]# sysctl -a |grep vm.max_map_count    #verify

vm.max_map_count = 262144

[root@localhost ~]# vim /etc/sysctl.conf

vm.max_map_count=262144

2) Install ES 6.7.1:

Upload the ES package to the /data directory:

[root@localhost ~]# cd /data/

[root@localhost data]# ls es-6.7.1.tar.gz

es-6.7.1.tar.gz

[root@localhost data]# tar -zxf es-6.7.1.tar.gz

[root@localhost data]# cd es-6.7.1

[root@localhost es-6.7.1]# ls

config  image  scripts

[root@localhost es-6.7.1]# ls config/

es.yml

[root@localhost es-6.7.1]# ls image/

elasticsearch_6.7.1.tar

[root@localhost es-6.7.1]# ls scripts/

run_es_6.7.1.sh

[root@localhost es-6.7.1]# docker load -i image/elasticsearch_6.7.1.tar

[root@localhost es-6.7.1]# docker images |grep elasticsearch

elasticsearch        6.7.1               e2667f5db289        11 months ago       812MB

[root@localhost es-6.7.1]# vim config/es.yml

cluster.name: elasticsearch-cluster

node.name: es-node2

network.host: 0.0.0.0

network.publish_host: 192.168.171.129

http.port: 9200

transport.tcp.port: 9300

http.cors.enabled: true

http.cors.allow-origin: "*"

node.master: true

node.data: true

discovery.zen.ping.unicast.hosts: ["192.168.171.128:9300","192.168.171.129:9300"]

discovery.zen.minimum_master_nodes: 1

#cluster.name: the cluster name; it can be customized, but it must be identical on both ES nodes, because nodes use this name to decide whether they belong to the same cluster

#node.name: the name of the local node; it can be customized and does not need to resolve via hosts or match the hostname

#the two settings below are added on top of the defaults to allow cross-origin access

#http.cors.enabled: true

#http.cors.allow-origin: '*'

##note: the container uses two ports: 9200 for communication between ES and external clients, 9300 for communication between ES nodes

[root@localhost es-6.7.1]# cat scripts/run_es_6.7.1.sh

#!/bin/bash

docker run -e ES_JAVA_OPTS="-Xms1024m -Xmx1024m" -d --net=host --restart=always -v /data/es-6.7.1/config/es.yml:/usr/share/elasticsearch/config/elasticsearch.yml -v /data/es6.7.1_data:/usr/share/elasticsearch/data -v /data/es6.7.1_logs:/usr/share/elasticsearch/logs --name es6.7.1 elasticsearch:6.7.1

#note: the container uses two ports: 9200 for communication between ES and external clients, 9300 for communication between ES nodes

[root@localhost es-6.7.1]# mkdir /data/es6.7.1_data

[root@localhost es-6.7.1]# mkdir /data/es6.7.1_logs

[root@localhost es-6.7.1]# chmod -R 777 /data/es6.7.1_data/     #the es user in the container must be able to write here, otherwise the bind mount will not work

[root@localhost es-6.7.1]# chmod -R 777 /data/es6.7.1_logs/     #the es user in the container must be able to write here, otherwise the bind mount will not work

[root@localhost es-6.7.1]# sh scripts/run_es_6.7.1.sh

[root@localhost es-6.7.1]# docker ps

CONTAINER ID        IMAGE                 COMMAND                  CREATED             STATUS              PORTS               NAMES

a3b0a0187db8        elasticsearch:6.7.1   "/usr/local/bin/dock…"   9 seconds ago       Up 7 seconds                            es6.7.1

[root@localhost es-6.7.1]# netstat -anput |grep 9200

tcp6       0      0 :::9200                 :::*                    LISTEN      14171/java

[root@localhost es-6.7.1]# netstat -anput |grep 9300

tcp6       0      0 :::9300                 :::*                    LISTEN      14171/java

[root@localhost es-6.7.1]# cd

Access the ES service in a browser: http://192.168.171.129:9200/

3) Install the es6.7.1-head plugin:

Upload the es-head plugin package to the /data directory:

[root@localhost ~]# cd /data/

[root@localhost data]# ls es-6.7.1-head.tar.gz

es-6.7.1-head.tar.gz

[root@localhost data]# tar -zxf es-6.7.1-head.tar.gz

[root@localhost data]# cd es-6.7.1-head

[root@localhost es-6.7.1-head]# ls

conf  image  scripts

[root@localhost es-6.7.1-head]# ls conf/

app.js ?Gruntfile.js

[root@localhost es-6.7.1-head]# ls image/

elasticsearch-head_6.7.1.tar

[root@localhost es-6.7.1-head]# ls scripts/

run_es-head.sh

[root@localhost es-6.7.1-head]# docker load -i image/elasticsearch-head_6.7.1.tar

[root@localhost es-6.7.1-head]# docker images

REPOSITORY           TAG                 IMAGE ID            CREATED             SIZE

elasticsearch        6.7.1               e2667f5db289        11 months ago       812MB

elasticsearch-head   6.7.1               b19a5c98e43b        3 years ago         824MB

[root@localhost es-6.7.1-head]# vim conf/app.js

.....

this.base_uri = this.config.base_uri || this.prefs.get("app-base_uri") || "http://192.168.171.129:9200"; #change this to the local host's IP

....

[root@localhost es-6.7.1-head]# vim conf/Gruntfile.js

....

                connect: {

                        server: {

                                options: {

                                        hostname: '*',    #add this line

                                        port: 9100,

                                        base: '.',

                                        keepalive: true

                                }

                        }

....

[root@localhost es-6.7.1-head]# cat scripts/run_es-head.sh

#!/bin/bash

docker run -d --name es-head-6.7.1 --net=host --restart=always -v /data/es-6.7.1-head/conf/Gruntfile.js:/usr/src/app/Gruntfile.js -v /data/es-6.7.1-head/conf/app.js:/usr/src/app/_site/app.js elasticsearch-head:6.7.1

#the container port is 9100, the es-head management port

[root@localhost es-6.7.1-head]# sh scripts/run_es-head.sh

[root@localhost es-6.7.1-head]# docker ps

CONTAINER ID        IMAGE                      COMMAND                  CREATED             STATUS              PORTS               NAMES

f4f5c967754b        elasticsearch-head:6.7.1   "/bin/sh -c 'grunt s…"   12 seconds ago      Up 7 seconds                            es-head-6.7.1

a3b0a0187db8        elasticsearch:6.7.1        "/usr/local/bin/dock…"   7 minutes ago       Up 7 minutes                            es6.7.1

[root@localhost es-6.7.1-head]# netstat -anput |grep 9100

tcp6       0      0 :::9100                 :::*                    LISTEN      14838/grunt

Access the es-head plugin in a browser: http://192.168.171.129:9100/

The head plugin on 192.168.171.128 shows the same cluster status, because both head instances manage the same cluster, as follows:

http://192.168.171.128:9100/
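To confirm that the two nodes really formed one cluster, a cluster-health query against either node should report number_of_nodes as 2; this is a standard ES API call added here only as an optional check:

curl http://192.168.171.128:9200/_cluster/health?pretty   #expect "number_of_nodes" : 2 and a green or yellow status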

2. Install Redis 4.0.10 with Docker (on 192.168.171.128)

Upload the Redis 4.0.10 image:

[root@localhost ~]# ls redis_4.0.10.tar

redis_4.0.10.tar

[root@localhost ~]# docker load -i redis_4.0.10.tar

[root@localhost ~]# docker images |grep redis

gmprd.baiwang-inner.com/redis   4.0.10              f713a14c7f9b        13 months ago       425MB

[root@localhost ~]# mkdir -p /data/redis/conf         #create the config file directory

[root@localhost ~]# vim /data/redis/conf/redis.conf   #custom config file

protected-mode no

port 6379

bind 0.0.0.0

tcp-backlog 511

timeout 0

tcp-keepalive 300

supervised no

pidfile "/usr/local/redis/redis_6379.pid"

loglevel notice

logfile "/opt/redis/logs/redis.log"

databases 16

save 900 1

save 300 10

save 60 10000

stop-writes-on-bgsave-error yes

rdbcompression yes

rdbchecksum yes

dbfilename "dump.rdb"

dir "/"

slave-serve-stale-data yes

slave-read-only yes

repl-diskless-sync no

repl-diskless-sync-delay 5

repl-disable-tcp-nodelay no

slave-priority 100

requirepass 123456

appendonly yes

dir "/opt/redis/data"

logfile "/opt/redis/logs/redis.log"

appendfilename "appendonly.aof"

appendfsync everysec

no-appendfsync-on-rewrite no

auto-aof-rewrite-percentage 100

auto-aof-rewrite-min-size 64mb

aof-load-truncated yes

lua-time-limit 5000

slowlog-log-slower-than 10000

slowlog-max-len 128

latency-monitor-threshold 0

notify-keyspace-events ""

hash-max-ziplist-entries 512

hash-max-ziplist-value 64

list-max-ziplist-size -2

list-compress-depth 0

set-max-intset-entries 512

zset-max-ziplist-entries 128

zset-max-ziplist-value 64

hll-sparse-max-bytes 3000

activerehashing yes

client-output-buffer-limit normal 0 0 0

client-output-buffer-limit slave 256mb 64mb 60

client-output-buffer-limit pubsub 32mb 8mb 60

hz 10

aof-rewrite-incremental-fsync yes

maxclients 4064

#appendonly yes enables data persistence

#dir "/opt/redis/data"  #the directory inside the container that data is persisted to

#logfile "/opt/redis/logs/redis.log" #the log path inside the container; this must be a file path, a directory path will not work

[root@localhost ~]# docker run -d --net=host --restart=always --name=redis4.0.10 -v /data/redis/conf/redis.conf:/opt/redis/conf/redis.conf -v /data/redis_data:/opt/redis/data -v /data/redis_logs:/opt/redis/logs gmprd.baiwang-inner.com/redis:4.0.10

[root@localhost ~]# docker ps |grep redis

735fb213ee41        gmprd.baiwang-inner.com/redis:4.0.10   "redis-server /opt/r…"   9 seconds ago       Up 8 seconds                            redis4.0.10

[root@localhost ~]# netstat -anput |grep 6379

tcp        0      0 0.0.0.0:6379            0.0.0.0:*               LISTEN      16988/redis-server

[root@localhost ~]# ls /data/redis_data/

appendonly.aof

[root@localhost ~]# ls /data/redis_logs/

redis.log

[root@localhost ~]# docker exec -it redis4.0.10 bash

[root@localhost /]# redis-cli -a 123456

127.0.0.1:6379> set k1 v1

OK

127.0.0.1:6379> keys *

1) "k1"

127.0.0.1:6379> get k1

"v1"

127.0.0.1:6379> quit

[root@localhost /]# exit
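Filebeat on 192.168.171.130/131 will have to reach this Redis over the network, so it is worth checking remote access and the password once from another host (assuming redis-cli is available there; otherwise run the same command inside the container as above):

redis-cli -h 192.168.171.128 -p 6379 -a 123456 ping   #should answer PONG; a NOAUTH or connection error means the password or bind/port settings need another look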

3. Install Tomcat (Tomcat itself is not installed; we only create simulated Tomcat and other Java logs) and Filebeat 6.7.1 with Docker (192.168.171.130 and 192.168.171.131)

On 192.168.171.130:

Create simulated Java logs of several types, ship them into Redis with Filebeat, and then use Logstash in multiline-matching mode to write them into ES:

Note: do not create the log files below in advance. Start Filebeat first so collection is running, and only then write the logs below with vim; otherwise Filebeat will not pick up the already-existing logs.

a) Create a simulated Tomcat log:

[root@localhost ~]# mkdir /data/java-logs

[root@localhost ~]# mkdir /data/java-logs/{tomcat_logs,es_logs,message_logs}

[root@localhost ~]# vim /data/java-logs/tomcat_logs/catalina.out

2020-03-09 13:07:48|ERROR|org.springframework.web.context.ContextLoader:351|Context initialization failed

org.springframework.beans.factory.parsing.BeanDefinitionParsingException: Configuration problem: Unable to locate Spring NamespaceHandler for XML schema namespace [http://www.springframework.org/schema/aop]

Offending resource: URL [file:/usr/local/apache-tomcat-8.0.32/webapps/ROOT/WEB-INF/classes/applicationContext.xml]

at org.springframework.beans.factory.parsing.FailFastProblemReporter.error(FailFastProblemReporter.java:70) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.beans.factory.parsing.ReaderContext.error(ReaderContext.java:85) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.beans.factory.parsing.ReaderContext.error(ReaderContext.java:80) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.beans.factory.xml.BeanDefinitionParserDelegate.error(BeanDefinitionParserDelegate.java:301) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.beans.factory.xml.BeanDefinitionParserDelegate.parseCustomElement(BeanDefinitionParserDelegate.java:1408) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.beans.factory.xml.BeanDefinitionParserDelegate.parseCustomElement(BeanDefinitionParserDelegate.java:1401) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.beans.factory.xml.DefaultBeanDefinitionDocumentReader.parseBeanDefinitions(DefaultBeanDefinitionDocumentReader.java:168) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.beans.factory.xml.DefaultBeanDefinitionDocumentReader.doRegisterBeanDefinitions(DefaultBeanDefinitionDocumentReader.java:138) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.beans.factory.xml.DefaultBeanDefinitionDocumentReader.registerBeanDefinitions(DefaultBeanDefinitionDocumentReader.java:94) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.registerBeanDefinitions(XmlBeanDefinitionReader.java:508) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:392) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:336) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:304) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:181) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:217) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:188) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.web.context.support.XmlWebApplicationContext.loadBeanDefinitions(XmlWebApplicationContext.java:125) ~[spring-web-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.web.context.support.XmlWebApplicationContext.loadBeanDefinitions(XmlWebApplicationContext.java:94) ~[spring-web-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.context.support.AbstractRefreshableApplicationContext.refreshBeanFactory(AbstractRefreshableApplicationContext.java:129) ~[spring-context-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.context.support.AbstractApplicationContext.obtainFreshBeanFactory(AbstractApplicationContext.java:609) ~[spring-context-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:510) ~[spring-context-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.web.context.ContextLoader.configureAndRefreshWebApplicationContext(ContextLoader.java:444) ~[spring-web-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:326) ~[spring-web-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:107) [spring-web-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4812) [catalina.jar:8.0.32]

at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5255) [catalina.jar:8.0.32]

at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:147) [catalina.jar:8.0.32]

at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:725) [catalina.jar:8.0.32]

at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:701) [catalina.jar:8.0.32]

at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:717) [catalina.jar:8.0.32]

at org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1091) [catalina.jar:8.0.32]

at org.apache.catalina.startup.HostConfig$DeployDirectory.run(HostConfig.java:1830) [catalina.jar:8.0.32]

at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_144]

at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_144]

at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_144]

at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_144]

at java.lang.Thread.run(Thread.java:748) [na:1.8.0_144]

13-Oct-2020 13:07:48.990 SEVERE [localhost-startStop-3] org.apache.catalina.core.StandardContext.startInternal One or more listeners failed to start. Full details will be found in the appropriate container log file

13-Oct-2020 13:07:48.991 SEVERE [localhost-startStop-3] org.apache.catalina.core.StandardContext.startInternal Context [] startup failed due to previous errors

2020-03-09 13:07:48|INFO|org.springframework.context.support.AbstractApplicationContext:960|Closing Root WebApplicationContext: startup date [Sun Oct 13 13:07:43 CST 2020]; root of context hierarchy

2020-03-09 13:09:41|INFO|org.springframework.context.support.AbstractApplicationContext:960|Closing Root WebApplicationContext: startup date [Sun Oct 13 13:07:43 CST 2020]; root of context hierarchy error test1

2020-03-09 13:10:41|INFO|org.springframework.context.support.AbstractApplicationContext:960|Closing Root WebApplicationContext: startup date [Sun Oct 13 13:07:43 CST 2020]; root of context hierarchy error test2

2020-03-09 13:11:41|INFO|org.springframework.context.support.AbstractApplicationContext:960|Closing Root WebApplicationContext: startup date [Sun Oct 13 13:07:43 CST 2020]; root of context hierarchy error test3

b) Create a system log (copy part of /var/log/messages):

[root@localhost ~]# vim /data/java-logs/message_logs/messages

Mar 09 14:19:06 localhost systemd: Removed slice system-selinux\x2dpolicy\x2dmigrate\x2dlocal\x2dchanges.slice.

Mar 09 14:19:06 localhost systemd: Stopping system-selinux\x2dpolicy\x2dmigrate\x2dlocal\x2dchanges.slice.

Mar 09 14:19:06 localhost systemd: Stopped target Network is Online.

Mar 09 14:19:06 localhost systemd: Stopping Network is Online.

Mar 09 14:19:06 localhost systemd: Stopping Authorization Manager...

Mar 09 14:20:38 localhost kernel: Initializing cgroup subsys cpuset

Mar 09 14:20:38 localhost kernel: Initializing cgroup subsys cpu

Mar 09 14:20:38 localhost kernel: Initializing cgroup subsys cpuacct

Mar 09 14:20:38 localhost kernel: Linux version 3.10.0-693.el7.x86_64 (builder@kbuilder.dev.centos.org) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-16) (GCC) ) #1 SMP Tue Aug 22 21:09:27 UTC 2017

Mar 09 14:20:38 localhost kernel: Command line: BOOT_IMAGE=/vmlinuz-3.10.0-693.el7.x86_64 root=/dev/mapper/centos-root ro crashkernel=auto rd.lvm.lv=centos/root rd.lvm.lv=centos/swap rhgb quiet LANG=en_US.UTF-8

c) Create an ES log:

[root@localhost ~]# vim /data/java-logs/es_logs/es_log

[2020-03-09T21:44:58,440][ERROR][o.e.b.Bootstrap          ] Exception

java.lang.RuntimeException: can not run elasticsearch as root

        at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:035) ~[elasticsearch-6.2.4.jar:6.2.4]

        at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:172) ~[elasticsearch-6.2.4.jar:6.2.4]

        at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:323) [elasticsearch-6.2.4.jar:6.2.4]

        at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:091) [elasticsearch-6.2.4.jar:6.2.4]

        at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:109) [elasticsearch-6.2.4.jar:6.2.4]

        at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) [elasticsearch-6.2.4.jar:6.2.4]

        at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:094) [elasticsearch-cli-6.2.4.jar:6.2.4]

        at org.elasticsearch.cli.Command.main(Command.java:90) [elasticsearch-cli-6.2.4.jar:6.2.4]

        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92) [elasticsearch-6.2.4.jar:6.2.4]

        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:85) [elasticsearch-6.2.4.jar:6.2.4]

[2020-03-09T21:44:58,549][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [] uncaught exception in thread [main]

org.elasticsearch.bootstrap.StartupException: java.lang.RuntimeException: can not run elasticsearch as root

        at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:095) ~[elasticsearch-6.2.4.jar:6.2.4]

        at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:109) ~[elasticsearch-6.2.4.jar:6.2.4]

        at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) ~[elasticsearch-6.2.4.jar:6.2.4]

        at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:094) ~[elasticsearch-cli-6.2.4.jar:6.2.4]

        at org.elasticsearch.cli.Command.main(Command.java:90) ~[elasticsearch-cli-6.2.4.jar:6.2.4]

        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92) ~[elasticsearch-6.2.4.jar:6.2.4]

        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:85) ~[elasticsearch-6.2.4.jar:6.2.4]

Caused by: java.lang.RuntimeException: can not run elasticsearch as root

        at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:035) ~[elasticsearch-6.2.4.jar:6.2.4]

        at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:172) ~[elasticsearch-6.2.4.jar:6.2.4]

        at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:323) ~[elasticsearch-6.2.4.jar:6.2.4]

        at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:091) ~[elasticsearch-6.2.4.jar:6.2.4]

        ... 6 more

[2020-03-09T21:46:32,174][INFO ][o.e.n.Node               ] [] initializing ...

[2020-03-09T21:46:32,467][INFO ][o.e.e.NodeEnvironment    ] [koccs5f] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [48gb], net total_space [49.9gb], types [rootfs]

[2020-03-09T21:46:32,468][INFO ][o.e.e.NodeEnvironment    ] [koccs5f] heap size [0315.6mb], compressed ordinary object pointers [true]

d) Create a Tomcat access log:

[root@localhost ~]# vim /data/java-logs/tomcat_logs/localhost_access_log.2020-03-09.txt

192.168.171.1 - - [09/Mar/2020:09:07:59 +0800] "GET /favicon.ico HTTP/1.1" 404 -

Caused by: java.lang.RuntimeException: can not run elasticsearch as root

        at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:105) ~[elasticsearch-6.2.4.jar:6.2.4]

        at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:172) ~[elasticsearch-6.2.4.jar:6.2.4]

192.168.171.2 - - [09/Mar/2020:09:07:59 +0800] "GET / HTTP/1.1" 404 -

192.168.171.1 - - [09/Mar/2020:15:09:12 +0800] "GET / HTTP/1.1" 200 11250

Caused by: java.lang.RuntimeException: can not run elasticsearch as root

        at org.elasticsearch.bootstrap.Bootstrap.initializeNatives

192.168.171.2 - - [09/Mar/2020:15:09:12 +0800] "GET /tomcat.png HTTP/1.1" 200 5103

192.168.171.3 - - [09/Mar/2020:15:09:12 +0800] "GET /tomcat.css HTTP/1.1" 200 5576

192.168.171.5 - - [09/Mar/2020:15:09:09 +0800] "GET /bg-nav.png HTTP/1.1" 200 1401

192.168.171.1 - - [09/Mar/2020:15:09:09 +0800] "GET /bg-upper.png HTTP/1.1" 200 3103

Install Filebeat 6.7.1:

[root@localhost ~]# cd /data/

[root@localhost data]# ls filebeat6.7.1.tar.gz

filebeat6.7.1.tar.gz

[root@localhost data]# tar -zxf filebeat6.7.1.tar.gz

[root@localhost data]# cd filebeat6.7.1

[root@localhost filebeat6.7.1]# ls

conf  image  scripts

[root@localhost filebeat6.7.1]# ls conf/

filebeat.yml ?filebeat.yml.bak

[root@localhost filebeat6.7.1]# ls image/

filebeat_6.7.1.tar

[root@localhost filebeat6.7.1]# ls scripts/

run_filebeat6.7.1.sh

[root@localhost filebeat6.7.1]# docker load -i image/filebeat_6.7.1.tar

[root@localhost filebeat6.7.1]# docker images |grep filebeat

docker.elastic.co/beats/filebeat   6.7.1               04fcff75b160        11 months ago       279MB

[root@localhost filebeat6.7.1]# cat conf/filebeat.yml

filebeat.inputs:

#the inputs below were added: ——————————————

#system log:

- type: log

  enabled: true

  paths:

    - /usr/share/filebeat/logs/message_logs/messages

  fields:

    log_source: system-171.130

#tomcat catalina log:

- type: log

  enabled: true

  paths:

    - /usr/share/filebeat/logs/tomcat_logs/catalina.out

  fields:

    log_source: catalina-log-171.130

  multiline.pattern: '^[0-9]{4}-(((0[13578]|(10|12))-(0[1-9]|[1-2][0-9]|3[0-1]))|(02-(0[1-9]|[1-2][0-9]))|((0[469]|11)-(0[1-9]|[1-2][0-9]|30)))'

  multiline.negate: true

  multiline.match: after

# the regex above matches lines that start with a date, e.g. 2004-02-29

# log_source: xxx means: only one key is written to redis, so logstash cannot tell the different log types apart on its own; this field marks where each log came from, so logstash can write each type of log to es under its own index name

#es log:

- type: log

  enabled: true

  paths:

    - /usr/share/filebeat/logs/es_logs/es_log

  fields:

    log_source: es-log-171.130

  multiline.pattern: '^\['

  multiline.negate: true

  multiline.match: after

#the regex above matches lines starting with [, where \ is the escape character

#tomcat access log:

- type: log

  enabled: true

  paths:

    - /usr/share/filebeat/logs/tomcat_logs/localhost_access_log.2020-03-09.txt

  fields:

    log_source: tomcat-access-log-171.130

  multiline.pattern: '^((2(5[0-5]|[0-4]\d))|[0-1]?\d{1,2})(\.((2(5[0-5]|[0-4]\d))|[0-1]?\d{1,2})){3}'

  multiline.negate: true

  multiline.match: after

#end of the added inputs: —————————————————————

filebeat.config.modules:

  path: ${path.config}/modules.d/*.yml

  reload.enabled: false

setup.template.settings:

  index.number_of_shards: 3

setup.kibana:

#the following would write directly into es:

#output.elasticsearch:

#  hosts: ["192.168.171.128:9200"]

#the following writes into redis:

#filebeat-common below is a custom key; it must match the key logstash reads from redis; multiple nodes can all write with this key, but each must define its own log_source so that logstash can separate them into different es indexes when reading

output.redis:

  hosts: ["192.168.171.128"]

  port: 6379

  password: "123456"

  key: "filebeat-common"

  db: 0

  datatype: list

processors:

  - add_host_metadata: ~

  - add_cloud_metadata: ~

#note: by default the log paths on the host and inside the container are different, so if host paths were configured here, the container would not find them

##the fix: configure the container-side log paths here, and bind-mount the host log directory onto the container log directory

#/usr/share/filebeat/logs/*.log is the log path inside the container
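Before starting the container it can help to validate the YAML with Filebeat's built-in test subcommands; the sketch below is optional and assumes the image's entrypoint passes the arguments through to filebeat, reusing the same bind mounts as the run script shown next:

docker run --rm --net=host --user=root -v /data/filebeat6.7.1/conf/filebeat.yml:/usr/share/filebeat/filebeat.yml -v /data/java-logs:/usr/share/filebeat/logs docker.elastic.co/beats/filebeat:6.7.1 test config   #syntax check; use "test output" instead to probe the redis connection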

[root@localhost filebeat6.7.1]# cat scripts/run_filebeat6.7.1.sh

#!/bin/bash

docker run -d --name filebeat6.7.1 --net=host --restart=always --user=root -v /data/filebeat6.7.1/conf/filebeat.yml:/usr/share/filebeat/filebeat.yml -v /data/java-logs:/usr/share/filebeat/logs docker.elastic.co/beats/filebeat:6.7.1

#注意:因為默認情況下,宿主機日志路徑和容器內日志路徑是不一致的,所以配置文件里配置的路徑如果是宿主機日志路徑,容器里則找不到

#所以采取措施是:配置文件里配置成容器里的日志路徑,再把宿主機的日志目錄和容器日志目錄做一個映射就可以了

[root@localhost filebeat6.7.1]# sh scripts/run_filebeat6.7.1.sh   #once running, it starts collecting logs into redis

[root@localhost filebeat6.7.1]# docker ps |grep filebeat

1f2bbd450e7e        docker.elastic.co/beats/filebeat:6.7.1   "/usr/local/bin/dock…"   8 seconds ago       Up 7 seconds                            filebeat6.7.1

[root@localhost filebeat6.7.1]# cd
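A quick way to confirm the collector is actually running and shipping (not part of the original steps) is to tail the container log and watch for harvester and connection messages:

docker logs --tail 20 filebeat6.7.1   #look for "Harvester started" lines and the absence of redis connection errors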

On 192.168.171.131:

Create simulated Java logs of several types, ship them into Redis with Filebeat, and then use Logstash in multiline-matching mode to write them into ES:

Note: do not create the log files below in advance. Start Filebeat first so collection is running, and only then write the logs below with vim; otherwise Filebeat will not pick up the already-existing logs.

a) Create a simulated Tomcat log:

[root@localhost ~]# mkdir /data/java-logs

[root@localhost ~]# mkdir /data/java-logs/{tomcat_logs,es_logs,message_logs}

[root@localhost ~]# vim /data/java-logs/tomcat_logs/catalina.out

2050-05-09 13:07:48|ERROR|org.springframework.web.context.ContextLoader:351|Context initialization failed

org.springframework.beans.factory.parsing.BeanDefinitionParsingException: Configuration problem: Unable to locate Spring NamespaceHandler for XML schema namespace [http://www.springframework.org/schema/aop]

Offending resource: URL [file:/usr/local/apache-tomcat-8.0.32/webapps/ROOT/WEB-INF/classes/applicationContext.xml]

at org.springframework.beans.factory.parsing.FailFastProblemReporter.error(FailFastProblemReporter.java:70) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.beans.factory.parsing.ReaderContext.error(ReaderContext.java:85) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.beans.factory.parsing.ReaderContext.error(ReaderContext.java:80) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.beans.factory.xml.BeanDefinitionParserDelegate.error(BeanDefinitionParserDelegate.java:301) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.beans.factory.xml.BeanDefinitionParserDelegate.parseCustomElement(BeanDefinitionParserDelegate.java:1408) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.beans.factory.xml.BeanDefinitionParserDelegate.parseCustomElement(BeanDefinitionParserDelegate.java:1401) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.beans.factory.xml.DefaultBeanDefinitionDocumentReader.parseBeanDefinitions(DefaultBeanDefinitionDocumentReader.java:168) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.beans.factory.xml.DefaultBeanDefinitionDocumentReader.doRegisterBeanDefinitions(DefaultBeanDefinitionDocumentReader.java:138) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.beans.factory.xml.DefaultBeanDefinitionDocumentReader.registerBeanDefinitions(DefaultBeanDefinitionDocumentReader.java:94) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.registerBeanDefinitions(XmlBeanDefinitionReader.java:508) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:392) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:336) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:304) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:181) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:217) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:188) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.web.context.support.XmlWebApplicationContext.loadBeanDefinitions(XmlWebApplicationContext.java:125) ~[spring-web-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.web.context.support.XmlWebApplicationContext.loadBeanDefinitions(XmlWebApplicationContext.java:94) ~[spring-web-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.context.support.AbstractRefreshableApplicationContext.refreshBeanFactory(AbstractRefreshableApplicationContext.java:129) ~[spring-context-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.context.support.AbstractApplicationContext.obtainFreshBeanFactory(AbstractApplicationContext.java:609) ~[spring-context-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:510) ~[spring-context-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.web.context.ContextLoader.configureAndRefreshWebApplicationContext(ContextLoader.java:444) ~[spring-web-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:326) ~[spring-web-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:107) [spring-web-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4812) [catalina.jar:8.0.32]

at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5255) [catalina.jar:8.0.32]

at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:147) [catalina.jar:8.0.32]

at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:725) [catalina.jar:8.0.32]

at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:701) [catalina.jar:8.0.32]

at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:717) [catalina.jar:8.0.32]

at org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1091) [catalina.jar:8.0.32]

at org.apache.catalina.startup.HostConfig$DeployDirectory.run(HostConfig.java:1830) [catalina.jar:8.0.32]

at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_144]

at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_144]

at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_144]

at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_144]

at java.lang.Thread.run(Thread.java:748) [na:1.8.0_144]

13-Oct-2050 13:07:48.990 SEVERE [localhost-startStop-3] org.apache.catalina.core.StandardContext.startInternal One or more listeners failed to start. Full details will be found in the appropriate container log file

13-Oct-2050 13:07:48.991 SEVERE [localhost-startStop-3] org.apache.catalina.core.StandardContext.startInternal Context [] startup failed due to previous errors

2050-05-09 13:07:48|INFO|org.springframework.context.support.AbstractApplicationContext:960|Closing Root WebApplicationContext: startup date [Sun Oct 13 13:07:43 CST 2050]; root of context hierarchy

2050-05-09 13:09:41|INFO|org.springframework.context.support.AbstractApplicationContext:960|Closing Root WebApplicationContext: startup date [Sun Oct 13 13:07:43 CST 2050]; root of context hierarchy error test1

2050-05-09 13:10:41|INFO|org.springframework.context.support.AbstractApplicationContext:960|Closing Root WebApplicationContext: startup date [Sun Oct 13 13:07:43 CST 2050]; root of context hierarchy error test2

2050-05-09 13:11:41|INFO|org.springframework.context.support.AbstractApplicationContext:960|Closing Root WebApplicationContext: startup date [Sun Oct 13 13:07:43 CST 2050]; root of context hierarchy error test3

b) Create a system log (copy part of /var/log/messages):

[root@localhost ~]# vim /data/java-logs/message_logs/messages

Mar 50 50:50:06 localhost systemd: Removed slice system-selinux\x2dpolicy\x2dmigrate\x2dlocal\x2dchanges.slice.

Mar 50 50:50:06 localhost systemd: Stopping system-selinux\x2dpolicy\x2dmigrate\x2dlocal\x2dchanges.slice.

Mar 50 50:50:06 localhost systemd: Stopped target Network is Online.

Mar 50 50:50:06 localhost systemd: Stopping Network is Online.

Mar 50 50:50:06 localhost systemd: Stopping Authorization Manager...

Mar 50 50:20:38 localhost kernel: Initializing cgroup subsys cpuset

Mar 50 50:20:38 localhost kernel: Initializing cgroup subsys cpu

Mar 50 50:20:38 localhost kernel: Initializing cgroup subsys cpuacct

Mar 50 50:20:38 localhost kernel: Linux version 3.10.0-693.el7.x86_64 (builder@kbuilder.dev.centos.org) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-16) (GCC) ) #1 SMP Tue Aug 22 21:50:27 UTC 2050

Mar 50 50:20:38 localhost kernel: Command line: BOOT_IMAGE=/vmlinuz-3.10.0-693.el7.x86_64 root=/dev/mapper/centos-root ro crashkernel=auto rd.lvm.lv=centos/root rd.lvm.lv=centos/swap rhgb quiet LANG=en_US.UTF-8

c) Create an ES log:

[root@localhost ~]# vim /data/java-logs/es_logs/es_log

[2050-50-09T21:44:58,440][ERROR][o.e.b.Bootstrap          ] Exception

java.lang.RuntimeException: can not run elasticsearch as root

        at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:505) ~[elasticsearch-6.2.4.jar:6.2.4]

        at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:172) ~[elasticsearch-6.2.4.jar:6.2.4]

        at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:323) [elasticsearch-6.2.4.jar:6.2.4]

        at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:091) [elasticsearch-6.2.4.jar:6.2.4]

        at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:109) [elasticsearch-6.2.4.jar:6.2.4]

        at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) [elasticsearch-6.2.4.jar:6.2.4]

        at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:094) [elasticsearch-cli-6.2.4.jar:6.2.4]

        at org.elasticsearch.cli.Command.main(Command.java:90) [elasticsearch-cli-6.2.4.jar:6.2.4]

        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92) [elasticsearch-6.2.4.jar:6.2.4]

        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:85) [elasticsearch-6.2.4.jar:6.2.4]

[2050-50-09T21:44:58,549][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [] uncaught exception in thread [main]

org.elasticsearch.bootstrap.StartupException: java.lang.RuntimeException: can not run elasticsearch as root

        at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:095) ~[elasticsearch-6.2.4.jar:6.2.4]

        at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:109) ~[elasticsearch-6.2.4.jar:6.2.4]

        at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) ~[elasticsearch-6.2.4.jar:6.2.4]

        at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:094) ~[elasticsearch-cli-6.2.4.jar:6.2.4]

        at org.elasticsearch.cli.Command.main(Command.java:90) ~[elasticsearch-cli-6.2.4.jar:6.2.4]

        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92) ~[elasticsearch-6.2.4.jar:6.2.4]

        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:85) ~[elasticsearch-6.2.4.jar:6.2.4]

Caused by: java.lang.RuntimeException: can not run elasticsearch as root

        at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:505) ~[elasticsearch-6.2.4.jar:6.2.4]

        at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:172) ~[elasticsearch-6.2.4.jar:6.2.4]

        at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:323) ~[elasticsearch-6.2.4.jar:6.2.4]

        at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:091) ~[elasticsearch-6.2.4.jar:6.2.4]

        ... 6 more

[2050-50-09T21:46:32,174][INFO ][o.e.n.Node               ] [] initializing ...

[2050-50-09T21:46:32,467][INFO ][o.e.e.NodeEnvironment    ] [koccs5f] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [48gb], net total_space [49.9gb], types [rootfs]

[2050-50-09T21:46:32,468][INFO ][o.e.e.NodeEnvironment    ] [koccs5f] heap size [5015.6mb], compressed ordinary object pointers [true]

d) Create a Tomcat access log:

[root@localhost ~]# vim /data/java-logs/tomcat_logs/localhost_access_log.2050-50-09.txt

192.168.150.1 - - [09/Mar/2050:09:07:59 +0800] "GET /favicon.ico HTTP/1.1" 404 -

Caused by: java.lang.RuntimeException: can not run elasticsearch as root

        at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:105) ~[elasticsearch-6.2.4.jar:6.2.4]

        at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:172) ~[elasticsearch-6.2.4.jar:6.2.4]

192.168.150.2 - - [09/Mar/2050:09:07:59 +0800] "GET / HTTP/1.1" 404 -

192.168.150.1 - - [09/Mar/2050:15:09:12 +0800] "GET / HTTP/1.1" 200 11250

Caused by: java.lang.RuntimeException: can not run elasticsearch as root

        at org.elasticsearch.bootstrap.Bootstrap.initializeNatives

192.168.150.2 - - [09/Mar/2050:15:09:12 +0800] "GET /tomcat.png HTTP/1.1" 200 5103

192.168.150.3 - - [09/Mar/2050:15:09:12 +0800] "GET /tomcat.css HTTP/1.1" 200 5576

192.168.150.5 - - [09/Mar/2050:15:09:09 +0800] "GET /bg-nav.png HTTP/1.1" 200 1401

192.168.150.1 - - [09/Mar/2050:15:09:09 +0800] "GET /bg-upper.png HTTP/1.1" 200 3103

Install Filebeat 6.7.1:

[root@localhost ~]# cd /data/

[root@localhost data]# ls filebeat6.7.1.tar.gz

filebeat6.7.1.tar.gz

[root@localhost data]# tar -zxf filebeat6.7.1.tar.gz

[root@localhost data]# cd filebeat6.7.1

[root@localhost filebeat6.7.1]# ls

conf  image  scripts

[root@localhost filebeat6.7.1]# ls conf/

filebeat.yml ?filebeat.yml.bak

[root@localhost filebeat6.7.1]# ls image/

filebeat_6.7.1.tar

[root@localhost filebeat6.7.1]# ls scripts/

run_filebeat6.7.1.sh

[root@localhost filebeat6.7.1]# docker load -i image/filebeat_6.7.1.tar

[root@localhost filebeat6.7.1]# docker images |grep filebeat

docker.elastic.co/beats/filebeat   6.7.1               04fcff75b160        11 months ago       279MB

[root@localhost filebeat6.7.1]# cat conf/filebeat.yml

filebeat.inputs:

#the inputs below were added: ——————————————

#system log:

- type: log

  enabled: true

  paths:

    - /usr/share/filebeat/logs/message_logs/messages

  fields:

    log_source: system-171.131

#tomcat catalina log:

- type: log

  enabled: true

  paths:

    - /usr/share/filebeat/logs/tomcat_logs/catalina.out

  fields:

    log_source: catalina-log-171.131

  multiline.pattern: '^[0-9]{4}-(((0[13578]|(10|12))-(0[1-9]|[1-2][0-9]|3[0-1]))|(02-(0[1-9]|[1-2][0-9]))|((0[469]|11)-(0[1-9]|[1-2][0-9]|30)))'

  multiline.negate: true

  multiline.match: after

# the regex above matches lines that start with a date, e.g. 2004-02-29

# log_source: xxx means: only one key is written to redis, so logstash cannot tell the different log types apart on its own; this field marks where each log came from, so logstash can write each type of log to es under its own index name

#es log:

- type: log

  enabled: true

  paths:

    - /usr/share/filebeat/logs/es_logs/es_log

  fields:

    log_source: es-log-171.131

  multiline.pattern: '^\['

  multiline.negate: true

  multiline.match: after

#the regex above matches lines starting with [, where \ is the escape character

#tomcat access log:

- type: log

  enabled: true

  paths:

    - /usr/share/filebeat/logs/tomcat_logs/localhost_access_log.2050-50-09.txt

  fields:

    log_source: tomcat-access-log-171.131

  multiline.pattern: '^((2(5[0-5]|[0-4]\d))|[0-1]?\d{1,2})(\.((2(5[0-5]|[0-4]\d))|[0-1]?\d{1,2})){3}'

  multiline.negate: true

  multiline.match: after

#end of the added inputs: —————————————————————

filebeat.config.modules:

  path: ${path.config}/modules.d/*.yml

  reload.enabled: false

setup.template.settings:

  index.number_of_shards: 3

setup.kibana:

#the following would write directly into es:

#output.elasticsearch:

#  hosts: ["192.168.171.128:9200"]

#the following writes into redis:

#filebeat-common below is a custom key; it must match the key logstash reads from redis; multiple nodes can all write with this key, but each must define its own log_source so that logstash can separate them into different es indexes when reading

output.redis:

  hosts: ["192.168.171.128"]

  port: 6379

  password: "123456"

  key: "filebeat-common"

  db: 0

  datatype: list

processors:

  - add_host_metadata: ~

  - add_cloud_metadata: ~

#note: by default the log paths on the host and inside the container are different, so if host paths were configured here, the container would not find them

##the fix: configure the container-side log paths here, and bind-mount the host log directory onto the container log directory

#/usr/share/filebeat/logs/*.log is the log path inside the container

[root@localhost filebeat6.7.1]# cat scripts/run_filebeat6.7.1.sh

#!/bin/bash

docker run -d --name filebeat6.7.1 --net=host --restart=always --user=root -v /data/filebeat6.7.1/conf/filebeat.yml:/usr/share/filebeat/filebeat.yml -v /data/java-logs:/usr/share/filebeat/logs docker.elastic.co/beats/filebeat:6.7.1

#注意:因為默認情況下,宿主機日志路徑和容器內日志路徑是不一致的,所以配置文件里配置的路徑如果是宿主機日志路徑,容器里則找不到

#所以采取措施是:配置文件里配置成容器里的日志路徑,再把宿主機的日志目錄和容器日志目錄做一個映射就可以了

[root@localhost filebeat6.7.1]# sh scripts/run_filebeat6.7.1.sh   #once running, it starts collecting logs into redis

[root@localhost filebeat6.7.1]# docker ps |grep filebeat

3cc559a84904        docker.elastic.co/beats/filebeat:6.7.1   "/usr/local/bin/dock…"   8 seconds ago       Up 7 seconds                            filebeat6.7.1

[root@localhost filebeat6.7.1]# cd

Check in Redis whether the logs have been written (on 192.168.171.128; both machines write to Redis with the same key, so there is only one key name, and the log_source tag is used to separate the entries when they are filtered into ES):

[root@localhost ~]# docker exec -it redis4.0.10 bash

[root@localhost /]# redis-cli -a 123456

127.0.0.1:6379> KEYS *

1)"filebeat-common"

127.0.0.1:6379> quit

[root@localhost /]# exit
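Optionally, while Logstash is not yet consuming the list, its length shows how many events are queued; LLEN is a standard Redis command and this check is not part of the original steps:

redis-cli -a 123456 LLEN filebeat-common   #number of log events currently buffered in redis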

4. Install Logstash 6.7.1 with Docker (on 192.168.171.129): read the logs out of Redis and write them into the ES cluster

[root@localhost ~]# cd /data/

[root@localhost data]# ls logstash6.7.1.tar.gz

logstash6.7.1.tar.gz

[root@localhost data]# tar -zxf logstash6.7.1.tar.gz

[root@localhost data]# cd logstash6.7.1

[root@localhost logstash6.7.1]# ls

config ?image ?scripts

[root@localhost logstash6.7.1]# ls config/

GeoLite2-City.mmdb  log4j2.properties     logstash.yml   pipelines.yml_bak     startup.options

jvm.options         logstash-sample.conf  pipelines.yml  redis_out_es_in.conf

[root@localhost logstash6.7.1]# ls image/

logstash_6.7.1.tar

[root@localhost logstash6.7.1]# ls scripts/

run_logstash6.7.1.sh

[root@localhost logstash6.7.1]# docker load -i image/logstash_6.7.1.tar

[root@localhost logstash6.7.1]# docker images |grep logstash

logstash             6.7.1               1f5e249719fc        11 months ago       778MB

[root@localhost logstash6.7.1]# cat config/pipelines.yml   #confirm the config and the conf directory it references

# This file is where you define your pipelines. You can define multiple.

# For more information on multiple pipelines, see the documentation:

#   https://www.elastic.co/guide/en/logstash/current/multiple-pipelines.html

- pipeline.id: main

  path.config: "/usr/share/logstash/config/*.conf"   #directory inside the container

  pipeline.workers: 3

[root@localhost logstash6.7.1]# cat config/redis_out_es_in.conf   #review and confirm the config

input {

    redis {

        host => "192.168.171.128"

        port => "6379"

        password => "123456"

        db => "0"

        data_type => "list"

        key => "filebeat-common"

    }

}

#the default target of the date filter is @timestamp, so time_local will update @timestamp. purpose of the date plugin below: on the first collection, or when a backlog is written from the cache, the index time can lag behind the actual log time; the date plugin keeps the indexed time consistent with the real log time.

filter {

    date {

        locale => "en"

        match => ["time_local", "dd/MMM/yyyy:HH:mm:ss Z"]

    }

}

output {

    if [fields][log_source] == 'system-171.130' {

        elasticsearch {

            hosts => ["192.168.171.128:9200"]

            index => "logstash-system-171.130-log-%{+YYYY.MM.dd}"

        }

    }

    if [fields][log_source] == 'system-171.131' {

        elasticsearch {

            hosts => ["192.168.171.128:9200"]

            index => "logstash-system-171.131-log-%{+YYYY.MM.dd}"

        }

    }

    if [fields][log_source] == 'catalina-log-171.130' {

        elasticsearch {

            hosts => ["192.168.171.128:9200"]

            index => "logstash-catalina-171.130-log-%{+YYYY.MM.dd}"

        }

    }

    if [fields][log_source] == 'catalina-log-171.131' {

        elasticsearch {

            hosts => ["192.168.171.128:9200"]

            index => "logstash-catalina-171.131-log-%{+YYYY.MM.dd}"

        }

    }

    if [fields][log_source] == 'es-log-171.130' {

        elasticsearch {

            hosts => ["192.168.171.128:9200"]

            index => "logstash-es-log-171.130-%{+YYYY.MM.dd}"

        }

    }

    if [fields][log_source] == 'es-log-171.131' {

        elasticsearch {

            hosts => ["192.168.171.128:9200"]

            index => "logstash-es-log-171.131-%{+YYYY.MM.dd}"

        }

    }

    if [fields][log_source] == 'tomcat-access-log-171.130' {

        elasticsearch {

            hosts => ["192.168.171.128:9200"]

            index => "logstash-tomcat-access-171.130-log-%{+YYYY.MM.dd}"

        }

    }

    if [fields][log_source] == 'tomcat-access-log-171.131' {

        elasticsearch {

            hosts => ["192.168.171.128:9200"]

            index => "logstash-tomcat-access-171.131-log-%{+YYYY.MM.dd}"

        }

    }

    stdout { codec=> rubydebug }

    #codec=> rubydebug is for debugging; it prints the events to the console

}
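Before launching the long-running container, the pipeline syntax can be checked with Logstash's --config.test_and_exit flag; the one-off container below only validates the file and exits (an optional step, not in the original procedure):

docker run --rm -v /data/logstash6.7.1/config:/usr/share/logstash/config logstash:6.7.1 logstash -f /usr/share/logstash/config/redis_out_es_in.conf --config.test_and_exit   #prints "Configuration OK" on success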

[root@localhost logstash6.7.1]# cat scripts/run_logstash6.7.1.sh

#!/bin/bash

docker run -d --name logstash6.7.1 --net=host --restart=always -v /data/logstash6.7.1/config:/usr/share/logstash/config logstash:6.7.1

[root@localhost logstash6.7.1]# sh scripts/run_logstash6.7.1.sh   #read the logs from redis and write them into es

[root@localhost logstash6.7.1]# docker ps |grep logstash

980aefbc077e        logstash:6.7.1             "/usr/local/bin/dock…"   9 seconds ago       Up 7 seconds                            logstash6.7.1

Check on the ES cluster (in the es-head plugin); the new indexes appear, as follows:
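On a host without a browser the same check can be done with the _cat/indices API; after Logstash has run for a while, the indexes defined above should be listed, for example:

curl http://192.168.171.128:9200/_cat/indices?v   #expect logstash-catalina-*, logstash-es-log-*, logstash-system-* and logstash-tomcat-access-* entries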

Check Redis again; the data has been consumed and the key is now empty:

[root@localhost ~]# docker exec -it redis4.0.10 bash

[root@localhost /]# redis-cli -a 123456

127.0.0.1:6379> KEYS *

(empty list or set)

127.0.0.1:6379> quit

5. Install Kibana 6.7.1 with Docker (on 192.168.171.132) to read the logs from ES and display them

[root@localhost ~]# cd /data/

[root@localhost data]# ls kibana6.7.1.tar.gz

kibana6.7.1.tar.gz

[root@localhost data]# tar -zxf kibana6.7.1.tar.gz

[root@localhost data]# cd kibana6.7.1

[root@localhost kibana6.7.1]# ls

config  image  scripts

[root@localhost kibana6.7.1]# ls config/

kibana.yml

[root@localhost kibana6.7.1]# ls image/

kibana_6.7.1.tar

[root@localhost kibana6.7.1]# ls scripts/

run_kibana6.7.1.sh

[root@localhost kibana6.7.1]# docker load -i image/kibana_6.7.1.tar

[root@localhost kibana6.7.1]# docker images |grep kibana

kibana              6.7.1               860831fbf9e7        11 months ago       677MB

[root@localhost kibana6.7.1]# cat config/kibana.yml

#

# ** THIS IS AN AUTO-GENERATED FILE **

#

# Default Kibana configuration for docker target

server.name: kibana

server.host: "0"

elasticsearch.hosts: [ "http://192.168.171.128:9200" ]

xpack.monitoring.ui.container.elasticsearch.enabled: true

[root@localhost kibana6.7.1]# cat scripts/run_kibana6.7.1.sh

#!/bin/bash

docker run -d --name kibana6.7.1 --net=host --restart=always -v /data/kibana6.7.1/config/kibana.yml:/usr/share/kibana/config/kibana.yml kibana:6.7.1

[root@localhost kibana6.7.1]# sh scripts/run_kibana6.7.1.sh   #run it; kibana reads from es and displays the data

[root@localhost kibana6.7.1]# docker ps

CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES

bf16aaeaf4d9        kibana:6.7.1        "/usr/local/bin/kiba…"   16 seconds ago      Up 15 seconds                           kibana6.7.1

[root@localhost kibana6.7.1]# netstat -anput |grep 5601   #kibana port

tcp        0      0 0.0.0.0:5601            0.0.0.0:*               LISTEN      2418/node

Access Kibana in a browser: http://192.168.171.132:5601
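If the page does not load, Kibana's status API is a quick headless check (a standard Kibana endpoint, added here only for troubleshooting):

curl http://192.168.171.132:5601/api/status   #the overall state should be "green" once Kibana has connected to ES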

Create the index patterns in Kibana one by one (keep the names matching the index names in ES so they are easy to find), then query and display the data from ES.

(1) First create the wildcard index pattern logstash-catalina-*: click Management, as follows:

Enter the index name logstash-catalina-*, click Next step, as follows:

Select the time field @timestamp and click Create index pattern, as follows:

(2) Create the wildcard index pattern logstash-es-log-*:

Click Next step, as follows:

Select the time field and click Create index pattern, as follows:

(3) Create the wildcard index pattern logstash-system-*:

Click Next step, as follows:

Select the time field and click Create index pattern, as follows:

(4) Create the wildcard index pattern logstash-tomcat-access-*:

Click Next step, as follows:

Click Create index pattern, as follows:

To view the logs, click Discover, as follows: #note: the test access logs were sparse at first, so more logs were written afterwards to make testing easier.

Click the arrow on any of the entries to expand it, as follows:

If you are interested in ops courses, you can search for my account 運維實戰課程 on Bilibili, AcFun, or CSDN and follow me to learn more free hands-on ops video tutorials.

