Cloud Computing from Scratch, Extra 5: Monitoring Application Logs with ELK

Table of Contents

I. Environment Preparation
    Installing the test machines
    Modifying configuration files
II. Collecting Logs from the Test Machine (test)
    Configuring the pipeline file
    Configuring the filebeat configuration file
III. Collecting nginx Logs from the Test Machine
    Downloading and installing nginx
    Modifying the filebeat file
    Modifying the pipeline file
IV. Collecting Network Service Logs
    1. DHCP
        Downloading dhcp
        Modifying the configuration file
        Modifying the dhcp configuration file
        Configuring the logstash file
        Configuring the filebeat file
        Restarting the services and checking kibana
    2. DNS
        Modifying the configuration file (/etc/named.conf)
        Creating the log directory and granting permissions
        Configuring the filebeat file
        Configuring the logstash file
        Starting everything and viewing the logs in kibana
    3. SSH
        Configuration file
        Filebeat configuration
        Logstash configuration
        Restarting the service
        Checking kibana
    4. Rsync
        Configuring rsync
        Filebeat configuration
        Logstash configuration
V. Collecting Tomcat Service Logs
    1. Installing tomcat
    2. Verifying Tomcat startup
    3. Configuring the filebeat file
    4. Configuring the logstash file
    5. Restarting the services and checking kibana
VI. Collecting MySQL Database Logs
    1. Installing MySQL
    2. Editing the MySQL log generation settings
    3. Starting mysql and verifying log generation
    4. Configuring the filebeat file
    5. Configuring the logstash file
    6. Restarting the services and checking kibana
VII. Collecting NFS Logs
    1. Installing NFS
    2. Enabling NFS logging
        Verifying the configuration
    3. Configuring the filebeat file
    4. Configuring the logstash file
    5. Restarting the services and checking kibana
VIII. Collecting Redis Database Logs
    1. Installing the redis database
    2. Configuring Redis log generation
    3. Configuring the filebeat file
    4. Configuring the logstash file
    5. Restarting the services and checking kibana
IX. Collecting LVS Logs
    1. Downloading and installing ipvsadm
    2. Configuring the rsyslog file
    3. Configuring the filebeat file
    4. Configuring the logstash file
X. Collecting HAProxy Logs
    1. Installing haproxy
    2. Configuring the haproxy file
    3. Configuring rsyslog to collect HAProxy logs
    4. Modifying the filebeat file
    5. Modifying the logstash file
    6. Restarting all services and viewing the logs in kibana
XI. Collecting Keepalived Logs
    1. Installing keepalived
    2. Configuring keepalived log output
    3. Testing log generation
    4. Modifying the filebeat file
    5. Modifying the logstash file
    6. Restarting all services and checking kibana
XII. Summary


I. Environment Preparation

Role                       Hostname  IP address
Visualization              kibana    192.168.71.178
Log storage                es        192.168.71.179
Log collection & analysis  logstash  192.168.71.180
Log shipping               test      192.168.71.181

Installing the test machines

Install elasticsearch, logstash, kibana, and filebeat on the corresponding hosts.

Streamline the logstash command:

ln -s /usr/share/logstash/bin/logstash /usr/local/bin/

Modifying configuration files

elasticsearch:

kibana:
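The original screenshots are unavailable; below is a minimal sketch of the typical edits for this lab's addresses (every line is an assumption reconstructed from the host table above):

# /etc/elasticsearch/elasticsearch.yml (on 192.168.71.179)
network.host: 0.0.0.0
http.port: 9200
discovery.type: single-node

# /etc/kibana/kibana.yml (on 192.168.71.178)
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://192.168.71.179:9200"]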

Kibana on port 5601 is now reachable.

Writing the pipeline file

Initial file content:

input {
  file {
    path => "/var/log/messages"
    start_position => "beginning"
  }
}
output {
  elasticsearch {
    hosts => ["http://192.168.71.179:9200"]
    index => "system-log-%{+YYYY.MM.dd}"
  }
  stdout {
    codec => rubydebug
  }
}

Run logstash to collect the logs, then view them in kibana.
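A minimal sketch of the run command, assuming the file above was saved as /etc/logstash/conf.d/pipline.conf (the path is an assumption; the symlinked logstash binary from the setup step is used):

logstash -f /etc/logstash/conf.d/pipline.conf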

It runs successfully and the logs come through.

II. Collecting Logs from the Test Machine (test)

Configuring the pipeline file

Add a beats input on a new port (any unused port will do):

input {
  file {
    path => "/var/log/messages"
    start_position => "beginning"
  }
  beats {
    port => 5044
  }
}
filter {
  if [host][name] {
    mutate { add_field => { "hostname" => "%{[host][name]}" } }
  } else if [agent][hostname] {
    mutate { add_field => { "hostname" => "%{[agent][hostname]}" } }
  } else {
    mutate { add_field => { "hostname" => "%{host}" } }
  }
}
output {
  if [hostname] == "logstash" {
    elasticsearch {
      hosts => ["192.168.71.179:9200"]
      index => "system-log-%{+YYYY.MM.dd}"
    }
  } else if [hostname] == "test" {
    elasticsearch {
      hosts => ["192.168.71.179:9200"]
      index => "test-log-%{+YYYY.MM.dd}"
    }
  }
  stdout {
    codec => rubydebug
  }
}

Configuring the filebeat configuration file

Change false to true to enable the input, then fill in the log path.

Comment out the Elasticsearch output and enable the Logstash output, as sketched below.
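A sketch of the relevant filebeat.yml changes, consistent with the full file shown in the summary (section XII):

filebeat.inputs:
- type: log
  enabled: true        # was false by default
  paths:
    - /var/log/messages

# comment out the Elasticsearch output...
#output.elasticsearch:
#  hosts: ["localhost:9200"]

# ...and enable the Logstash output instead
output.logstash:
  hosts: ["192.168.71.180:5044"]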

Once the changes are done, start logstash; checking kibana again now shows the additional log index from the test host.

III. Collecting nginx Logs from the Test Machine

Downloading and installing nginx

Modify the filebeat file, adding the two inputs sketched below.
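The two inputs, as they appear in the summary filebeat.yml:

- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  tags: "nginx-access"

- type: log
  enabled: true
  paths:
    - /var/log/nginx/error.log
  tags: "nginx-error"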

Modifying the pipeline file

Add the following new content under the test branch.
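Reconstructed from the final pipline.conf in the summary, the additions under the test branch are:

if "nginx-access" in [tags] {
  elasticsearch {
    hosts => ["192.168.71.179:9200"]
    index => "nginx-access-log-%{+YYYY.MM.dd}"
  }
}
if "nginx-error" in [tags] {
  elasticsearch {
    hosts => ["192.168.71.179:9200"]
    index => "nginx-error-log-%{+YYYY.MM.dd}"
  }
}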

Access nginx in a browser (if the default page is left unchanged, an error-log entry is generated as well).

Log in to kibana and check; the log indices are created.

IV. Collecting Network Service Logs

1. DHCP

Downloading dhcp

Copy the complete sample dhcpd.conf into place.

Modifying the configuration file

Modify the dhcp configuration file.

Delete everything, keeping only the required fields.

Modify /etc/rsyslog.conf to point at dhcpd.log (the log file), as sketched below.
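The original screenshots are unavailable; a sketch assuming the commonly used local7 facility (the facility choice is an assumption, and it must match on both sides):

# /etc/dhcp/dhcpd.conf: send dhcpd logging to a dedicated facility
log-facility local7;

# /etc/rsyslog.conf: write that facility to the file filebeat will read
local7.*    /var/log/dhcpd.log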

Configuring the logstash file

Configuring the filebeat file

Restart the services and check kibana.

2. DNS

Download DNS (bind).

Modify the configuration file (/etc/named.conf):

logging {
    channel default_debug {
        file "data/named.run";
        severity dynamic;
    };
    channel dns_log {
        file "/var/log/named/dns.log" versions 3 size 20m;
        severity dynamic;
        print-time yes;
    };
    category default { dns_log; };
    category queries { dns_log; };
};

Create the log directory and grant permissions, as sketched below.
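A minimal sketch (the named user and group come with the bind package):

mkdir -p /var/log/named
chown named:named /var/log/named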

Configure the filebeat file.

Configure the logstash file.

After starting everything, view the logs in kibana.

3. SSH

The ssh log file path is /var/log/secure.

Configuration file

Filebeat configuration
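Per the summary filebeat.yml, the input is:

- type: log
  enabled: true
  paths:
    - /var/log/secure
  tags: "ssh"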

Logstash configuration

Restart the service.

Check kibana.

4. Rsync

Configuring rsync

Download rsync.

Create the systemd unit file:

sudo tee /usr/lib/systemd/system/rsyncd.service <<'EOF'
[Unit]
Description=fast remote file copy program daemon
Documentation=man:rsyncd(8)
After=network.target

[Service]
EnvironmentFile=/etc/sysconfig/rsyncd
ExecStart=/usr/bin/rsync --daemon --no-detach $OPTIONS

[Install]
WantedBy=multi-user.target
EOF

Create the environment file:

sudo tee /etc/sysconfig/rsyncd <<'EOF'
# Options for rsync daemon
OPTIONS=""
EOF

Create the main configuration file:

sudo tee /etc/rsyncd.conf <<'EOF'
# Minimal example configuration
uid = root
gid = root
use chroot = yes
max connections = 4
pid file = /var/run/rsyncd.pid

# Example module
[backup]
path = /tmp/backup
comment = Backup Area
read only = no
EOF

Create the log file and grant permissions, as sketched below.
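A sketch; for the daemon to actually write there, rsyncd.conf also needs a log file directive pointing at the same path (adding it here is an assumption, since it may have been in the original screenshot):

echo 'log file = /var/log/rsyncd.log' >> /etc/rsyncd.conf
touch /var/log/rsyncd.log
chmod 644 /var/log/rsyncd.log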

Start the service.

Filebeat configuration

Logstash configuration

Restart the services, then use the rsync command to transfer a file to the target host so log entries are generated, for example:
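A hypothetical transfer against the [backup] module defined above (the file choice is arbitrary):

rsync -av /etc/hosts rsync://192.168.71.181/backup/
tail /var/log/rsyncd.log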

View it in kibana.

V. Collecting Tomcat Service Logs

1. Installing tomcat

Unpack the archive.

Install Tomcat.

Streamline the startup and shutdown commands, as sketched below.
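One way to streamline the commands, mirroring the symlink used for logstash earlier (the /usr/local/tomcat8 path matches the summary filebeat.yml; the exact approach in the original screenshot is unknown):

ln -s /usr/local/tomcat8/bin/startup.sh /usr/local/bin/
ln -s /usr/local/tomcat8/bin/shutdown.sh /usr/local/bin/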

2. Verifying Tomcat startup

3. Configuring the filebeat file

4. Configuring the logstash file

5. Restart the services and check kibana.

VI. Collecting MySQL Database Logs

1. Installing MySQL

2. Editing the MySQL log generation settings

Edit the MySQL configuration file (/etc/my.cnf or /etc/mysql/mysql.conf.d/mysqld.cnf):

[mysqld]
general_log = 1
general_log_file = /var/log/mysql/general.log
slow_query_log = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time = 2  # slow-query threshold (seconds)

3. Start mysql and verify log generation, as sketched below.
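A minimal sketch of the verification (creating the log directory first is an assumption; the packaged install may not ship it):

mkdir -p /var/log/mysql && chown mysql:mysql /var/log/mysql
systemctl restart mysqld
mysql -e "SELECT SLEEP(3);"   # runs longer than long_query_time=2, so it should land in slow.log
tail /var/log/mysql/general.log /var/log/mysql/slow.log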

4. Configuring the filebeat file

5. Configuring the logstash file

6. Restart the services and check kibana.

The general log and the slow log appear.

VII. Collecting NFS Logs

1. Installing NFS

2. Enabling NFS logging

Edit the NFS configuration file (usually /etc/nfs.conf or /etc/sysconfig/nfs) and make sure logging is enabled with a path specified.

Configure rsyslog, adding a log destination for nfs.

Create the log file.

Specify the facility in the NFS configuration.

Restart the rsyslog service. These steps are sketched below.
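A sketch of those steps, assuming the local4 facility used in the verification below (the original screenshots are unavailable):

touch /var/log/nfs.log                                     # create the log file
echo 'local4.*    /var/log/nfs.log' >> /etc/rsyslog.conf   # route the facility to it
systemctl restart rsyslog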

Verifying the configuration:

Send a test log entry (using the local4 facility) and check that the log file is written, as sketched below.
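A quick check with the standard logger utility:

logger -p local4.info "nfs logging test"
tail /var/log/nfs.log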

3. Configuring the filebeat file

4. Configuring the logstash file

5. Restart the services and check kibana.

VIII. Collecting Redis Database Logs

1. Installing the redis database

2. Configuring Redis log generation

Modify the Redis configuration file.
Edit /etc/redis.conf to enable logging and specify the path, as sketched below:
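A sketch of the two directives (the log path matches the summary filebeat.yml; loglevel notice is the Redis default):

logfile "/var/log/redis/redis.log"
loglevel notice

Make sure /var/log/redis exists and is writable by the redis user before restarting.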

Verify that the log is generated.

3. Configuring the filebeat file

4. Configuring the logstash file

5. Restart the services and check kibana.

IX. Collecting LVS Logs

1. Downloading and installing ipvsadm

2. Configuring the rsyslog file

Add the following to /etc/rsyslog.conf:

kern.*    /var/log/lvs.log

Manually trigger LVS log generation.

Trigger LVS forwarding with a simulated request so the system produces log entries:

curl http://<VIP>  # replace with your virtual IP (VIP)

Or access the VIP service from another machine.

Alternatively, add rules by hand to generate more log content:

ifconfig ens34:0 192.168.71.200/24           # bring up a temporary address
ipvsadm -A -t 192.168.71.200:80 -s rr        # add the virtual service (-A, not -a)
ipvsadm -E -t 192.168.71.200:80 -s rr -p 60
ipvsadm -a -t 192.168.71.200:80 -r 192.168.1.101:80 -g
ipvsadm -a -t 192.168.71.200:80 -r 192.168.1.102:80 -g

Check whether log content is being generated.

3. Configuring the filebeat file

4. Configuring the logstash file

5. Restart the services and check kibana.

X. Collecting HAProxy Logs

1. Installing haproxy

2. Configuring the haproxy file

cat > /etc/haproxy/haproxy.cfg <<'EOF'
global
    log 127.0.0.1 local0 info   # important: use the local0 facility

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    timeout connect 5000
    timeout client  50000
    timeout server  50000

# Add your own frontend/backend configuration.
# Example frontend:
frontend http-in
    bind *:80
    default_backend servers

backend servers
    server server1 192.168.1.100:80 check
EOF

3. Configuring rsyslog to collect HAProxy logs

Create the file /etc/rsyslog.d/haproxy.conf:

$ModLoad imudp
$UDPServerRun 514
# store locally
local0.*    /var/log/haproxy.log
# forward to Logstash
local0.*    @logstash_ip:5140

4. Modifying the filebeat file

5. Modifying the logstash file

6. Restart all services, then log in to kibana and view the logs.

XI. Collecting Keepalived Logs

1. Installing keepalived

2. Configuring keepalived log output

Create the log directory:

sudo mkdir -p /var/log/keepalived
sudo touch /var/log/keepalived/keepalived.log
sudo chown -R root:keepalived /var/log/keepalived

Edit the keepalived configuration file:

vim /etc/sysconfig/keepalived

Add the following line:

KEEPALIVED_OPTIONS="-D -d -S 0"

Configure rsyslog by appending the line below to the end of the file. (The -S 0 option sends keepalived output to the local0 facility; the HAProxy section also uses local0, so if both run on the same host, give one of them a different facility to keep the logs separate.)

local0.* /var/log/keepalived/keepalived.log

Restart the services.

3. Testing log generation
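A minimal sketch of a test; restarting the daemon should produce VRRP state-transition entries:

systemctl restart keepalived
tail /var/log/keepalived/keepalived.log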

4. Modifying the filebeat file

5. Modifying the logstash file

6. Restart all services and check kibana.

XII. Summary

logstash

After all of the configuration above, the complete pipline.conf on the logstash host reads as follows.

input {
  file {
    path => "/var/log/messages"
    start_position => "beginning"
  }
  beats {
    port => 5044
  }
}
filter {
  if [host][name] {
    mutate { add_field => { "hostname" => "%{[host][name]}" } }
  } else if [agent][hostname] {
    mutate { add_field => { "hostname" => "%{[agent][hostname]}" } }
  } else {
    mutate { add_field => { "hostname" => "%{host}" } }
  }
}
output {
  if [hostname] == "logstash" {
    elasticsearch {
      hosts => ["192.168.71.179:9200"]
      index => "system-log-%{+YYYY.MM.dd}"
    }
  } else if [hostname] == "test" {
    if "system" in [tags] {
      elasticsearch {
        hosts => ["192.168.71.179:9200"]
        index => "test-log-%{+YYYY.MM.dd}"
      }
    }
    if "nginx-access" in [tags] {
      elasticsearch {
        hosts => ["192.168.71.179:9200"]
        index => "nginx-access-log-%{+YYYY.MM.dd}"
      }
    }
    if "nginx-error" in [tags] {
      elasticsearch {
        hosts => ["192.168.71.179:9200"]
        index => "nginx-error-log-%{+YYYY.MM.dd}"
      }
    }
    if "dhcp" in [tags] {
      elasticsearch {
        hosts => ["192.168.71.179:9200"]
        index => "dhcp-log-%{+YYYY.MM.dd}"
      }
    }
    if "dns" in [tags] {
      elasticsearch {
        hosts => ["192.168.71.179:9200"]
        index => "dns-log-%{+YYYY.MM.dd}"
      }
    }
    if "ssh" in [tags] {
      elasticsearch {
        hosts => ["192.168.71.179:9200"]
        index => "ssh-log-%{+YYYY.MM.dd}"
      }
    }
    if "rsyncd" in [tags] {
      elasticsearch {
        hosts => ["192.168.71.179:9200"]
        index => "rsyncd-log-%{+YYYY.MM.dd}"
      }
    }
    if "tomcat" in [tags] {
      elasticsearch {
        hosts => ["192.168.71.179:9200"]
        index => "tomcat-log-%{+YYYY.MM.dd}"
      }
    }
  }
  if "mysql" in [tags] {
    elasticsearch {
      hosts => ["192.168.71.179:9200"]
      index => "mysql-log-%{+YYYY.MM.dd}"
    }
  }
  if "mysql-slow" in [tags] {
    elasticsearch {
      hosts => ["192.168.71.179:9200"]
      index => "mysql-slow-log-%{+YYYY.MM.dd}"
    }
  }
  if "nfs" in [tags] {
    elasticsearch {
      hosts => ["192.168.71.179:9200"]
      index => "nfs-log-%{+YYYY.MM.dd}"
    }
  }
  if "redis" in [tags] {
    elasticsearch {
      hosts => ["192.168.71.179:9200"]
      index => "redis-log-%{+YYYY.MM.dd}"
    }
  }
  if "lvs" in [tags] {
    elasticsearch {
      hosts => ["192.168.71.179:9200"]
      index => "lvs-log-%{+YYYY.MM.dd}"
    }
  }
  if "haproxy" in [tags] {
    elasticsearch {
      hosts => ["192.168.71.179:9200"]
      index => "haproxy-log-%{+YYYY.MM.dd}"
    }
  }
  if "keepalived" in [tags] {
    elasticsearch {
      hosts => ["192.168.71.179:9200"]
      index => "keepalived-log-%{+YYYY.MM.dd}"
    }
  }
  stdout {
    codec => rubydebug
  }
}

filebeat

The filebeat.yml on the shipping host reads as follows.

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all
# the supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

#=========================== Filebeat inputs =============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log
  # Change to true to enable this input configuration.
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/messages
  #tags: "system"
  #- c:\programdata\elasticsearch\logs\*

- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  tags: "nginx-access"

- type: log
  enabled: true
  paths:
    - /var/log/nginx/error.log
  tags: "nginx-error"

- type: log
  enabled: true
  paths:
    - /var/log/dhcpd.log
  tags: "dhcp"

- type: log
  enabled: true
  paths:
    - /var/log/named/dns.log
  tags: "dns"

- type: log
  enabled: true
  paths:
    - /var/log/secure
  tags: "ssh"

- type: log
  enabled: true
  paths:
    - /var/log/rsyncd.log
  tags: "rsyncd"

- type: log
  enabled: true
  paths:
    - /usr/local/tomcat8/logs/*.log
  tags: "tomcat"

- type: log
  enabled: true
  paths:
    - /var/log/mysql/general.log
  tags: "mysql"

- type: log
  enabled: true
  paths:
    - /var/log/mysql/slow.log
  tags: "mysql-slow"

- type: log
  enabled: true
  paths:
    - /var/log/nfs.log
  tags: "nfs"

- type: log
  enabled: true
  paths:
    - /var/log/redis/redis.log
  tags: "redis"

- type: log
  enabled: true
  paths:
    - /var/log/lvs.log
  tags: "lvs"

- type: log
  enabled: true
  paths:
    - /var/log/haproxy.log
  tags: "haproxy"

- type: log
  enabled: true
  paths:
    - /var/log/keepalived/keepalived.log
  tags: "keepalived"

  # Exclude lines. A list of regular expressions to match. It drops the lines
  # that are matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines
  # that are matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the
  # files that are matching any regular expression from the list.
  #exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is
  # common for Java Stack Traces or C-Line Continuation
  #multiline.pattern: ^\[
  #multiline.negate: false
  #multiline.match: after

#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml
  # Set to true to enable config reloading
  reload.enabled: false
  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false

#================================ General =====================================

# The name of the shipper that publishes the network data.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index.
#setup.dashboards.enabled: false
#setup.dashboards.url:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana
# API. This requires a Kibana endpoint configuration.
setup.kibana:
  #host: "localhost:5601"
  #space.id:

#============================= Elastic Cloud ==================================

# These settings simplify using filebeat with the Elastic Cloud.
#cloud.id:
#cloud.auth:

#================================ Outputs =====================================

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
#  # Array of hosts to connect to.
#  # hosts: ["localhost:9200"]
#  hosts: ["192.168.71.179:9200"]
#  indices:
#    - index: "LVS-logs"
#      when:
#        contains:
#          { "message": "ipvs"}
#setup.ilm.enabled: false
#setup.template.name: "LVS"
#setup.template.pattern: "LVS-*"

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["192.168.71.180:5044"]

  # Optional SSL. By default is off.
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
  #ssl.certificate: "/etc/pki/client/cert.pem"
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug
#logging.selectors: ["*"]

#============================== Xpack Monitoring ===============================
# filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster. This requires xpack monitoring to be enabled in Elasticsearch.
#xpack.monitoring.enabled: false
#xpack.monitoring.elasticsearch:

#================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true

kibana

Kibana overview
