1 Project Overview
This project builds a highly available, high-performance MySQL cluster capable of handling large-scale concurrent workloads. Containerized deployment, multi-level caching, and comprehensive monitoring and backup strategies ensure database service continuity and data safety.
Architecture Overview
Target Goals
Database service availability of 99.99%
Support for thousands of concurrent requests per second
Second-level failure detection and automatic failover
Backup RPO < 5 minutes, RTO < 30 minutes
A complete monitoring and alerting system
2 Environment Preparation
2.1 Hardware Requirements
Role | Recommended Spec | Count |
---|---|---|
MySQL master | 1 CPU core / 2 GB RAM / 50 GB SSD | 1 |
MySQL replicas | 1 CPU core / 2 GB RAM / 50 GB SSD | 2 |
MyCat nodes | 1 CPU core / 2 GB RAM | 2 |
MHA manager node | 1 CPU core / 2 GB RAM | 1 |
Redis cluster | 1 CPU core / 2 GB RAM | 3 |
Monitoring node | 2 CPU cores / 6 GB RAM / 50 GB storage | 1 |
Backup node | 1 CPU core / 2 GB RAM / 100 GB storage | 1 |
Load-test node | 1 CPU core / 2 GB RAM | 1 |
Ansible control node | 1 CPU core / 2 GB RAM | 1 |
Nginx nodes | 1 CPU core / 2 GB RAM | 2 |
app-server nodes | 1 CPU core / 2 GB RAM | 2 |
2.2 Network Plan
Hostname | IP Address | Role | VIP |
---|---|---|---|
windows-client | 192.168.121.68 | Client | - |
mycat1 | 192.168.121.180 | MyCat node 1 | 192.168.121.188 |
mycat2 | 192.168.121.190 | MyCat node 2 | 192.168.121.199 |
mha-manager | 192.168.121.220 | MHA manager node | - |
mysql-master | 192.168.121.221 | MySQL master | 192.168.121.200 |
mysql-slave1 | 192.168.121.222 | MySQL replica 1 (candidate master) | 192.168.121.200 |
mysql-slave2 | 192.168.121.223 | MySQL replica 2 | - |
ansible-server | 192.168.121.210 | Ansible control node / backup server / CI/CD | - |
sysbench-server | 192.168.121.66 | Load-test server | - |
monitor-server | 192.168.121.125 | Prometheus + Grafana + ELK + Alertmanager | - |
redis1 | 192.168.121.171 | Redis node 1 | - |
redis2 | 192.168.121.172 | Redis node 2 | - |
redis3 | 192.168.121.173 | Redis node 3 | - |
nginx1 | 192.168.121.70 | Nginx primary load balancer | 192.168.121.88 |
nginx2 | 192.168.121.71 | Nginx backup load balancer | 192.168.121.88 |
app-server1 | 192.168.121.80 | Application server (primary) | - |
app-server2 | 192.168.121.81 | Application server (backup) | - |
2.3 Software Versions
Software | Version |
---|---|
OS | CentOS 7.9 |
Docker | 26.1.4 |
Docker Compose | 1.29.2 |
Ansible | 2.9.27 |
MySQL | 8.0.28 |
MyCat2 | 1.21 |
MHA | 0.58 |
Redis | 6.2.6 |
Prometheus | 2.33.5 |
Grafana | 8.4.5 |
ELK | 7.17.0 |
Keepalived | 1.3.5 |
Sysbench | 1.0.20 |
Nginx | 1.27 |
3 Base Environment Deployment
3.1 OS Initialization
Run the following on all nodes:
# Set a static IP according to the network plan
vi /etc/sysconfig/network-scripts/ifcfg-ens32   # the interface may be ens33; adjust to your system
BOOTPROTO=static          # static IP address
NAME=ens32                # network interface name
DEVICE=ens32              # network interface device name
ONBOOT=yes                # bring the interface up at boot
IPADDR=192.168.121.180    # static IP address
NETMASK=255.255.255.0     # subnet mask
GATEWAY=192.168.121.2     # gateway; see the VMware Virtual Network Editor settings
DNS1=114.114.114.114      # primary DNS server
DNS2=8.8.8.8              # secondary DNS server

# Restart the network service
systemctl restart network

# Map IPs to hostnames in /etc/hosts
vim /etc/hosts
192.168.121.180 mycat1
192.168.121.190 mycat2
192.168.121.220 mha-manager
192.168.121.221 mysql-master
192.168.121.222 mysql-slave1
192.168.121.223 mysql-slave2
192.168.121.210 ansible-server
192.168.121.66 sysbench-server
192.168.121.125 monitor-server
192.168.121.171 redis1
192.168.121.172 redis2
192.168.121.173 redis3
192.168.121.70 nginx1
192.168.121.71 nginx2
192.168.121.80 app-server1
192.168.121.81 app-server2

# Set the hostname according to the network plan
hostnamectl set-hostname <hostname>
su

# Disable SELinux
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
setenforce 0

# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# Install basic tools
yum install -y vim net-tools wget curl lrzsz telnet

# Configure time synchronization
yum install -y chrony
systemctl start chronyd
systemctl enable chronyd
chronyc sources

# Reboot so the SELinux change takes effect
reboot

# Check the SELinux status; "disabled" means it is off
sestatus
3.2 Deploying Docker
Docker image acceleration service - 1ms mirror (docker.1ms.run)
Run on every node that will run containers (except the DNS server node):
# One-step install
bash <(curl -f -s --connect-timeout 10 --retry 3 https://linuxmirrors.cn/docker.sh) --source mirrors.tencent.com/docker-ce --source-registry docker.1ms.run --protocol https --install-latested true --close-firewall false --ignore-backup-tips

# One-step registry mirror configuration to avoid image pull timeouts
bash <(curl -sSL https://n3.ink/helper)

# Install docker-compose
yum install -y gcc python3-devel rust cargo
pip3 install --upgrade pip
pip3 install setuptools-rust
pip3 install docker-compose
3.3 Deploying the Ansible Control Node
On ansible-server (192.168.121.210):
yum install -y epel-release
yum install -y ansible

# Configure the Ansible inventory
cat > /etc/ansible/hosts << EOF
[mysql]
192.168.121.221
192.168.121.222
192.168.121.223

[mycat]
192.168.121.180
192.168.121.190

[mha]
192.168.121.220

[redis]
192.168.121.171
192.168.121.172
192.168.121.173

[monitor]
192.168.121.125

[backup]
192.168.121.210

[sysbench]
192.168.121.66

[nginx]
192.168.121.70
192.168.121.71

[app-server]
192.168.121.80
192.168.121.81
EOF

# Set up passwordless SSH
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

# Distribute the public key in bulk (the app-server IPs are included so the list matches the inventory)
for ip in 192.168.121.180 192.168.121.190 192.168.121.220 192.168.121.221 192.168.121.222 192.168.121.223 192.168.121.210 192.168.121.66 192.168.121.125 192.168.121.171 192.168.121.172 192.168.121.173 192.168.121.70 192.168.121.71 192.168.121.80 192.168.121.81; do
  ssh-copy-id root@$ip
done

# Test Ansible connectivity
ansible all -m ping
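Before running any playbooks it can help to confirm the inventory matches the network plan in section 2.2. A minimal sketch (the `summarize_inventory` helper is hypothetical, not part of the original runbook) that prints each group and its host count:

```shell
#!/usr/bin/env bash
# Summarize an Ansible INI inventory: print "<group> <host count>" per group,
# so you can eyeball that every planned node made it into /etc/ansible/hosts.
summarize_inventory() {
  awk '
    /^\[/    { if (g != "") print g, n; g = substr($0, 2, length($0) - 2); n = 0; next }
    /^[0-9]/ { n++ }
    END      { if (g != "") print g, n }
  ' "$1"
}
```

Running `summarize_inventory /etc/ansible/hosts` should report `mysql 3`, `redis 3`, `nginx 2`, and so on.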
4 MySQL Cluster Deployment
4.1 Preparing the MySQL Docker Image
Create the Dockerfile on ansible-server:
mkdir -p /data/docker/mysql
cd /data/docker/mysql
[root@ansible-server tasks]# cat /data/docker/mysql/Dockerfile
FROM docker.1ms.run/mysql:8.0.28
# Install basic tools (note: the official mysql:8.0 image is Oracle Linux based;
# if yum is unavailable inside the container, use microdnf instead)
RUN yum clean all && \
    yum makecache fast && \
    yum install -y vim net-tools iputils && \
    yum clean all && \
    rm -rf /var/cache/yum/*
# MySQL configuration
COPY my.cnf /etc/mysql/conf.d/my.cnf
# MHA helper scripts
COPY master_ip_failover /usr/local/bin/
COPY master_ip_online_change /usr/local/bin/
RUN chmod +x /usr/local/bin/master_ip_failover
RUN chmod +x /usr/local/bin/master_ip_online_change
# Set the timezone
ENV TZ=Asia/Shanghai

# Distribute the Dockerfile to the three MySQL servers
mkdir -p /data/docker/mysql   # create this directory on each of the three database servers
scp /data/docker/mysql/Dockerfile mysql-master:/data/docker/mysql/Dockerfile
scp /data/docker/mysql/Dockerfile mysql-slave1:/data/docker/mysql/Dockerfile
scp /data/docker/mysql/Dockerfile mysql-slave2:/data/docker/mysql/Dockerfile
Create the MySQL configuration template:
[root@ansible-server tasks]# cat /data/docker/mysql/my.cnf
[mysqld]
user = mysql
default-storage-engine = InnoDB
character-set-server = utf8mb4
collation-server = utf8mb4_general_ci
max_connections = 1000
wait_timeout = 600
interactive_timeout = 600
table_open_cache = 2048
max_heap_table_size = 64M
tmp_table_size = 64M
slow_query_log = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time = 2
log_queries_not_using_indexes = 1
server-id = {{ server_id }}
log_bin = /var/lib/mysql/mysql-bin
binlog_format = row
binlog_rows_query_log_events = 1
expire_logs_days = 7
gtid_mode = ON
enforce_gtid_consistency = ON
log_slave_updates = ON
relay_log_recovery = 1
sync_binlog = 1
innodb_flush_log_at_trx_commit = 1
plugin-load = "rpl_semi_sync_master=semisync_master.so;rpl_semi_sync_slave=semisync_slave.so"
loose_rpl_semi_sync_master_enabled = 1
loose_rpl_semi_sync_slave_enabled = 1
loose_rpl_semi_sync_master_timeout = 1000

[mysqld_safe]
log-error = /var/log/mysql/error.log
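Replication and MHA both depend on a handful of these settings, so a quick pre-flight grep over the rendered file can catch omissions early. A sketch, assuming the file layout above (`check_mycnf` is a hypothetical helper):

```shell
#!/usr/bin/env bash
# Verify that a rendered my.cnf contains the settings replication and MHA
# rely on; print each missing key and return non-zero if any is absent.
check_mycnf() {
  local cnf=$1 missing=0 key
  for key in server-id log_bin gtid_mode enforce_gtid_consistency log_slave_updates; do
    grep -q "^${key}" "$cnf" || { echo "missing: ${key}"; missing=1; }
  done
  return $missing
}
```

Example: `check_mycnf /data/mysql/my.cnf || echo "fix my.cnf before deploying"`.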
Create the MHA helper scripts:
[root@ansible-server tasks]# cat /data/docker/mysql/master_ip_failover
#!/usr/bin/env perl
use strict;
use warnings FATAL => 'all';

use Getopt::Long;

my (
    $command, $ssh_user, $orig_master_host, $orig_master_ip,
    $orig_master_port, $new_master_host, $new_master_ip, $new_master_port
);

my $vip = '192.168.121.200';    # the VIP address; set this yourself
my $brdc = '192.168.121.255';   # broadcast address for the VIP
my $ifdev = 'ens32';            # NIC the VIP is bound to
my $key = '1';                  # virtual NIC index for the VIP
my $ssh_start_vip = "/sbin/ifconfig ens32:$key $vip";   # expands to: ifconfig ens32:1 192.168.121.200
my $ssh_stop_vip = "/sbin/ifconfig ens32:$key down";    # expands to: ifconfig ens32:1 down
my $exit_code = 0;              # default exit status

GetOptions(
    'command=s'          => \$command,
    'ssh_user=s'         => \$ssh_user,
    'orig_master_host=s' => \$orig_master_host,
    'orig_master_ip=s'   => \$orig_master_ip,
    'orig_master_port=i' => \$orig_master_port,
    'new_master_host=s'  => \$new_master_host,
    'new_master_ip=s'    => \$new_master_ip,
    'new_master_port=i'  => \$new_master_port,
);

exit &main();

sub main {
    print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";
    if ( $command eq "stop" || $command eq "stopssh" ) {
        my $host = $orig_master_host;
        my $ip   = $orig_master_ip;
        print "Disabling the VIP on old master: $host \n";
        &stop_vip();
        $exit_code = 0;
    }
    elsif ( $command eq "start" ) {
        my $host = $new_master_host;
        my $ip   = $new_master_ip;
        print "Enabling the VIP - $vip on the new master - $host \n";
        &start_vip();
        $exit_code = 0;
    }
    elsif ( $command eq "status" ) {
        print "Checking the Status of the script.. OK \n";
        $exit_code = 0;
    }
    else {
        &usage();
        $exit_code = 1;
    }
    return $exit_code;
}

sub start_vip() {
    `ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}

sub stop_vip() {
    return 0 unless ($orig_master_host);
    `ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}

sub usage {
    print
"Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}
4.2 Deploying the MySQL Cluster with Ansible
Create the Ansible playbook:
mkdir -p /data/ansible/roles/mysql/tasks
cd /data/ansible/roles/mysql/tasks
[root@ansible-server tasks]# pwd
/data/ansible/roles/mysql/tasks
[root@ansible-server tasks]# cat main.yml
- name: Create the MySQL data directory
  file:
    path: /data/mysql/data
    state: directory
    mode: '0755'

- name: Create the MySQL log directory
  file:
    path: /data/mysql/logs
    state: directory
    mode: '0755'

- name: Copy the MySQL configuration file
  template:
    src: /data/docker/mysql/my.cnf
    dest: /data/mysql/my.cnf
    mode: '0644'

- name: Build the MySQL image
  docker_image:
    name: docker.1ms.run/mysql:8.0.28
    build:
      path: /data/docker/mysql
    source: build

- name: Start the MySQL container
  docker_container:
    name: mysql
    image: docker.1ms.run/mysql:8.0.28
    state: started
    restart_policy: always
    ports:
      - "3306:3306"
    volumes:
      - /data/mysql/data:/var/lib/mysql
      - /data/mysql/logs:/var/log/mysql
      - /data/mysql/my.cnf:/etc/mysql/conf.d/my.cnf
    env:
      MYSQL_ROOT_PASSWORD: "{{ mysql_root_password }}"
    privileged: yes

# Create the top-level playbook
[root@ansible-server tasks]# cd /data/ansible
[root@ansible-server ansible]# cat deploy_mysql.yml
- hosts: mysql
  vars:
    mysql_root_password: "123456"
    server_id: "{{ 221 if inventory_hostname == '192.168.121.221' else 222 if inventory_hostname == '192.168.121.222' else 223 }}"
  tasks:
    - include_role:
        name: mysql
Generate a distinct configuration file for each of the three MySQL nodes (server_id just has to be unique per node):
# Master configuration
sed 's/{{ server_id }}/1/' /data/docker/mysql/my.cnf > /data/mysql/master_my.cnf
scp /data/mysql/master_my.cnf root@192.168.121.221:/data/mysql/my.cnf

# Replica 1 configuration
sed 's/{{ server_id }}/2/' /data/docker/mysql/my.cnf > /data/mysql/slave1_my.cnf
scp /data/mysql/slave1_my.cnf root@192.168.121.222:/data/mysql/my.cnf

# Replica 2 configuration
sed 's/{{ server_id }}/3/' /data/docker/mysql/my.cnf > /data/mysql/slave2_my.cnf
scp /data/mysql/slave2_my.cnf root@192.168.121.223:/data/mysql/my.cnf

Run the deployment:
ansible-playbook /data/ansible/deploy_mysql.yml
All tasks reporting ok means the deployment finished.
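The per-node sed/scp pairs can also be expressed as one loop over a host-to-server_id table taken from the network plan; a sketch of the same rendering step (`render_configs` is a hypothetical helper, and it only prints the scp destination rather than copying):

```shell
#!/usr/bin/env bash
# Render one my.cnf per node from the {{ server_id }} template and print
# where each rendered file would be copied, mirroring the manual sequence.
render_configs() {
  local template=$1 outdir=$2
  local pair host id
  for pair in 192.168.121.221:1 192.168.121.222:2 192.168.121.223:3; do
    host=${pair%:*}   # IP before the colon
    id=${pair#*:}     # server_id after the colon
    sed "s/{{ server_id }}/${id}/" "$template" > "${outdir}/my_${id}.cnf"
    echo "${outdir}/my_${id}.cnf -> root@${host}:/data/mysql/my.cnf"
  done
}
```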
4.3 Configuring MySQL Replication
On the master (192.168.121.221) and the replicas (192.168.121.222, 192.168.121.223):
# Note: auto.cnf stores each instance's UUID, the database's unique identifier
[root@mysql-master]# rm -rf /data/mysql/data/auto.cnf
[root@mysql-slave1]# rm -rf /data/mysql/data   # remove the replica's data directory
[root@mysql-slave2]# rm -rf /data/mysql/data   # remove the replica's data directory

# Seed the replicas with the master's data
[root@mysql-master]# scp -r /data/mysql/data mysql-slave1:/data/mysql/
[root@mysql-master]# scp -r /data/mysql/data mysql-slave2:/data/mysql/

# Enter the container
[root@mysql-master]# docker exec -it mysql bash

# Log in to MySQL
mysql -uroot -p123456

# Create the replication user
CREATE USER 'copy'@'%' IDENTIFIED BY '123456';
GRANT REPLICATION SLAVE ON *.* TO 'copy'@'%';
FLUSH PRIVILEGES;

# MySQL 8.0+ defaults to the caching_sha2_password authentication plugin, which
# requires an encrypted (SSL) connection or extra configuration before it accepts
# unencrypted connections. With no SSL configured and no relaxed settings on the
# master, replica connections fail to authenticate.
# Switch the user to the mysql_native_password plugin:
ALTER USER 'copy'@'%'
IDENTIFIED WITH mysql_native_password BY '123456';
FLUSH PRIVILEGES;

# Check the master status
SHOW MASTER STATUS;
# Record the File and Position values
mysql> show master status;
+------------------+----------+--------------+------------------+------------------------------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set                        |
+------------------+----------+--------------+------------------+------------------------------------------+
| mysql-bin.000003 |      902 |              |                  | 965d216d-7d64-11f0-8771-000c29111b7d:1-8 |
+------------------+----------+--------------+------------------+------------------------------------------+
1 row in set (0.00 sec)
On replica 1 (192.168.121.222):
# Enter the container
docker exec -it mysql bash

# Log in to MySQL
mysql -uroot -p123456

# Stop the replica
mysql> STOP SLAVE;
Query OK, 0 rows affected, 2 warnings (0.00 sec)

# Configure replication
mysql> change master to master_host='192.168.121.221',master_user='copy',master_password='123456',master_port=3306,master_log_file='mysql-bin.000003',master_log_pos=902;
Query OK, 0 rows affected, 9 warnings (0.01 sec)
Parameter notes:
master_host      the master's IP
master_user      the replication user
master_password  the replication user's password
master_port      the master's port
master_log_file  the binlog file the master is currently writing (read after locking tables)
master_log_pos   the binlog position the master is currently writing

# Start the replica
mysql> start slave;
Query OK, 0 rows affected, 1 warning (0.00 sec)

# Check the replica status; Slave_IO_Running and Slave_SQL_Running must both be Yes
mysql> show slave status\G
*************************** 1. row ***************************
  Slave_IO_State: Waiting for source to send event
  Master_Host: 192.168.121.221
  Master_User: chenjun
  Master_Port: 3306
  Connect_Retry: 60
  Master_Log_File: mysql-bin.000003
  Read_Master_Log_Pos: 1354
  Relay_Log_File: mysql-slave1-relay-bin.000003
  Relay_Log_Pos: 366
  Relay_Master_Log_File: mysql-bin.000003
  Slave_IO_Running: Yes
  Slave_SQL_Running: Yes
  Replicate_Do_DB:
  Replicate_Ignore_DB:
  Replicate_Do_Table:
  Replicate_Ignore_Table:
  Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
  Last_Errno: 0
  Last_Error:
  Skip_Counter: 0
  Exec_Master_Log_Pos: 1354
  Relay_Log_Space: 1204
  Until_Condition: None
  Until_Log_File:
  Until_Log_Pos: 0
  Master_SSL_Allowed: No
  Master_SSL_CA_File:
  Master_SSL_CA_Path:
  Master_SSL_Cert:
  Master_SSL_Cipher:
  Master_SSL_Key:
  Seconds_Behind_Master: 0
  Master_SSL_Verify_Server_Cert: No
  Last_IO_Errno: 0
  Last_IO_Error:
  Last_SQL_Errno: 0
  Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
  Master_Server_Id: 221
  Master_UUID: 965d216d-7d64-11f0-8771-000c29111b7d
  Master_Info_File: mysql.slave_master_info
  SQL_Delay: 0
  SQL_Remaining_Delay: NULL
  Slave_SQL_Running_State: Replica has read all relay log; waiting for more updates
  Master_Retry_Count: 86400
  Master_Bind:
  Last_IO_Error_Timestamp:
  Last_SQL_Error_Timestamp:
  Master_SSL_Crl:
  Master_SSL_Crlpath:
  Retrieved_Gtid_Set: 965d216d-7d64-11f0-8771-000c29111b7d:9-10
  Executed_Gtid_Set: 965d216d-7d64-11f0-8771-000c29111b7d:9-10,
966066a6-7d64-11f0-9760-000c29236169:1-6
  Auto_Position: 0
  Replicate_Rewrite_DB:
  Channel_Name:
  Master_TLS_Version:
  Master_public_key_path:
  Get_master_public_key: 0
  Network_Namespace:
1 row in set, 1 warning (0.00 sec)
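Checking `Slave_IO_Running`/`Slave_SQL_Running` by eye is error-prone, so the check can be scripted; a sketch that reads `show slave status\G` output on stdin (`replication_healthy` is a hypothetical helper, not part of the original runbook):

```shell
#!/usr/bin/env bash
# Read `SHOW SLAVE STATUS\G` output on stdin and succeed only if both
# replication threads report Yes.
replication_healthy() {
  awk '
    /Slave_IO_Running:/  { io  = $2 }
    /Slave_SQL_Running:/ { sql = $2 }
    END { exit !(io == "Yes" && sql == "Yes") }
  '
}
```

Example: `docker exec mysql mysql -uroot -p123456 -e 'show slave status\G' | replication_healthy && echo healthy`.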
On replica 2 (192.168.121.223):
# Stop the replica
mysql> stop slave;
Query OK, 0 rows affected, 2 warnings (0.00 sec)

# Configure replication
mysql> change master to
    -> master_host='192.168.121.221',
    -> master_user='copy',
    -> master_password='123456',
    -> master_port=3306,
    -> master_log_file='mysql-bin.000003',
    -> master_log_pos=902;

mysql> start slave;
Query OK, 0 rows affected, 1 warning (0.01 sec)

mysql> show slave status\G
*************************** 1. row ***************************
  Slave_IO_State: Waiting for source to send event
  Master_Host: 192.168.121.221
  Master_User: copy
  Master_Port: 3306
  Connect_Retry: 60
  Master_Log_File: mysql-bin.000003
  Read_Master_Log_Pos: 1354
  Relay_Log_File: mysql-slave2-relay-bin.000002
  Relay_Log_Pos: 326
  Relay_Master_Log_File: mysql-bin.000003
  Slave_IO_Running: Yes
  Slave_SQL_Running: No    # No: replication has not succeeded
  Replicate_Do_DB:
  Replicate_Ignore_DB:
  Replicate_Do_Table:
  Replicate_Ignore_Table:
  Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
  Last_Errno: 1396
  Last_Error: Coordinator stopped because there were error(s) in the worker(s). The most recent failure being: Worker 1 failed executing transaction '965d216d-7d64-11f0-8771-000c29111b7d:9' at master log mysql-bin.000003, end_log_pos 1187. See error log and/or performance_schema.replication_applier_status_by_worker table for more details about this failure or others, if any.
  Skip_Counter: 0
  Exec_Master_Log_Pos: 902
  Relay_Log_Space: 995
  Until_Condition: None
  Until_Log_File:
  Until_Log_Pos: 0
  Master_SSL_Allowed: No
  Master_SSL_CA_File:
  Master_SSL_CA_Path:
  Master_SSL_Cert:
  Master_SSL_Cipher:
  Master_SSL_Key:
  Seconds_Behind_Master: NULL
  Master_SSL_Verify_Server_Cert: No
  Last_IO_Errno: 0
  Last_IO_Error:
  Last_SQL_Errno: 1396
  Last_SQL_Error: Coordinator stopped because there were error(s) in the worker(s). The most recent failure being: Worker 1 failed executing transaction '965d216d-7d64-11f0-8771-000c29111b7d:9' at master log mysql-bin.000003, end_log_pos 1187. See error log and/or performance_schema.replication_applier_status_by_worker table for more details about this failure or others, if any.
  Replicate_Ignore_Server_Ids:
  Master_Server_Id: 221
  Master_UUID: 965d216d-7d64-11f0-8771-000c29111b7d
  Master_Info_File: mysql.slave_master_info
  SQL_Delay: 0
  SQL_Remaining_Delay: NULL
  Slave_SQL_Running_State:
  Master_Retry_Count: 86400
  Master_Bind:
  Last_IO_Error_Timestamp:
  Last_SQL_Error_Timestamp: 250820 01:59:29
  Master_SSL_Crl:
  Master_SSL_Crlpath:
  Retrieved_Gtid_Set: 965d216d-7d64-11f0-8771-000c29111b7d:9-10
  Executed_Gtid_Set: 96621d3c-7d64-11f0-9a8b-000c290f45a7:1-5
  Auto_Position: 0
  Replicate_Rewrite_DB:
  Channel_Name:
  Master_TLS_Version:
  Master_public_key_path:
  Get_master_public_key: 0
  Network_Namespace:
1 row in set, 1 warning (0.00 sec)
While setting up the second replica we hit a small problem: after replica 1 finished its replication setup, the master's Position had moved, so replica 2 ended up with Slave_SQL_Running: No.
Next, fetch the current file and position again. On the master (192.168.121.221):
mysql> SHOW MASTER STATUS;
+------------------+----------+--------------+------------------+-------------------------------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set                         |
+------------------+----------+--------------+------------------+-------------------------------------------+
| mysql-bin.000003 |     1354 |              |                  | 965d216d-7d64-11f0-8771-000c29111b7d:1-10 |
+------------------+----------+--------------+------------------+-------------------------------------------+
1 row in set (0.00 sec)
# The Position has moved from 902 to 1354
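Since gtid_mode = ON is already set in my.cnf, GTID auto-positioning is an alternative worth noting: the replica negotiates its own start point, so a master Position that moves between setting up replica 1 and replica 2 stops mattering. A sketch that emits the corresponding statements (`change_master_gtid_sql` is a hypothetical helper; the runbook itself uses explicit file/position here):

```shell
#!/usr/bin/env bash
# Print the GTID-based CHANGE MASTER statements to run on a replica,
# avoiding stale file/position coordinates entirely.
change_master_gtid_sql() {
  local master_ip=$1 user=$2 pass=$3
  cat <<SQL
STOP SLAVE;
CHANGE MASTER TO
  MASTER_HOST='${master_ip}',
  MASTER_USER='${user}',
  MASTER_PASSWORD='${pass}',
  MASTER_PORT=3306,
  MASTER_AUTO_POSITION=1;
START SLAVE;
SQL
}
```

Example: `change_master_gtid_sql 192.168.121.221 copy 123456 | docker exec -i mysql mysql -uroot -p123456`.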
On replica 2 (192.168.121.223):
# Stop the replica
mysql> stop slave;
Query OK, 0 rows affected, 1 warning (0.01 sec)

# Clear the replication configuration
mysql> reset slave;
Query OK, 0 rows affected, 1 warning (0.00 sec)

# Reconfigure replication
mysql> change master to
    -> master_host='192.168.121.221',
    -> master_user='copy',
    -> master_password='123456',
    -> master_port=3306,
    -> master_log_file='mysql-bin.000003',
    -> master_log_pos=1354;   # note: use the master's current position
Query OK, 0 rows affected, 9 warnings (0.01 sec)

mysql> start slave;
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> show slave status\G
*************************** 1. row ***************************
  Slave_IO_State: Waiting for source to send event
  Master_Host: 192.168.121.221
  Master_User: chenjun
  Master_Port: 3306
  Connect_Retry: 60
  Master_Log_File: mysql-bin.000003
  Read_Master_Log_Pos: 1354
  Relay_Log_File: mysql-slave2-relay-bin.000002
  Relay_Log_Pos: 326
  Relay_Master_Log_File: mysql-bin.000003
  Slave_IO_Running: Yes
  Slave_SQL_Running: Yes    # replication now works
  Replicate_Do_DB:
  Replicate_Ignore_DB:
  Replicate_Do_Table:
  Replicate_Ignore_Table:
  Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
  Last_Errno: 0
  Last_Error:
  Skip_Counter: 0
  Exec_Master_Log_Pos: 1354
  Relay_Log_Space: 543
  Until_Condition: None
  Until_Log_File:
  Until_Log_Pos: 0
  Master_SSL_Allowed: No
  Master_SSL_CA_File:
  Master_SSL_CA_Path:
  Master_SSL_Cert:
  Master_SSL_Cipher:
  Master_SSL_Key:
  Seconds_Behind_Master: 0
  Master_SSL_Verify_Server_Cert: No
  Last_IO_Errno: 0
  Last_IO_Error:
  Last_SQL_Errno: 0
  Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
  Master_Server_Id: 221
  Master_UUID: 965d216d-7d64-11f0-8771-000c29111b7d
  Master_Info_File: mysql.slave_master_info
  SQL_Delay: 0
  SQL_Remaining_Delay: NULL
  Slave_SQL_Running_State: Replica has read all relay log; waiting for more updates
  Master_Retry_Count: 86400
  Master_Bind:
  Last_IO_Error_Timestamp:
  Last_SQL_Error_Timestamp:
  Master_SSL_Crl:
  Master_SSL_Crlpath:
  Retrieved_Gtid_Set:
  Executed_Gtid_Set: 96621d3c-7d64-11f0-9a8b-000c290f45a7:1-5
  Auto_Position: 0
  Replicate_Rewrite_DB:
  Channel_Name:
  Master_TLS_Version:
  Master_public_key_path:
  Get_master_public_key: 0
  Network_Namespace:
1 row in set, 1 warning (0.00 sec)
4.4 Configuring Semi-Synchronous Replication
Edit the master's configuration file. On the master (192.168.121.221):
vim /data/mysql/my.cnf

# Add the following under the [mysqld] section:
[mysqld]
# Enable semi-sync on the master (core parameter)
rpl_semi_sync_master_enabled = 1

# Semi-sync timeout (ms; the default is 10000 ms = 10 s; tune it to your network latency).
# If no replica acknowledges within the timeout, the master falls back to asynchronous replication.
rpl_semi_sync_master_timeout = 30000

Edit the replicas' configuration files. On replica 1 (192.168.121.222) and replica 2 (192.168.121.223):
vim /data/mysql/my.cnf
Likewise, add the following under [mysqld]:
[mysqld]
# Enable semi-sync on the replica (core parameter)
rpl_semi_sync_slave_enabled = 1

# Optional: semi-sync debug trace level (32 writes detailed traces to the error log)
rpl_semi_sync_slave_trace_level = 32

Restart the master and replica services:
docker restart mysql

Verify the persisted configuration took effect:
-- On the master
SHOW GLOBAL VARIABLES LIKE 'rpl_semi_sync_master_enabled';   -- should return ON
SHOW GLOBAL VARIABLES LIKE 'rpl_semi_sync_master_timeout';   -- should return the configured timeout

-- On the replicas
SHOW GLOBAL VARIABLES LIKE 'rpl_semi_sync_slave_enabled';    -- should return ON

Confirm semi-sync is active:
-- Master
SHOW GLOBAL STATUS LIKE 'Rpl_semi_sync_master_status';       -- should return ON

-- Replicas
SHOW GLOBAL STATUS LIKE 'Rpl_semi_sync_slave_status';        -- should return ON
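The ON/OFF check can be scripted against the `SHOW GLOBAL STATUS` rows; a sketch (`semi_sync_on` is a hypothetical helper reading the status output on stdin):

```shell
#!/usr/bin/env bash
# Succeed only if the Rpl_semi_sync_*_status row on stdin reports ON.
semi_sync_on() {
  awk '/Rpl_semi_sync_(master|slave)_status/ { v = $NF } END { exit v != "ON" }'
}
```

Example: `docker exec mysql mysql -uroot -p123456 -N -e "SHOW GLOBAL STATUS LIKE 'Rpl_semi_sync_master_status'" | semi_sync_on && echo "semi-sync active"`.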
If it fails instead:
mysql> SHOW GLOBAL STATUS LIKE 'Rpl_semi_sync_slave_status';
+----------------------------+-------+
| Variable_name              | Value |
+----------------------------+-------+
| Rpl_semi_sync_slave_status | OFF   |
+----------------------------+-------+
1 row in set (0.01 sec)

mysql> STOP SLAVE;
Query OK, 0 rows affected, 2 warnings (0.00 sec)

mysql> START SLAVE;
ERROR 1200 (HY000): The server is not configured as slave; fix in config file or with CHANGE MASTER TO

Reconfigure both replicas' connection to the master; the cause is that the master's binlog changed (the restart rotated it):
mysql> change master to
    -> master_host='192.168.121.221',
    -> master_user='copy',
    -> master_password='123456',
    -> master_port=3306,
    -> master_log_file='mysql-bin.000005',
    -> master_log_pos=197;
Query OK, 0 rows affected, 9 warnings (0.01 sec)

mysql> start slave;
Query OK, 0 rows affected, 1 warning (0.01 sec)

mysql> show slave status\G
*************************** 1. row ***************************
  Slave_IO_State: Waiting for source to send event
  Master_Host: 192.168.121.221
  Master_User: chenjun
  Master_Port: 3306
  Connect_Retry: 60
  Master_Log_File: mysql-bin.000005
  Read_Master_Log_Pos: 197
  Relay_Log_File: mysql-slave1-relay-bin.000002
  Relay_Log_Pos: 326
  Relay_Master_Log_File: mysql-bin.000005
  Slave_IO_Running: Yes
  Slave_SQL_Running: Yes
  Replicate_Do_DB:
  Replicate_Ignore_DB:
  Replicate_Do_Table:
  Replicate_Ignore_Table:
  Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
  Last_Errno: 0
  Last_Error:
  Skip_Counter: 0
  Exec_Master_Log_Pos: 197
  Relay_Log_Space: 543
  Until_Condition: None
  Until_Log_File:
  Until_Log_Pos: 0
  Master_SSL_Allowed: No
  Master_SSL_CA_File:
  Master_SSL_CA_Path:
  Master_SSL_Cert:
  Master_SSL_Cipher:
  Master_SSL_Key:
  Seconds_Behind_Master: 0
  Master_SSL_Verify_Server_Cert: No
  Last_IO_Errno: 0
  Last_IO_Error:
  Last_SQL_Errno: 0
  Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
  Master_Server_Id: 221
  Master_UUID: ebd87b10-7d6c-11f0-965d-000c29111b7d
  Master_Info_File: mysql.slave_master_info
  SQL_Delay: 0
  SQL_Remaining_Delay: NULL
  Slave_SQL_Running_State: Replica has read all relay log; waiting for more updates
  Master_Retry_Count: 86400
  Master_Bind:
  Last_IO_Error_Timestamp:
  Last_SQL_Error_Timestamp:
  Master_SSL_Crl:
  Master_SSL_Crlpath:
  Retrieved_Gtid_Set:
  Executed_Gtid_Set: 965d216d-7d64-11f0-8771-000c29111b7d:1-5
  Auto_Position: 0
  Replicate_Rewrite_DB:
  Channel_Name:
  Master_TLS_Version:
  Master_public_key_path:
  Get_master_public_key: 0
  Network_Namespace:
1 row in set, 1 warning (0.00 sec)
Test replication:
# On the master
create database test;

# On a replica
show databases;

Semi-sync test
On the master (192.168.121.221):
# Record the initial transaction counters for later comparison
mysql> SHOW GLOBAL STATUS LIKE 'Rpl_semi_sync_master_yes_tx';   # transactions that successfully waited for replica acknowledgement
+-----------------------------+-------+
| Variable_name               | Value |
+-----------------------------+-------+
| Rpl_semi_sync_master_yes_tx | 6     |
+-----------------------------+-------+
1 row in set (0.00 sec)

mysql> SHOW GLOBAL STATUS LIKE 'Rpl_semi_sync_master_no_tx';    # transactions committed without waiting (asynchronous)
+----------------------------+-------+
| Variable_name              | Value |
+----------------------------+-------+
| Rpl_semi_sync_master_no_tx | 0     |
+----------------------------+-------+
1 row in set (0.00 sec)

# Create a test database and table on the master and insert data
mysql> CREATE DATABASE IF NOT EXISTS test;
Query OK, 1 row affected (0.00 sec)

mysql> USE test;
Database changed
mysql> CREATE TABLE IF NOT EXISTS t (id INT PRIMARY KEY, val VARCHAR(50));
Query OK, 0 rows affected (0.01 sec)

# Run a transaction
mysql> INSERT INTO t VALUES (1, 'semi-sync-test');
Query OK, 1 row affected (0.00 sec)

mysql> SHOW GLOBAL STATUS LIKE 'Rpl_semi_sync_master_yes_tx';
+-----------------------------+-------+
| Variable_name               | Value |
+-----------------------------+-------+
| Rpl_semi_sync_master_yes_tx | 9     |
+-----------------------------+-------+
1 row in set (0.00 sec)

mysql> SHOW GLOBAL STATUS LIKE 'Rpl_semi_sync_master_no_tx';
+----------------------------+-------+
| Variable_name              | Value |
+----------------------------+-------+
| Rpl_semi_sync_master_no_tx | 0     |
+----------------------------+-------+
1 row in set (0.00 sec)

# If yes_tx increased, transactions committed only after replica acknowledgement: semi-sync is working.
# If no_tx increased, semi-sync is not in effect (check replica connectivity and configuration).
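The yes_tx/no_tx comparison above boils down to one rule: yes_tx must grow while no_tx stays flat. A sketch of that rule (`semi_sync_delta` is a hypothetical helper taking the before/after counter values):

```shell
#!/usr/bin/env bash
# args: before_yes after_yes before_no after_no
# Succeed only if yes_tx advanced and no_tx did not.
semi_sync_delta() {
  local by=$1 ay=$2 bn=$3 an=$4
  [ "$ay" -gt "$by" ] && [ "$an" -eq "$bn" ]
}
```

With the values captured above: `semi_sync_delta 6 9 0 0 && echo "semi-sync confirmed"`.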
5 MHA Deployment and Configuration
5.1 Deploying the MHA Nodes
On mha-manager (192.168.121.220):
# Install dependencies
yum install -y epel-release
yum install -y perl-DBD-MySQL perl-Config-Tiny perl-Log-Dispatch perl-Parallel-ForkManager perl-ExtUtils-CBuilder perl-ExtUtils-MakeMaker

# Install MHA Node
wget https://github.com/yoshinorim/mha4mysql-node/releases/download/v0.58/mha4mysql-node-0.58-0.el7.centos.noarch.rpm
rpm -ivh mha4mysql-node-0.58-0.el7.centos.noarch.rpm

# Install MHA Manager
wget https://github.com/yoshinorim/mha4mysql-manager/releases/download/v0.58/mha4mysql-manager-0.58-0.el7.centos.noarch.rpm
rpm -ivh mha4mysql-manager-0.58-0.el7.centos.noarch.rpm

# Create the MHA directories
mkdir -p /etc/mha/mysql_cluster
mkdir -p /var/log/mha/mysql_cluster

# Create the MHA configuration file
cat > /etc/mha/mysql_cluster.cnf << EOF
[server default]
manager_workdir=/var/log/mha/mysql_cluster
manager_log=/var/log/mha/mysql_cluster/manager.log
master_binlog_dir=/var/lib/mysql
user=mha
password=123456
ping_interval=1
remote_workdir=/tmp
repl_user=mha
repl_password=123456
ssh_user=root
master_ip_failover_script=/usr/local/bin/master_ip_failover
master_ip_online_change_script=/usr/local/bin/master_ip_online_change
secondary_check_script=masterha_secondary_check -s 192.168.121.222 -s 192.168.121.223
shutdown_script=""

[server1]
hostname=192.168.121.221
port=3306
candidate_master=1

[server2]
hostname=192.168.121.222
port=3306
candidate_master=1

[server3]
hostname=192.168.121.223
port=3306
candidate_master=0
EOF

# Fetch the failover scripts and make them executable
scp root@192.168.121.210:/data/docker/mysql/master_ip_failover /usr/local/bin/
scp root@192.168.121.210:/data/docker/mysql/master_ip_online_change /usr/local/bin/
chmod +x /usr/local/bin/master_ip_failover
chmod +x /usr/local/bin/master_ip_online_change

Install MHA Node on all MySQL nodes:
# Run on ansible-server
ansible mysql -m shell -a 'yum install -y perl-DBD-MySQL'
cd /data/docker
wget https://github.com/yoshinorim/mha4mysql-node/releases/download/v0.58/mha4mysql-node-0.58-0.el7.centos.noarch.rpm
ansible mysql -m copy -a 'src=/data/docker/mha4mysql-node-0.58-0.el7.centos.noarch.rpm dest=/tmp/'
ansible mysql -m shell -a 'rpm -ivh /tmp/mha4mysql-node-0.58-0.el7.centos.noarch.rpm'
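Before running the masterha_* checks it can be worth grepping the configuration file for the keys the [server default] section above relies on; a sketch (`check_mha_conf` is a hypothetical helper, not part of MHA itself):

```shell
#!/usr/bin/env bash
# Verify an MHA cluster .cnf contains the keys the [server default]
# section depends on; print each missing key and return non-zero if any.
check_mha_conf() {
  local cnf=$1 missing=0 key
  for key in manager_workdir manager_log master_binlog_dir user password repl_user repl_password ssh_user; do
    grep -q "^${key}=" "$cnf" || { echo "missing: ${key}"; missing=1; }
  done
  return $missing
}
```

Example: `check_mha_conf /etc/mha/mysql_cluster.cnf || echo "fix the config before starting masterha_manager"`.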
5.2 Creating the MySQL Monitoring User
Create the MHA monitoring user on the master (replication propagates it to the replicas):
# Log in to MySQL
mysql -uroot -p123456

# Create the monitoring user
CREATE USER 'mha'@'%' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON *.* TO 'mha'@'%';
ALTER USER 'mha'@'%' IDENTIFIED WITH mysql_native_password BY '123456';
FLUSH PRIVILEGES;
5.3 Testing the MHA Configuration
# Test SSH connectivity; this may fail if passwordless SSH has not been set up
[root@mha-manager mha]# masterha_check_ssh --conf=/etc/mha/mysql_cluster.cnf
Thu Aug 21 15:40:49 2025 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Thu Aug 21 15:40:49 2025 - [info] Reading application default configuration from /etc/mha/mysql_cluster.cnf..
Thu Aug 21 15:40:49 2025 - [info] Reading server configuration from /etc/mha/mysql_cluster.cnf..
Thu Aug 21 15:40:49 2025 - [info] Starting SSH connection tests..
Thu Aug 21 15:40:50 2025 - [debug]
Thu Aug 21 15:40:49 2025 - [debug] Connecting via SSH from root@192.168.121.221(192.168.121.221:22) to root@192.168.121.222(192.168.121.222:22)..
Thu Aug 21 15:40:50 2025 - [debug] ok.
Thu Aug 21 15:40:50 2025 - [debug] Connecting via SSH from root@192.168.121.221(192.168.121.221:22) to root@192.168.121.223(192.168.121.223:22)..
Thu Aug 21 15:40:50 2025 - [debug] ok.
Thu Aug 21 15:40:50 2025 - [debug]
Thu Aug 21 15:40:50 2025 - [debug] Connecting via SSH from root@192.168.121.222(192.168.121.222:22) to root@192.168.121.221(192.168.121.221:22)..
Thu Aug 21 15:40:50 2025 - [debug] ok.
Thu Aug 21 15:40:50 2025 - [debug] Connecting via SSH from root@192.168.121.222(192.168.121.222:22) to root@192.168.121.223(192.168.121.223:22)..
Thu Aug 21 15:40:50 2025 - [debug] ok.
Thu Aug 21 15:40:51 2025 - [debug]
Thu Aug 21 15:40:50 2025 - [debug] Connecting via SSH from root@192.168.121.223(192.168.121.223:22) to root@192.168.121.221(192.168.121.221:22)..
Thu Aug 21 15:40:51 2025 - [debug] ok.
Thu Aug 21 15:40:51 2025 - [debug] Connecting via SSH from root@192.168.121.223(192.168.121.223:22) to root@192.168.121.222(192.168.121.222:22)..
Thu Aug 21 15:40:51 2025 - [debug] ok.
Thu Aug 21 15:40:51 2025 - [info] All SSH connection tests passed successfully.

# Test MySQL replication
[root@mha-manager mha]# masterha_check_repl --conf=/etc/mha/mysql_cluster.cnf
Thu Aug 21 15:40:33 2025 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Thu Aug 21 15:40:33 2025 - [info] Reading application default configuration from /etc/mha/mysql_cluster.cnf..
Thu Aug 21 15:40:33 2025 - [info] Reading server configuration from /etc/mha/mysql_cluster.cnf..
Thu Aug 21 15:40:33 2025 - [info] MHA::MasterMonitor version 0.58.
Thu Aug 21 15:40:34 2025 - [info] GTID failover mode = 1
Thu Aug 21 15:40:34 2025 - [info] Dead Servers:
Thu Aug 21 15:40:34 2025 - [info] Alive Servers:
Thu Aug 21 15:40:34 2025 - [info] 192.168.121.221(192.168.121.221:3306)
Thu Aug 21 15:40:34 2025 - [info] 192.168.121.222(192.168.121.222:3306)
Thu Aug 21 15:40:34 2025 - [info] 192.168.121.223(192.168.121.223:3306)
Thu Aug 21 15:40:34 2025 - [info] Alive Slaves:
Thu Aug 21 15:40:34 2025 - [info] 192.168.121.222(192.168.121.222:3306) Version=8.0.28 (oldest major version between slaves) log-bin:enabled
Thu Aug 21 15:40:34 2025 - [info] GTID ON
Thu Aug 21 15:40:34 2025 - [info] Replicating from 192.168.121.221(192.168.121.221:3306)
Thu Aug 21 15:40:34 2025 - [info] Primary candidate for the new Master (candidate_master is set)
Thu Aug 21 15:40:34 2025 - [info] 192.168.121.223(192.168.121.223:3306) Version=8.0.28 (oldest major version between slaves) log-bin:enabled
Thu Aug 21 15:40:34 2025 - [info] GTID ON
Thu Aug 21 15:40:34 2025 - [info] Replicating from 192.168.121.221(192.168.121.221:3306)
Thu Aug 21 15:40:34 2025 - [info] Current Alive Master: 192.168.121.221(192.168.121.221:3306)
Thu Aug 21 15:40:34 2025 - [info] Checking slave configurations..
Thu Aug 21 15:40:34 2025 - [info] read_only=1 is not set on slave 192.168.121.222(192.168.121.222:3306).
Thu Aug 21 15:40:34 2025 - [info] read_only=1 is not set on slave 192.168.121.223(192.168.121.223:3306).
Thu Aug 21 15:40:34 2025 - [info] Checking replication filtering settings..
Thu Aug 21 15:40:34 2025 - [info] binlog_do_db= , binlog_ignore_db=
Thu Aug 21 15:40:34 2025 - [info] Replication filtering check ok.
Thu Aug 21 15:40:34 2025 - [info] GTID (with auto-pos) is supported. Skipping all SSH and Node package checking.
Thu Aug 21 15:40:34 2025 - [info] Checking SSH publickey authentication settings on the current master..
Thu Aug 21 15:40:34 2025 - [info] HealthCheck: SSH to 192.168.121.221 is reachable.
Thu Aug 21 15:40:34 2025 - [info]
192.168.121.221(192.168.121.221:3306) (current master)
 +--192.168.121.222(192.168.121.222:3306)
 +--192.168.121.223(192.168.121.223:3306)

Thu Aug 21 15:40:34 2025 - [info] Checking replication health on 192.168.121.222..
Thu Aug 21 15:40:34 2025 - [info] ok.
Thu Aug 21 15:40:34 2025 - [info] Checking replication health on 192.168.121.223..
Thu Aug 21 15:40:34 2025 - [info] ok.
Thu Aug 21 15:40:34 2025 - [info] Checking master_ip_failover_script status:
Thu Aug 21 15:40:34 2025 - [info]   /usr/local/bin/master_ip_failover --command=status --ssh_user=root --orig_master_host=192.168.121.221 --orig_master_ip=192.168.121.221 --orig_master_port=3306

IN SCRIPT TEST====/sbin/ifconfig ens32:1 down==/sbin/ifconfig ens32:1 192.168.121.200===

Checking the Status of the script.. OK
Thu Aug 21 15:40:34 2025 - [info] OK.
Thu Aug 21 15:40:34 2025 - [warning] shutdown_script is not defined.
Thu Aug 21 15:40:34 2025 - [info] Got exit code 0 (Not master dead).

MySQL Replication Health is OK.

# Start the MHA Manager
nohup masterha_manager --conf=/etc/mha/mysql_cluster.cnf > /var/log/mha/mysql_cluster/manager.log 2>&1 &

# Check MHA status -- output like the following means the manager started successfully
[root@mha-manager mha]# masterha_check_status --conf=/etc/mha/mysql_cluster.cnf
mysql_cluster (pid:2942) is running(0:PING_OK), master:192.168.121.221
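For automated health checks it can be handy to pull the current master address out of the one-line `masterha_check_status` output. A minimal sketch, using the status line shown above as sample input (the fixed `master:` suffix of the output format is the assumption here):

```shell
# Extract the current master IP from masterha_check_status output.
# Sample string copied from the status line above.
status="mysql_cluster (pid:2942) is running(0:PING_OK), master:192.168.121.221"
master_ip="${status##*master:}"   # strip everything up to and including 'master:'
echo "$master_ip"
```

A script can compare `$master_ip` against the expected master and alert on drift after a failover.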
5.4 Failover test: simulate a mysql-master outage and promote mysql-slave1 to the new master
5.4.1 Manually bring up the VIP on the current MySQL master node
ifconfig ens32:1 192.168.121.200/24
5.4.2 Watch the monitoring log on the mha-manager node
[root@mha-manager mha]# tail -f /var/log/mha/mysql_cluster/manager.log
5.4.3 Simulate the mysql-master outage by stopping the master container
[root@mysql-master ~]# docker stop mysql
mysql
Check whether the VIP has floated to mysql-slave1.
5.4.4 Verify that the reported master is now mysql-slave1's IP
[root@mha-manager mha]# masterha_check_status --conf=/etc/mha/mysql_cluster.cnf
mysql_cluster (pid:4680) is running(0:PING_OK), master:192.168.121.222
Then check the replication source reported on mysql-slave2.
The test is complete: on master failure, MHA automatically promoted slave1 to the new master, and slave2 now replicates from slave1.
5.4.5 Recover the original (failed) mysql-master node
[root@mysql-master ~]# docker start mysql
mysql
[root@mysql-master ~]# docker exec -it mysql bash
root@mysql-master:/# mysql -uroot -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 10
Server version: 8.0.28 MySQL Community Server - GPL

Copyright (c) 2000, 2022, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> change master to master_host='192.168.121.222',master_user='mha',master_password='123456',master_port=3306,master_auto_position=1;
Query OK, 0 rows affected, 8 warnings (0.01 sec)

mysql> start slave;
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for source to send event
                  Master_Host: 192.168.121.222
                  Master_User: mha
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000008
          Read_Master_Log_Pos: 1855
               Relay_Log_File: mysql-master-relay-bin.000002
                Relay_Log_Pos: 420
        Relay_Master_Log_File: mysql-bin.000008
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 1855
              Relay_Log_Space: 637
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
             Master_Server_Id: 222
                  Master_UUID: e6b13ba9-7d6c-11f0-8a0b-000c29236169
             Master_Info_File: mysql.slave_master_info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State: Replica has read all relay log; waiting for more updates
           Master_Retry_Count: 86400
                  Master_Bind:
      Last_IO_Error_Timestamp:
     Last_SQL_Error_Timestamp:
               Master_SSL_Crl:
           Master_SSL_Crlpath:
           Retrieved_Gtid_Set:
            Executed_Gtid_Set: 965d216d-7d64-11f0-8771-000c29111b7d:1-10,
e6b13ba9-7d6c-11f0-8a0b-000c29236169:1-4,
ebd87b10-7d6c-11f0-965d-000c29111b7d:1-56
                Auto_Position: 1
         Replicate_Rewrite_DB:
                 Channel_Name:
           Master_TLS_Version:
       Master_public_key_path:
        Get_master_public_key: 0
            Network_Namespace:
1 row in set, 1 warning (0.00 sec)
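For readability, the rejoin statement used above can be written out on multiple lines. This is a sketch using the values from this deployment; note that `MASTER_AUTO_POSITION=1` requires GTID mode (`gtid_mode=ON`) on both ends, which this cluster uses:

```
-- Re-point the recovered node at the new master (192.168.121.222),
-- then resume replication.
CHANGE MASTER TO
  MASTER_HOST='192.168.121.222',
  MASTER_USER='mha',
  MASTER_PASSWORD='123456',
  MASTER_PORT=3306,
  MASTER_AUTO_POSITION=1;   -- GTID auto-positioning
START SLAVE;
```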
5.4.6 Restart the MHA manager and verify the current master node
[root@mha-manager mha]# systemctl restart mha
[root@mha-manager mha]# masterha_check_status --conf=/etc/mha/mysql_cluster.cnf
mysql_cluster (pid:5425) is running(0:PING_OK), master:192.168.121.222
5.5 Configure MHA to start automatically
[root@mha-manager mha]# nohup masterha_manager --conf=/etc/mha/mysql_cluster.cnf > /var/log/mha/mysql_cluster/manager.log 2>&1 &
[1] 3978
[root@mha-manager mha]# masterha_check_status --conf=/etc/mha/mysql_cluster.cnf
mysql_cluster (pid:3978) is running(0:PING_OK), master:192.168.121.221
[root@mha-manager mha]# vim /etc/systemd/system/mha.service
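The content of `/etc/mha/mysql_cluster.cnf`'s companion unit file is not shown above; the following is a minimal sketch consistent with the `systemctl status` output shown later (Description and ExecStart match it; the `Restart` policy and `After=` dependency are assumptions to adjust for your environment):

```
# /etc/systemd/system/mha.service -- minimal sketch
[Unit]
Description=MHA Manager for MySQL Cluster
After=network.target

[Service]
Type=simple
ExecStart=/usr/bin/masterha_manager --conf=/etc/mha/mysql_cluster.cnf
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```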
[root@mha-manager mha]# systemctl daemon-reload
[root@mha-manager mha]# systemctl enable mha
Created symlink from /etc/systemd/system/multi-user.target.wants/mha.service to /etc/systemd/system/mha.service.
[root@mha-manager mha]# systemctl start mha
[root@mha-manager mha]# systemctl status mha
● mha.service - MHA Manager for MySQL Cluster
   Loaded: loaded (/etc/systemd/system/mha.service; enabled; vendor preset: disabled)
   Active: active (running) since 三 2025-08-20 02:24:22 CST; 4s ago
 Main PID: 4164 (perl)
    Tasks: 1
   Memory: 16.9M
   CGroup: /system.slice/mha.service
           └─4164 perl /usr/bin/masterha_manager --conf=/etc/mha/mysql_cluster.cnf

8月 20 02:24:22 mha-manager systemd[1]: Started MHA Manager for MySQL Cluster.
8月 20 02:24:22 mha-manager masterha_manager[4164]: Wed Aug 20 02:24:22 2025 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
8月 20 02:24:22 mha-manager masterha_manager[4164]: Wed Aug 20 02:24:22 2025 - [info] Reading application default configuration from /etc/mha/mysql_cluster.cnf..
8月 20 02:24:22 mha-manager masterha_manager[4164]: Wed Aug 20 02:24:22 2025 - [info] Reading server configuration from /etc/mha/mysql_cluster.cnf..
6 MyCat2 Deployment and Configuration
6.1 Install and deploy MyCat2
6.1.1 Deploy the backing MySQL databases (in Docker) on the mycat1 and mycat2 servers
[root@mycat1 ~]# docker run -d --name mysql --restart=always -p 3306:3306 -v /data/mysql/data:/var/lib/mysql -v /data/mysql/logs:/var/log/mysql -e MYSQL_ROOT_PASSWORD=123456 docker.1ms.run/mysql:8.0.28
[root@mycat2 ~]# docker run -d --name mysql --restart=always -p 3306:3306 -v /data/mysql/data:/var/lib/mysql -v /data/mysql/logs:/var/log/mysql -e MYSQL_ROOT_PASSWORD=123456 docker.1ms.run/mysql:8.0.28
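MyCat will fail to start if its backing MySQL container is not yet accepting connections, so automation benefits from a wait loop. A hedged sketch: the `docker exec … mysqladmin ping` probe in the comment is the assumed readiness check for this deployment, while the demo at the bottom exercises the helper with `true` so it runs anywhere:

```shell
# wait_for MAX DELAY CMD...: retry CMD up to MAX times, sleeping DELAY seconds
# between attempts; returns 1 if CMD never succeeds.
wait_for() {
  local max=$1 delay=$2; shift 2
  local n=0
  until "$@"; do
    n=$((n + 1))
    [ "$n" -ge "$max" ] && return 1
    sleep "$delay"
  done
}

# Intended use (assumes the mysql container started above):
#   wait_for 30 2 docker exec mysql mysqladmin ping -uroot -p123456 --silent

# Portable demo of the helper:
wait_for 3 0 true && echo "ready"
```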
6.1.2 Install the Java environment
# Install the Java runtime required by MyCat
[root@mycat1 ~]# yum install -y java
[root@mycat2 ~]# yum install -y java
6.1.3 Download the MyCat installer package and jar
Link: https://pan.baidu.com/s/1w9hr2EH9Cpqt6LFjn8MPrw?pwd=63hu  Extraction code: 63hu
6.1.4 Unzip the MyCat ZIP package
[root@mycat1 ~]# yum install -y unzip
[root@mycat2 ~]# yum install -y unzip
[root@mycat1 ~]# unzip mycat2-install-template-1.21.zip
[root@mycat2 ~]# unzip mycat2-install-template-1.21.zip
6.1.5 Move the unzipped mycat directory to /usr/local/
[root@mycat1 ~]# mv mycat /usr/local/
[root@mycat2 ~]# mv mycat /usr/local/
6.1.6 Put the jar into /usr/local/mycat/lib
[root@mycat1 ~]# mv mycat2-1.22-release-jar-with-dependencies-2022-10-13.jar /usr/local/mycat/lib/
[root@mycat2 ~]# mv mycat2-1.22-release-jar-with-dependencies-2022-10-13.jar /usr/local/mycat/lib/
6.1.7 Grant execute permission on the bin directory to prevent startup errors
[root@mycat1 ~]# cd /usr/local/mycat/
[root@mycat1 mycat]# ls
bin conf lib logs
[root@mycat1 mycat]# chmod +x bin/*
[root@mycat1 mycat]# cd bin/
[root@mycat1 bin]# ll
總用量 2588
-rwxr-xr-x 1 root root 15666 3月 5 2021 mycat
-rwxr-xr-x 1 root root 3916 3月 5 2021 mycat.bat
-rwxr-xr-x 1 root root 281540 3月 5 2021 wrapper-aix-ppc-32
-rwxr-xr-x 1 root root 319397 3月 5 2021 wrapper-aix-ppc-64
-rwxr-xr-x 1 root root 253808 3月 5 2021 wrapper-hpux-parisc-64
-rwxr-xr-x 1 root root 140198 3月 5 2021 wrapper-linux-ppc-64
-rwxr-xr-x 1 root root 99401 3月 5 2021 wrapper-linux-x86-32
-rwxr-xr-x 1 root root 111027 3月 5 2021 wrapper-linux-x86-64
-rwxr-xr-x 1 root root 114052 3月 5 2021 wrapper-macosx-ppc-32
-rwxr-xr-x 1 root root 233604 3月 5 2021 wrapper-macosx-universal-32
-rwxr-xr-x 1 root root 253432 3月 5 2021 wrapper-macosx-universal-64
-rwxr-xr-x 1 root root 112536 3月 5 2021 wrapper-solaris-sparc-32
-rwxr-xr-x 1 root root 148512 3月 5 2021 wrapper-solaris-sparc-64
-rwxr-xr-x 1 root root 110992 3月 5 2021 wrapper-solaris-x86-32
-rwxr-xr-x 1 root root 204800 3月 5 2021 wrapper-windows-x86-32.exe
-rwxr-xr-x 1 root root 220672 3月   5 2021 wrapper-windows-x86-64.exe

Repeat on mycat2 (the ll listing is identical to mycat1's):
[root@mycat2 ~]# cd /usr/local/mycat/
[root@mycat2 mycat]# ls
bin  conf  lib  logs
[root@mycat2 mycat]# chmod +x bin/*
[root@mycat2 mycat]# cd bin/
[root@mycat2 bin]# ll
6.1.8 Add mycat2 to the PATH environment variable and configure start-on-boot
[root@mycat1 ~]# echo "PATH=/usr/local/mycat/bin/:$PATH" >>/root/.bashrc
[root@mycat1 ~]# PATH=/usr/local/mycat/bin/:$PATH
[root@mycat2 ~]# echo "PATH=/usr/local/mycat/bin/:$PATH" >>/root/.bashrc
[root@mycat2 ~]# PATH=/usr/local/mycat/bin/:$PATH
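Note that the double-quoted `echo` above writes the already-expanded value of `$PATH` into `.bashrc`; single quotes would defer expansion to login time. Either way, the mycat bin directory should now be searched first. A minimal check:

```shell
# Prepend the MyCat bin directory and confirm it is first on the search path.
PATH=/usr/local/mycat/bin/:$PATH
first_entry=$(echo "$PATH" | cut -d: -f1)
echo "$first_entry"
```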
6.1.9 Edit the default datasource file prototypeDs.datasource.json, then start MyCat (it connects to the local Docker MySQL instance)
[root@mycat1 ~]# cd /usr/local/mycat/
[root@mycat1 mycat]# ls
bin conf lib logs
[root@mycat1 mycat]# cd conf/datasources/
[root@mycat1 datasources]# ls
prototypeDs.datasource.json
[root@mycat1 datasources]# vim prototypeDs.datasource.json
[root@mycat2 ~]# cd /usr/local/mycat/
[root@mycat2 mycat]# ls
bin conf lib logs
[root@mycat2 mycat]# cd conf/datasources/
[root@mycat2 datasources]# ls
prototypeDs.datasource.json
[root@mycat2 datasources]# vim prototypeDs.datasource.json

{
    "dbType":"mysql",
    "idleTimeout":60000,
    "initSqls":[],
    "initSqlsGetConnection":true,
    "instanceType":"READ_WRITE",
    "maxCon":1000,
    "maxConnectTimeout":3000,
    "maxRetryCount":5,
    "minCon":1,
    "name":"prototypeDs",
    "password":"123456",    # local MySQL root password