Database backup, master-slave replication, and cluster configuration
- 1 MySQL
- 1.1 Installing MySQL with Docker
- 1.2 Master-slave replication
- 1.2.1 Master node configuration
- 1.2.2 Slave node configuration
- 1.2.3 Creating the replication user
- 1.2.4 Enabling master-slave replication
- 1.2.5 Verifying replication
- 1.3 Master-slave switchover
- 1.3.1 Set the master to read-only (on 192.168.1.151)
- 1.3.2 Check that the slave has caught up (on 192.168.1.152)
- 1.3.3 Stop and reset the slave (on 192.168.1.152)
- 1.3.4 Remove the read-only setting on the original slave (on 192.168.1.152)
- 1.3.5 Switch the roles
- 1.3.6 Verification
- 2 Redis
- 2.1 Redis master-slave replication
- 2.2 Redis sentinel
- 2.3 Redis Cluster
- 2.4 Differences between master-slave replication, sentinel, and cluster
- 3 MongoDB
- 3.1 MongoDB master-slave replication cluster (not recommended)
- 3.2 MongoDB replica set (Replica Set) cluster
- 3.2.1 Building the cluster
- 3.2.2 Testing
- 3.2.3 Connecting to the MongoDB replica set with Navicat Premium
- 3.2.4 Connecting to the MongoDB replica set from Golang
- 3.3 MongoDB sharded cluster (not yet built successfully)
1 MySQL
Server configuration:

OS version | IP | MySQL version | Role |
---|---|---|---|
7.9.2009 | 192.168.1.151 | 8.0.21 | Master |
7.9.2009 | 192.168.1.152 | 8.0.21 | Slave |
1.1 Installing MySQL with Docker
Install MySQL on both 192.168.1.151 and 192.168.1.152.
- Create the mount directories
mkdir -p /opt/soft/mysql/{conf,data,log}
- Pull the image
docker pull mysql:8.0.21
- docker-compose.yaml (container name dc_mysql_master on the master, dc_mysql_slave on the slave)
version: '3'
services:
  mysql:
    image: mysql:8.0.21
    container_name: dc_mysql_master
    restart: always
    environment:
      TZ: Asia/Shanghai
      MYSQL_ROOT_PASSWORD: 123456
    ports:
      - 4306:3306
    volumes:
      - /etc/localtime:/etc/localtime:ro # mount the host time into the container, read-only
      - /opt/soft/mysql/data/:/var/lib/mysql/
      - /opt/soft/mysql/conf/my.cnf:/etc/mysql/my.cnf
      - /opt/soft/mysql/log/:/var/log/mysql/
    logging:
      driver: json-file
      options:
        max-size: 10m
        max-file: 5
    command:
      --default-authentication-plugin=mysql_native_password
      --character-set-server=utf8mb4
      --collation-server=utf8mb4_general_ci
      --explicit_defaults_for_timestamp=true
      --lower_case_table_names=1
- Create the configuration file:
vim /opt/soft/mysql/conf/my.cnf
my.cnf:
[client]
# default client character set
default-character-set=utf8mb4
[mysql]
# default character set for the mysql client
default-character-set=utf8mb4
[mysqld]
# avoid the MySQL 8.0 GROUP BY (ONLY_FULL_GROUP_BY) issue
sql_mode='STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_ENGINE_SUBSTITUTION'
# restrict data import/export operations to this directory
secure_file_priv=/var/lib/mysql
# accept connections from any IP
bind-address = 0.0.0.0
- Create the MySQL container
docker-compose up -d
- Open port 4306
# open port 4306
firewall-cmd --zone=public --add-port=4306/tcp --permanent
# reload the firewall
firewall-cmd --reload
# list the open ports
firewall-cmd --list-port
1.2 Master-slave replication
1.2.1 Master node configuration
- Edit my.cnf and add the following under [mysqld]:
#==================== replication settings =========================
# node id; every MySQL instance needs a unique server_id
server_id=1
# [optional] binlog and binlog index file names
log_bin=mysql-bin
log_bin_index=binlog.index
# [optional] enable the relay log
relay-log=mysql-relay
# [optional] maximum size of a single binlog file, 1G by default
#max_binlog_size=500M
# [optional] binlog format: STATEMENT, ROW, or MIXED
binlog_format=row
# [optional] 0 (default) = read-write (master), 1 = read-only (slave)
read-only=0
# [optional] how long to keep binlog files, in seconds (never deleted by default)
#binlog_expire_logs_seconds=6000
# [optional] databases excluded from replication
#binlog-ignore-db=test
# [optional] databases to replicate; everything is logged by default, e.g. binlog-do-db=atguigu_master_slave
#binlog-do-db=<database to replicate>
- Full my.cnf
[client]
# default client character set
default-character-set=utf8mb4
[mysql]
# default character set for the mysql client
default-character-set=utf8mb4
[mysqld]
# avoid the MySQL 8.0 GROUP BY (ONLY_FULL_GROUP_BY) issue
sql_mode='STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_ENGINE_SUBSTITUTION'
# restrict data import/export operations to this directory
secure_file_priv=/var/lib/mysql
# accept connections from any IP
bind-address = 0.0.0.0
#==================== replication settings =========================
# node id; every MySQL instance needs a unique server_id
server_id=1
# [optional] binlog and binlog index file names
log_bin=mysql-bin
log_bin_index=binlog.index
# [optional] enable the relay log
relay-log=mysql-relay
# [optional] maximum size of a single binlog file, 1G by default
#max_binlog_size=500M
# [optional] binlog format: STATEMENT, ROW, or MIXED
binlog_format=row
# [optional] 0 (default) = read-write (master), 1 = read-only (slave)
read-only=0
# [optional] how long to keep binlog files, in seconds (never deleted by default)
#binlog_expire_logs_seconds=6000
# [optional] databases excluded from replication
#binlog-ignore-db=test
# [optional] databases to replicate; everything is logged by default, e.g. binlog-do-db=atguigu_master_slave
#binlog-do-db=<database to replicate>
- Restart the database after changing the configuration.
1.2.2 Slave node configuration
Difference from the master configuration: since we will demonstrate a master-slave switchover later, both nodes need binlog and relay log enabled in advance. The two configurations are therefore almost identical; the only options that differ are server_id and read-only.
- Edit my.cnf and add the following under [mysqld]:
#==================== replication settings =========================
# node id; every MySQL instance needs a unique server_id
server_id=2
# [optional] binlog and binlog index file names
log_bin=mysql-log
log_bin_index=binlog.index
# [optional] enable the relay log
relay-log=mysql-relay
# [optional] maximum size of a single binlog file, 1G by default
#max_binlog_size=500M
# [optional] binlog format: STATEMENT, ROW, or MIXED
binlog_format=row
# [optional] 0 (default) = read-write (master), 1 = read-only (slave)
read-only=1
# [optional] how long to keep binlog files, in seconds (never deleted by default)
#binlog_expire_logs_seconds=6000
# [optional] databases excluded from replication
#binlog-ignore-db=test
# [optional] databases to replicate; everything is logged by default, e.g. binlog-do-db=atguigu_master_slave
#binlog-do-db=<database to replicate>
- Full my.cnf
[client]
# default client character set
default-character-set=utf8mb4
[mysql]
# default character set for the mysql client
default-character-set=utf8mb4
[mysqld]
# avoid the MySQL 8.0 GROUP BY (ONLY_FULL_GROUP_BY) issue
sql_mode='STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_ENGINE_SUBSTITUTION'
# restrict data import/export operations to this directory
secure_file_priv=/var/lib/mysql
# accept connections from any IP
bind-address = 0.0.0.0
#==================== replication settings =========================
# node id; every MySQL instance needs a unique server_id
server_id=2
# [optional] binlog and binlog index file names
log_bin=mysql-log
log_bin_index=binlog.index
# [optional] enable the relay log
relay-log=mysql-relay
# [optional] maximum size of a single binlog file, 1G by default
#max_binlog_size=500M
# [optional] binlog format: STATEMENT, ROW, or MIXED
binlog_format=row
# [optional] 0 (default) = read-write (master), 1 = read-only (slave)
read-only=1
# [optional] how long to keep binlog files, in seconds (never deleted by default)
#binlog_expire_logs_seconds=6000
# [optional] databases excluded from replication
#binlog-ignore-db=test
# [optional] databases to replicate; everything is logged by default, e.g. binlog-do-db=atguigu_master_slave
#binlog-do-db=<database to replicate>
- Restart the database after changing the configuration.
1.2.3 Creating the replication user
Run the following on both the master and the slave.
Master:
# enter the container
docker exec -it dc_mysql_master /bin/sh
# log in
mysql -uroot -p
# create the slave1 user
CREATE USER 'slave1'@'%' IDENTIFIED BY '123456';
# grant replication privileges to slave1
GRANT replication slave on *.* to 'slave1'@'%';
# flush privileges
flush privileges;
Slave:
# enter the container
docker exec -it dc_mysql_slave /bin/sh
# log in
mysql -uroot -p
# create the slave1 user
CREATE USER 'slave1'@'%' IDENTIFIED BY '123456';
# grant replication privileges to slave1
GRANT replication slave on *.* to 'slave1'@'%';
# flush privileges
flush privileges;
1.2.4 Enabling master-slave replication
Do not write any data while replication is being enabled, or the nodes may end up inconsistent. Ideally, enable replication right after installation, before the databases are used at all.
- Check the master's current binlog position (run on the master, 192.168.1.151):
# enter the container
docker exec -it dc_mysql_master /bin/sh
# log in
mysql -uroot -p
# show the current binlog position
mysql> show master status;
+------------------+----------+--------------+------------------+-------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql-bin.000005 | 156 | | | |
+------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)
- Configure and start replication on the slave (run on the slave, 192.168.1.152; note the port):
# enter the container
docker exec -it dc_mysql_slave /bin/sh
# log in
mysql -uroot -p
# point the slave at the master
CHANGE MASTER TO MASTER_HOST='192.168.1.151', MASTER_PORT=4306, MASTER_USER='slave1', MASTER_PASSWORD='123456', MASTER_LOG_FILE='mysql-bin.000005', MASTER_LOG_POS=156;
# start replication on the slave
start slave;
# check the replication status
show slave status\G;
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.1.151
                  Master_User: slave1
                  Master_Port: 4306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000005
          Read_Master_Log_Pos: 1459
               Relay_Log_File: mysql-relay.000002
                Relay_Log_Pos: 1627
        Relay_Master_Log_File: mysql-bin.000005
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 1459
              Relay_Log_Space: 1832
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
             Master_Server_Id: 1
                  Master_UUID: 54db2059-a589-11ef-a788-0242ac120002
             Master_Info_File: mysql.slave_master_info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates
           Master_Retry_Count: 86400
                  Master_Bind:
      Last_IO_Error_Timestamp:
     Last_SQL_Error_Timestamp:
               Master_SSL_Crl:
           Master_SSL_Crlpath:
           Retrieved_Gtid_Set:
            Executed_Gtid_Set:
                Auto_Position: 0
         Replicate_Rewrite_DB:
                 Channel_Name:
           Master_TLS_Version:
       Master_public_key_path:
        Get_master_public_key: 0
            Network_Namespace:
1 row in set (0.00 sec)
Note: when checking replication on the slave with show slave status;, replication has started successfully and is running normally if Slave_IO_Running and Slave_SQL_Running are both Yes and Last_Error is empty.
If either of them is No, troubleshoot as follows:
1. Wait a moment first: after start slave the threads do not always turn to Yes immediately.
2. Check whether the firewalls on the master and slave servers allow the port.
3. Check that the slave1 user on the master was created correctly; from the slave, try logging in to the master to verify:
docker exec -it dc_mysql_slave /bin/sh
mysql -h 192.168.1.151 -P 4306 -u slave1 -p123456
4. If a parameter of the earlier "change master to ..." command was wrong, run "stop slave;" on the slave, run "show master status;" on the master again to get the latest binlog file and offset, re-run "change master to ..." on the slave, and finally run "start slave;".
- Disabling replication:
- Stop the replication threads on the slave:
STOP SLAVE;
This stops both replication threads on the slave (the I/O thread and the SQL thread).
- Remove the replication configuration on the slave:
If you want to take the slave out of replication entirely, clear its replication metadata with:
RESET SLAVE ALL;
If you additionally want to reset the server's own binary logs, run RESET MASTER; — note that this deletes all binary log files on that server, so back them up first if you still need them.
1.2.5 Verifying replication
- On the master (192.168.1.151), create a database, create a table, and insert rows; each step is replicated to the slave in real time.
- Check that the slave (192.168.1.152) received all of it.
At this point master-slave replication is up and running; the sketch below shows one way to verify it from code.
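A minimal Go sketch of the same verification, assuming the Go MySQL driver github.com/go-sql-driver/mysql; the database name `demo` and table `repl_check` are illustrative and not part of the setup above:

```go
package main

import (
	"database/sql"
	"fmt"
	"log"
	"time"

	_ "github.com/go-sql-driver/mysql" // registers the "mysql" driver with database/sql
)

func main() {
	// Both nodes publish MySQL on host port 4306 (see docker-compose.yaml above).
	master, err := sql.Open("mysql", "root:123456@tcp(192.168.1.151:4306)/demo")
	if err != nil {
		log.Fatal(err)
	}
	defer master.Close()

	slave, err := sql.Open("mysql", "root:123456@tcp(192.168.1.152:4306)/demo")
	if err != nil {
		log.Fatal(err)
	}
	defer slave.Close()

	// Write on the master only.
	if _, err = master.Exec(`CREATE TABLE IF NOT EXISTS repl_check (id INT AUTO_INCREMENT PRIMARY KEY, note VARCHAR(64))`); err != nil {
		log.Fatal(err)
	}
	if _, err = master.Exec(`INSERT INTO repl_check(note) VALUES (?)`, "hello from master"); err != nil {
		log.Fatal(err)
	}

	// Replication is asynchronous; give it a moment, then read back from the slave.
	time.Sleep(2 * time.Second)
	var n int
	if err = slave.QueryRow(`SELECT COUNT(*) FROM repl_check`).Scan(&n); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("rows visible on slave: %d\n", n)
}
```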
1.3 Master-slave switchover
Overview:
A switchover essentially swaps the configuration of the two nodes:
- swap the read/write permissions of the two nodes;
- swap their read/write configuration.
It requires an existing master-slave setup; see the previous two subsections for how to build one.
Server configuration:

OS version | IP | MySQL version | Before | After |
---|---|---|---|---|
7.9.2009 | 192.168.1.151 | 8.0.21 | Master | Slave |
7.9.2009 | 192.168.1.152 | 8.0.21 | Slave | Master |
1.3.1 Set the master to read-only (on 192.168.1.151)
Put the master into read-only mode so that no writes land during the switchover, which would otherwise leave the nodes inconsistent afterwards.
Note: read-only mode set via SQL is temporary and is lost on restart. To make it survive a restart, put the read_only options into my.cnf.
# enter the container
docker exec -it dc_mysql_master /bin/sh
# log in
mysql -uroot -p
# show the read-only settings
show VARIABLES like '%read_only%';
+-----------------------+-------+
| Variable_name | Value |
+-----------------------+-------+
| innodb_read_only | OFF |
| read_only | OFF |
| super_read_only | OFF |
| transaction_read_only | OFF |
+-----------------------+-------+
4 rows in set (0.00 sec)
# enable global read-only for everyone, including the root superuser
set global super_read_only='on';
# enable global read-only for ordinary users; in theory redundant once super_read_only is on
set global read_only='on';
# show the read-only settings again
show VARIABLES like '%read_only%';
show VARIABLES like '%read_only%';
+-----------------------+-------+
| Variable_name | Value |
+-----------------------+-------+
| innodb_read_only | OFF |
| read_only | ON |
| super_read_only | ON |
| transaction_read_only | OFF |
+-----------------------+-------+
4 rows in set (0.00 sec)
1.3.2 Check that the slave has caught up (on 192.168.1.152)
Run "show slave status\G;" on the slave and confirm the output matches all of the following:
- Slave_IO_Running: Yes
- Slave_SQL_Running: Yes
- Seconds_Behind_Master: 0
- Slave_SQL_Running_State: Replica has read all relay log; waiting for more updates
Note: Slave_IO_Running and Slave_SQL_Running both being Yes means replication is healthy, and Seconds_Behind_Master being 0 means the slave currently holds the same data as the master.
Concretely:
# enter the container
docker exec -it dc_mysql_slave /bin/sh
# log in
mysql -uroot -p
show slave status\G;
1.3.3 Stop and reset the slave (on 192.168.1.152)
# enter the container
docker exec -it dc_mysql_slave /bin/sh
# log in
mysql -uroot -p
# stop replication
stop slave;
# wipe the node's replication metadata and delete/reset its relay logs
reset slave all;
1.3.4 Remove the read-only setting on the original slave (on 192.168.1.152)
Note: read-only mode set via SQL is temporary and is lost on restart. To make the change permanent, set the read_only options in my.cnf, or simply remove them from my.cnf, since read-only is off by default.
# enter the container
docker exec -it dc_mysql_slave /bin/sh
# log in
mysql -uroot -p
# show the read-only settings
show VARIABLES like '%read_only%';
+-----------------------+-------+
| Variable_name | Value |
+-----------------------+-------+
| innodb_read_only | OFF |
| read_only | ON |
| super_read_only | OFF |
| transaction_read_only | OFF |
+-----------------------+-------+
4 rows in set (0.00 sec)
# disable global read-only (lets the root superuser write again)
set global super_read_only='off';
# disable global read-only (lets ordinary users write again)
set global read_only='off';
# show the read-only settings again
show VARIABLES like '%read_only%';
show VARIABLES like '%read_only%';
+-----------------------+-------+
| Variable_name | Value |
+-----------------------+-------+
| innodb_read_only | OFF |
| read_only | OFF |
| super_read_only | OFF |
| transaction_read_only | OFF |
+-----------------------+-------+
4 rows in set (0.00 sec)
1.3.5 Switch the roles
Do not perform any writes while the switchover is in progress, or the nodes may end up inconsistent afterwards.
- Check the latest binlog file and offset on the original slave (on 192.168.1.152).
# enter the container
docker exec -it dc_mysql_slave /bin/sh
# log in
mysql -uroot -p
show master status;
+------------------+----------+--------------+------------------+-------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql-log.000001 | 3096 | | | |
+------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)
- Point the original master at the original slave as its new master (on 192.168.1.151)
# enter the container
docker exec -it dc_mysql_master /bin/sh
# log in
mysql -uroot -p
# set the new master's info (note the log file name differs from the one in 1.2 — it is not just a new index)
CHANGE MASTER TO MASTER_HOST='192.168.1.152', MASTER_PORT=4306, MASTER_USER='slave1', MASTER_PASSWORD='123456', MASTER_LOG_FILE='mysql-log.000001', MASTER_LOG_POS=3096;
# start replication
start slave;
# check the replication status
show slave status\G;
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.1.152
                  Master_User: slave1
                  Master_Port: 4306
                Connect_Retry: 60
              Master_Log_File: mysql-log.000001
          Read_Master_Log_Pos: 3096
               Relay_Log_File: mysql-relay.000002
                Relay_Log_Pos: 324
        Relay_Master_Log_File: mysql-log.000001
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 3096
              Relay_Log_Space: 529
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
             Master_Server_Id: 2
                  Master_UUID: 220f1fd5-a620-11ef-a9f5-0242ac120002
             Master_Info_File: mysql.slave_master_info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates
           Master_Retry_Count: 86400
                  Master_Bind:
      Last_IO_Error_Timestamp:
     Last_SQL_Error_Timestamp:
               Master_SSL_Crl:
           Master_SSL_Crlpath:
           Retrieved_Gtid_Set:
            Executed_Gtid_Set:
                Auto_Position: 0
         Replicate_Rewrite_DB:
                 Channel_Name:
           Master_TLS_Version:
       Master_public_key_path:
        Get_master_public_key: 0
            Network_Namespace:
1 row in set (0.00 sec)
Note: when checking with show slave status;, replication has started successfully and is running normally if Slave_IO_Running and Slave_SQL_Running are both Yes and Last_Error is empty.
1.3.6 Verification
- Insert rows on the new master (192.168.1.152).
- Query them on the new slave (192.168.1.151); the rows inserted on the new master are automatically replicated to the new slave. A small Go probe for telling the two roles apart follows below.
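After a switchover, an application can tell which node currently accepts writes by reading the read_only flags that the procedure above toggles. A minimal Go sketch, under the same go-sql-driver/mysql assumption as in 1.2.5:

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/go-sql-driver/mysql"
)

// role reports whether the node at addr currently accepts writes,
// based on its global read_only / super_read_only variables.
func role(addr string) (string, error) {
	db, err := sql.Open("mysql", "root:123456@tcp("+addr+")/")
	if err != nil {
		return "", err
	}
	defer db.Close()

	var readOnly, superReadOnly int
	err = db.QueryRow(`SELECT @@global.read_only, @@global.super_read_only`).
		Scan(&readOnly, &superReadOnly)
	if err != nil {
		return "", err
	}
	if readOnly == 1 || superReadOnly == 1 {
		return "read-only (slave)", nil
	}
	return "writable (master)", nil
}

func main() {
	for _, addr := range []string{"192.168.1.151:4306", "192.168.1.152:4306"} {
		r, err := role(addr)
		if err != nil {
			log.Printf("%s: %v", addr, err)
			continue
		}
		fmt.Printf("%s is %s\n", addr, r)
	}
}
```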
2 Redis
Redis master-slave replication combined with the sentinel mechanism is the recommended setup.
2.1 Redis master-slave replication
Server configuration:

OS version | IP | Redis version | Role | Port |
---|---|---|---|---|
7.9.2009 | 192.168.1.151 | 7.4.0 | Master | 26379 |
7.9.2009 | 192.168.1.152 | 7.4.0 | Slave 1 | 26379 |
7.9.2009 | 192.168.1.153 | 7.4.0 | Slave 2 | 26379 |
- On every node, create the mount directories and set permissions
mkdir -p /opt/soft/redis/redis_server/{conf,data,log}
chmod 777 /opt/soft/redis/redis_server/data
chmod 777 /opt/soft/redis/redis_server/conf
chmod 777 /opt/soft/redis/redis_server/log
- On every node, pull the image
docker pull redis:7.4.0
- docker-compose.yaml on each node
Create docker-compose.yaml under /opt/soft/redis/redis_server on each node:
cd /opt/soft/redis/redis_server
vim docker-compose.yaml
docker-compose.yaml contents per node:
- Master:
version: "3.1"
services:
  redis_master:
    container_name: redis_master
    restart: always
    image: redis:7.4.0
    ports:
      - 26379:6379
    volumes:
      - /etc/localtime:/etc/localtime:ro # mount the host time into the container, read-only
      - /opt/soft/redis/redis_server/data:/data
      - /opt/soft/redis/redis_server/conf/redis.conf:/etc/redis/redis.conf
    logging:
      driver: json-file
      options:
        max-size: 10m
        max-file: 5
    command: redis-server
networks:
  default:
- Slave 1:
version: "3.1"
services:
  redis_slave1:
    container_name: redis_slave1
    restart: always
    image: redis:7.4.0
    ports:
      - 26379:6379
    volumes:
      - /etc/localtime:/etc/localtime:ro # mount the host time into the container, read-only
      - /opt/soft/redis/redis_server/data:/data
      - /opt/soft/redis/redis_server/conf/redis.conf:/etc/redis/redis.conf
    logging:
      driver: json-file
      options:
        max-size: 10m
        max-file: 5
    command: redis-server --slaveof 192.168.1.151 26379 --slave-announce-ip 192.168.1.152 --slave-announce-port 26379
networks:
  default:
- Slave 2:
version: "3.1"
services:
  redis_slave2:
    container_name: redis_slave2
    restart: always
    image: redis:7.4.0
    ports:
      - 26379:6379
    volumes:
      - /etc/localtime:/etc/localtime:ro # mount the host time into the container, read-only
      - /opt/soft/redis/redis_server/data:/data
      - /opt/soft/redis/redis_server/conf/redis.conf:/etc/redis/redis.conf
    logging:
      driver: json-file
      options:
        max-size: 10m
        max-file: 5
    command: redis-server --slaveof 192.168.1.151 26379 --slave-announce-ip 192.168.1.153 --slave-announce-port 26379
networks:
  default:
- Create the configuration file on every node (identical on all nodes):
vim /opt/soft/redis/redis_server/conf/redis.conf
redis.conf:
# redis connection password
requirepass 123456
# Authentication password for replication. When the master has a connection password,
# slaves must set the same masterauth password to connect and sync data. If the master
# has no password, slaves work without it.
masterauth 123456
# enable AOF persistence
appendonly yes
# AOF flush policy:
# always   call fsync on every write command; very, very slow, but safest.
# everysec call fsync once per second; fast, but up to one second of data can be lost.
# no       let the OS decide when to flush; fastest, but least safe.
appendfsync everysec
# As writes accumulate the AOF file keeps growing, at which point it needs rewriting.
# If set to yes, AOF rewrites perform better but data may be lost during the rewrite.
# If set to no, no data is lost, but performance may be lower. Default is no.
no-appendfsync-on-rewrite no
# AOF rewrite trigger, default 100: rewrite again once the AOF has doubled in size
# since the last rewrite. A value of 0 disables automatic rewrites.
auto-aof-rewrite-percentage 100
# minimum AOF size to trigger a rewrite, default 64MB.
auto-aof-rewrite-min-size 64mb
# auto-aof-rewrite-percentage and auto-aof-rewrite-min-size are ANDed together:
# a rewrite is triggered only when both conditions hold.
# bind restricts which network interfaces Redis listens on; the default 127.0.0.1
# only allows local clients, so remote connections are impossible. If bind is empty,
# connections are accepted on all available interfaces.
# bind 172.0.0.4 127.0.0.1
# bind 127.0.0.1 -::1
# protected-mode defaults to yes (local access only); set it to no so other hosts may connect.
protected-mode no
# enable key expiration event notifications
notify-keyspace-events Ex
Start the Redis server inside the container with this configuration file:
redis-server /etc/redis/redis.conf
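With notify-keyspace-events Ex set as above, Redis publishes an event on the __keyevent@<db>__:expired channel whenever a key expires. A minimal Go sketch of listening for those events, assuming the github.com/redis/go-redis/v9 client:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{
		Addr:     "192.168.1.151:26379", // master, host port from docker-compose.yaml
		Password: "123456",              // requirepass
	})

	// Subscribe to expiration events for database 0.
	pubsub := rdb.PSubscribe(ctx, "__keyevent@0__:expired")
	defer pubsub.Close()

	// Set a key with a short TTL so that an event fires.
	rdb.Set(ctx, "session:demo", "x", 2*time.Second)

	// Blocks forever; each expired key arrives as one message.
	for msg := range pubsub.Channel() {
		fmt.Printf("expired key: %s\n", msg.Payload) // prints "session:demo" after ~2s
	}
}
```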
- Open port 26379 on every node
# open port 26379
firewall-cmd --zone=public --add-port=26379/tcp --permanent
# reload the firewall
firewall-cmd --reload
# list the open ports
firewall-cmd --list-port
- Create the redis-server container on every node (master first, then the slaves)
docker-compose up -d
- Test
Add data on the master (192.168.1.151) and check that it appears on the slaves. The Go sketch below performs the same check programmatically.
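A minimal Go sketch of that test, again assuming github.com/redis/go-redis/v9:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()

	master := redis.NewClient(&redis.Options{Addr: "192.168.1.151:26379", Password: "123456"})
	replica := redis.NewClient(&redis.Options{Addr: "192.168.1.152:26379", Password: "123456"})

	// Write on the master only.
	if err := master.Set(ctx, "repl:check", "hello", 0).Err(); err != nil {
		log.Fatal(err)
	}

	// Replication is asynchronous; give it a moment, then read from the replica.
	time.Sleep(500 * time.Millisecond)
	v, err := replica.Get(ctx, "repl:check").Result()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("value on replica:", v) // "hello"
}
```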
2.2 Redis sentinel
Sentinel nodes are special Redis nodes: they store no data and are used only for monitoring.
Note: if you see errors like the following, the cause may be network latency; increase the sentinel down-after-milliseconds mymaster value in the sentinel configuration, e.g. to 60000.
1:X 20 Nov 2024 17:54:25.029 # +sdown sentinel 0a654e824df23df32c09d6830d7ac9ae3fa55bb6 192.168.1.152 36379 @ mymaster 192.168.1.151 26379
1:X 20 Nov 2024 17:54:26.637 # +sdown sentinel c620886db836f2515e4ede62b0f3a99c758dc045 192.168.1.153 36379 @ mymaster 192.168.1.151 26379

OS version | IP | Redis version | Node type | Port |
---|---|---|---|---|
7.9.2009 | 192.168.1.151 | 7.4.0 | sentinel1 | 36379 |
7.9.2009 | 192.168.1.152 | 7.4.0 | sentinel2 | 36379 |
7.9.2009 | 192.168.1.153 | 7.4.0 | sentinel3 | 36379 |
- On every node, create the mount directories
mkdir -p /opt/soft/redis/redis_sentinel/{conf,data,log}
chmod 777 /opt/soft/redis/redis_sentinel/data
chmod 777 /opt/soft/redis/redis_sentinel/conf
chmod 777 /opt/soft/redis/redis_sentinel/log
- On every node, pull the image
docker pull redis:7.4.0
- docker-compose.yaml on each node (only the service and container names differ between nodes; on nodes 2 and 3 they are redis_sentinel2 and redis_sentinel3)
Create docker-compose.yaml under /opt/soft/redis/redis_sentinel on each node:
cd /opt/soft/redis/redis_sentinel
vim docker-compose.yaml
docker-compose.yaml contents:
version: "3.1"
services:
  redis_sentinel1:
    container_name: redis_sentinel1
    restart: always
    image: redis:7.4.0
    ports:
      - 36379:36379
    volumes:
      - /etc/localtime:/etc/localtime:ro # mount the host time into the container, read-only
      - /opt/soft/redis/redis_sentinel/data:/data
      - /opt/soft/redis/redis_sentinel/conf:/usr/local/etc/redis
      - /opt/soft/redis/redis_sentinel/log:/var/log
    logging:
      driver: json-file
      options:
        max-size: 10m
        max-file: 5
    command: redis-sentinel /usr/local/etc/redis/sentinel.conf
    networks:
      - sentinel_network
networks:
  sentinel_network:
    driver: bridge
- Create the configuration file on every node:
vim /opt/soft/redis/redis_sentinel/conf/sentinel.conf
sentinel.conf:
Note: announce-ip differs between nodes and must match the IP of the node it runs on.
# sentinel port (mind the container/host port mapping)
port 36379
# whether to run as a daemon
daemonize no
# PID file
pidfile /var/run/redis-sentinel.pid
# log file name and path
logfile /var/log/redis-sentinel.log
# data directory
dir /data
# monitor the master named mymaster at 192.168.1.151:26379; the trailing 2 is how many
# sentinels must consider the master faulty before a failover is started
sentinel monitor mymaster 192.168.1.151 26379 2
# if mymaster does not answer pings within this many milliseconds, this sentinel marks
# it as down (default 30 seconds). Since the sentinels run on different servers, a
# generous value is recommended.
sentinel down-after-milliseconds mymaster 10000
# after a new master is elected, how many old slaves may resync against it at once
sentinel parallel-syncs mymaster 1
# failover timeout
sentinel failover-timeout mymaster 15000
# disallow changing notification-script and client-reconfig-script via SENTINEL SET
sentinel deny-scripts-reconfig yes
# password of the mymaster service
sentinel auth-pass mymaster 123456
# the IP this sentinel announces (must match this server's IP: .151/.152/.153)
sentinel announce-ip 192.168.1.151
# the port this sentinel announces (must match port above)
sentinel announce-port 36379
- Open port 36379 on every node
# open port 36379
firewall-cmd --zone=public --add-port=36379/tcp --permanent
# reload the firewall
firewall-cmd --reload
# list the open ports
firewall-cmd --list-port
- Create the redis-sentinel container on every node
docker-compose up -d
- Test
Stop the master (192.168.1.151) and check whether the sentinels detect the failure and fail over.
Sentinel node log output:
The master changed from 192.168.1.151 to 192.168.1.153.
sentinel.conf after the election:
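Once failover works, applications should discover the current master through the sentinels rather than hard-coding its address. A minimal Go sketch, assuming the github.com/redis/go-redis/v9 client:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()

	// The failover client asks the sentinels which node is currently the master
	// of "mymaster" and transparently reconnects after a failover.
	rdb := redis.NewFailoverClient(&redis.FailoverOptions{
		MasterName: "mymaster", // name from "sentinel monitor mymaster ..." above
		SentinelAddrs: []string{
			"192.168.1.151:36379",
			"192.168.1.152:36379",
			"192.168.1.153:36379",
		},
		Password: "123456", // requirepass/masterauth of the data nodes
	})

	if err := rdb.Set(ctx, "sentinel:check", "ok", 0).Err(); err != nil {
		log.Fatal(err)
	}
	v, err := rdb.Get(ctx, "sentinel:check").Result()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(v) // "ok", regardless of which node is currently the master
}
```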
2.3 Redis Cluster
A Redis Cluster needs at least 3 master nodes, otherwise cluster creation fails; and once the number of live masters drops below half of the total node count, the whole cluster stops serving requests. So we need 3 master nodes with at least 1 replica each, i.e. 6 nodes in total: 3 masters and 3 slaves.
Messages printed when creating the cluster:
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
*** ERROR: Invalid configuration for cluster creation.
*** Redis Cluster requires at least 3 master nodes.
*** This is not possible with 3 nodes and 2 replicas per node.
*** At least 9 nodes are required.
The port must not be too high either:
1:M 21 Nov 2024 11:18:29.904 # Redis port number too high. Cluster communication port is 10,000 port numbers higher than your Redis port. Your Redis port number must be 55535 or less.
Server configuration:
- On every node, create the mount directories and set permissions
mkdir -p /opt/soft/redis/redis_cluster/node1/{conf,data,log}
chmod 777 /opt/soft/redis/redis_cluster/node1/data
chmod 777 /opt/soft/redis/redis_cluster/node1/conf
chmod 777 /opt/soft/redis/redis_cluster/node1/log
mkdir -p /opt/soft/redis/redis_cluster/node2/{conf,data,log}
chmod 777 /opt/soft/redis/redis_cluster/node2/data
chmod 777 /opt/soft/redis/redis_cluster/node2/conf
chmod 777 /opt/soft/redis/redis_cluster/node2/log
- On every node, pull the image
docker pull redis:7.4.0
- docker-compose.yaml on each node
Create docker-compose.yaml under /opt/soft/redis/redis_cluster on each node:
cd /opt/soft/redis/redis_cluster
vim docker-compose.yaml
docker-compose.yaml contents per node (only the service and container names differ between nodes; on nodes 2 and 3 they are redis_cluster_node2 and redis_cluster_node3):
version: "3.1"
services:
  redis_cluster_node1:
    container_name: redis_cluster_node1
    restart: always
    image: redis:7.4.0
    ports:
      - 16379:16379
      - 16380:16380
    volumes:
      - /etc/localtime:/etc/localtime:ro # mount the host time into the container, read-only
      - /opt/soft/redis/redis_cluster/node1/data:/data
      - /opt/soft/redis/redis_cluster/node1/conf:/etc/redis
      - /opt/soft/redis/redis_cluster/node1/log:/var/log
    logging:
      driver: json-file
      options:
        max-size: 10m
        max-file: 5
    command: redis-server /etc/redis/redis.conf
    networks:
      - redis_cluster
  redis_cluster_node2:
    container_name: redis_cluster_node2
    restart: always
    image: redis:7.4.0
    ports:
      - 26379:26379
      - 26380:26380
    volumes:
      - /etc/localtime:/etc/localtime:ro # mount the host time into the container, read-only
      - /opt/soft/redis/redis_cluster/node2/data:/data
      - /opt/soft/redis/redis_cluster/node2/conf:/etc/redis
      - /opt/soft/redis/redis_cluster/node2/log:/var/log
    logging:
      driver: json-file
      options:
        max-size: 10m
        max-file: 5
    command: redis-server /etc/redis/redis.conf
    networks:
      - redis_cluster
networks:
  redis_cluster:
    driver: bridge
- Create the configuration files on every node
node1:
vim /opt/soft/redis/redis_cluster/node1/conf/redis.conf
redis.conf:
Note: cluster-announce-ip differs between nodes and must match the node it runs on.
# cluster node port (mind the container/host port mapping)
port 16379
# redis connection password
requirepass 123456
# replication auth password: when the master has a password, slaves must set the same
# masterauth to connect and sync; not needed if the master has no password
masterauth 123456
# log file name and path
logfile /var/log/redis_cluster_node.log
# enable cluster mode
cluster-enabled yes
# the cluster bus port defaults to the service port + 10000 (6379 -> 16379);
# keep it in sync with cluster-announce-bus-port
cluster-port 16380
# cluster node state file
cluster-config-file nodes.conf
# cluster node timeout (better to be generous)
cluster-node-timeout 30000
# the IP of this node (use the host's IP)
cluster-announce-ip 192.168.1.151
# the port of this node (must match port above)
cluster-announce-port 16379
# cluster bus port
cluster-announce-bus-port 16380
node2:
vim /opt/soft/redis/redis_cluster/node2/conf/redis.conf
redis.conf:
Note: cluster-announce-ip differs between servers and must match the server it runs on.
# cluster node port (mind the container/host port mapping)
port 26379
# redis connection password
requirepass 123456
# replication auth password: when the master has a password, slaves must set the same
# masterauth to connect and sync; not needed if the master has no password
masterauth 123456
# enable cluster mode
cluster-enabled yes
# the cluster bus port defaults to the service port + 10000;
# keep it in sync with cluster-announce-bus-port
cluster-port 26380
# cluster node state file
cluster-config-file nodes.conf
# cluster node timeout (better to be generous)
cluster-node-timeout 30000
# the IP of this node (use the host's IP)
cluster-announce-ip 192.168.1.151
# the port of this node (must match port above)
cluster-announce-port 26379
# cluster bus port
cluster-announce-bus-port 26380
- Open the ports (16379, 16380, 26379, 26380)
# open ports 16379, 16380, 26379 and 26380
firewall-cmd --zone=public --add-port=16379/tcp --permanent
firewall-cmd --zone=public --add-port=16380/tcp --permanent
firewall-cmd --zone=public --add-port=26379/tcp --permanent
firewall-cmd --zone=public --add-port=26380/tcp --permanent
# reload the firewall
firewall-cmd --reload
# list the open ports
firewall-cmd --list-port
- Create the redis-cluster node containers
docker-compose up -d
At this point the containers are still independent of each other; they do not yet form a cluster.
- Build the cluster
Building a cluster takes three steps:
- the nodes establish connections to each other via handshakes;
- the hash slots are assigned;
- master/slave relationships are assigned.
There are two ways to build the cluster: automatic creation and manual creation. For simplicity, this article uses automatic creation.
Create the cluster:
Pick any node and enter its container.
docker exec -it redis_cluster_node1 /bin/sh
Run the following command to create the cluster (with only 6 nodes we get 3 masters and 3 slaves, one replica per master):
redis-cli -a 123456 --cluster create 192.168.1.151:16379 192.168.1.151:26379 192.168.1.152:16379 192.168.1.152:26379 192.168.1.153:16379 192.168.1.153:26379 --cluster-replicas 1
Cluster creation output:
- Inspect the cluster
Log in to the cluster from the node we just used:
redis-cli -c -a 123456 -h 192.168.1.151 -p 16379
- Check the cluster state:
# cluster state
cluster info
192.168.1.151:16379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:383
cluster_stats_messages_pong_sent:385
cluster_stats_messages_sent:768
cluster_stats_messages_ping_received:380
cluster_stats_messages_pong_received:383
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:768
total_cluster_links_buffer_limit_exceeded:0
- Cluster node list
# list the cluster nodes
cluster nodes
192.168.1.151:16379> cluster nodes
c79015f531577f01c6ac896bdc02b3e129a37576 192.168.1.151:16379@16380 myself,master - 0 0 1 connected 0-5460
02467576a212ee0dc777a61446140ef862b3904d 192.168.1.153:16379@16380 master - 0 1732168100000 5 connected 10923-16383
dfc77d0f115517957b6c752664b7eb8de6fbbc7d 192.168.1.153:26379@26380 slave e26c87f64ad0aa8211a5a933fb4a7f02adcb9fa3 0 1732168101249 3 connected
a94284a82c2fb4bc0a6d285ac408a021ba43b500 192.168.1.152:26379@26380 slave c79015f531577f01c6ac896bdc02b3e129a37576 0 1732168101000 1 connected
e26c87f64ad0aa8211a5a933fb4a7f02adcb9fa3 192.168.1.152:16379@16380 master - 0 1732168102268 3 connected 5461-10922
994eb9ca9f4871ada78a5491d2f1ccf7486c01bb 192.168.1.151:26379@26380 slave 02467576a212ee0dc777a61446140ef862b3904d 0 1732168099194 5 connected
- Databases on the cluster nodes
[root@node01 log]# docker exec -it redis_cluster_node1 /bin/sh
# redis-cli -c -a 123456 -h 192.168.1.151 -p 16379
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
192.168.1.151:16379> select 1
(error) ERR SELECT is not allowed in cluster mode
192.168.1.151:16379> select 2
(error) ERR SELECT is not allowed in cluster mode
As you can see, each node in the cluster only exposes database 0; the other database indexes cannot be used, and the data is consistent across nodes. This is easy to see visually in Redis Desktop Manager.
At this point, the Redis cluster is up!
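A minimal Go sketch of talking to the cluster, assuming github.com/redis/go-redis/v9; the cluster client follows MOVED/ASK redirects between nodes automatically:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()

	rdb := redis.NewClusterClient(&redis.ClusterOptions{
		// Any subset of the nodes works as seed addresses;
		// the client discovers the rest of the topology from them.
		Addrs: []string{
			"192.168.1.151:16379",
			"192.168.1.152:16379",
			"192.168.1.153:16379",
		},
		Password: "123456",
	})

	// Keys are hashed to one of the 16384 slots and routed to the owning master.
	if err := rdb.Set(ctx, "cluster:check", "ok", 0).Err(); err != nil {
		log.Fatal(err)
	}
	v, err := rdb.Get(ctx, "cluster:check").Result()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(v) // "ok"
}
```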
2.4 Differences between master-slave replication, sentinel, and cluster
- Master-slave replication: read/write splitting and backups; one master can have multiple slaves.
- Sentinel: monitoring and automatic failover; when the sentinels find the master down, they elect a new master from the slaves.
- Cluster: solves the capacity limit of a single Redis instance by distributing data across machines according to fixed rules; memory and QPS are no longer bounded by one machine, and the cluster scales out.
For demanding deployments, a cluster combined with the sentinel mechanism is recommended for Redis high availability.
3 MongoDB
We use version 8.0.3 as the example, the latest at the time of writing (21 November 2024). Version 8.0 differs considerably from earlier versions; everything below assumes 8.0+. MongoDB documentation: https://www.mongodb.com/zh-cn/docs/manual
3.1 MongoDB master-slave replication cluster (not recommended)
Why not recommended: no automatic failover, no data-consistency guarantees, and no flexible read load balancing.
Server configuration:

OS version | IP | MongoDB version | Node type | Port |
---|---|---|---|---|
7.9.2009 | 192.168.1.151 | 8.0.3 | Master | 27017 |
7.9.2009 | 192.168.1.152 | 8.0.3 | Slave 1 | 27017 |
7.9.2009 | 192.168.1.153 | 8.0.3 | Slave 2 | 27017 |
- Create the mount directories and set permissions
mkdir -p /opt/soft/mongo/{conf,data,log}
chmod 777 /opt/soft/mongo/data
chmod 777 /opt/soft/mongo/conf
chmod 777 /opt/soft/mongo/log
- Pull the image
docker pull mongo:8.0.3
- docker-compose.yaml
Create docker-compose.yaml under /opt/soft/mongo on each node:
cd /opt/soft/mongo
vim docker-compose.yaml
docker-compose.yaml contents (only the service and container names differ between nodes):
version: '3.1'
services:
  mongo_master:
    container_name: mongo_master
    restart: always
    image: mongo:8.0.3
    ports:
      - 27017:27017
    volumes:
      - /etc/localtime:/etc/localtime:ro # mount the host time into the container, read-only
      - /opt/soft/mongo/data:/data/db
      - /opt/soft/mongo/conf:/data/configdb
      - /opt/soft/mongo/log:/data/log
      - /opt/soft/mongo/conf/keyfile.key:/data/configdb/keyfile.key
    command: --config /data/configdb/mongod.conf --keyFile /data/configdb/keyfile.key # configuration file
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: 123456
    logging:
      driver: json-file
      options:
        max-size: 10m
        max-file: 5
networks:
  default:
- Create the configuration file:
vim /opt/soft/mongo/conf/mongod.conf
mongod.conf:
systemLog:
  destination: file
  path: /data/log/mongod.log # log path
  logAppend: true
storage:
  dbPath: /data/db
net:
  bindIp: 0.0.0.0
  port: 27017 # port
replication:
  replSetName: rs0 # replica set name
# processManagement:  # setting this makes `docker exec -it mongodb1 bash` exit immediately after entering the container
#   fork: true
- Create the cluster authentication key file with openssl
When building a replica set with authorization enabled, the keyFile parameter must be specified; inter-node communication is authenticated with this key file, and startup fails without it. docker logs <container ID> then shows:
BadValue: security.keyFile is required when authorization is enabled with replica sets
In the /opt/soft/mongo/conf directory:
# create the key
openssl rand -base64 756 > keyfile.key
# restrict the key's permissions
chmod 600 keyfile.key
# change the key's owner and group
chown 999:999 keyfile.key
Note: all nodes must use the same key file; create it once and copy it into every node's conf directory.
- Open port 27017
# open port 27017
firewall-cmd --zone=public --add-port=27017/tcp --permanent
# reload the firewall
firewall-cmd --reload
# list the open ports
firewall-cmd --list-port
- Create the MongoDB containers
docker-compose up -d
- Initialize the replica set
- 1 Enter the container on any node
docker exec -it mongo_master bash
- 2 Find the MongoDB shell binary (path inside the container)
whereis mongosh
root@cf7b255556e8:/# whereis mongosh
mongosh: /usr/bin/mongosh
- 3 Start the MongoDB shell
/usr/bin/mongosh
- 4 Initialize the replica set from the shell
# authenticate
use admin;
db.auth("root", "123456");
# initialize the replica set
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "192.168.1.151:27017" },
    { _id: 1, host: "192.168.1.152:27017" },
    { _id: 2, host: "192.168.1.153:27017" }
  ]
});
# check the replica set status
rs.status();
# switch to the application database
use boatol;
# create a user with read/write access to that database
db.createUser({
  user: "admin",
  pwd: "123456",
  roles: [ { role: "readWrite", db: "boatol" } ]
});
- 5 Execution and output
# enter the container on any node
[root@node01 mongo]# docker exec -it mongo_master bash
# find the MongoDB shell binary
root@cf7b255556e8:/# whereis mongosh
mongosh: /usr/bin/mongosh
# start the MongoDB shell
root@cf7b255556e8:/# /usr/bin/mongosh
Current Mongosh Log ID: 673ef8760d928d1173c1c18b
Connecting to: mongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+2.3.3
Using MongoDB: 8.0.3
Using Mongosh: 2.3.3

For mongosh info see: https://www.mongodb.com/docs/mongodb-shell/

To help improve our products, anonymous usage data is collected and sent to MongoDB periodically (https://www.mongodb.com/legal/privacy-policy).
You can opt-out by running the disableTelemetry() command.

# authenticate
test> use admin;
switched to db admin
admin> db.auth("root", "yQcsaZBerNyccT1C");
{ ok: 1 }
# initialize the replica set
admin> rs.initiate({
... _id: "rs0",
... members: [
... { _id: 0, host: "192.168.1.151:27017" },
... { _id: 1, host: "192.168.1.152:27017" },
... { _id: 2, host: "192.168.1.153:27017" }
... ]
... });
{ ok: 1 }
# check the replica set status
rs0 [direct: other] admin> rs.status();
{
  set: 'rs0', date: ISODate('2024-11-21T09:09:41.060Z'), myState: 2, term: Long('0'),
  syncSourceHost: '', syncSourceId: -1, heartbeatIntervalMillis: Long('2000'),
  majorityVoteCount: 2, writeMajorityCount: 2, votingMembersCount: 3, writableVotingMembersCount: 3,
  optimes: {
    lastCommittedOpTime: { ts: Timestamp({ t: 1732180174, i: 1 }), t: Long('-1') },
    lastCommittedWallTime: ISODate('2024-11-21T09:09:34.702Z'),
    readConcernMajorityOpTime: { ts: Timestamp({ t: 1732180174, i: 1 }), t: Long('-1') },
    appliedOpTime: { ts: Timestamp({ t: 1732180174, i: 1 }), t: Long('-1') },
    durableOpTime: { ts: Timestamp({ t: 1732180174, i: 1 }), t: Long('-1') },
    writtenOpTime: { ts: Timestamp({ t: 1732180174, i: 1 }), t: Long('-1') },
    lastAppliedWallTime: ISODate('2024-11-21T09:09:34.702Z'),
    lastDurableWallTime: ISODate('2024-11-21T09:09:34.702Z'),
    lastWrittenWallTime: ISODate('2024-11-21T09:09:34.702Z')
  },
  lastStableRecoveryTimestamp: Timestamp({ t: 1732180174, i: 1 }),
  members: [
    { _id: 0, name: '192.168.1.151:27017', health: 1, state: 2, stateStr: 'SECONDARY', uptime: 408,
      optime: { ts: Timestamp({ t: 1732180174, i: 1 }), t: Long('-1') },
      optimeDate: ISODate('2024-11-21T09:09:34.000Z'),
      optimeWritten: { ts: Timestamp({ t: 1732180174, i: 1 }), t: Long('-1') },
      optimeWrittenDate: ISODate('2024-11-21T09:09:34.000Z'),
      lastAppliedWallTime: ISODate('2024-11-21T09:09:34.702Z'),
      lastDurableWallTime: ISODate('2024-11-21T09:09:34.702Z'),
      lastWrittenWallTime: ISODate('2024-11-21T09:09:34.702Z'),
      syncSourceHost: '', syncSourceId: -1, infoMessage: '',
      configVersion: 1, configTerm: 0, self: true, lastHeartbeatMessage: '' },
    { _id: 1, name: '192.168.1.152:27017', health: 1, state: 2, stateStr: 'SECONDARY', uptime: 6,
      optime: { ts: Timestamp({ t: 1732180174, i: 1 }), t: Long('-1') },
      optimeDurable: { ts: Timestamp({ t: 1732180174, i: 1 }), t: Long('-1') },
      optimeWritten: { ts: Timestamp({ t: 1732180174, i: 1 }), t: Long('-1') },
      optimeDate: ISODate('2024-11-21T09:09:34.000Z'),
      optimeDurableDate: ISODate('2024-11-21T09:09:34.000Z'),
      optimeWrittenDate: ISODate('2024-11-21T09:09:34.000Z'),
      lastAppliedWallTime: ISODate('2024-11-21T09:09:34.702Z'),
      lastDurableWallTime: ISODate('2024-11-21T09:09:34.702Z'),
      lastWrittenWallTime: ISODate('2024-11-21T09:09:34.702Z'),
      lastHeartbeat: ISODate('2024-11-21T09:09:40.918Z'),
      lastHeartbeatRecv: ISODate('2024-11-21T09:09:40.643Z'),
      pingMs: Long('0'), lastHeartbeatMessage: '', syncSourceHost: '', syncSourceId: -1,
      infoMessage: '', configVersion: 1, configTerm: 0 },
    { _id: 2, name: '192.168.1.153:27017', health: 1, state: 2, stateStr: 'SECONDARY', uptime: 6,
      optime: { ts: Timestamp({ t: 1732180174, i: 1 }), t: Long('-1') },
      optimeDurable: { ts: Timestamp({ t: 1732180174, i: 1 }), t: Long('-1') },
      optimeWritten: { ts: Timestamp({ t: 1732180174, i: 1 }), t: Long('-1') },
      optimeDate: ISODate('2024-11-21T09:09:34.000Z'),
      optimeDurableDate: ISODate('2024-11-21T09:09:34.000Z'),
      optimeWrittenDate: ISODate('2024-11-21T09:09:34.000Z'),
      lastAppliedWallTime: ISODate('2024-11-21T09:09:34.702Z'),
      lastDurableWallTime: ISODate('2024-11-21T09:09:34.702Z'),
      lastWrittenWallTime: ISODate('2024-11-21T09:09:34.702Z'),
      lastHeartbeat: ISODate('2024-11-21T09:09:40.923Z'),
      lastHeartbeatRecv: ISODate('2024-11-21T09:09:40.679Z'),
      pingMs: Long('1'), lastHeartbeatMessage: '', syncSourceHost: '', syncSourceId: -1,
      infoMessage: '', configVersion: 1, configTerm: 0 }
  ],
  ok: 1
}
# switch to the application database
rs0 [direct: other] admin> use boatol;
switched to db boatol
# create a user with read/write access to that database
rs0 [direct: secondary] boatol> db.createUser(
... {
... user: "admin",
... pwd: "123456",
... roles: [ { role: "readWrite", db: "boatol" } ]
... }
... );
{
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1732180196, i: 1 }),
    signature: {
      hash: Binary.createFromBase64('nB4GEHSgQJm2DmmGDS3BUXjvLuI=', 0),
      keyId: Long('7439657245354229766')
    }
  },
  operationTime: Timestamp({ t: 1732180196, i: 1 })
}
rs0 [direct: primary] boatol>
- Test
After creating a collection on the master (192.168.1.151), the result is visible on 192.168.1.152 and 192.168.1.153. The Go sketch below connects to the replica set and performs the same round trip.
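A minimal Go sketch of connecting to this replica set, assuming the official v1 driver go.mongodb.org/mongo-driver and the admin/123456 user created in the boatol database above; the collection name repl_check is illustrative:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// List all members and name the replica set; the driver then follows
	// elections and always routes writes to the current primary.
	uri := "mongodb://admin:123456@192.168.1.151:27017,192.168.1.152:27017,192.168.1.153:27017/boatol?replicaSet=rs0&authSource=boatol"
	client, err := mongo.Connect(ctx, options.Client().ApplyURI(uri))
	if err != nil {
		log.Fatal(err)
	}
	defer client.Disconnect(context.Background())

	coll := client.Database("boatol").Collection("repl_check")
	if _, err = coll.InsertOne(ctx, bson.M{"note": "hello", "at": time.Now()}); err != nil {
		log.Fatal(err)
	}

	var doc bson.M
	if err = coll.FindOne(ctx, bson.M{"note": "hello"}).Decode(&doc); err != nil {
		log.Fatal(err)
	}
	fmt.Println("read back:", doc)
}
```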
- Summary (problems with master-slave replication)
- Single point of failure: when the master fails, there is no automatic failover.
- Limited data volume: slave nodes are normally not writable, which limits overall data growth.
- Lag and sync issues: slaves can fall behind the master, so reads may be stale.
- Poor resource utilization: extra resources are needed just to run the slaves.
Advantages of replica sets
- Automatic failover.
- Read/write splitting, giving better read scalability.
- Members can be configured as arbiters, which vote on which node becomes primary.
3.2 MongoDB replica set (Replica Set) cluster
Composition: a replica set has no fixed primary; the cluster elects one, and when it fails a new primary is elected from the remaining members. There is always one primary plus one or more secondaries. Failover happens automatically, which makes replica sets very practical in production.
How it works:
- The primary handles read and write operations.
- Secondaries replicate data from the primary to stay in sync, and serve reads.
- When the primary fails, an election is held automatically and a secondary becomes the new primary, keeping the system highly available.
Advantages:
- Data redundancy protects against data loss.
- Read/write splitting takes read pressure off the primary.
3.2.1 Building the cluster
Official documentation: https://www.mongodb.com/zh-cn/docs/rapid/administration/replica-set-deployment
Server configuration:

OS version | IP | MongoDB version | Node type | Port |
---|---|---|---|---|
7.9.2009 | 192.168.1.151 | 8.0.3 | Primary | 27017 |
7.9.2009 | 192.168.1.151 | 8.0.3 | Secondary 1 | 37017 |
7.9.2009 | 192.168.1.151 | 8.0.3 | Secondary 2 | 47017 |
7.9.2009 | 192.168.1.151 | 8.0.3 | Arbiter | 17017 |

All of the following is done on the 192.168.1.151 server.
- Create the mount directories and set permissions
# primary
mkdir -p /opt/soft/mongo/mongo_master/{conf,data,log}
chmod 777 /opt/soft/mongo/mongo_master/data
chmod 777 /opt/soft/mongo/mongo_master/conf
chmod 777 /opt/soft/mongo/mongo_master/log
# secondary 1
mkdir -p /opt/soft/mongo/mongo_slave1/{conf,data,log}
chmod 777 /opt/soft/mongo/mongo_slave1/data
chmod 777 /opt/soft/mongo/mongo_slave1/conf
chmod 777 /opt/soft/mongo/mongo_slave1/log
# secondary 2
mkdir -p /opt/soft/mongo/mongo_slave2/{conf,data,log}
chmod 777 /opt/soft/mongo/mongo_slave2/data
chmod 777 /opt/soft/mongo/mongo_slave2/conf
chmod 777 /opt/soft/mongo/mongo_slave2/log
# arbiter
mkdir -p /opt/soft/mongo/mongo_arbiter/{conf,data,log}
chmod 777 /opt/soft/mongo/mongo_arbiter/data
chmod 777 /opt/soft/mongo/mongo_arbiter/conf
chmod 777 /opt/soft/mongo/mongo_arbiter/log
- Pull the image
docker pull mongo:8.0.3
- docker-compose.yaml
Create docker-compose.yaml under /opt/soft/mongo:
cd /opt/soft/mongo
vim docker-compose.yaml
docker-compose.yaml contents (note the time zone):
version: '3.1'
services:
  mongo_master:
    container_name: mongo_master
    restart: always
    image: mongo:8.0.3
    ports:
      - 27017:27017
    volumes:
      - /etc/localtime:/etc/localtime:ro # mount the host time into the container, read-only
      - /opt/soft/mongo/mongo_master/data:/data/db
      - /opt/soft/mongo/mongo_master/conf:/data/configdb
      - /opt/soft/mongo/mongo_master/log:/data/log
      - /opt/soft/mongo/keyfile.key:/data/configdb/keyfile.key
    command: --config /data/configdb/mongod.conf
    environment:
      TZ: Asia/Shanghai # Shanghai time zone
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: 123456
    logging:
      driver: json-file
      options:
        max-size: 10m
        max-file: 5
    networks:
      - mongo
  mongo_slave1:
    container_name: mongo_slave1
    restart: always
    image: mongo:8.0.3
    ports:
      - 37017:27017
    volumes:
      - /etc/localtime:/etc/localtime:ro # mount the host time into the container, read-only
      - /opt/soft/mongo/mongo_slave1/data:/data/db
      - /opt/soft/mongo/mongo_slave1/conf:/data/configdb
      - /opt/soft/mongo/mongo_slave1/log:/data/log
      - /opt/soft/mongo/keyfile.key:/data/configdb/keyfile.key
    command: --config /data/configdb/mongod.conf
    environment:
      TZ: Asia/Shanghai # Shanghai time zone
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: 123456
    logging:
      driver: json-file
      options:
        max-size: 10m
        max-file: 5
    networks:
      - mongo
  mongo_slave2:
    container_name: mongo_slave2
    restart: always
    image: mongo:8.0.3
    ports:
      - 47017:27017
    volumes:
      - /etc/localtime:/etc/localtime:ro # mount the host time into the container, read-only
      - /opt/soft/mongo/mongo_slave2/data:/data/db
      - /opt/soft/mongo/mongo_slave2/conf:/data/configdb
      - /opt/soft/mongo/mongo_slave2/log:/data/log
      - /opt/soft/mongo/keyfile.key:/data/configdb/keyfile.key
    command: --config /data/configdb/mongod.conf
    environment:
      TZ: Asia/Shanghai # Shanghai time zone
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: 123456
    logging:
      driver: json-file
      options:
        max-size: 10m
        max-file: 5
    networks:
      - mongo
  mongo_arbiter:
    container_name: mongo_arbiter
    restart: always
    image: mongo:8.0.3
    ports:
      - 17017:27017
    volumes:
      - /etc/localtime:/etc/localtime:ro # mount the host time into the container, read-only
      - /opt/soft/mongo/mongo_arbiter/data:/data/db
      - /opt/soft/mongo/mongo_arbiter/conf:/data/configdb
      - /opt/soft/mongo/mongo_arbiter/log:/data/log
      - /opt/soft/mongo/keyfile.key:/data/configdb/keyfile.key
    command: --config /data/configdb/mongod.conf
    environment:
      TZ: Asia/Shanghai # Shanghai time zone
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: 123456
    logging:
      driver: json-file
      options:
        max-size: 10m
        max-file: 5
    networks:
      - mongo
networks:
  mongo:
    driver: bridge
- Create the cluster authentication key file with openssl
With authorization enabled, the keyFile parameter must be specified; inter-node communication is authenticated with this key file, and startup fails without it. docker logs <container ID> then shows:
BadValue: security.keyFile is required when authorization is enabled with replica sets
In the /opt/soft/mongo directory:
# create the key
openssl rand -base64 756 > keyfile.key
# restrict the key's permissions
chmod 600 keyfile.key
# change the key's owner and group
chown 999:999 keyfile.key
Note: all nodes on the server must use the same key file.
- Create the configuration files for every node (the data nodes and the arbiter use the same configuration):
vim /opt/soft/mongo/mongo_master/conf/mongod.conf
mongod.conf (every node uses the same port inside its container):
# security
security:
  keyFile: /data/configdb/keyfile.key
  authorization: enabled
# system log
systemLog:
  destination: file
  path: /data/log/mongod.log # log path
  logAppend: true
# data storage
storage:
  dbPath: /data/db
# network
net:
  bindIp: 0.0.0.0
  port: 27017 # port
# replica set name
replication:
  replSetName: rs # the replica set's name
# processManagement:  # setting this makes `docker exec -it mongodb1 bash` exit immediately after entering the container
#   fork: true
Copy it to the conf directories of secondary 1, secondary 2, and the arbiter:
cp /opt/soft/mongo/mongo_master/conf/mongod.conf /opt/soft/mongo/mongo_slave1/conf/
cp /opt/soft/mongo/mongo_master/conf/mongod.conf /opt/soft/mongo/mongo_slave2/conf/
cp /opt/soft/mongo/mongo_master/conf/mongod.conf /opt/soft/mongo/mongo_arbiter/conf/
- Open the ports (17017, 27017, 37017, 47017)
# open ports 17017, 27017, 37017 and 47017
firewall-cmd --zone=public --add-port=17017/tcp --permanent
firewall-cmd --zone=public --add-port=27017/tcp --permanent
firewall-cmd --zone=public --add-port=37017/tcp --permanent
firewall-cmd --zone=public --add-port=47017/tcp --permanent
# reload the firewall
firewall-cmd --reload
# list the open ports
firewall-cmd --list-port
- Create the mongo containers
docker-compose up -d
- Enter the primary's container and initialize the replica set
# enter the container
docker exec -it mongo_master bash
# (not needed) start a replacement config server
# ./usr/bin/mongod --configsvr --replSet configReplSet --bind_ip 192.168.1.151:27017;
# open the shell to initialize the replica set; the default port is 27017
./usr/bin/mongosh --port 27017
- Initialize the replica set
Note: for MongoDB deployed with Docker, do not initialize the replica set from the primary with the default configuration, rs.initiate();. With the default configuration, the member's address in the members list — host: '57ea58dc33df:27017' — is the container ID plus port rather than domain+port or IP+port.
A client connecting to the replica set then also sees that member's address as containerID+port, and unless the client is on the same Docker network it cannot resolve that node or determine its role (primary, secondary, or arbiter), so it cannot use the replica set.
For example, a Golang client connecting to such a replica set reports:
server selection error: server selection timeout, current topology: { Type: ReplicaSetNoPrimary, Servers: [{ Addr: 192.168.1.151:37017, Type: RSSecondary, Average RTT: 1208216 }, { Addr: 192.168.1.151:47017, Type: RSSecondary, Average RTT: 1641358 }, { Addr: 57ea58dc33df:27017, Type: Unknown, Last error: dial tcp: lookup 57ea58dc33df: no such host }, { Addr: 192.168.1.151:17017, Type: RSArbiter, Average RTT: 2135596 }, ] }
The recommended way:
rs.initiate({
  _id: "rs",
  members: [
    { _id: 0, host: "192.168.1.151:27017" }
  ]
});
The concrete steps:
# authenticate
use admin;
db.auth("root", "123456");
# initialize the replica set with the default configuration (do not use)
# rs.initiate();
# _id: "rs" is the primary key under which the replica set configuration is stored; it defaults to the replica set's name.
# Prefer domain names when available, to avoid reconfiguring whenever an IP address changes. Starting with MongoDB 5.0, nodes configured with a bare IP address may fail startup validation and not start.
rs.initiate({
  _id: "rs",
  members: [
    { _id: 0, host: "192.168.1.151:27017" }
  ]
});
# (not needed yet) add secondaries to the replica set from the primary
# rs.add({ host: "192.168.1.151:37017"});
# rs.add({ host: "192.168.1.151:47017"});
# (not needed yet) check the state of the replica set members
# rs.status();
# (not needed) remove a node to be replaced from the replica set
# rs.remove("192.168.1.151:27017");
# rs.remove("192.168.1.151:37017");
# rs.remove("192.168.1.151:47017");
# adding a user, example:
# db.createUser(
#   {
#     user: "myTester",
#     pwd: passwordPrompt(), // or cleartext password
#     roles: [ { role: "readWrite", db: "test" },
#              { role: "read", db: "reporting" } ]
#   }
# );
# passwordPrompt() prompts for the password. The password can also be given as a plain string;
# passwordPrompt() is preferred because it keeps the password off the screen and out of the shell history.
# (not needed) what was actually run:
# db.createUser(
#   {
#     user: "admin",
#     pwd: passwordPrompt(), // or cleartext password
#     roles: [ { role: "root", db: "admin" },
#              { role: "read", db: "test" } ]
#   }
# );
- View the replica set configuration
# (if not logged in) authenticate
use admin;
db.auth("root", "123456");

rs.config();
# rs.conf(configuration);
# rs.config() is an alias of that method.
# configuration: optional; when omitted, the current default configuration is used.

rs [direct: secondary] admin> rs.config();
{
  _id: 'rs',
  version: 1,
  term: 1,
  members: [
    {
      _id: 0,
      host: '192.168.1.151:27017',
      arbiterOnly: false,
      buildIndexes: true,
      hidden: false,
      priority: 1,
      tags: {},
      secondaryDelaySecs: Long('0'),
      votes: 1
    }
  ],
  protocolVersion: Long('1'),
  writeConcernMajorityJournalDefault: true,
  settings: {
    chainingAllowed: true,
    heartbeatIntervalMillis: 2000,
    heartbeatTimeoutSecs: 10,
    electionTimeoutMillis: 10000,
    catchUpTimeoutMillis: -1,
    catchUpTakeoverDelayMillis: 30000,
    getLastErrorModes: {},
    getLastErrorDefaults: { w: 1, wtimeout: 0 },
    replicaSetId: ObjectId('674eb939cbdc59c4b7bbb026')
  }
}
Notes:
_id: "rs" — the primary key under which the replica set configuration is stored; defaults to the replica set's name.
members — the array of replica set members; currently a single one with "host": "192.168.1.151:27017", which is not an arbiter ("arbiterOnly": false) and has priority (weight) 1.
settings — the replica set's parameter settings.
- View the replica set status
# (if not logged in) authenticate
use admin;
db.auth("root", "123456");

rs.status();

rs [direct: primary] admin> rs.status();
{
  set: 'rs', date: ISODate('2024-12-03T07:56:03.247Z'), myState: 1, term: Long('1'),
  syncSourceHost: '', syncSourceId: -1, heartbeatIntervalMillis: Long('2000'),
  majorityVoteCount: 1, writeMajorityCount: 1, votingMembersCount: 1, writableVotingMembersCount: 1,
  optimes: {
    lastCommittedOpTime: { ts: Timestamp({ t: 1733212554, i: 1 }), t: Long('1') },
    lastCommittedWallTime: ISODate('2024-12-03T07:55:54.247Z'),
    readConcernMajorityOpTime: { ts: Timestamp({ t: 1733212554, i: 1 }), t: Long('1') },
    appliedOpTime: { ts: Timestamp({ t: 1733212554, i: 1 }), t: Long('1') },
    durableOpTime: { ts: Timestamp({ t: 1733212554, i: 1 }), t: Long('1') },
    writtenOpTime: { ts: Timestamp({ t: 1733212554, i: 1 }), t: Long('1') },
    lastAppliedWallTime: ISODate('2024-12-03T07:55:54.247Z'),
    lastDurableWallTime: ISODate('2024-12-03T07:55:54.247Z'),
    lastWrittenWallTime: ISODate('2024-12-03T07:55:54.247Z')
  },
  lastStableRecoveryTimestamp: Timestamp({ t: 1733212524, i: 1 }),
  electionCandidateMetrics: {
    lastElectionReason: 'electionTimeout',
    lastElectionDate: ISODate('2024-12-03T07:54:34.147Z'),
    electionTerm: Long('1'),
    lastCommittedOpTimeAtElection: { ts: Timestamp({ t: 1733212474, i: 1 }), t: Long('-1') },
    lastSeenWrittenOpTimeAtElection: { ts: Timestamp({ t: 1733212474, i: 1 }), t: Long('-1') },
    lastSeenOpTimeAtElection: { ts: Timestamp({ t: 1733212474, i: 1 }), t: Long('-1') },
    numVotesNeeded: 1,
    priorityAtElection: 1,
    electionTimeoutMillis: Long('10000'),
    newTermStartDate: ISODate('2024-12-03T07:54:34.218Z'),
    wMajorityWriteAvailabilityDate: ISODate('2024-12-03T07:54:34.274Z')
  },
  members: [
    { _id: 0, name: '192.168.1.151:27017', health: 1, state: 1, stateStr: 'PRIMARY', uptime: 326,
      optime: { ts: Timestamp({ t: 1733212554, i: 1 }), t: Long('1') },
      optimeDate: ISODate('2024-12-03T07:55:54.000Z'),
      optimeWritten: { ts: Timestamp({ t: 1733212554, i: 1 }), t: Long('1') },
      optimeWrittenDate: ISODate('2024-12-03T07:55:54.000Z'),
      lastAppliedWallTime: ISODate('2024-12-03T07:55:54.247Z'),
      lastDurableWallTime: ISODate('2024-12-03T07:55:54.247Z'),
      lastWrittenWallTime: ISODate('2024-12-03T07:55:54.247Z'),
      syncSourceHost: '', syncSourceId: -1, infoMessage: '',
      electionTime: Timestamp({ t: 1733212474, i: 2 }),
      electionDate: ISODate('2024-12-03T07:54:34.000Z'),
      configVersion: 1, configTerm: 1, self: true, lastHeartbeatMessage: '' }
  ],
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1733212554, i: 1 }),
    signature: {
      hash: Binary.createFromBase64('xlhiAyf3P4i7iORTjEDG8iEIJhU=', 0),
      keyId: Long('7444090892849250310')
    }
  },
  operationTime: Timestamp({ t: 1733212554, i: 1 })
}
Notes:
set: "rs" — the replica set's name.
myState: 1 — the node's state is healthy.
members — the member array; currently a single member with "name": "192.168.1.151:27017", role "stateStr": "PRIMARY", and "health": 1 (healthy).
- Add the secondaries
Add the other members to the replica set from the primary.
# (if not logged in) authenticate
use admin;
db.auth("root", "123456");
# add secondary 1 and secondary 2 to the replica set
rs.add({ host: "192.168.1.151:37017"});
rs.add({ host: "192.168.1.151:47017"});

# adding secondary 1
rs [direct: primary] admin> rs.add({ host: "192.168.1.151:37017"});
{
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1733212800, i: 1 }),
    signature: {
      hash: Binary.createFromBase64('q6PXGMdN9gqxcY21GayEj3J1opw=', 0),
      keyId: Long('7444090892849250310')
    }
  },
  operationTime: Timestamp({ t: 1733212800, i: 1 })
}
# adding secondary 2
rs [direct: primary] admin> rs.add({ host: "192.168.1.151:47017"});
{
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1733212808, i: 1 }),
    signature: {
      hash: Binary.createFromBase64('vIil05ua/cb3Rd3s/jXTCKfgnCY=', 0),
      keyId: Long('7444090892849250310')
    }
  },
  operationTime: Timestamp({ t: 1733212808, i: 1 })
}
- Check the replica set status again
# (if not logged in) authenticate
use admin;
db.auth("root", "123456");

rs.status();

rs [direct: primary] admin> rs.status();
{
  set: 'rs', date: ISODate('2024-12-03T08:00:59.731Z'), myState: 1, term: Long('1'),
  syncSourceHost: '', syncSourceId: -1, heartbeatIntervalMillis: Long('2000'),
  majorityVoteCount: 2, writeMajorityCount: 2, votingMembersCount: 3, writableVotingMembersCount: 3,
  optimes: {
    lastCommittedOpTime: { ts: Timestamp({ t: 1733212854, i: 1 }), t: Long('1') },
    lastCommittedWallTime: ISODate('2024-12-03T08:00:54.269Z'),
    readConcernMajorityOpTime: { ts: Timestamp({ t: 1733212854, i: 1 }), t: Long('1') },
    appliedOpTime: { ts: Timestamp({ t: 1733212854, i: 1 }), t: Long('1') },
    durableOpTime: { ts: Timestamp({ t: 1733212854, i: 1 }), t: Long('1') },
    writtenOpTime: { ts: Timestamp({ t: 1733212854, i: 1 }), t: Long('1') },
    lastAppliedWallTime: ISODate('2024-12-03T08:00:54.269Z'),
    lastDurableWallTime: ISODate('2024-12-03T08:00:54.269Z'),
    lastWrittenWallTime: ISODate('2024-12-03T08:00:54.269Z')
  },
  lastStableRecoveryTimestamp: Timestamp({ t: 1733212824, i: 1 }),
  electionCandidateMetrics: {
    lastElectionReason: 'electionTimeout',
    lastElectionDate: ISODate('2024-12-03T07:54:34.147Z'),
    electionTerm: Long('1'),
    lastCommittedOpTimeAtElection: { ts: Timestamp({ t: 1733212474, i: 1 }), t: Long('-1') },
    lastSeenWrittenOpTimeAtElection: { ts: Timestamp({ t: 1733212474, i: 1 }), t: Long('-1') },
    lastSeenOpTimeAtElection: { ts: Timestamp({ t: 1733212474, i: 1 }), t: Long('-1') },
    numVotesNeeded: 1,
    priorityAtElection: 1,
    electionTimeoutMillis: Long('10000'),
    newTermStartDate: ISODate('2024-12-03T07:54:34.218Z'),
    wMajorityWriteAvailabilityDate: ISODate('2024-12-03T07:54:34.274Z')
  },
  members: [
    { _id: 0, name: '192.168.1.151:27017',
      health: 1,            # status
      state: 1,
      stateStr: 'PRIMARY',  # node type
      uptime: 622,
      optime: { ts: Timestamp({ t: 1733212854, i: 1 }), t: Long('1') },
      optimeDate: ISODate('2024-12-03T08:00:54.000Z'),
      optimeWritten: { ts: Timestamp({ t: 1733212854, i: 1 }), t: Long('1') },
      optimeWrittenDate: ISODate('2024-12-03T08:00:54.000Z'),
      lastAppliedWallTime: ISODate('2024-12-03T08:00:54.269Z'),
      lastDurableWallTime: ISODate('2024-12-03T08:00:54.269Z'),
      lastWrittenWallTime: ISODate('2024-12-03T08:00:54.269Z'),
      syncSourceHost: '', syncSourceId: -1, infoMessage: '',
      electionTime: Timestamp({ t: 1733212474, i: 2 }),
      electionDate: ISODate('2024-12-03T07:54:34.000Z'),
      configVersion: 5, configTerm: 1, self: true, lastHeartbeatMessage: '' },
    { _id: 1, name: '192.168.1.151:37017',
      health: 1,              # status
      state: 2,
      stateStr: 'SECONDARY',  # node type
      uptime: 59,
      optime: { ts: Timestamp({ t: 1733212854, i: 1 }), t: Long('1') },
      optimeDurable: { ts: Timestamp({ t: 1733212854, i: 1 }), t: Long('1') },
      optimeWritten: { ts: Timestamp({ t: 1733212854, i: 1 }), t: Long('1') },
      optimeDate: ISODate('2024-12-03T08:00:54.000Z'),
      optimeDurableDate: ISODate('2024-12-03T08:00:54.000Z'),
      optimeWrittenDate: ISODate('2024-12-03T08:00:54.000Z'),
      lastAppliedWallTime: ISODate('2024-12-03T08:00:54.269Z'),
      lastDurableWallTime: ISODate('2024-12-03T08:00:54.269Z'),
      lastWrittenWallTime: ISODate('2024-12-03T08:00:54.269Z'),
      lastHeartbeat: ISODate('2024-12-03T08:00:58.771Z'),
      lastHeartbeatRecv: ISODate('2024-12-03T08:00:58.768Z'),
      pingMs: Long('0'), lastHeartbeatMessage: '',
      syncSourceHost: '192.168.1.151:27017', syncSourceId: 0,
      infoMessage: '', configVersion: 5, configTerm: 1 },
    { _id: 2, name: '192.168.1.151:47017',
      health: 1,              # status
      state: 2,
      stateStr: 'SECONDARY',  # node type
      uptime: 51,
      optime: { ts: Timestamp({ t: 1733212854, i: 1 }), t: Long('1') },
      optimeDurable: { ts: Timestamp({ t: 1733212854, i: 1 }), t: Long('1') },
      optimeWritten: { ts: Timestamp({ t: 1733212854, i: 1 }), t: Long('1') },
      optimeDate: ISODate('2024-12-03T08:00:54.000Z'),
      optimeDurableDate: ISODate('2024-12-03T08:00:54.000Z'),
      optimeWrittenDate: ISODate('2024-12-03T08:00:54.000Z'),
      lastAppliedWallTime: ISODate('2024-12-03T08:00:54.269Z'),
      lastDurableWallTime: ISODate('2024-12-03T08:00:54.269Z'),
      lastWrittenWallTime: ISODate('2024-12-03T08:00:54.269Z'),
      lastHeartbeat: ISODate('2024-12-03T08:00:58.771Z'),
      lastHeartbeatRecv: ISODate('2024-12-03T08:00:59.243Z'),
      pingMs: Long('0'), lastHeartbeatMessage: '',
      syncSourceHost: '192.168.1.151:27017', syncSourceId: 0,
      infoMessage: '', configVersion: 5, configTerm: 1 }
  ],
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1733212854, i: 1 }),
    signature: {
      hash: Binary.createFromBase64('pDGjP0uTw7qsQYYwfVZF/3r71hs=', 0),
      keyId: Long('7444090892849250310')
    }
  },
  operationTime: Timestamp({ t: 1733212854, i: 1 })
}
- Add the arbiter
Adding the arbiter may hang with no response, or return the error:
"errmsg" : "Reconfig attempted to install a config that would change the implicit default write concern. Use the setDefaultRWConcern command to set a cluster-wide write concern and try the reconfig again."
The fix is to set a cluster-wide default write concern on the primary:
# (if not logged in) authenticate
use admin;
db.auth("root", "123456");
# set the cluster-wide default write concern
db.adminCommand({
  "setDefaultRWConcern": 1,
  "defaultWriteConcern": { "w": 2 }
});
Then add the arbiter:
# (if not logged in) authenticate
use admin;
db.auth("root", "123456");
# add the arbiter
rs.addArb("192.168.1.151:17017");
Checking the replica set status afterwards shows an additional ARBITER node.
# (if not logged in) authenticate
use admin;
db.auth("root", "123456");

rs.status();

rs [direct: primary] admin> rs.status();
{
  set: 'rs', date: ISODate('2024-12-03T08:03:34.865Z'), myState: 1, term: Long('1'),
  syncSourceHost: '', syncSourceId: -1, heartbeatIntervalMillis: Long('2000'),
  majorityVoteCount: 3, writeMajorityCount: 3, votingMembersCount: 4, writableVotingMembersCount: 3,
  optimes: {
    lastCommittedOpTime: { ts: Timestamp({ t: 1733213005, i: 1 }), t: Long('1') },
    lastCommittedWallTime: ISODate('2024-12-03T08:03:25.246Z'),
    readConcernMajorityOpTime: { ts: Timestamp({ t: 1733213005, i: 1 }), t: Long('1') },
    appliedOpTime: { ts: Timestamp({ t: 1733213005, i: 1 }), t: Long('1') },
    durableOpTime: { ts: Timestamp({ t: 1733213005, i: 1 }), t: Long('1') },
    writtenOpTime: { ts: Timestamp({ t: 1733213005, i: 1 }), t: Long('1') },
    lastAppliedWallTime: ISODate('2024-12-03T08:03:25.246Z'),
    lastDurableWallTime: ISODate('2024-12-03T08:03:25.246Z'),
    lastWrittenWallTime: ISODate('2024-12-03T08:03:25.246Z')
  },
  lastStableRecoveryTimestamp: Timestamp({ t: 1733213005, i: 1 }),
  electionCandidateMetrics: {
    lastElectionReason: 'electionTimeout',
    lastElectionDate: ISODate('2024-12-03T07:54:34.147Z'),
    electionTerm: Long('1'),
    lastCommittedOpTimeAtElection: { ts: Timestamp({ t: 1733212474, i: 1 }), t: Long('-1') },
    lastSeenWrittenOpTimeAtElection: { ts: Timestamp({ t: 1733212474, i: 1 }), t: Long('-1') },
    lastSeenOpTimeAtElection: { ts: Timestamp({ t: 1733212474, i: 1 }), t: Long('-1') },
    numVotesNeeded: 1,
    priorityAtElection: 1,
    electionTimeoutMillis: Long('10000'),
    newTermStartDate: ISODate('2024-12-03T07:54:34.218Z'),
    wMajorityWriteAvailabilityDate: ISODate('2024-12-03T07:54:34.274Z')
  },
  members: [
    { _id: 0, name: '192.168.1.151:27017', health: 1, state: 1, stateStr: 'PRIMARY', uptime: 777,
      optime: { ts: Timestamp({ t: 1733213005, i: 1 }), t: Long('1') },
      optimeDate: ISODate('2024-12-03T08:03:25.000Z'),
      optimeWritten: { ts: Timestamp({ t: 1733213005, i: 1 }), t: Long('1') },
      optimeWrittenDate: ISODate('2024-12-03T08:03:25.000Z'),
      lastAppliedWallTime: ISODate('2024-12-03T08:03:25.246Z'),
      lastDurableWallTime: ISODate('2024-12-03T08:03:25.246Z'),
      lastWrittenWallTime: ISODate('2024-12-03T08:03:25.246Z'),
      syncSourceHost: '', syncSourceId: -1, infoMessage: '',
      electionTime: Timestamp({ t: 1733212474, i: 2 }),
      electionDate: ISODate('2024-12-03T07:54:34.000Z'),
      configVersion: 6, configTerm: 1, self: true, lastHeartbeatMessage: '' },
    { _id: 1, name: '192.168.1.151:37017', health: 1, state: 2, stateStr: 'SECONDARY', uptime: 214,
      optime: { ts: Timestamp({ t: 1733213005, i: 1 }), t: Long('1') },
      optimeDurable: { ts: Timestamp({ t: 1733213005, i: 1 }), t: Long('1') },
      optimeWritten: { ts: Timestamp({ t: 1733213005, i: 1 }), t: Long('1') },
      optimeDate: ISODate('2024-12-03T08:03:25.000Z'),
      optimeDurableDate: ISODate('2024-12-03T08:03:25.000Z'),
      optimeWrittenDate: ISODate('2024-12-03T08:03:25.000Z'),
      lastAppliedWallTime: ISODate('2024-12-03T08:03:25.246Z'),
      lastDurableWallTime: ISODate('2024-12-03T08:03:25.246Z'),
      lastWrittenWallTime: ISODate('2024-12-03T08:03:25.246Z'),
      lastHeartbeat: ISODate('2024-12-03T08:03:33.293Z'),
      lastHeartbeatRecv: ISODate('2024-12-03T08:03:33.343Z'),
      pingMs: Long('0'), lastHeartbeatMessage: '',
      syncSourceHost: '192.168.1.151:27017', syncSourceId: 0,
      infoMessage: '', configVersion: 6, configTerm: 1 },
    { _id: 2, name: '192.168.1.151:47017', health: 1, state: 2, stateStr: 'SECONDARY', uptime: 206,
      optime: { ts: Timestamp({ t: 1733213005, i: 1 }), t: Long('1') },
      optimeDurable: { ts: Timestamp({ t: 1733213005, i: 1 }), t: Long('1') },
      optimeWritten: { ts: Timestamp({ t: 1733213005, i: 1 }), t: Long('1') },
      optimeDate: ISODate('2024-12-03T08:03:25.000Z'),
      optimeDurableDate: ISODate('2024-12-03T08:03:25.000Z'),
      optimeWrittenDate: ISODate('2024-12-03T08:03:25.000Z'),
      lastAppliedWallTime: ISODate('2024-12-03T08:03:25.246Z'),
      lastDurableWallTime: ISODate('2024-12-03T08:03:25.246Z'),
      lastWrittenWallTime: ISODate('2024-12-03T08:03:25.246Z'),
      lastHeartbeat: ISODate('2024-12-03T08:03:33.293Z'),
      lastHeartbeatRecv: ISODate('2024-12-03T08:03:33.284Z'),
      pingMs: Long('0'), lastHeartbeatMessage: '',
      syncSourceHost: '192.168.1.151:27017', syncSourceId: 0,
      infoMessage: '', configVersion: 6, configTerm: 1 },
    { _id: 3, name: '192.168.1.151:17017', health: 1, state: 7, stateStr: 'ARBITER', uptime: 9,
      lastHeartbeat: ISODate('2024-12-03T08:03:33.556Z'),
      lastHeartbeatRecv: ISODate('2024-12-03T08:03:33.556Z'),
      pingMs: Long('1'), lastHeartbeatMessage: '',
      syncSourceHost: '', syncSourceId: -1,
      infoMessage: '', configVersion: 6, configTerm: 1 }
  ],
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1733213005, i: 1 }),
    signature: {
      hash: Binary.createFromBase64('yythHsQYHEnQfwDiPHqITajYyF0=', 0),
      keyId: Long('7444090892849250310')
    }
  },
  operationTime: Timestamp({ t: 1733213005, i: 1 })
}
3.2.2 Testing
- Secondary node failure test
Shut down the 37017 secondary and you will see the primary and the arbiter begin failing their heartbeats to 37017. Because the primary is still alive, no election is triggered, and data can still be written on the primary; whether such writes reach the failed secondary is verified below.
Enter the primary node's container and check the replica set status:
# Enter the container
docker exec -it mongo_master bash

# Start the mongosh shell (the default port is 27017)
./usr/bin/mongosh --port 27017

# Authenticate (if not already logged in)
use admin;
db.auth("root", "123456");

# Check the replica set status
rs [direct: primary] admin> rs.status();
{
  set: 'rs',
  date: ISODate('2024-12-03T08:07:43.663Z'),
  myState: 1,
  term: Long('1'),
  syncSourceHost: '',
  syncSourceId: -1,
  heartbeatIntervalMillis: Long('2000'),
  majorityVoteCount: 3,
  writeMajorityCount: 3,
  votingMembersCount: 4,
  writableVotingMembersCount: 3,
  optimes: {
    lastCommittedOpTime: { ts: Timestamp({ t: 1733213244, i: 1 }), t: Long('1') },
    lastCommittedWallTime: ISODate('2024-12-03T08:07:24.335Z'),
    readConcernMajorityOpTime: { ts: Timestamp({ t: 1733213244, i: 1 }), t: Long('1') },
    appliedOpTime: { ts: Timestamp({ t: 1733213254, i: 1 }), t: Long('1') },
    durableOpTime: { ts: Timestamp({ t: 1733213254, i: 1 }), t: Long('1') },
    writtenOpTime: { ts: Timestamp({ t: 1733213254, i: 1 }), t: Long('1') },
    lastAppliedWallTime: ISODate('2024-12-03T08:07:34.339Z'),
    lastDurableWallTime: ISODate('2024-12-03T08:07:34.339Z'),
    lastWrittenWallTime: ISODate('2024-12-03T08:07:34.339Z')
  },
  lastStableRecoveryTimestamp: Timestamp({ t: 1733213244, i: 1 }),
  electionCandidateMetrics: {
    lastElectionReason: 'electionTimeout',
    lastElectionDate: ISODate('2024-12-03T07:54:34.147Z'),
    electionTerm: Long('1'),
    lastCommittedOpTimeAtElection: { ts: Timestamp({ t: 1733212474, i: 1 }), t: Long('-1') },
    lastSeenWrittenOpTimeAtElection: { ts: Timestamp({ t: 1733212474, i: 1 }), t: Long('-1') },
    lastSeenOpTimeAtElection: { ts: Timestamp({ t: 1733212474, i: 1 }), t: Long('-1') },
    numVotesNeeded: 1,
    priorityAtElection: 1,
    electionTimeoutMillis: Long('10000'),
    newTermStartDate: ISODate('2024-12-03T07:54:34.218Z'),
    wMajorityWriteAvailabilityDate: ISODate('2024-12-03T07:54:34.274Z')
  },
  members: [
    {
      _id: 0,
      name: '192.168.1.151:27017',
      health: 1,
      state: 1,
      stateStr: 'PRIMARY',
      uptime: 1026,
      optime: { ts: Timestamp({ t: 1733213254, i: 1 }), t: Long('1') },
      optimeDate: ISODate('2024-12-03T08:07:34.000Z'),
      optimeWritten: { ts: Timestamp({ t: 1733213254, i: 1 }), t: Long('1') },
      optimeWrittenDate: ISODate('2024-12-03T08:07:34.000Z'),
      lastAppliedWallTime: ISODate('2024-12-03T08:07:34.339Z'),
      lastDurableWallTime: ISODate('2024-12-03T08:07:34.339Z'),
      lastWrittenWallTime: ISODate('2024-12-03T08:07:34.339Z'),
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: '',
      electionTime: Timestamp({ t: 1733212474, i: 2 }),
      electionDate: ISODate('2024-12-03T07:54:34.000Z'),
      configVersion: 6,
      configTerm: 1,
      self: true,
      lastHeartbeatMessage: ''
    },
    {
      _id: 1,
      name: '192.168.1.151:37017',
      health: 0,
      state: 8,
      stateStr: '(not reachable/healthy)',
      uptime: 0,
      optime: { ts: Timestamp({ t: 0, i: 0 }), t: Long('-1') },
      optimeDurable: { ts: Timestamp({ t: 0, i: 0 }), t: Long('-1') },
      optimeWritten: { ts: Timestamp({ t: 0, i: 0 }), t: Long('-1') },
      optimeDate: ISODate('1970-01-01T00:00:00.000Z'),
      optimeDurableDate: ISODate('1970-01-01T00:00:00.000Z'),
      optimeWrittenDate: ISODate('1970-01-01T00:00:00.000Z'),
      lastAppliedWallTime: ISODate('2024-12-03T08:07:24.335Z'),
      lastDurableWallTime: ISODate('2024-12-03T08:07:24.335Z'),
      lastWrittenWallTime: ISODate('2024-12-03T08:07:24.335Z'),
      lastHeartbeat: ISODate('2024-12-03T08:07:41.782Z'),
      lastHeartbeatRecv: ISODate('2024-12-03T08:07:27.761Z'),
      pingMs: Long('0'),
      lastHeartbeatMessage: 'Error connecting to 192.168.1.151:37017 :: caused by :: onInvoke :: caused by :: Connection refused',
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: '',
      configVersion: 6,
      configTerm: 1
    },
    {
      _id: 2,
      name: '192.168.1.151:47017',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      uptime: 455,
      optime: { ts: Timestamp({ t: 1733213254, i: 1 }), t: Long('1') },
      optimeDurable: { ts: Timestamp({ t: 1733213254, i: 1 }), t: Long('1') },
      optimeWritten: { ts: Timestamp({ t: 1733213254, i: 1 }), t: Long('1') },
      optimeDate: ISODate('2024-12-03T08:07:34.000Z'),
      optimeDurableDate: ISODate('2024-12-03T08:07:34.000Z'),
      optimeWrittenDate: ISODate('2024-12-03T08:07:34.000Z'),
      lastAppliedWallTime: ISODate('2024-12-03T08:07:34.339Z'),
      lastDurableWallTime: ISODate('2024-12-03T08:07:34.339Z'),
      lastWrittenWallTime: ISODate('2024-12-03T08:07:34.339Z'),
      lastHeartbeat: ISODate('2024-12-03T08:07:41.777Z'),
      lastHeartbeatRecv: ISODate('2024-12-03T08:07:41.719Z'),
      pingMs: Long('0'),
      lastHeartbeatMessage: '',
      syncSourceHost: '192.168.1.151:27017',
      syncSourceId: 0,
      infoMessage: '',
      configVersion: 6,
      configTerm: 1
    },
    {
      _id: 3,
      name: '192.168.1.151:17017',
      health: 1,
      state: 7,
      stateStr: 'ARBITER',
      uptime: 258,
      lastHeartbeat: ISODate('2024-12-03T08:07:42.086Z'),
      lastHeartbeatRecv: ISODate('2024-12-03T08:07:42.095Z'),
      pingMs: Long('0'),
      lastHeartbeatMessage: '',
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: '',
      configVersion: 6,
      configTerm: 1
    }
  ],
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1733213254, i: 1 }),
    signature: {
      hash: Binary.createFromBase64('Bp2qfzf57TFa56i5x5a6j97llvQ=', 0),
      keyId: Long('7444090892849250310')
    }
  },
  operationTime: Timestamp({ t: 1733213254, i: 1 })
}
Restart the secondary and you will see that the writes made on the primary while it was down are automatically replicated to it, keeping the data consistent. A minimal Go sketch of this check follows.
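The same check can be scripted from application code. Below is a hedged sketch, assuming the official go.mongodb.org/mongo-driver (v1) package, the root/123456 account created during setup (hence authSource=admin), and placeholder names testdb/testcoll; in a real test you would run the insert while 37017 is stopped, restart the container, and then run the secondary read:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
	"go.mongodb.org/mongo-driver/mongo/readpref"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Connect to the whole replica set; the driver routes writes to the primary.
	uri := "mongodb://root:123456@192.168.1.151:27017,192.168.1.151:37017,192.168.1.151:47017/?replicaSet=rs&authSource=admin"
	client, err := mongo.Connect(ctx, options.Client().ApplyURI(uri))
	if err != nil {
		log.Fatal(err)
	}
	defer client.Disconnect(context.Background())

	// Step 1: insert while the 37017 secondary is stopped. The primary is
	// still up, so the write succeeds.
	coll := client.Database("testdb").Collection("testcoll")
	if _, err := coll.InsertOne(ctx, bson.M{"msg": "written during secondary outage"}); err != nil {
		log.Fatal(err)
	}

	// Step 2: after restarting 37017, read with a secondary read preference
	// to confirm the document is visible on a secondary (i.e. it replicated).
	secDB := client.Database("testdb", options.Database().SetReadPreference(readpref.Secondary()))
	var doc bson.M
	if err := secDB.Collection("testcoll").FindOne(ctx, bson.M{"msg": "written during secondary outage"}).Decode(&doc); err != nil {
		log.Fatal(err)
	}
	fmt.Println("replicated document:", doc)
}
```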
- Primary node failure test
Shut down the primary and you will see the secondaries and the arbiter fail their heartbeats to 27017. Once the heartbeats have failed for more than 10 seconds (the electionTimeoutMillis shown above), there is no longer a primary, so an election is initiated automatically. Enter the secondary-1 container and check the replica set status: 27017 is shown as stopped and 47017 has been elected the new primary, so data can only be written through 47017.
# Enter the container
docker exec -it mongo_slave1 bash

# Start the mongosh shell (the default port is 27017)
./usr/bin/mongosh --port 27017

# Authenticate (if not already logged in)
use admin;
db.auth("root", "123456");

# Check the replica set status
rs.status();
members: [
  {
    _id: 0,
    name: '192.168.1.151:27017',
    health: 0,
    state: 8,
    stateStr: '(not reachable/healthy)',
    uptime: 0,
    optime: { ts: Timestamp({ t: 0, i: 0 }), t: Long('-1') },
    optimeDurable: { ts: Timestamp({ t: 0, i: 0 }), t: Long('-1') },
    optimeWritten: { ts: Timestamp({ t: 0, i: 0 }), t: Long('-1') },
    optimeDate: ISODate('1970-01-01T00:00:00.000Z'),
    optimeDurableDate: ISODate('1970-01-01T00:00:00.000Z'),
    optimeWrittenDate: ISODate('1970-01-01T00:00:00.000Z'),
    lastAppliedWallTime: ISODate('2024-12-03T08:10:14.356Z'),
    lastDurableWallTime: ISODate('2024-12-03T08:10:14.356Z'),
    lastWrittenWallTime: ISODate('2024-12-03T08:10:14.356Z'),
    lastHeartbeat: ISODate('2024-12-03T08:10:31.467Z'),
    lastHeartbeatRecv: ISODate('2024-12-03T08:10:26.673Z'),
    pingMs: Long('0'),
    lastHeartbeatMessage: 'Error connecting to 192.168.1.151:27017 :: caused by :: onInvoke :: caused by :: Connection refused',
    syncSourceHost: '',
    syncSourceId: -1,
    infoMessage: '',
    configVersion: 6,
    configTerm: 1
  },
  {
    _id: 1,
    name: '192.168.1.151:37017',
    health: 1,
    state: 2,
    stateStr: 'SECONDARY',
    uptime: 79,
    optime: { ts: Timestamp({ t: 1733213426, i: 2 }), t: Long('3') },
    optimeDate: ISODate('2024-12-03T08:10:26.000Z'),
    optimeWritten: { ts: Timestamp({ t: 1733213426, i: 2 }), t: Long('3') },
    optimeWrittenDate: ISODate('2024-12-03T08:10:26.000Z'),
    lastAppliedWallTime: ISODate('2024-12-03T08:10:26.929Z'),
    lastDurableWallTime: ISODate('2024-12-03T08:10:26.929Z'),
    lastWrittenWallTime: ISODate('2024-12-03T08:10:26.929Z'),
    syncSourceHost: '192.168.1.151:47017',
    syncSourceId: 2,
    infoMessage: '',
    configVersion: 6,
    configTerm: 3,
    self: true,
    lastHeartbeatMessage: ''
  },
  {
    _id: 2,
    name: '192.168.1.151:47017',
    health: 1,
    state: 1,
    stateStr: 'PRIMARY',
    uptime: 76,
    optime: { ts: Timestamp({ t: 1733213426, i: 2 }), t: Long('3') },
    optimeDurable: { ts: Timestamp({ t: 1733213426, i: 2 }), t: Long('3') },
    optimeWritten: { ts: Timestamp({ t: 1733213426, i: 2 }), t: Long('3') },
    optimeDate: ISODate('2024-12-03T08:10:26.000Z'),
    optimeDurableDate: ISODate('2024-12-03T08:10:26.000Z'),
    optimeWrittenDate: ISODate('2024-12-03T08:10:26.000Z'),
    lastAppliedWallTime: ISODate('2024-12-03T08:10:26.929Z'),
    lastDurableWallTime: ISODate('2024-12-03T08:10:26.929Z'),
    lastWrittenWallTime: ISODate('2024-12-03T08:10:26.929Z'),
    lastHeartbeat: ISODate('2024-12-03T08:10:31.465Z'),
    lastHeartbeatRecv: ISODate('2024-12-03T08:10:30.957Z'),
    pingMs: Long('0'),
    lastHeartbeatMessage: '',
    syncSourceHost: '',
    syncSourceId: -1,
    infoMessage: '',
    electionTime: Timestamp({ t: 1733213426, i: 1 }),
    electionDate: ISODate('2024-12-03T08:10:26.000Z'),
    configVersion: 6,
    configTerm: 3
  },
  {
    _id: 3,
    name: '192.168.1.151:17017',
    health: 1,
    state: 7,
    stateStr: 'ARBITER',
    uptime: 76,
    lastHeartbeat: ISODate('2024-12-03T08:10:31.460Z'),
    lastHeartbeatRecv: ISODate('2024-12-03T08:10:30.963Z'),
    pingMs: Long('0'),
    lastHeartbeatMessage: '',
    syncSourceHost: '',
    syncSourceId: -1,
    infoMessage: '',
    configVersion: 6,
    configTerm: 3
  }
]
Restart the 27017 node, then enter the secondary-1 container again and check the replica set status: the restarted 27017 has rejoined as a secondary, 47017 remains the primary, and data can still only be written through 47017.
# Enter the container
docker exec -it mongo_slave1 bash

# Start the mongosh shell (the default port is 27017)
./usr/bin/mongosh --port 27017

# Authenticate (if not already logged in)
use admin;
db.auth("root", "123456");

# Check the replica set status
rs.status();
members: [
  {
    _id: 0,
    name: '192.168.1.151:27017',
    health: 1,
    state: 2,
    stateStr: 'SECONDARY',
    uptime: 10,
    optime: { ts: Timestamp({ t: 1733213546, i: 1 }), t: Long('3') },
    optimeDurable: { ts: Timestamp({ t: 1733213546, i: 1 }), t: Long('3') },
    optimeWritten: { ts: Timestamp({ t: 1733213546, i: 1 }), t: Long('3') },
    optimeDate: ISODate('2024-12-03T08:12:26.000Z'),
    optimeDurableDate: ISODate('2024-12-03T08:12:26.000Z'),
    optimeWrittenDate: ISODate('2024-12-03T08:12:26.000Z'),
    lastAppliedWallTime: ISODate('2024-12-03T08:12:36.945Z'),
    lastDurableWallTime: ISODate('2024-12-03T08:12:36.945Z'),
    lastWrittenWallTime: ISODate('2024-12-03T08:12:36.945Z'),
    lastHeartbeat: ISODate('2024-12-03T08:12:36.898Z'),
    lastHeartbeatRecv: ISODate('2024-12-03T08:12:36.347Z'),
    pingMs: Long('0'),
    lastHeartbeatMessage: '',
    syncSourceHost: '192.168.1.151:37017',
    syncSourceId: 1,
    infoMessage: '',
    configVersion: 6,
    configTerm: 3
  },
  {
    _id: 1,
    name: '192.168.1.151:37017',
    health: 1,
    state: 2,
    stateStr: 'SECONDARY',
    uptime: 204,
    optime: { ts: Timestamp({ t: 1733213556, i: 1 }), t: Long('3') },
    optimeDate: ISODate('2024-12-03T08:12:36.000Z'),
    optimeWritten: { ts: Timestamp({ t: 1733213556, i: 1 }), t: Long('3') },
    optimeWrittenDate: ISODate('2024-12-03T08:12:36.000Z'),
    lastAppliedWallTime: ISODate('2024-12-03T08:12:36.945Z'),
    lastDurableWallTime: ISODate('2024-12-03T08:12:36.945Z'),
    lastWrittenWallTime: ISODate('2024-12-03T08:12:36.945Z'),
    syncSourceHost: '192.168.1.151:47017',
    syncSourceId: 2,
    infoMessage: '',
    configVersion: 6,
    configTerm: 3,
    self: true,
    lastHeartbeatMessage: ''
  },
  {
    _id: 2,
    name: '192.168.1.151:47017',
    health: 1,
    state: 1,
    stateStr: 'PRIMARY',
    uptime: 200,
    optime: { ts: Timestamp({ t: 1733213546, i: 1 }), t: Long('3') },
    optimeDurable: { ts: Timestamp({ t: 1733213546, i: 1 }), t: Long('3') },
    optimeWritten: { ts: Timestamp({ t: 1733213546, i: 1 }), t: Long('3') },
    optimeDate: ISODate('2024-12-03T08:12:26.000Z'),
    optimeDurableDate: ISODate('2024-12-03T08:12:26.000Z'),
    optimeWrittenDate: ISODate('2024-12-03T08:12:26.000Z'),
    lastAppliedWallTime: ISODate('2024-12-03T08:12:26.945Z'),
    lastDurableWallTime: ISODate('2024-12-03T08:12:26.945Z'),
    lastWrittenWallTime: ISODate('2024-12-03T08:12:26.945Z'),
    lastHeartbeat: ISODate('2024-12-03T08:12:35.744Z'),
    lastHeartbeatRecv: ISODate('2024-12-03T08:12:35.191Z'),
    pingMs: Long('1'),
    lastHeartbeatMessage: '',
    syncSourceHost: '',
    syncSourceId: -1,
    infoMessage: '',
    electionTime: Timestamp({ t: 1733213426, i: 1 }),
    electionDate: ISODate('2024-12-03T08:10:26.000Z'),
    configVersion: 6,
    configTerm: 3
  },
  {
    _id: 3,
    name: '192.168.1.151:17017',
    health: 1,
    state: 7,
    stateStr: 'ARBITER',
    uptime: 200,
    lastHeartbeat: ISODate('2024-12-03T08:12:35.694Z'),
    lastHeartbeatRecv: ISODate('2024-12-03T08:12:35.190Z'),
    pingMs: Long('0'),
    lastHeartbeatMessage: '',
    syncSourceHost: '',
    syncSourceId: -1,
    infoMessage: '',
    configVersion: 6,
    configTerm: 3
  }
]
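From the client side this failover is transparent as long as the connection string lists all replica set members: the driver re-discovers the new primary on its own. As a quick illustration, the sketch below (again assuming go.mongodb.org/mongo-driver v1 and the root/123456 account) runs the standard `hello` command, whose `primary` field should now report 192.168.1.151:47017:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	uri := "mongodb://root:123456@192.168.1.151:27017,192.168.1.151:37017,192.168.1.151:47017/?replicaSet=rs&authSource=admin"
	client, err := mongo.Connect(ctx, options.Client().ApplyURI(uri))
	if err != nil {
		log.Fatal(err)
	}
	defer client.Disconnect(context.Background())

	// `hello` reports the replica set topology as seen by the server,
	// including which member is currently the primary.
	var reply bson.M
	if err := client.Database("admin").RunCommand(ctx, bson.D{{Key: "hello", Value: 1}}).Decode(&reply); err != nil {
		log.Fatal(err)
	}
	fmt.Println("current primary:", reply["primary"]) // expected: 192.168.1.151:47017 after the failover
}
```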
3.2.3 Connecting to a MongoDB replica set with Navicat Premium
Configure the connection as follows:
- Connection: Replica Set
- Members: the address of each replica set node
- Read Preference: Primary
- Replica Set: (optional)
- Authentication: Password
3.2.4 Connecting to a MongoDB replica set from Golang
To connect only to the primary node of the MongoDB replica set, the URI needs ?connect=direct appended. Full setting:
Uri: mongodb://192.168.1.151:27017/?connect=direct # in replica set mode, connecting to the primary alone requires /?connect=direct
To connect to the whole MongoDB replica set:
Uri: mongodb://192.168.1.151:27017,192.168.1.151:37017,192.168.1.151:47017/?replicaSet=rs
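For completeness, here is a minimal sketch of what these settings look like in Go code, assuming the official go.mongodb.org/mongo-driver (v1) package; the root/123456 credentials and authSource=admin are assumptions carried over from the setup steps above:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
	"go.mongodb.org/mongo-driver/mongo/readpref"
)

func main() {
	// Replica-set URI from the config above. For a primary-only direct
	// connection, use "mongodb://192.168.1.151:27017/?connect=direct" instead.
	uri := "mongodb://root:123456@192.168.1.151:27017,192.168.1.151:37017,192.168.1.151:47017/?replicaSet=rs&authSource=admin"

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	client, err := mongo.Connect(ctx, options.Client().ApplyURI(uri))
	if err != nil {
		log.Fatal(err)
	}
	defer client.Disconnect(context.Background())

	// Ping the primary to confirm the replica set is reachable and writable.
	if err := client.Ping(ctx, readpref.Primary()); err != nil {
		log.Fatal(err)
	}
	fmt.Println("connected to replica set 'rs'")
}
```

With the multi-host URI the driver monitors all members, so reads and writes keep working across failovers without any change to the application configuration.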