Server description: this setup is for testing, so all three Redis instances (one master, two slaves) run on a single server.
| Role | Port | Redis config file | Sentinel config file | Sentinel port | Redis log path | Sentinel log path |
| --- | --- | --- | --- | --- | --- | --- |
| master | 6379 | redis.conf | sentinel.conf | 26379 | /home/zhangxs/data/redislog/redis_server/master.log | /home/zhangxs/data/redislog/sentinel/sentinel6379.log |
| slave | 6380 | redis_slave6380.conf | sentinel6380.conf | 26380 | /home/zhangxs/data/redislog/redis_server/slave6380.log | /home/zhangxs/data/redislog/sentinel/sentinel6380.log |
| slave | 6381 | redis_slave6381.conf | sentinel6381.conf | 26381 | /home/zhangxs/data/redislog/redis_server/slave6381.log | /home/zhangxs/data/redislog/sentinel/sentinel6381.log |
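The log directories above have to exist before the servers start (Redis does not create missing directories); a minimal preparation step, assuming the paths from the table:
mkdir -p /home/zhangxs/data/redislog/redis_server /home/zhangxs/data/redislog/sentinel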
?
Modify the configuration files
- 1: redis.conf
Change the Redis server log path: logfile "/home/zhangxs/data/redislog/redis_server/master.log"
Everything else is left at its default value.
?
- 2: redis_slave6380.conf and redis_slave6381.conf (copies of redis.conf)
- Point both of them at the master's IP and port: slaveof 127.0.0.1 6379 (set in both files)
- Change the ports: redis_slave6380.conf uses port 6380; redis_slave6381.conf uses port 6381
- Set the slave log paths: logfile /home/zhangxs/data/redislog/redis_server/slave6380.log and logfile /home/zhangxs/data/redislog/redis_server/slave6381.log (the key lines are sketched below)
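Put together, the lines that differ from the master's redis.conf should look roughly like this in redis_slave6380.conf (redis_slave6381.conf is the same with 6381 substituted); this is a sketch of the directives listed above, not the full file:
port 6380
slaveof 127.0.0.1 6379
logfile "/home/zhangxs/data/redislog/redis_server/slave6380.log"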
?
- 3: sentinel.conf
- Change the log path: logfile "/home/zhangxs/data/redislog/sentinel/sentinel6379.log"
?
- 4: sentinel6380.conf and sentinel6381.conf (copies of sentinel.conf)
- Change the ports: sentinel6380.conf uses 26380; sentinel6381.conf uses 26381
- Change the log paths: logfile "/home/zhangxs/data/redislog/sentinel/sentinel6380.log" and logfile "/home/zhangxs/data/redislog/sentinel/sentinel6381.log" (see the sketch below)
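Assuming the default mymaster entry from sentinel.conf is kept (the logs below confirm quorum 2), the lines that matter in sentinel6380.conf are roughly:
port 26380
logfile "/home/zhangxs/data/redislog/sentinel/sentinel6380.log"
sentinel monitor mymaster 127.0.0.1 6379 2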
?
With the configuration in place, start the Redis servers
- 1: Start the master
src/redis-server redis.conf &
?
- 2: Start the first slave (slave6380)
src/redis-server redis_slave6380.conf &
?
Check the master log; a new block of output appears (the equivalent excerpt is shown for slave 6381 below).

Check the slave log slave6380.log (again, the slave6381.log excerpt below shows the same kind of output).
- 3: Start the second slave (slave6381)
src/redis-server redis_slave6381.conf &
?
Check the master log; a new block of output appears:
//slave 127.0.0.1:6381 asks for synchronization
2755:M 29 Jul 00:26:46.531 * Slave 127.0.0.1:6381 asks for synchronization
//the partial resync request from 127.0.0.1:6381 is accepted; 1106 bytes of backlog are sent starting from offset 1
2755:M 29 Jul 00:26:46.531 * Partial resynchronization request from 127.0.0.1:6381 accepted. Sending 1106 bytes of backlog starting from offset 1.
?
Check the slave log slave6381.log:
2809:S 29 Jul 00:26:46.531 * DB loaded from disk: 0.000 seconds
2809:S 29 Jul 00:26:46.531 * Before turning into a slave, using my master parameters to synthesize a cached master: I may be able to synchronize with the new master with just a partial transfer.
2809:S 29 Jul 00:26:46.531 * Ready to accept connections
2809:S 29 Jul 00:26:46.531 * Connecting to MASTER 127.0.0.1:6379
2809:S 29 Jul 00:26:46.531 * MASTER <-> SLAVE sync started
2809:S 29 Jul 00:26:46.531 * Non blocking connect for SYNC fired the event.
2809:S 29 Jul 00:26:46.531 * Master replied to PING, replication can continue...
2809:S 29 Jul 00:26:46.531 * Trying a partial resynchronization (request 541cd938f43b4f144e647881af409fa1884ea5a4:1).
2809:S 29 Jul 00:26:46.531 * Successful partial resynchronization with master.
2809:S 29 Jul 00:26:46.532 * MASTER <-> SLAVE sync: Master accepted a Partial Resynchronization
?
- 4: Test data replication (connect to the servers with redis-cli)
1: Connect to the master
[root@vm1 src]# redis-cli -h 127.0.0.1 -p 6379
127.0.0.1:6379> set name fj
?
2: Connect to slave6380
[root@vm1 src]# redis-cli -h 127.0.0.1 -p 6380
127.0.0.1:6380> get name
"fj"
127.0.0.1:6380>
?
3: Connect to slave6381
[root@vm1 src]# redis-cli -h 127.0.0.1 -p 6381
127.0.0.1:6381> get name
"fj"
127.0.0.1:6381>
?
OK, master-slave replication is working.
By default a slave does not accept writes (the config directive responsible is noted after the test below); let's verify:
127.0.0.1:6380> set name hello
(error) READONLY You can't write against a read only slave.
127.0.0.1:6380>
127.0.0.1:6381> set name hello
(error) READONLY You can't write against a read only slave.
127.0.0.1:6381>
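This behaviour comes from the slave read-only flag in redis.conf, which is enabled by default and left untouched here; in Redis 4.0 the directive is:
slave-read-only yes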
?
Start a sentinel for each instance
?
- Start sentinel6379
src/redis-sentinel sentinel.conf &
?
- Check sentinel6379.log
2908:X 29 Jul 01:01:32.838 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
2908:X 29 Jul 01:01:32.839 # Redis version=4.0.10, bits=64, commit=00000000, modified=0, pid=2908, just started
2908:X 29 Jul 01:01:32.839 # Configuration loaded
2908:X 29 Jul 01:01:32.839 * Increased maximum number of open files to 10032 (it was originally set to 1024).
2908:X 29 Jul 01:01:32.840 * Running mode=sentinel, port=26379.
2908:X 29 Jul 01:01:32.840 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
2908:X 29 Jul 01:01:32.855 # Sentinel ID is 1a77392638e41bb0ea0a865ffc93b8de6335227f
2908:X 29 Jul 01:01:32.855 # +monitor master mymaster 127.0.0.1 6379 quorum 2
//a new slave has been detected and attached by the sentinel
2908:X 29 Jul 01:01:32.856 * +slave slave 127.0.0.1:6380 127.0.0.1 6380 @ mymaster 127.0.0.1 6379
2908:X 29 Jul 01:01:32.858 * +slave slave 127.0.0.1:6381 127.0.0.1 6381 @ mymaster 127.0.0.1 6379
?
- Start sentinel6380
src/redis-sentinel sentinel6380.conf &
?
- Check sentinel6380.log
2937:X 29 Jul 01:08:14.325 # Redis version=4.0.10, bits=64, commit=00000000, modified=0, pid=2937, just started
2937:X 29 Jul 01:08:14.325 # Configuration loaded
2937:X 29 Jul 01:08:14.327 * Increased maximum number of open files to 10032 (it was originally set to 1024).
2937:X 29 Jul 01:08:14.377 * Running mode=sentinel, port=26380.
2937:X 29 Jul 01:08:14.377 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
2937:X 29 Jul 01:08:14.379 # Sentinel ID is 4a6aebffdd1301bf054e722c34e8a6611418ba8a
2937:X 29 Jul 01:08:14.379 # +monitor master mymaster 127.0.0.1 6379 quorum 2
2937:X 29 Jul 01:08:14.380 * +slave slave 127.0.0.1:6380 127.0.0.1 6380 @ mymaster 127.0.0.1 6379
2937:X 29 Jul 01:08:14.381 * +slave slave 127.0.0.1:6381 127.0.0.1 6381 @ mymaster 127.0.0.1 6379
//a new sentinel has been detected and attached
2937:X 29 Jul 01:08:14.919 * +sentinel sentinel 1a77392638e41bb0ea0a865ffc93b8de6335227f 127.0.0.1 26379 @ mymaster 127.0.0.1 6379
?
- After sentinel6380 starts, sentinel6379.log gains a new entry
//a new sentinel has been detected and attached
2908:X 29 Jul 01:08:16.367 * +sentinel sentinel 4a6aebffdd1301bf054e722c34e8a6611418ba8a 127.0.0.1 26380 @ mymaster 127.0.0.1 6379
A newly started sentinel automatically discovers the other sentinels monitoring the same master through pub/sub: each sentinel periodically publishes a hello message to the __sentinel__:hello channel.
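You can watch this discovery traffic yourself: the hello messages are published on the monitored instances, so subscribing on the master shows them arriving periodically (an optional check, not required for the setup):
src/redis-cli -p 6379 subscribe __sentinel__:hello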
?
- Start sentinel6381
[root@vm1 redis-4.0.10]# src/redis-sentinel sentinel6381.conf &
?
Check sentinel6381.log
2961:X 29 Jul 01:11:09.823 # Configuration loaded
2961:X 29 Jul 01:11:09.823 * Increased maximum number of open files to 10032 (it was originally set to 1024).
2961:X 29 Jul 01:11:09.852 * Running mode=sentinel, port=26381.
2961:X 29 Jul 01:11:09.852 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
2961:X 29 Jul 01:11:09.853 # Sentinel ID is 1db1a4dcdf0ecca00b64d9362c2a2dd338da0030
2961:X 29 Jul 01:11:09.853 # +monitor master mymaster 127.0.0.1 6379 quorum 2
2961:X 29 Jul 01:11:09.853 * +slave slave 127.0.0.1:6380 127.0.0.1 6380 @ mymaster 127.0.0.1 6379
2961:X 29 Jul 01:11:09.855 * +slave slave 127.0.0.1:6381 127.0.0.1 6381 @ mymaster 127.0.0.1 6379
2961:X 29 Jul 01:11:10.334 * +sentinel sentinel 1a77392638e41bb0ea0a865ffc93b8de6335227f 127.0.0.1 26379 @ mymaster 127.0.0.1 6379
2961:X 29 Jul 01:11:11.446 * +sentinel sentinel 4a6aebffdd1301bf054e722c34e8a6611418ba8a 127.0.0.1 26380 @ mymaster 127.0.0.1 6379
?
- When sentinel6381 joins, both sentinel6379 and sentinel6380 are notified (//a new sentinel has been detected and attached):
2908:X 29 Jul 01:11:11.880 * +sentinel sentinel 1db1a4dcdf0ecca00b64d9362c2a2dd338da0030 127.0.0.1 26381 @ mymaster 127.0.0.1 6379
2937:X 29 Jul 01:11:11.878 * +sentinel sentinel 1db1a4dcdf0ecca00b64d9362c2a2dd338da0030 127.0.0.1 26381 @ mymaster 127.0.0.1 6379
?
Each sentinel records its state in its own sentinel config file so it can be restored after a restart. Compare the files before and after startup:
- sentinel.conf
Before startup: nothing. After startup, its own ID is recorded:
sentinel myid 1a77392638e41bb0ea0a865ffc93b8de6335227f
Before startup: nothing. After startup, the known topology is appended:
# Generated by CONFIG REWRITE
//the master's two slaves
sentinel known-slave mymaster 127.0.0.1 6380
sentinel known-slave mymaster 127.0.0.1 6381
//the other two sentinels monitoring this master
sentinel known-sentinel mymaster 127.0.0.1 26380 4a6aebffdd1301bf054e722c34e8a6611418ba8a
sentinel known-sentinel mymaster 127.0.0.1 26381 1db1a4dcdf0ecca00b64d9362c2a2dd338da0030
sentinel current-epoch 0
?
The changes to sentinel6380.conf and sentinel6381.conf are essentially the same as in sentinel.conf; only the sentinel myid line and the two known-sentinel entries differ.
?
Testing failover
For the sentinel failover I use the default configuration (nothing extra needs to be configured; the values can be customised):
//at least 2 sentinels must agree that the master is down before a failover is performed
sentinel monitor mymaster 127.0.0.1 6379 2
//if the master gives no valid reply for 60000 ms (60 s), this sentinel marks it as subjectively down
sentinel down-after-milliseconds mymaster 60000
sentinel failover-timeout mymaster 180000
//during a failover, only one slave at a time resynchronizes with the new master
sentinel parallel-syncs mymaster 1
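//the resque lines below describe a second example master and are not used in this test (the sentinels here only monitor mymaster)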
sentinel monitor resque 192.168.1.3 6380 4
sentinel down-after-milliseconds resque 10000
sentinel failover-timeout resque 180000
sentinel parallel-syncs resque 5
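To check the values a running sentinel is actually using, you can query it directly; SENTINEL MASTER is a standard subcommand and its reply includes quorum, down-after-milliseconds, parallel-syncs and failover-timeout:
src/redis-cli -p 26379 sentinel master mymaster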
?
- 1: Check the Redis-related processes (the listing below was produced with something like ps -ef | grep redis):
root 2755 2551 0 00:11 pts/2 00:00:06 src/redis-server 127.0.0.1:6379
root 2780 2551 0 00:13 pts/2 00:00:06 src/redis-server 127.0.0.1:6380
root 2809 2551 0 00:26 pts/2 00:00:05 src/redis-server 127.0.0.1:6381
root 2816 2529 0 00:30 pts/1 00:00:00 redis-cli -h 127.0.0.1 -p 6379
root 2822 2530 0 00:33 pts/0 00:00:00 redis-cli -h 127.0.0.1 -p 6380
root 2841 2823 0 00:34 pts/6 00:00:00 redis-cli -h 127.0.0.1 -p 6381
root 2908 2551 0 01:01 pts/2 00:00:07 src/redis-sentinel *:26379 [sentinel]
root 2937 2551 0 01:08 pts/2 00:00:06 src/redis-sentinel *:26380 [sentinel]
root 2961 2551 0 01:11 pts/2 00:00:06 src/redis-sentinel *:26381 [sentinel]
root 3000 2551 0 01:50 pts/2 00:00:00 grep --color=auto redis
?
- 2: Check the overall replication state
127.0.0.1:6379> info
... (other sections trimmed) ...
# Replication
role:master
connected_slaves:2
slave0:ip=127.0.0.1,port=6380,state=online,offset=544591,lag=0
slave1:ip=127.0.0.1,port=6381,state=online,offset=544591,lag=0
master_replid:541cd938f43b4f144e647881af409fa1884ea5a4
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:544857
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:544857
... (remaining sections trimmed) ...
?
I removed the other sections and kept only the [Replication] block; the full output can be seen by running the info command in redis-cli (a shortcut for just this section is shown below).
You can see that 6379 has the master role with two slaves attached, port=6380 and port=6381.
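INFO also accepts a section name, so the same block can be fetched on its own with a standard redis-cli call:
src/redis-cli -p 6379 info replication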
?
- 3: Kill the master and watch the logs
kill -9 2755
?
The master process has been killed, so master.log shows nothing new; look at the two slaves' logs instead (excerpts).
slave6380.log
//while the master is unreachable, the slave keeps retrying the connection
2780:S 29 Jul 01:58:53.414 # Connection with master lost.
2780:S 29 Jul 01:58:53.414 * Caching the disconnected master state.
2780:S 29 Jul 01:58:54.163 * Connecting to MASTER 127.0.0.1:6379
2780:S 29 Jul 01:58:54.163 * MASTER <-> SLAVE sync started
2780:S 29 Jul 01:58:54.164 # Error condition on socket for SYNC: Connection refused
2780:S 29 Jul 01:58:55.168 * Connecting to MASTER 127.0.0.1:6379
2780:S 29 Jul 01:58:55.169 * MASTER <-> SLAVE sync started
....
...
...
2780:S 29 Jul 01:59:22.381 # Error condition on socket for SYNC: Connection refused
2780:S 29 Jul 01:59:23.389 * Connecting to MASTER 127.0.0.1:6379
2780:S 29 Jul 01:59:23.389 * MASTER <-> SLAVE sync started
2780:S 29 Jul 01:59:23.389 # Error condition on socket for SYNC: Connection refused
//after the sentinels trigger the failover
2780:S 29 Jul 01:59:24.321 * SLAVE OF 127.0.0.1:6381 enabled (user request from 'id=8 addr=127.0.0.1:52556 fd=11 name=sentinel-4a6aebff-cmd age=3070 idle=0 flags=x db=0 sub=0 psub=0 multi=3 qbuf=133 qbuf-free=32635 obl=36 oll=0 omem=0 events=r cmd=exec')
2780:S 29 Jul 01:59:24.323 # CONFIG REWRITE executed with success.
2780:S 29 Jul 01:59:24.399 * Connecting to MASTER 127.0.0.1:6381
2780:S 29 Jul 01:59:24.399 * MASTER <-> SLAVE sync started
2780:S 29 Jul 01:59:24.399 * Non blocking connect for SYNC fired the event.
2780:S 29 Jul 01:59:24.399 * Master replied to PING, replication can continue...
2780:S 29 Jul 01:59:24.399 * Trying a partial resynchronization (request 541cd938f43b4f144e647881af409fa1884ea5a4:617714).
2780:S 29 Jul 01:59:24.400 * Successful partial resynchronization with master.
2780:S 29 Jul 01:59:24.400 # Master replication ID changed to 514edab0972b4b6e5388edc4f14fbdb4d223d39e
2780:S 29 Jul 01:59:24.400 * MASTER <-> SLAVE sync: Master accepted a Partial Resynchronization.
?
From 01:58:53.414 to 01:59:23.389 the slave keeps trying to reconnect to the master. Once the master has been unreachable for longer than down-after-milliseconds, the sentinels judge it subjectively down; see the sentinel logs:
sentinel6379.log
//the master is judged subjectively down (sdown)
2908:X 29 Jul 01:59:23.492 # +sdown master mymaster 127.0.0.1 6379
//the current epoch has been updated
2908:X 29 Jul 01:59:23.546 # +new-epoch 1
//vote for the 26380 sentinel (4a6aebff...) to lead this failover
2908:X 29 Jul 01:59:23.549 # +vote-for-leader 4a6aebffdd1301bf054e722c34e8a6611418ba8a 1
//the master is judged objectively down (odown): 3 sentinels agree, 2 were required
2908:X 29 Jul 01:59:23.569 # +odown master mymaster 127.0.0.1 6379 #quorum 3/2
2908:X 29 Jul 01:59:23.569 # Next failover delay: I will not start a failover before Sun Jul 29 02:05:23 2018
2908:X 29 Jul 01:59:24.328 # +config-update-from sentinel 4a6aebffdd1301bf054e722c34e8a6611418ba8a 127.0.0.1 26380 @ mymaster 127.0.0.1 6379
2908:X 29 Jul 01:59:24.328 # +switch-master mymaster 127.0.0.1 6379 127.0.0.1 6381
2908:X 29 Jul 01:59:24.328 * +slave slave 127.0.0.1:6380 127.0.0.1 6380 @ mymaster 127.0.0.1 6381
2908:X 29 Jul 01:59:24.328 * +slave slave 127.0.0.1:6379 127.0.0.1 6379 @ mymaster 127.0.0.1 6381
2908:X 29 Jul 01:59:54.333 # +sdown slave 127.0.0.1:6379 127.0.0.1 6379 @ mymaster 127.0.0.1 6381
?
sentinel6380.log
//the master is judged subjectively down (sdown)
2937:X 29 Jul 01:59:23.459 # +sdown master mymaster 127.0.0.1 6379
//the master is judged objectively down (odown): 2 sentinels agree, 2 required
2937:X 29 Jul 01:59:23.536 # +odown master mymaster 127.0.0.1 6379 #quorum 2/2
2937:X 29 Jul 01:59:23.537 # +new-epoch 1
//try to start a failover of the master
2937:X 29 Jul 01:59:23.537 # +try-failover master mymaster 127.0.0.1 6379
//vote for itself (4a6aebff..., the 26380 sentinel) as failover leader
2937:X 29 Jul 01:59:23.540 # +vote-for-leader 4a6aebffdd1301bf054e722c34e8a6611418ba8a 1
//the other two sentinels also vote for 4a6aebffdd1301bf054e722c34e8a6611418ba8a (the 26380 sentinel)
2937:X 29 Jul 01:59:23.549 # 1db1a4dcdf0ecca00b64d9362c2a2dd338da0030 voted for 4a6aebffdd1301bf054e722c34e8a6611418ba8a 1
2937:X 29 Jul 01:59:23.549 # 1a77392638e41bb0ea0a865ffc93b8de6335227f voted for 4a6aebffdd1301bf054e722c34e8a6611418ba8a 1
//this sentinel has won the leader election and will drive the failover of mymaster 127.0.0.1 6379
2937:X 29 Jul 01:59:23.619 # +elected-leader master mymaster 127.0.0.1 6379
//failover state select-slave: start picking one of the master's slaves to promote
2937:X 29 Jul 01:59:23.619 # +failover-state-select-slave master mymaster 127.0.0.1 6379
//a slave has been selected for promotion: 127.0.0.1:6381 (a slave of mymaster 127.0.0.1 6379)
2937:X 29 Jul 01:59:23.710 # +selected-slave slave 127.0.0.1:6381 127.0.0.1 6381 @ mymaster 127.0.0.1 6379
//the sentinel sends SLAVEOF NO ONE to 6381 to promote it to master
2937:X 29 Jul 01:59:23.710 * +failover-state-send-slaveof-noone slave 127.0.0.1:6381 127.0.0.1 6381 @ mymaster 127.0.0.1 6379
//waiting for 6381 to complete its promotion
2937:X 29 Jul 01:59:23.769 * +failover-state-wait-promotion slave 127.0.0.1:6381 127.0.0.1 6381 @ mymaster 127.0.0.1 6379
//slave 6381 has been promoted to master
2937:X 29 Jul 01:59:24.251 # +promoted-slave slave 127.0.0.1:6381 127.0.0.1 6381 @ mymaster 127.0.0.1 6379
//failover state switches to reconf-slaves: the remaining slaves will be reconfigured to follow the new master
2937:X 29 Jul 01:59:24.251 # +failover-state-reconf-slaves master mymaster 127.0.0.1 6379
//the leading sentinel sends SLAVEOF to slave 6380, pointing it at the new master 6381
2937:X 29 Jul 01:59:24.321 * +slave-reconf-sent slave 127.0.0.1:6380 127.0.0.1 6380 @ mymaster 127.0.0.1 6379
//6379 is no longer flagged objectively down; the odown flag applies to the monitored master, and 6379 no longer holds that role
2937:X 29 Jul 01:59:24.653 # -odown master mymaster 127.0.0.1 6379
//6380 is reconfiguring itself as a slave of the new master 6381 (still in progress)
2937:X 29 Jul 01:59:25.270 * +slave-reconf-inprog slave 127.0.0.1:6380 127.0.0.1 6380 @ mymaster 127.0.0.1 6379
//slave 6380 has finished syncing with the new master
2937:X 29 Jul 01:59:25.271 * +slave-reconf-done slave 127.0.0.1:6380 127.0.0.1 6380 @ mymaster 127.0.0.1 6379
//the failover of master 6379 is finished; all slaves now replicate from the new master
2937:X 29 Jul 01:59:25.347 # +failover-end master mymaster 127.0.0.1 6379
//configuration change: the master address switches from 127.0.0.1 6379 to 127.0.0.1 6381
2937:X 29 Jul 01:59:25.347 # +switch-master mymaster 127.0.0.1 6379 127.0.0.1 6381
//the two slaves now attached under the new master 6381 (the old master 6379 is re-listed as a slave)
2937:X 29 Jul 01:59:25.347 * +slave slave 127.0.0.1:6380 127.0.0.1 6380 @ mymaster 127.0.0.1 6381
2937:X 29 Jul 01:59:25.347 * +slave slave 127.0.0.1:6379 127.0.0.1 6379 @ mymaster 127.0.0.1 6381
//the old master 6379, still down, is marked subjectively down as a slave of the new master
2937:X 29 Jul 01:59:55.401 # +sdown slave 127.0.0.1:6379 127.0.0.1 6379 @ mymaster 127.0.0.1 6381
?
sentinel6381.log
//the master is judged subjectively down (sdown)
2961:X 29 Jul 01:59:23.459 # +sdown master mymaster 127.0.0.1 6379
2961:X 29 Jul 01:59:23.545 # +new-epoch 1
//vote for the 26380 sentinel as failover leader
2961:X 29 Jul 01:59:23.548 # +vote-for-leader 4a6aebffdd1301bf054e722c34e8a6611418ba8a 1
2961:X 29 Jul 01:59:24.325 # +config-update-from sentinel 4a6aebffdd1301bf054e722c34e8a6611418ba8a 127.0.0.1 26380 @ mymaster 127.0.0.1 6379
2961:X 29 Jul 01:59:24.325 # +switch-master mymaster 127.0.0.1 6379 127.0.0.1 6381
2961:X 29 Jul 01:59:24.325 * +slave slave 127.0.0.1:6380 127.0.0.1 6380 @ mymaster 127.0.0.1 6381
2961:X 29 Jul 01:59:24.325 * +slave slave 127.0.0.1:6379 127.0.0.1 6379 @ mymaster 127.0.0.1 6381
2961:X 29 Jul 01:59:54.348 # +sdown slave 127.0.0.1:6379 127.0.0.1 6379 @ mymaster 127.0.0.1 6381
You can see that at 01:59:23, once the master had been unreachable for the down-after-milliseconds window, all three sentinels monitoring it judged it subjectively down (sdown). Because the configuration requires at least 2 sentinels to agree, the master was then switched to objectively down (odown) [+odown master mymaster 127.0.0.1 6379 #quorum 2/2]. After that the sentinels elected a leader and failed over to a new master; sentinel6380.log contains far more entries than the other sentinel logs because the 26380 sentinel led the whole election and failover.
?
4: Check the replication state on 6381
# Replication
role:master
connected_slaves:1
slave0:ip=127.0.0.1,port=6380,state=online,offset=1125775,lag=0
master_replid:514edab0972b4b6e5388edc4f14fbdb4d223d39e
master_replid2:541cd938f43b4f144e647881af409fa1884ea5a4
master_repl_offset:1125775
second_repl_offset:617714
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:77200
repl_backlog_histlen:1048576
You can see that 6381 now has the master role with only one connected slave, because the other one (6379) is still down. The sentinels confirm the switch as well, as shown below.
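Asking any sentinel for the current master address is a standard SENTINEL subcommand; after the switch it should return the promoted instance (reply inferred from the +switch-master events above):
src/redis-cli -p 26379 sentinel get-master-addr-by-name mymaster
1) "127.0.0.1"
2) "6381"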
?
5: Start the 6379 instance again
Check the 6379 log:
3037:S 29 Jul 02:46:47.753 # CONFIG REWRITE executed with success.
3037:S 29 Jul 02:46:48.359 * Connecting to MASTER 127.0.0.1:6381
3037:S 29 Jul 02:46:48.360 * MASTER <-> SLAVE sync started
3037:S 29 Jul 02:46:48.360 * Non blocking connect for SYNC fired the event.
3037:S 29 Jul 02:46:48.361 * Master replied to PING, replication can continue...
3037:S 29 Jul 02:46:48.362 * Trying a partial resynchronization (request 7b0dc6ac9c2188e3c92eb29eea200ea6c572619c:1).
3037:S 29 Jul 02:46:48.608 * Full resync from master: 514edab0972b4b6e5388edc4f14fbdb4d223d39e:1178142
3037:S 29 Jul 02:46:48.608 * Discarding previously cached master state.
3037:S 29 Jul 02:46:48.708 * MASTER <-> SLAVE sync: receiving 253 bytes from master
3037:S 29 Jul 02:46:48.708 * MASTER <-> SLAVE sync: Flushing old data
3037:S 29 Jul 02:46:48.708 * MASTER <-> SLAVE sync: Loading DB in memory
3037:S 29 Jul 02:46:48.708 * MASTER <-> SLAVE sync: Finished with success
3037:S 29 Jul 02:46:48.709 * Background append only file rewriting started by pid 3042
3037:S 29 Jul 02:46:48.750 * AOF rewrite child asks to stop sending diffs.
3042:C 29 Jul 02:46:48.750 * Parent agreed to stop sending diffs. Finalizing AOF...
3042:C 29 Jul 02:46:48.750 * Concatenating 0.00 MB of AOF diff received from parent.
3042:C 29 Jul 02:46:48.750 * SYNC append only file rewrite performed
3042:C 29 Jul 02:46:48.750 * AOF rewrite: 6 MB of memory used by copy-on-write
3037:S 29 Jul 02:46:48.781 * Background AOF rewrite terminated with success
3037:S 29 Jul 02:46:48.782 * Residual parent diff successfully flushed to the rewritten AOF (0.00 MB)
3037:S 29 Jul 02:46:48.782 * Background AOF rewrite finished successfully
?
After starting, 6379 rewrites its config file, automatically connects to the new master 6381, and performs a full resync of the data from it (the partial resync attempt is rejected because the replication IDs no longer match).
?
Check the log of the new master, 6381:
?
//6379 asks for synchronization
2809:M 29 Jul 02:46:48.362 * Slave 127.0.0.1:6379 asks for synchronization
//the partial resync request is rejected: replication ID mismatch
2809:M 29 Jul 02:46:48.362 * Partial resynchronization not accepted: Replication ID mismatch (Slave asked for '7b0dc6ac9c2188e3c92eb29eea200ea6c572619c', my replication IDs are '514edab0972b4b6e5388edc4f14fbdb4d223d39e' and '541cd938f43b4f144e647881af409fa1884ea5a4')
//start a BGSAVE for a full sync, writing the RDB to disk
2809:M 29 Jul 02:46:48.362 * Starting BGSAVE for SYNC with target: disk
2809:M 29 Jul 02:46:48.607 * Background saving started by pid 3041
3041:C 29 Jul 02:46:48.607 * DB saved on disk
//6 MB of memory used by copy-on-write during the save
3041:C 29 Jul 02:46:48.608 * RDB: 6 MB of memory used by copy-on-write
//the background save succeeded
2809:M 29 Jul 02:46:48.708 * Background saving terminated with success
2809:M 29 Jul 02:46:48.708 * Synchronization with slave 127.0.0.1:6379 succeeded
?
Check the sentinel logs
sentinel6379.log
//the sdown flag on slave 6379 is cleared
2908:X 29 Jul 02:46:37.610 # -sdown slave 127.0.0.1:6379 127.0.0.1 6379 @ mymaster 127.0.0.1 6381
//6379 is told to become a slave of the new master 6381
2908:X 29 Jul 02:46:47.628 * +convert-to-slave slave 127.0.0.1:6379 127.0.0.1 6379 @ mymaster 127.0.0.1 6381
?
sentinel6380.log
2937:X 29 Jul 02:46:37.767 # -sdown slave 127.0.0.1:6379 127.0.0.1 6379 @ mymaster 127.0.0.1 6381
?
sentinel6381.log
2961:X 29 Jul 02:46:38.023 # -sdown slave 127.0.0.1:6379 127.0.0.1 6379 @ mymaster 127.0.0.1 6381
?
Now look at the full replication info of the new master 6381 again:
# Replication
role:master
connected_slaves:2
slave0:ip=127.0.0.1,port=6380,state=online,offset=1348132,lag=0
slave1:ip=127.0.0.1,port=6379,state=online,offset=1348132,lag=0
master_replid:514edab0972b4b6e5388edc4f14fbdb4d223d39e
master_replid2:541cd938f43b4f144e647881af409fa1884ea5a4
master_repl_offset:1348132
second_repl_offset:617714
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:299557
repl_backlog_histlen:1048576
The new master has gained 6379 as an additional slave.
?
The Redis configuration we wrote before the setup gets rewritten once the failover succeeds; the rewritten content is mostly appended at the end of each config file.
redis.conf (the old master, 6379) now additionally contains:
# Generated by CONFIG REWRITE
slaveof 127.0.0.1 6381
?
redis_slave6380.conf (the remaining slave) still carries a slaveof line, but it has been rewritten to point at the new master:
# Master-Slave replication. Use slaveof to make a Redis instance a copy of
# slaveof <masterip> <masterport>
slaveof 127.0.0.1 6381
?
The new master's config, redis_slave6381.conf, no longer contains a slaveof directive:
# Master-Slave replication. Use slaveof to make a Redis instance a copy of
# slaveof <masterip> <masterport>
The sentinel config files change as well; take a look for yourself (a rough sketch of what changes follows).
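Roughly what to expect in sentinel.conf after the failover, sketched from the known-slave/known-sentinel format shown earlier and the +switch-master/+new-epoch events (not a verbatim dump; the known-sentinel lines stay as before):
sentinel monitor mymaster 127.0.0.1 6381 2
sentinel known-slave mymaster 127.0.0.1 6379
sentinel known-slave mymaster 127.0.0.1 6380
sentinel current-epoch 1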
?
6: Test replication again after the failover
The previous master, now a slave, no longer accepts writes:
127.0.0.1:6379> set name zhangxs
(error) READONLY You can't write against a read only slave.
Writes on the new master succeed and are replicated:
127.0.0.1:6381> set name zhangxs
OK
127.0.0.1:6379> get name
"zhangxs"
127.0.0.1:6380> get name
"zhangxs"
Replication is working.
?
After the failover the roles are:
| Role | Port | Redis config file | Sentinel config file | Sentinel port | Redis log path | Sentinel log path |
| --- | --- | --- | --- | --- | --- | --- |
| slave | 6379 | redis.conf | sentinel.conf | 26379 | /home/zhangxs/data/redislog/redis_server/master.log | /home/zhangxs/data/redislog/sentinel/sentinel6379.log |
| slave | 6380 | redis_slave6380.conf | sentinel6380.conf | 26380 | /home/zhangxs/data/redislog/redis_server/slave6380.log | /home/zhangxs/data/redislog/sentinel/sentinel6380.log |
| master | 6381 | redis_slave6381.conf | sentinel6381.conf | 26381 | /home/zhangxs/data/redislog/redis_server/slave6381.log | /home/zhangxs/data/redislog/sentinel/sentinel6381.log |
?
Reference: http://www.redis.cn/topics