The official MySQL documentation contains this warning:
MySQL Proxy is currently an Alpha release and should not be used within production environments.
So...
We use HAProxy for the job instead. The following is for reference only.
Environment
master     192.168.1.106    master1
slave1     192.168.1.107    master2 <--> master1 (master-master replication with master1)
slave2     192.168.1.110    slave2 ----> master1 (slave of master1)
slave3     192.168.1.111    slave3 ----> master1 (slave of master1)
slave4     192.168.1.112    slave4 ----> master2 (slave of master2)
monitor    192.168.1.200
192.168.1.105    eth1    write IP
192.168.1.113    eth2    read IP
Notes:
When replication stops on master, slave1 becomes the primary; HAProxy stops sending requests to master, slave2 and slave3, while slave1, slave2 and slave3 can still receive logs from master.
When replication stops on slave1, master becomes the primary; HAProxy stops sending requests to slave1 and slave4, while master and slave4 can still receive logs from slave1.
When replication stops on both master and slave1, the two masters go read-only and the database no longer accepts writes; HAProxy stops sending requests to slave2, slave3 and slave4 (split-brain).
When slave1 is offline, master enters backup mode; HAProxy stops sending requests to slave1 and slave4.
When master is offline, slave1 enters backup mode; HAProxy stops sending requests to master, slave2 and slave3.
When master and slave1 are both offline, the whole database cluster stops working.
1. Master/slave replication setup (omitted here; a minimal sketch follows)
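The article skips this step, so here is only a minimal sketch of the master-master pieces, assuming a replication account named repl (credentials are placeholders) and placeholder binlog coordinates; take the real ones from SHOW MASTER STATUS on the other side:

# /etc/my.cnf on master (192.168.1.106); use server-id = 2 on slave1 (192.168.1.107)
[mysqld]
server-id                = 1
log-bin                  = mysql-bin
# keep auto-increment values from colliding between the two masters
auto_increment_increment = 2
auto_increment_offset    = 1

-- then, in mysql on each master, point replication at the other one:
CHANGE MASTER TO
  MASTER_HOST='192.168.1.107',
  MASTER_USER='repl',
  MASTER_PASSWORD='repl_password',
  MASTER_LOG_FILE='mysql-bin.000001',
  MASTER_LOG_POS=4;
START SLAVE;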
2. Install xinetd and configure the mysqlchk services
vi /etc/xinetd.d/mysqlchk
-- Configuration on the two masters:
service mysqlchk-write
{
    flags           = REUSE
    socket_type     = stream
    port            = 9201
    wait            = no
    user            = root
    server          = /opt/script/mysqlchk_status.sh
    log_on_failure  += USERID
    disable         = no
    only_from       = 192.168.1.0/24   # recommended to put the IPs that need
                                       # to connect exclusively (security purposes)
    per_source      = UNLIMITED
}
service mysqlchk-read
{
    flags           = REUSE
    socket_type     = stream
    port            = 9202
    wait            = no
    user            = root
    server          = /opt/script/mysqlchk_replication.sh
    log_on_failure  += USERID
    disable         = no
    only_from       = 192.168.1.0/24   # recommended to put the IPs that need
                                       # to connect exclusively (security purposes)
    per_source      = UNLIMITED
}
-- On all slaves, only the replication-status check service is needed:
service mysqlchk-read
{
    flags           = REUSE
    socket_type     = stream
    port            = 9202
    wait            = no
    user            = root
    server          = /opt/script/mysqlchk_replication.sh
    log_on_failure  += USERID
    disable         = no
    only_from       = 192.168.1.0/24   # recommended to put the IPs that need
                                       # to connect exclusively (security purposes)
    per_source      = UNLIMITED
}
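The two check scripts are referenced above but never shown in this article. A minimal sketch of each follows, assuming a local monitoring account (haproxy_check / haproxy_pass, both placeholders); each script answers the incoming xinetd connection with a bare HTTP response, which is what HAProxy's option httpchk consumes on ports 9201/9202 (200 = healthy, 503 = down):

#!/bin/bash
# /opt/script/mysqlchk_status.sh - is mysqld alive?
MYSQL="mysql -h127.0.0.1 -uhaproxy_check -phaproxy_pass"    # placeholder credentials
if $MYSQL -e 'SELECT 1;' >/dev/null 2>&1; then
    printf 'HTTP/1.1 200 OK\r\nContent-Type: text/plain\r\nConnection: close\r\n\r\nMySQL is up.\r\n'
else
    printf 'HTTP/1.1 503 Service Unavailable\r\nContent-Type: text/plain\r\nConnection: close\r\n\r\nMySQL is down.\r\n'
fi

#!/bin/bash
# /opt/script/mysqlchk_replication.sh - are both replication threads running?
MYSQL="mysql -h127.0.0.1 -uhaproxy_check -phaproxy_pass"    # placeholder credentials
STATUS=$($MYSQL -e 'SHOW SLAVE STATUS\G' 2>/dev/null)
IO=$(echo "$STATUS"  | awk '/Slave_IO_Running:/  {print $2}')
SQL=$(echo "$STATUS" | awk '/Slave_SQL_Running:/ {print $2}')
if [ "$IO" = "Yes" ] && [ "$SQL" = "Yes" ]; then
    printf 'HTTP/1.1 200 OK\r\nContent-Type: text/plain\r\nConnection: close\r\n\r\nReplication is running.\r\n'
else
    printf 'HTTP/1.1 503 Service Unavailable\r\nContent-Type: text/plain\r\nConnection: close\r\n\r\nReplication is broken.\r\n'
fi

Make both scripts executable (chmod 755).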
vi /etc/services
-- Add on the two masters:
mysqlchk-write    9201/tcp    # MySQL status check
mysqlchk-read     9202/tcp    # MySQL replication check
-- Add on all slaves:
mysqlchk-read     9202/tcp    # MySQL replication check
Restart xinetd:
# /etc/init.d/xinetd stop
# /etc/init.d/xinetd start
Confirm the ports are listening:
[root@master xinetd.d]# netstat -antup | grep xinetd
tcp        0      0 0.0.0.0:9201        0.0.0.0:*        LISTEN      3077/xinetd
tcp        0      0 0.0.0.0:9202        0.0.0.0:*        LISTEN      3077/xinetd
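From the monitor host the checks can also be exercised directly; the HTTP status line emitted by the script should come back as 200 (or 503 once mysqld or replication is broken):

curl -i http://192.168.1.106:9201    # mysql status check on master
curl -i http://192.168.1.106:9202    # replication check on master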
3. Install HAProxy on the monitor host
tar zxvf haproxy-1.4.23.tar.gz
cd haproxy-1.4.23
make TARGET=linux26 ARCH=x86_64
make install
4. Create the HAProxy configuration file
vi /usr/local/haproxy-1.4.23/conf/haproxy-db.cfg
# HAProxy configuration - haproxy-db.cfg

global
    maxconn 4096
    daemon
    pidfile /usr/local/haproxy-1.4.23/haproxy.pid
    #debug
    #quiet
    #chroot /usr/share/haproxy

defaults
    log     global
    mode    http
    #option httplog
    option  dontlognull
    log     127.0.0.1 local0
    retries 3
    option  redispatch
    maxconn 4096
    timeout connect 1000ms
    timeout client  50000ms
    timeout server  50000ms

listen stats :8011
    balance
    mode http
    stats enable
    stats auth root:monitor
##
## FRONTEND ##
##

# Load-balanced IPs for DB writes and reads
#
frontend db_write
    mode tcp
    bind 192.168.1.105:3306
    default_backend cluster_db_write

frontend db_read
    mode tcp
    bind 192.168.1.113:3306
    default_backend cluster_db_read
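#
# Applications send writes to the write VIP and reads to the read VIP,
# e.g. with an assumed "app" account (not part of this setup):
#   mysql -h 192.168.1.105 -P 3306 -u app -p    # writes -> cluster_db_write
#   mysql -h 192.168.1.113 -P 3306 -u app -p    # reads  -> cluster_db_read
#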
# Monitor DB server availability
#
frontend monitor_master
    #
    # set master_backup to 'up' or 'down'
    #
    bind 127.0.0.1:9301
    mode http
    #option nolinger
    acl no_repl_master nbsrv(master_replication) eq 0
    acl no_repl_slave1 nbsrv(slave1_replication) eq 0
    acl no_master      nbsrv(master_status) eq 0
    acl no_slave1      nbsrv(slave1_status) eq 0
    monitor-uri /monitor
    monitor fail unless no_repl_master no_repl_slave1 no_slave1
    monitor fail if no_master no_slave1
frontend monitor_slave1
    #
    # set slave1_backup to 'up' or 'down'
    #
    bind 127.0.0.1:9302
    mode http
    #option nolinger
    acl no_repl_master nbsrv(master_replication) eq 0
    acl no_repl_slave1 nbsrv(slave1_replication) eq 0
    acl no_master      nbsrv(master_status) eq 0
    acl no_slave1      nbsrv(slave1_status) eq 0
    monitor-uri /monitor
    monitor fail unless no_repl_master no_repl_slave1 no_master
    monitor fail if no_master no_slave1
frontend monitor_slave2
    #
    # set slave2 read-only slave to 'down'
    #
    bind 127.0.0.1:9303
    mode http
    #option nolinger
    acl no_repl_slave2 nbsrv(slave2_replication) eq 0
    acl no_repl_master nbsrv(master_replication) eq 0
    acl slave1         nbsrv(slave1_status) eq 1
    monitor-uri /monitor
    monitor fail if no_repl_slave2
    monitor fail if no_repl_master slave1
frontend monitor_slave3
    #
    # set slave3 read-only slave to 'down'
    #
    bind 127.0.0.1:9304
    mode http
    #option nolinger
    acl no_repl_slave3 nbsrv(slave3_replication) eq 0
    acl no_repl_master nbsrv(master_replication) eq 0
    acl slave1         nbsrv(slave1_status) eq 1
    monitor-uri /monitor
    monitor fail if no_repl_slave3
    monitor fail if no_repl_master slave1
frontend monitor_slave4
    #
    # set slave4 read-only slave to 'down'
    #
    bind 127.0.0.1:9305
    mode http
    #option nolinger
    acl no_repl_slave4 nbsrv(slave4_replication) eq 0
    acl no_repl_slave1 nbsrv(slave1_replication) eq 0
    acl master         nbsrv(master_status) eq 1
    monitor-uri /monitor
    monitor fail if no_repl_slave4
    monitor fail if no_repl_slave1 master
# Monitor for split-brain syndrome
#
frontend monitor_splitbrain
    #
    # set master_splitbrain and slave1_splitbrain to 'up'
    #
    bind 127.0.0.1:9300
    mode http
    #option nolinger
    acl no_repl01 nbsrv(master_replication) eq 0
    acl no_repl02 nbsrv(slave1_replication) eq 0
    acl master    nbsrv(master_status) eq 1
    acl slave1    nbsrv(slave1_status) eq 1
    monitor-uri /monitor
    monitor fail unless no_repl01 no_repl02 master slave1
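#
# The monitor frontends above exist only to be polled by the health checks in
# the backends below; on the monitor host they can also be queried by hand
# (200 = condition holds, 503 = monitor fail), e.g.:
#   curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:9301/monitor   # master_backup gate
#   curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:9300/monitor   # split-brain detector
#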
##
## BACKEND ##
##

# Check every DB server's replication status
# - perform an http check on port 9202 (replication status)
# - set to 'down' if response is '503 Service Unavailable'
# - set to 'up' if response is '200 OK'
#
backend master_replication
    mode tcp
    balance roundrobin
    option tcpka
    option httpchk
    server master 192.168.1.106:3306 check port 9202 inter 5s rise 1 fall 1

backend slave1_replication
    mode tcp
    balance roundrobin
    option tcpka
    option httpchk
    server slave1 192.168.1.107:3306 check port 9202 inter 5s rise 1 fall 1

backend slave2_replication
    mode tcp
    balance roundrobin
    option tcpka
    option httpchk
    server slave2 192.168.1.110:3306 check port 9202 inter 5s rise 1 fall 1

backend slave3_replication
    mode tcp
    balance roundrobin
    option tcpka
    option httpchk
    server slave3 192.168.1.111:3306 check port 9202 inter 5s rise 1 fall 1

backend slave4_replication
    mode tcp
    balance roundrobin
    option tcpka
    option httpchk
    server slave4 192.168.1.112:3306 check port 9202 inter 5s rise 1 fall 1
# Check the two master DB servers' mysql status
# - perform an http check on port 9201 (mysql status)
# - set to 'down' if response is '503 Service Unavailable'
# - set to 'up' if response is '200 OK'
#
backend master_status
    mode tcp
    balance roundrobin
    option tcpka
    option httpchk
    server master 192.168.1.106:3306 check port 9201 inter 5s rise 2 fall 2

backend slave1_status
    mode tcp
    balance roundrobin
    option tcpka
    option httpchk
    server slave1 192.168.1.107:3306 check port 9201 inter 5s rise 2 fall 2
# DB write cluster
# Failure scenarios:
# - replication 'up' on master & slave1   = writes to master
# - replication 'down' on slave1          = writes to master
# - replication 'down' on master          = writes to slave1
# - replication 'down' on master & slave1 = go nowhere, split-brain, cluster FAIL!
# - mysql 'down' on slave1                = writes to master_backup
# - mysql 'down' on master                = writes to slave1_backup
# - mysql 'down' on master & slave1       = go nowhere, cluster FAIL!
#
backend cluster_db_write
    #
    # - max 1 db server available at all times
    # - master is preferred (top of list)
    # - db_backups set their 'up' or 'down' based on results from the
    #   monitor_master / monitor_slave1 frontends
    #
    mode tcp
    option tcpka
    balance roundrobin
    option httpchk GET /monitor
    server master 192.168.1.106:3306 weight 1 check port 9202 inter 5s rise 2 fall 1
    server slave1 192.168.1.107:3306 weight 1 check port 9202 inter 5s rise 2 fall 1 backup
    server master_backup 192.168.1.106:3306 weight 1 check port 9301 inter 5s rise 2 fall 2 addr 127.0.0.1 backup
    server slave1_backup 192.168.1.107:3306 weight 1 check port 9302 inter 5s rise 2 fall 2 addr 127.0.0.1 backup
# DB read cluster
# Failure scenarios:
# - replication 'up' on master & slave1   = reads on master, slave1, all db_slaves
# - replication 'down' on slave1          = reads on master, slaves of master
# - replication 'down' on master          = reads on slave1, slaves of slave1
# - replication 'down' on master & slave1 = reads on master_splitbrain and slave1_splitbrain only
# - mysql 'down' on slave1                = reads on master_backup, slaves of master
# - mysql 'down' on master                = reads on slave1_backup, slaves of slave1
# - mysql 'down' on master & slave1       = go nowhere, cluster FAIL!
#
backend cluster_db_read
    #
    # - max 2 master db servers available at all times
    # - max N slave db servers available at all times, except during split-brain
    # - master and slave1 track the 'up' and 'down' of the same servers in cluster_db_write
    # - db_backups track the 'up' and 'down' of the db_backups in cluster_db_write
    # - db_splitbrains set their 'up' or 'down' based on results from monitor_splitbrain
    #
    mode tcp
    option tcpka
    balance roundrobin
    option httpchk GET /monitor
    server master 192.168.1.106:3306 weight 1 track cluster_db_write/master
    server slave1 192.168.1.107:3306 weight 1 track cluster_db_write/slave1
    server master_backup 192.168.1.106:3306 weight 1 track cluster_db_write/master_backup
    server slave1_backup 192.168.1.107:3306 weight 1 track cluster_db_write/slave1_backup
    server master_splitbrain 192.168.1.106:3306 weight 1 check port 9300 inter 5s rise 1 fall 2 addr 127.0.0.1
    server slave1_splitbrain 192.168.1.107:3306 weight 1 check port 9300 inter 5s rise 1 fall 2 addr 127.0.0.1
    #
    # Scaling & redundancy options
    # - db_slaves set their 'up' or 'down' based on results from the
    #   monitor_slave2/3/4 frontends
    # - db_slaves should take longer to rise
    #
    server slave2_slave 192.168.1.110:3306 weight 1 check port 9303 inter 5s rise 5 fall 1 addr 127.0.0.1
    server slave3_slave 192.168.1.111:3306 weight 1 check port 9304 inter 5s rise 5 fall 1 addr 127.0.0.1
    server slave4_slave 192.168.1.112:3306 weight 1 check port 9305 inter 5s rise 5 fall 1 addr 127.0.0.1
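Before starting, the file can be syntax-checked with HAProxy's check mode:

haproxy -c -f /usr/local/haproxy-1.4.23/conf/haproxy-db.cfg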
5. Start HAProxy
haproxy -f /usr/local/haproxy-1.4.23/conf/haproxy-db.cfg
Stats page: http://192.168.1.200:8011/haproxy?stats
user: root    password: monitor
Notes on some of the parameters (from the HAProxy documentation):
maxconn <number>
Sets the maximum per-process number of concurrent connections to <number>. It
is equivalent to the command-line argument "-n". Proxies will stop accepting
connections when this limit is reached.
daemon
Makes the process fork into background. This is the recommended mode of
operation. It is equivalent to the command line "-D" argument. It can be
disabled by the command line "-db" argument.
pidfile <pidfile>
Writes pids of all daemons into file <pidfile>. This option is equivalent to
the "-p" command line argument. The file must be accessible to the user
starting the process.
retries <value>
Set the number of retries to perform on a server after a connection failure
May be used in sections:    defaults | frontend | listen | backend
                               yes   |    no    |   yes  |   yes
Arguments :
<value>   is the number of times a connection attempt should be retried on
          a server when a connection either is refused or times out. The
          default value is 3.
It is important to understand that this value applies to the number of
connection attempts, not full requests. When a connection has effectively
been established to a server, there will be no more retry.
In order to avoid immediate reconnections to a server which is restarting,
a turn-around timer of 1 second is applied before a retry occurs.
When "option redispatch" is set, the last retry may be performed on another
server even if a cookie references a different server.
See also : "option redispatch"
option redispatch
no option redispatch
Enable or disable session redistribution in case of connection failure
May be used in sections:    defaults | frontend | listen | backend
                               yes   |    no    |   yes  |   yes
Arguments : none
In HTTP mode, if a server designated by a cookie is down, clients may
definitely stick to it because they cannot flush the cookie, so they will not
be able to access the service anymore.
Specifying "option redispatch" will allow the proxy to break their
persistence and redistribute them to a working server.
It also allows to retry last connection to another server in case of multiple
connection failures. Of course, it requires having "retries" set to a nonzero
value.
This form is the preferred form, which replaces both the "redispatch" and
"redisp" keywords.
If this option has been enabled in a "defaults" section, it can be disabled
in a specific instance by prepending the "no" keyword before it.
option dontlognull
no option dontlognull
Enable or disable logging of null connections
May be used in sections :   defaults | frontend | listen | backend
                               yes   |    yes   |   yes  |   no
Arguments : none
In certain environments, there are components which will regularly connect to
various systems to ensure that they are still alive. It can be the case from
another load balancer as well as from monitoring systems. By default, even a
simple port probe or scan will produce a log. If those connections pollute
the logs too much, it is possible to enable option "dontlognull" to indicate
that a connection on which no data has been transferred will not be logged,
which typically corresponds to those probes.
It is generally recommended not to use this option in uncontrolled
environments (eg: internet), otherwise scans and other malicious activities
would not be logged.
If this option has been enabled in a "defaults" section, it can be disabled
in a specific instance by prepending the "no" keyword before it.
Finally, use keepalived to give the proxy layer itself HA.
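A minimal keepalived sketch for that, assuming a second HAProxy node (the router id, priorities and check interval are placeholder choices); the VIPs are the write/read IPs from the environment above:

# /etc/keepalived/keepalived.conf
vrrp_script chk_haproxy {
    script "killall -0 haproxy"    # succeeds while an haproxy process exists
    interval 2
    weight -20
}
vrrp_instance VI_WRITE {
    state MASTER                   # BACKUP on the standby node
    interface eth1
    virtual_router_id 51
    priority 101                   # lower (e.g. 100) on the standby
    advert_int 1
    virtual_ipaddress {
        192.168.1.105              # write VIP
    }
    track_script {
        chk_haproxy
    }
}
# a mirror vrrp_instance on eth2 carries the read VIP 192.168.1.113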