
Using HAProxy for MySQL failover and redundancy

Date: 2010-02-27  Source: pkman110

August 10, 2009, 7:00 am


Tags: haproxy, mysql, redundancy, scaling

Update 1 (Aug 29, 2009): This configuration wasn’t working with HAProxy version 1.3.20 due to the “option nolinger” feature. I have removed it from the configuration and can confirm the config works well with HAProxy v1.3.15 through v1.3.20. With nolinger removed, you’ll also notice a significant increase in TIME_WAIT sessions, and ip_conntrack_count rises from ~150 to ~925.

This post summarizes my reflections on failover, redundancy, and ultimately scaling MySQL databases using load-balancing software known as HAProxy.

At my current employer, we have been using HAProxy to build very simple server clusters to help clients scale their databases. It works for most people assuming their application:

  • Has a high read-to-write ratio (e.g., 100:1)
  • Can separate reads and writes at the application level

If your read/write ratio is lower, you’ll need to look into a different scaling solution such as sharding.
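If you’re not sure where your application stands, you can get a rough estimate of the ratio from MySQL’s Com_* counters. This is a sketch of my own, not part of the original post; the mysql invocation assumes default local credentials:

```shell
#!/bin/bash
# Rough read/write ratio estimate from MySQL's Com_* status counters.
# The mysql invocation and the awk arithmetic are illustrative assumptions.

reads_writes_ratio() {
    # usage: reads_writes_ratio READS WRITES   -> prints e.g. "100:1"
    awk -v r="$1" -v w="$2" 'BEGIN { printf "%.0f:1\n", r / w }'
}

reads=$(mysql -N -e "SHOW GLOBAL STATUS LIKE 'Com_select'" 2>/dev/null | awk '{ print $2 }')
writes=$(mysql -N -e "SHOW GLOBAL STATUS LIKE 'Com_%'" 2>/dev/null | \
    awk '$1 == "Com_insert" || $1 == "Com_update" || $1 == "Com_delete" { s += $2 } \
         END { if (s != "") print s }')

# Only print a ratio if we actually got counters back from the server
if [ -n "$reads" ] && [ -n "$writes" ] && [ "$writes" -gt 0 ]; then
    reads_writes_ratio "$reads" "$writes"
fi
```

A server reporting 1,000,000 selects against 10,000 combined inserts/updates/deletes would print 100:1.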

I’ve designed a slightly more complex HAProxy configuration file which load-balances requests to MySQL databases. It detects failures such as broken replication and offline servers, and adjusts the availability of servers accordingly.

Each database server runs an xinetd daemon: port 9201 is used to monitor replication, and port 9200 is used to monitor MySQL status. HAProxy checks these ports, as you will see in the configuration file below.

HAProxy backend to monitor replication

backend db01_replication
        mode tcp
        balance roundrobin
        option tcpka
        option httpchk
        server db01 172.16.0.60:3306 check port 9201 inter 1s rise 1 fall 1

HAProxy backend to monitor mysql status

backend db01_status
        mode tcp
        balance roundrobin
        option tcpka
        option httpchk
        server db01 172.16.0.60:3306 check port 9200 inter 1s rise 2 fall 2

I based my mysqlchk_status.sh script on the one found at SysBible, with my own modifications.

The mysqlchk_replication.sh script is similar to the one above, except that it also checks variables such as Slave_IO_Running, Slave_SQL_Running and Seconds_Behind_Master. Success always returns a ‘200 OK’ and failure always returns a ‘503 Service Unavailable’.
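Since mysqlchk_replication.sh isn’t reproduced in this post, here is a rough reconstruction of what such a check could look like based on the description above. The MAX_LAG threshold and the replication_ok helper are my assumptions, not the author’s code:

```shell
#!/bin/bash
#
# /opt/mysqlchk_replication.sh (sketch, not the original script)
#

MYSQL_HOST="172.16.0.60"
MYSQL_PORT="3306"
MYSQL_USERNAME="replication_user"
MYSQL_PASSWORD="replication_pass"
MAX_LAG=30      # max tolerated Seconds_Behind_Master (assumption)

# Reads 'SHOW SLAVE STATUS\G' output on stdin; succeeds only if both
# replication threads are running and the slave isn't lagging too far.
replication_ok() {
    local status io sql lag
    status=$(cat)
    io=$(echo "$status"  | awk '/Slave_IO_Running:/  { print $2 }')
    sql=$(echo "$status" | awk '/Slave_SQL_Running:/ { print $2 }')
    lag=$(echo "$status" | awk '/Seconds_Behind_Master:/ { print $2 }')
    [ "$io" = "Yes" ]  || return 1
    [ "$sql" = "Yes" ] || return 1
    case "$lag" in '' | NULL | *[!0-9]*) return 1 ;; esac
    [ "$lag" -le "$MAX_LAG" ]
}

SLAVE_STATUS=$(/usr/bin/mysql --host=$MYSQL_HOST --port=$MYSQL_PORT \
    --user=$MYSQL_USERNAME --password=$MYSQL_PASSWORD \
    -e 'show slave status\G' 2>/dev/null)

if echo "$SLAVE_STATUS" | replication_ok
then
        /bin/echo -e "HTTP/1.1 200 OK\r\n"
        /bin/echo -e "Content-Type: text/plain\r\n"
        /bin/echo -e "\r\n"
        /bin/echo -e "MySQL replication is running.\r\n"
else
        /bin/echo -e "HTTP/1.1 503 Service Unavailable\r\n"
        /bin/echo -e "Content-Type: text/plain\r\n"
        /bin/echo -e "\r\n"
        /bin/echo -e "MySQL replication is *down*.\r\n"
fi
```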

My test setup

  • 2 HAProxy load-balancers in Active-Passive mode (VRRP using Keepalived)
  • 2 MySQL database servers with Master-Master replication in Active-Passive mode
  • 3 MySQL database servers with slave replication (read-only)
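The Keepalived side isn’t shown in this post; a minimal sketch of the active load-balancer’s VRRP configuration might look like this. The VIPs 172.16.0.50/.51 come from the HAProxy frontends below, but the interface name, router id, and priorities are my assumptions (the passive node would use state BACKUP and a lower priority):

```
# /etc/keepalived/keepalived.conf on the active load-balancer (sketch)
vrrp_instance VI_DB {
    state MASTER
    interface eth0          # assumption: adjust to your NIC
    virtual_router_id 51    # must match on both load-balancers
    priority 101            # passive node: lower, e.g. 100
    advert_int 1
    virtual_ipaddress {
        172.16.0.50         # db_write frontend VIP
        172.16.0.51         # db_read frontend VIP
    }
}
```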

Failure scenarios

Based on a small set of failure scenarios, we can determine how the load balancers should direct traffic. We obviously don’t want read requests going to a database server that isn’t replicating from its master, and we don’t want to send writes to a server that’s offline. The examples below describe how HAProxy reacts in each scenario.

1. Replication breaks, lags, or stops working on DB02

  • DB01 becomes the master database server.
  • HAProxy stops sending requests to DB02 and DB05 (its slave).
  • Despite this, DB01 and DB05 are still able to receive replicated data from DB02.

2. Replication breaks, lags, or stops working on DB01

  • DB02 becomes the master database server.
  • HAProxy stops sending requests to DB01, DB03 and DB04 (its slaves).
  • Despite this, DB02, DB03 and DB04 are still able to receive replicated data from DB01.

3. Replication breaks, lags, or stops working on DB01 & DB02

  • There is no writable master database server. Service is severely degraded and action should be taken to bring one master server back into replication.
  • This is a split-brain problem: both servers are online, but they aren’t replicating from each other.
  • HAProxy only sends read requests to DB01 and DB02
  • HAProxy stops sending requests to DB03, DB04 and DB05 (the slaves).
  • Despite this, DB03 and DB04 are still able to receive replicated data from DB01.
  • Despite this, DB05 is still able to receive replicated data from DB02.

4. DB02 is offline, due to a server crash or something similar

  • DB01 becomes the master database server.
  • HAProxy stops sending requests to DB02 and DB05 (its slave).
  • DB05 can’t receive replicated data from DB02.
  • DB01 switches to its backup entry (db01_backup), which can have different settings to support more concurrency, send alerts, etc.

5. DB01 is offline, due to a server crash or something similar

  • DB02 becomes the master database server.
  • HAProxy stops sending requests to DB01, DB03 and DB04 (its slaves).
  • DB03 and DB04 can’t receive replicated data from DB01.
  • DB02 switches to its backup entry (db02_backup), which can have different settings to support more concurrency, send alerts, etc.

6. DB01 and DB02 are offline, due to a server crash or something similar

  • There is no master database server.
  • HAProxy stops sending requests to all DB servers.
  • Call your sysadmin because your website is probably down.

Download

WARNING / DISCLAIMER
This configuration has not been tested in a production environment and should be used at your own risk.

Here are the scripts and config files, or scroll down to view the code:

  • xinetd-mysqlchk
  • mysqlchk_status.sh
  • haproxy-db.cfg

The xinetd config file

#
# /etc/xinetd.d/mysqlchk
#
service mysqlchk_write
{
        flags           = REUSE
        socket_type     = stream
        port            = 9200
        wait            = no
        user            = nobody
        server          = /opt/mysqlchk_status.sh
        log_on_failure  += USERID
        disable         = no
        only_from       = 172.16.0.0/24 # recommended to put the IPs that need
                                    # to connect exclusively (security purposes)
}
 
service mysqlchk_read
{
        flags           = REUSE
        socket_type     = stream
        port            = 9201
        wait            = no
        user            = nobody
        server          = /opt/mysqlchk_replication.sh
        log_on_failure  += USERID
        disable         = no
        only_from       = 172.16.0.0/24 # recommended to put the IPs that need
                                    # to connect exclusively (security purposes)
}

The mysqlchk_status script

#!/bin/bash
#
# /opt/mysqlchk_status.sh
#
# This script checks if a mysql server running on localhost is healthy. It will
# return:
#
# "HTTP/1.x 200 OK\r" (if mysql is running smoothly)
#
# - OR -
#
# "HTTP/1.x 503 Service Unavailable\r" (else)
#
# The purpose of this script is to make haproxy capable of monitoring mysql properly
#
# Author: Unai Rodriguez
#
# It is recommended that a low-privileged-mysql user is created to be used by
# this script. Something like this:
#
# mysql> GRANT SELECT on mysql.* TO 'mysqlchkusr'@'localhost' \
#     -> IDENTIFIED BY '257retfg2uysg218' WITH GRANT OPTION;
# mysql> flush privileges;
#
# Script modified by Alex Williams - August 4, 2009
#       - removed the need to write to a tmp file, instead store results in memory
 
MYSQL_HOST="172.16.0.60"
MYSQL_PORT="3306"
MYSQL_USERNAME="replication_user"
MYSQL_PASSWORD="replication_pass"
 
#
# We perform a simple query that should return a few results :-p
 
ERROR_MSG=`/usr/bin/mysql --host=$MYSQL_HOST --port=$MYSQL_PORT --user=$MYSQL_USERNAME --password=$MYSQL_PASSWORD -e "show databases;" 2>/dev/null`
 
#
# Check the output. If it is not empty then everything is fine and we return
# something. Else, we just do not return anything.
#
if [ "$ERROR_MSG" != "" ]
then
        # mysql is fine, return http 200
        /bin/echo -e "HTTP/1.1 200 OK\r\n"
        /bin/echo -e "Content-Type: text/plain\r\n"
        /bin/echo -e "\r\n"
        /bin/echo -e "MySQL is running.\r\n"
        /bin/echo -e "\r\n"
else
        # mysql is down, return http 503
        /bin/echo -e "HTTP/1.1 503 Service Unavailable\r\n"
        /bin/echo -e "Content-Type: text/plain\r\n"
        /bin/echo -e "\r\n"
        /bin/echo -e "MySQL is *down*.\r\n"
        /bin/echo -e "\r\n"
fi

The HAProxy config file

WARNING / DISCLAIMER
This configuration has not been tested in a production environment and should be used at your own risk.

# HAProxy configuration - haproxy-db.cfg
 
##
## FRONTEND ##
##
 
# Load-balanced IPs for DB writes and reads
#
frontend db_write
        bind 172.16.0.50:3306
        default_backend cluster_db_write
 
frontend db_read
        bind 172.16.0.51:3306
        default_backend cluster_db_read
 
# Monitor DB server availability
#
frontend monitor_db01
        #
        # set db01_backup to 'up' or 'down'
        #
        bind 127.0.0.1:9301
        mode http
        #option nolinger
 
        acl no_repl_db01 nbsrv(db01_replication) eq 0
        acl no_repl_db02 nbsrv(db02_replication) eq 0
        acl no_db01 nbsrv(db01_status) eq 0
        acl no_db02 nbsrv(db02_status) eq 0
 
        monitor-uri /dbs
        monitor fail unless no_repl_db01 no_repl_db02 no_db02
        monitor fail if no_db01 no_db02
 
frontend monitor_db02
        #
        # set db02_backup to 'up' or 'down'
        #
        bind 127.0.0.1:9302
        mode http
        #option nolinger
 
        acl no_repl_db01 nbsrv(db01_replication) eq 0
        acl no_repl_db02 nbsrv(db02_replication) eq 0
        acl no_db01 nbsrv(db01_status) eq 0
        acl no_db02 nbsrv(db02_status) eq 0
 
        monitor-uri /dbs
        monitor fail unless no_repl_db01 no_repl_db02 no_db01
        monitor fail if no_db01 no_db02
 
frontend monitor_db03
        #
        # set db03 read-only slave to 'down'
        #
        bind 127.0.0.1:9303
        mode http
        #option nolinger
 
        acl no_repl_db03 nbsrv(db03_replication) eq 0
        acl no_repl_db01 nbsrv(db01_replication) eq 0
        acl db02 nbsrv(db02_status) eq 1
 
        monitor-uri /dbs
        monitor fail if no_repl_db03
        monitor fail if no_repl_db01 db02
 
frontend monitor_db04
        #
        # set db04 read-only slave to 'down'
        #
        bind 127.0.0.1:9304
        mode http
        #option nolinger
 
        acl no_repl_db04 nbsrv(db04_replication) eq 0
        acl no_repl_db01 nbsrv(db01_replication) eq 0
        acl db02 nbsrv(db02_status) eq 1
 
        monitor-uri /dbs
        monitor fail if no_repl_db04
        monitor fail if no_repl_db01 db02
 
frontend monitor_db05
        #
        # set db05 read-only slave to 'down'
        #
        bind 127.0.0.1:9305
        mode http
        #option nolinger
 
        acl no_repl_db05 nbsrv(db05_replication) eq 0
        acl no_repl_db02 nbsrv(db02_replication) eq 0
        acl db01 nbsrv(db01_status) eq 1
 
        monitor-uri /dbs
        monitor fail if no_repl_db05
        monitor fail if no_repl_db02 db01
 
# Monitor for split-brain syndrome
#
frontend monitor_splitbrain
        #
        # set db01_splitbrain and db02_splitbrain to 'up'
        #
        bind 127.0.0.1:9300
        mode http
        #option nolinger
 
        acl no_repl01 nbsrv(db01_replication) eq 0
        acl no_repl02 nbsrv(db02_replication) eq 0
        acl db01 nbsrv(db01_status) eq 1
        acl db02 nbsrv(db02_status) eq 1
 
        monitor-uri /dbs
        monitor fail unless no_repl01 no_repl02 db01 db02
 
##
## BACKEND ##
##
 
# Check every DB server replication status
#      - perform an http check on port 9201 (replication status)
#      - set to 'down' if response is '503 Service Unavailable'
#      - set to 'up' if response is '200 OK'
#
backend db01_replication
        mode tcp
        balance roundrobin
        option tcpka
        option httpchk
        server db01 172.16.0.60:3306 check port 9201 inter 1s rise 1 fall 1
 
backend db02_replication
        mode tcp
        balance roundrobin
        option tcpka
        option httpchk
        server db02 172.16.0.61:3306 check port 9201 inter 1s rise 1 fall 1
 
backend db03_replication
        mode tcp
        balance roundrobin
        option tcpka
        option httpchk
        server db03 172.16.0.63:3306 check port 9201 inter 1s rise 1 fall 1
 
backend db04_replication
        mode tcp
        balance roundrobin
        option tcpka
        option httpchk
        server db04 172.16.0.64:3306 check port 9201 inter 1s rise 1 fall 1
 
backend db05_replication
        mode tcp
        balance roundrobin
        option tcpka
        option httpchk
        server db05 172.16.0.65:3306 check port 9201 inter 1s rise 1 fall 1
 
# Check Master DB server mysql status
#      - perform an http check on port 9200 (mysql status)
#      - set to 'down' if response is '503 Service Unavailable'
#      - set to 'up' if response is '200 OK'
#
backend db01_status
        mode tcp
        balance roundrobin
        option tcpka
        option httpchk
        server db01 172.16.0.60:3306 check port 9200 inter 1s rise 2 fall 2
 
backend db02_status
        mode tcp
        balance roundrobin
        option tcpka
        option httpchk
        server db02 172.16.0.61:3306 check port 9200 inter 1s rise 2 fall 2
 
# DB write cluster
#      Failure scenarios:
#      - replication 'up' on db01 & db02       = writes to db01
#      - replication 'down' on db02            = writes to db01
#      - replication 'down' on db01            = writes to db02
#      - replication 'down' on db01 & db02     = go nowhere, split-brain, cluster FAIL!
#      - mysql 'down' on db02                          = writes to db01_backup
#      - mysql 'down' on db01                          = writes to db02_backup
#      - mysql 'down' on db01 & db02           = go nowhere, cluster FAIL!
#
backend cluster_db_write
        #
        # - max 1 db server available at all times
        # - db01 is preferred (top of list)
        # - db_backups set their 'up' or 'down' based on results from monitor_dbs
        #
        mode    tcp
        option  tcpka
        balance roundrobin
        option  httpchk GET /dbs
        server  db01 172.16.0.60:3306 weight 1 check port 9201 inter 1s rise 2 fall 1
        server  db02 172.16.0.61:3306 weight 1 check port 9201 inter 1s rise 2 fall 1 backup
        server  db01_backup 172.16.0.60:3306 weight 1 check port 9301 inter 1s rise 2 fall 2 addr 127.0.0.1 backup
        server  db02_backup 172.16.0.61:3306 weight 1 check port 9302 inter 1s rise 2 fall 2 addr 127.0.0.1 backup
 
# DB read cluster
#      Failure scenarios
#      - replication 'up' on db01 & db02       = reads on db01, db02, all db_slaves
#      - replication 'down' on db02            = reads on db01, slaves of db01
#      - replication 'down' on db01            = reads on db02, slaves of db02
#      - replication 'down' on db01 & db02 = reads on db01_splitbrain and db02_splitbrain only
#      - mysql 'down' on db02                          = reads on db01_backup, slaves of db01
#      - mysql 'down' on db01                          = reads on db02_backup, slaves of db02
#      - mysql 'down' on db01 & db02           = go nowhere, cluster FAIL!
#
backend cluster_db_read
        #
        # - max 2 master db servers available at all times
        # - max N slave db servers available at all times except during split-brain
        # - dbs track 'up' and 'down' of dbs in the cluster_db_write
        # - db_backups track 'up' and 'down' of db_backups in the cluster_db_write
        # - db_splitbrains set their 'up' or 'down' based on results from monitor_splitbrain
        #
        mode    tcp
        option  tcpka
        balance roundrobin
        option  httpchk GET /dbs
        server  db01 172.16.0.60:3306 weight 1 track cluster_db_write/db01
        server  db02 172.16.0.61:3306 weight 1 track cluster_db_write/db02
        server  db01_backup 172.16.0.60:3306 weight 1 track cluster_db_write/db01_backup
        server  db02_backup 172.16.0.61:3306 weight 1 track cluster_db_write/db02_backup
        server  db01_splitbrain 172.16.0.60:3306 weight 1 check port 9300 inter 1s rise 1 fall 2 addr 127.0.0.1
        server  db02_splitbrain 172.16.0.61:3306 weight 1 check port 9300 inter 1s rise 1 fall 2 addr 127.0.0.1
        #
        #      Scaling & redundancy options
        #      - db_slaves set their 'up' or 'down' based on results from monitor_dbs
        #      - db_slaves should take longer to rise
        #
        server  db03_slave 172.16.0.63:3306 weight 1 check port 9303 inter 1s rise 5 fall 1 addr 127.0.0.1
        server  db04_slave 172.16.0.64:3306 weight 1 check port 9304 inter 1s rise 5 fall 1 addr 127.0.0.1
        server  db05_slave 172.16.0.65:3306 weight 1 check port 9305 inter 1s rise 5 fall 1 addr 127.0.0.1
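Note that HAProxy ANDs all ACLs named on a single “monitor fail” line. As a sanity check on what the monitor_db01 frontend (port 9301) reports, here is the same decision rephrased in shell. This is my paraphrase, not part of the config; 1 means the ACL matched:

```shell
# Decide whether db01_backup should be 'up', mirroring monitor_db01:
#   monitor fail unless no_repl_db01 no_repl_db02 no_db02
#   monitor fail if     no_db01 no_db02
monitor_db01_up() {
    # args: no_repl_db01 no_repl_db02 no_db01 no_db02 (1 = ACL matched)
    # 'fail unless' -> all three ACLs must match for the monitor to pass
    [ "$1" = 1 ] && [ "$2" = 1 ] && [ "$4" = 1 ] || return 1
    # 'fail if' -> both masters being dead means nothing can come up
    [ "$3" = 1 ] && [ "$4" = 1 ] && return 1
    return 0
}
```

In other words, db01_backup only comes up when db02’s MySQL is down while db01 itself is still alive, which is exactly failure scenario 4 above.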