{"id":4828,"date":"2010-08-01T10:38:00","date_gmt":"2010-08-01T10:38:00","guid":{"rendered":"http:\/\/www.hbyconsultancy.com\/?p=4828"},"modified":"2010-08-01T10:38:00","modified_gmt":"2010-08-01T10:38:00","slug":"two-nodes-load-balance-and-failover-with-keepalived-and-ubuntu-server-10-04-x64","status":"publish","type":"post","link":"https:\/\/hbyconsultancy.com\/2010\/08\/two-nodes-load-balance-and-failover-with-keepalived-and-ubuntu-server-10-04-x64.html","title":{"rendered":"Two nodes Load balance and Failover with keepalived and Ubuntu Server 10.04 x64"},"content":{"rendered":"

In an ideal architecture the load balancers run on separate nodes, but it is also possible to have your load balancers on the same nodes as your applications. In this architecture I used the same hardware as the previous Master\/Master MySQL cluster<\/a>: Ubuntu Server 10.04 x64, Apache2 as the web server, and two HP DL380 G6 nodes, each with three 15K hard disks in RAID5, connected to a SAN storage via Fibre Channel. For load balancing and failover I used keepalived and LVS; you can also use heartbeat to get your cluster running.<\/p>\n

First, you will need to set at least two IPs (10.10.0.1 and 10.10.0.2), one per server, plus one virtual IP (10.10.0.3) shared between the two servers. On the first server the interface configuration (\/etc\/network\/interfaces) will look like:<\/p>\n

# The primary network interface
\nauto eth0<\/strong>
\niface eth0 inet static
\naddress 10.10.0.1<\/strong>
\nnetmask 255.255.255.0
\nnetwork 10.10.0.0
\nbroadcast 10.10.0.255
\ngateway 10.10.0.250
\nauto eth0:0<\/strong>
\niface eth0:0 inet static
\naddress 10.10.0.3<\/strong>
\nnetmask 255.255.255.0
\nnetwork 10.10.0.0
\nbroadcast 10.10.0.255<\/code><\/p>\n

and on the second server:<\/p>\n

# The primary network interface
\nauto eth0<\/strong>
\niface eth0 inet static
\naddress 10.10.0.2<\/strong>
\nnetmask 255.255.255.0
\nnetwork 10.10.0.0
\nbroadcast 10.10.0.255
\ngateway 10.10.0.250
\nauto eth0:0<\/strong>
\niface eth0:0 inet static
\naddress 10.10.0.3<\/strong>
\nnetmask 255.255.255.0
\nnetwork 10.10.0.0
\nbroadcast 10.10.0.255<\/code><\/p>\n

Then we can start by installing keepalived (v1.1.17 is available in the Ubuntu repositories):<\/p>\n

sudo apt-get install keepalived<\/code><\/p>\n

You will have to create a configuration file on each node: the first node 10.10.0.1 is the master and the second node 10.10.0.2 the backup. So on the master node we add:<\/p>\n

usr01@server01:~$ sudo nano \/etc\/keepalived\/keepalived.conf
\n# Keepalived Configuration File
\nvrrp_instance VI_1 {
\nstate MASTER<\/strong>
\ninterface eth0
\nvirtual_router_id 10
\npriority 200<\/strong>
\nvirtual_ipaddress {
\n10.10.0.3\/24
\n}
\nnotify_master \"\/etc\/keepalived\/notify.sh del 10.10.0.3\"
\nnotify_backup \"\/etc\/keepalived\/notify.sh add 10.10.0.3\"
\nnotify_fault \"\/etc\/keepalived\/notify.sh add 10.10.0.3\"
\n}
\nvirtual_server 10.10.0.3 80 {
\ndelay_loop 30
\nlb_algo rr<\/strong>
\nlb_kind DR<\/strong>
\npersistence_timeout 50
\nprotocol TCP
\nreal_server 10.10.0.1 80 {
\nweight 100
\nHTTP_GET {
\nurl {
\npath \/check.txt
\ndigest d41d8cd98f00b204e9800998ecf8427e
\n}
\nconnect_timeout 3
\nnb_get_retry 3
\ndelay_before_retry 2
\n}
\n}
\nreal_server 10.10.0.2 80 {
\nweight 100
\nHTTP_GET {
\nurl {
\npath \/check.txt
\ndigest d41d8cd98f00b204e9800998ecf8427e
\n}
\nconnect_timeout 3
\nnb_get_retry 3
\ndelay_before_retry 2
\n}
\n}
\n}
\n<\/code><\/p>\n

And in the backup node :<\/p>\n

usr01@server02:~$ cat \/etc\/keepalived\/keepalived.conf
\n# Keepalived Configuration File
\nvrrp_instance VI_1 {
\nstate BACKUP<\/strong>
\ninterface eth0
\nvirtual_router_id 10
\npriority 100<\/strong>
\nvirtual_ipaddress {
\n10.10.0.3\/24
\n}
\nnotify_master \"\/etc\/keepalived\/notify.sh del 10.10.0.3\"
\nnotify_backup \"\/etc\/keepalived\/notify.sh add 10.10.0.3\"
\nnotify_fault \"\/etc\/keepalived\/notify.sh add 10.10.0.3\"
\n}
\nvirtual_server 10.10.0.3 80 {
\ndelay_loop 30
\nlb_algo rr<\/strong>
\nlb_kind DR<\/strong>
\npersistence_timeout 50
\nprotocol TCP
\nreal_server 10.10.0.1 80 {
\nweight 100
\nHTTP_GET {
\nurl {
\npath \/check.txt
\ndigest d41d8cd98f00b204e9800998ecf8427e
\n}
\nconnect_timeout 3
\nnb_get_retry 3
\ndelay_before_retry 2
\n}
\n}
\nreal_server 10.10.0.2 80 {
\nweight 100
\nHTTP_GET {
\nurl {
\npath \/check.txt
\ndigest d41d8cd98f00b204e9800998ecf8427e
\n}
\nconnect_timeout 3
\nnb_get_retry 3
\ndelay_before_retry 2
\n}
\n}
\n}<\/code><\/p>\n

The digest is generated with genhash. Note that you can add an exclusion so Apache does not log the check.txt requests:<\/p>\n

usr01@server01:~$ genhash -s 10.10.0.1 -p 80 -u \/check.txt
\nMD5SUM = d41d8cd98f00b204e9800998ecf8427e
\nusr01@server01:~$ genhash -s 10.10.0.2 -p 80 -u \/check.txt
\nMD5SUM = d41d8cd98f00b204e9800998ecf8427e<\/code><\/p>\n
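As a sanity check: as far as I can tell, genhash computes an MD5 over the HTTP response body, so the digest above is simply the well-known MD5 of the empty string, consistent with check.txt being an empty file. You can verify that locally:

```shell
# The digest above equals the MD5 of an empty body -- consistent with
# check.txt being served as an empty file by Apache
printf '' | md5sum
# d41d8cd98f00b204e9800998ecf8427e  -
```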

We also have to add a small notification script on both nodes (\/etc\/keepalived\/notify.sh):<\/p>\n


\n#!\/bin\/bash
\nVIP=\"$2\"
\ncase \"$1\" in
\nadd)
\n\/sbin\/iptables -A PREROUTING -t nat -d $VIP -p tcp -j REDIRECT
\n;;
\ndel)
\n\/sbin\/iptables -D PREROUTING -t nat -d $VIP -p tcp -j REDIRECT
\n;;
\n*)
\necho \"Usage: $0 {add|del} ipaddress\"
\nexit 1
\nesac
\nexit 0<\/code><\/p>\n
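One detail the listing does not show: the script must be executable before keepalived can run it. Invoked without arguments it just prints its usage line (a quick smoke test; the iptables branches of course need root):

```shell
sudo chmod +x /etc/keepalived/notify.sh
/etc/keepalived/notify.sh
# Usage: /etc/keepalived/notify.sh {add|del} ipaddress
```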

Launch keepalived on both nodes:<\/p>\n

sudo \/etc\/init.d\/keepalived start<\/code><\/p>\n

Now we need to enable ip_forward permanently on both nodes by adding the following line to \/etc\/sysctl.conf:<\/p>\n

net.ipv4.ip_forward = 1<\/code><\/p>\n
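A minimal way to append and apply this on a stock Ubuntu layout (a sketch; run on both nodes):

```shell
# append the setting to sysctl.conf and reload it
echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
# verify: should print 1
cat /proc/sys/net/ipv4/ip_forward
```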

then restart networking on both nodes:<\/p>\n

sudo \/etc\/init.d\/networking restart<\/code><\/p>\n

Now we can check that load balancing is working correctly on the master:<\/p>\n

usr01@server01:~$ sudo ipvsadm -L -n
\n[sudo] password for usr01:
\nIP Virtual Server version 1.2.1 (size=4096)
\nProt LocalAddress:Port Scheduler Flags
\n-> RemoteAddress:Port Forward Weight ActiveConn InActConn
\nTCP 10.10.0.3:80 rr persistent 50<\/strong>
\n-> 10.10.0.1:80 Local<\/strong> 100 0 0
\n-> 10.10.0.2:80 Route<\/strong> 100 0 0 <\/code><\/p>\n

And on the backup server:<\/p>\n

usr01@server02:~$ sudo ipvsadm -L -n
\n[sudo] password for usr01:
\nIP Virtual Server version 1.2.1 (size=4096)
\nProt LocalAddress:Port Scheduler Flags
\n-> RemoteAddress:Port Forward Weight ActiveConn InActConn
\nTCP 10.10.0.3:80 rr persistent 50<\/strong>
\n-> 10.10.0.1:80 Route<\/strong> 100 0 0
\n-> 10.10.0.2:80 Local<\/strong> 100 0 0 <\/code><\/p>\n

We are almost done; we only need to manually add a PREROUTING rule on the backup node to get started:<\/p>\n

usr01@server02$ sudo iptables -A PREROUTING -t nat -d 10.10.0.3 -p tcp -j REDIRECT
\nusr01@server02$ sudo iptables -t nat --list
\nChain PREROUTING (policy ACCEPT)
\ntarget prot opt source destination
\n REDIRECT tcp -- anywhere 10.10.0.3 <\/strong>
\nChain POSTROUTING (policy ACCEPT)
\ntarget prot opt source destination
\nChain OUTPUT (policy ACCEPT)
\ntarget prot opt source destination<\/del>
\nusr01@server02$ sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination 10.10.0.3:80
\nusr01@server02$ sudo iptables -t nat --list
\nChain PREROUTING (policy ACCEPT)
\ntarget prot opt source destination
\nDNAT tcp -- anywhere anywhere tcp dpt:www to:10.10.0.3:80<\/strong><\/p>\n

Chain INPUT (policy ACCEPT)
\ntarget prot opt source destination <\/p>\n

Chain OUTPUT (policy ACCEPT)
\ntarget prot opt source destination <\/p>\n

Chain POSTROUTING (policy ACCEPT)
\ntarget prot opt source destination <\/code><\/p>\n

That\u2019s all.<\/p>\n

Now you can connect to http:\/\/10.10.0.3 and watch the load being distributed between the two nodes. If one of the nodes fails, it takes a few seconds before the backup server notices the failure and updates its iptables PREROUTING rule. When the Apache service goes down, requests on port 80 are automatically redirected to the second node.<\/p>\n
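One caveat when testing: with persistence_timeout 50 a given client sticks to the same real server for 50 seconds, so a single browser will not appear to alternate between nodes. A hypothetical client-side check (adjust the VIP to yours; this only works from a machine that can reach 10.10.0.3):

```shell
# fire a few requests at the virtual IP; all should return 200,
# served by whichever real server LVS picked for this client
for i in 1 2 3 4; do
  curl -s -o /dev/null -w '%{http_code}\n' http://10.10.0.3/
done
```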

As I mentioned in the beginning, failover cannot happen without downtime in such an architecture, but it is still a great way to distribute load if you are limited in hardware.<\/p>\n

Finally, it would be much easier (and even faster) to load balance using round-robin DNS, from Active Directory for example, if you can manage to monitor failed services or nodes; however, this architecture remains better at failover, even with a short downtime.<\/p>\n

Update 2017-11-14 :<\/strong> I had an issue with the iptables REDIRECT target no longer redirecting to the virtual IP; replacing it with DNAT fixed the issue.<\/p>\n","protected":false},"excerpt":{"rendered":"

In an ideal system architecture using load balancers in separate nodes is preferred, however it\u2019s also possible to have your load balancers in the same nodes with your applications. I have used in this architecture the same hardware as the previous Master\/Master MySQL cluster, including Ubuntu server 10.04 x64, Apache2 as web server, two nodes […]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[14],"tags":[25,47,95,129,140,149,261],"jetpack_sharing_enabled":true,"jetpack_featured_media_url":"","_links":{"self":[{"href":"https:\/\/hbyconsultancy.com\/wp-json\/wp\/v2\/posts\/4828"}],"collection":[{"href":"https:\/\/hbyconsultancy.com\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/hbyconsultancy.com\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/hbyconsultancy.com\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/hbyconsultancy.com\/wp-json\/wp\/v2\/comments?post=4828"}],"version-history":[{"count":0,"href":"https:\/\/hbyconsultancy.com\/wp-json\/wp\/v2\/posts\/4828\/revisions"}],"wp:attachment":[{"href":"https:\/\/hbyconsultancy.com\/wp-json\/wp\/v2\/media?parent=4828"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/hbyconsultancy.com\/wp-json\/wp\/v2\/categories?post=4828"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/hbyconsultancy.com\/wp-json\/wp\/v2\/tags?post=4828"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}