
High Availability Web Server Using Keepalived


Overview


In this article, we are going to set up a highly available website running on Nginx, using Keepalived.


Prerequisites


  • Two machines running CentOS (or any other Linux distribution), one for LB1 and the other for LB2.
  • Proper network access between the machines and to the outside world.
  • Basic working knowledge of Nginx is assumed.

In this article we are using the following two machines:

  • LB001, running CentOS Linux release 7.2.1511, with IP 192.168.20.22 and VIP 192.168.20.100.
  • LB002, running CentOS Linux release 7.2.1511, with IP 192.168.20.23 and VIP 192.168.20.100.

Note: Don't assign the VIP manually. It will be assigned and managed automatically by Keepalived, as we will see in the coming steps.


Brief Overview of Keepalived


Keepalived is a Linux implementation of the VRRP (Virtual Router Redundancy Protocol) protocol, used to make an IP address highly available - a so-called VIP (Virtual IP).


The VRRP protocol ensures that one node in the pool of servers running Keepalived is the master. The backup node(s) listen for multicast advertisements from the node with the higher priority. If a backup node fails to receive VRRP advertisements for longer than three times the advertisement interval, it takes over the master state and assigns the configured IP(s) to itself. If there is more than one backup node with the same priority, the one with the highest IP address wins the election.
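
If you want to see these multicast advertisements on the wire once Keepalived is running, you can sniff VRRP traffic (IP protocol 112) on the interface. This is only an optional check, assuming tcpdump is installed:

[root@lb001 somesh]# tcpdump -i enp0s3 -nn vrrp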


Objective


Our main aim is to run a highly available website. The steps are:

  • Install Nginx web servers
  • Install Keepalived
  • Check the IP failover


Initial IP Configuration on Both LBs
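

Before installing anything, verify that each LB has only its own static IP configured on enp0s3 and that the VIP (192.168.20.100) is not yet present on either machine. A quick way to check (the exact output depends on your environment):

[root@lb001 somesh]# ip addr show enp0s3 | grep "inet "

[root@lb002 somesh]# ip addr show enp0s3 | grep "inet "

Each command should list only the machine's own address (192.168.20.22 or 192.168.20.23), not the VIP.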




Nginx Installation on LB001 (192.168.20.22)


On our first machine, LB001 (192.168.20.22), we install Nginx with the commands below.


[root@lb001 somesh]# rpm -Uvh http://nginx.org/packages/centos/7/noarch/RPMS/nginx-release-centos-7-0.el7.ngx.noarch.rpm

[root@lb001 somesh]# yum -y install nginx

[root@lb001 somesh]# systemctl start nginx

[root@lb001 somesh]# systemctl enable nginx

[root@lb001 somesh]# systemctl status nginx

[root@lb001 somesh]# firewall-cmd --zone=public --permanent --add-service=http

[root@lb001 somesh]# firewall-cmd --zone=public --permanent --add-service=https

[root@lb001 somesh]# firewall-cmd --reload


Access the Nginx test page using the machine IP: http://192.168.20.22/
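
As a quick command-line check (the same applies to LB002 with its own IP), curl should report an HTTP 200 response with a Server: nginx header:

[root@lb001 somesh]# curl -I http://192.168.20.22/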



Nginx Installation on LB002 (192.168.20.23)


On our second machine, LB002 (192.168.20.23), we install Nginx with the commands below.


[root@lb002 somesh]# rpm -Uvh http://nginx.org/packages/centos/7/noarch/RPMS/nginx-release-centos-7-0.el7.ngx.noarch.rpm

[root@lb002 somesh]# yum -y install nginx

[root@lb002 somesh]# systemctl start nginx

[root@lb002 somesh]# systemctl enable nginx

[root@lb002 somesh]# systemctl status nginx

[root@lb002 somesh]# firewall-cmd --zone=public --permanent --add-service=http

[root@lb002 somesh]# firewall-cmd --zone=public --permanent --add-service=https

[root@lb002 somesh]# firewall-cmd --reload

 
Access the Nginx test page using the machine IP: http://192.168.20.23/

 

Document Root (Create demo pages for each LB)


For a highly available setup in a production/real-world scenario, you would want both servers to serve exactly the same page. However, to make the failover easy to observe, we will have Nginx indicate which of the two servers is serving our requests at any given time. To do this, we will change the default index.html page on each of our web servers.
 
On our first web server (lb001), replace the contents of the default index page so that it identifies the server.
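
A minimal way to do this from the shell, assuming the default document root of the nginx.org package (/usr/share/nginx/html); the exact wording of the page is up to you:

[root@lb001 somesh]# cat > /usr/share/nginx/html/index.html << 'EOF'
<html>
  <body>
    <h1>Web Server 1 - lb001 (192.168.20.22)</h1>
  </body>
</html>
EOF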
  

Now check the webpage in a web browser.

 


On our second web server (lb002), replace the contents of the default index page so that it identifies the server.
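
The same approach on the second server, again assuming the default document root /usr/share/nginx/html:

[root@lb002 somesh]# cat > /usr/share/nginx/html/index.html << 'EOF'
<html>
  <body>
    <h1>Web Server 2 - lb002 (192.168.20.23)</h1>
  </body>
</html>
EOF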
  

Now check the webpage in a web browser.

 



Keepalived Installation and Configuration on LB001


[root@lb001 somesh]# yum -y install keepalived

 


[root@lb001 somesh]# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf_back

 

[root@lb001 keepalived]# cat keepalived.conf
vrrp_script chk_health {
    script "pidof nginx"
    interval 2            # check status every 2 seconds
    fall 3                # require 3 failures for KO
    rise 2                # require 2 successes for OK
}

vrrp_instance VI_1 {
    interface enp0s3          # interface that Keepalived monitors and attaches the VIP to
    state MASTER              # could be the same on both machines (whichever starts first becomes master), or choose MASTER/BACKUP explicitly
    virtual_router_id 51      # virtual router ID must be the same on both instances
    priority 105              # could be the same on both instances; with MASTER/BACKUP, the master gets the higher priority
    authentication {
        auth_type PASS
        auth_pass somesh123
    }
    # VIP assigned here on interface enp0s3
    virtual_ipaddress {
        192.168.20.100/24 dev enp0s3
    }
    track_script {
        chk_health
    }
}

[root@lb001 keepalived]# systemctl start keepalived

 

[root@lb001 keepalived]# systemctl enable keepalived

 

[root@lb001 keepalived]# systemctl status keepalived

 

[root@lb001 keepalived]# tail -f /var/log/messages
Jan 24 15:43:37 lb001 Keepalived_vrrp[19867]: Sending gratuitous ARP on enp0s3 for 192.168.20.100
Jan 24 15:43:37 lb001 Keepalived_vrrp[19867]: Sending gratuitous ARP on enp0s3 for 192.168.20.100
Jan 24 15:43:37 lb001 Keepalived_vrrp[19867]: Sending gratuitous ARP on enp0s3 for 192.168.20.100
Jan 24 15:43:37 lb001 Keepalived_vrrp[19867]: Sending gratuitous ARP on enp0s3 for 192.168.20.100
Jan 24 15:43:42 lb001 Keepalived_vrrp[19867]: Sending gratuitous ARP on enp0s3 for 192.168.20.100
Jan 24 15:43:42 lb001 Keepalived_vrrp[19867]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on enp0s3 for 192.168.20.100
Jan 24 15:43:42 lb001 Keepalived_vrrp[19867]: Sending gratuitous ARP on enp0s3 for 192.168.20.100


 
 
 

Keepalived Installation and Configuration on LB002



[root@lb002 somesh]# yum -y install keepalived

 


[root@lb002 somesh]# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf_back

 

[root@lb002 keepalived]# cat keepalived.conf
vrrp_script chk_health {
    script "pidof nginx"
    interval 2            # check status every 2 seconds
    fall 3                # require 3 failures for KO
    rise 2                # require 2 successes for OK
}

vrrp_instance VI_1 {
    interface enp0s3          # interface that Keepalived monitors and attaches the VIP to
    state BACKUP              # could be the same on both machines (whichever starts first becomes master), or choose MASTER/BACKUP explicitly
    virtual_router_id 51      # virtual router ID must be the same on both instances
    priority 100              # could be the same on both instances; with MASTER/BACKUP, the backup gets the lower priority
    authentication {
        auth_type PASS
        auth_pass somesh123
    }
    # VIP assigned here on interface enp0s3
    virtual_ipaddress {
        192.168.20.100/24 dev enp0s3
    }
    track_script {
        chk_health
    }
}

[root@lb002 keepalived]# systemctl start keepalived

 

[root@lb002 keepalived]# systemctl enable keepalived

 

[root@lb002 keepalived]# systemctl status keepalived

 

[root@lb002 keepalived]# tail -f /var/log/messages

Jan 24 15:43:42 lb002 Keepalived_vrrp[11732]: Sending gratuitous ARP on enp0s3 for 192.168.20.100
Jan 24 15:43:42 lb002 Keepalived_vrrp[11732]: Sending gratuitous ARP on enp0s3 for 192.168.20.100
Jan 24 15:43:42 lb002 Keepalived_vrrp[11732]: Sending gratuitous ARP on enp0s3 for 192.168.20.100
Jan 24 15:43:42 lb002 Keepalived_vrrp[11732]: Sending gratuitous ARP on enp0s3 for 192.168.20.100
Jan 24 15:43:47 lb002 Keepalived_vrrp[11732]: Sending gratuitous ARP on enp0s3 for 192.168.20.100
Jan 24 15:43:47 lb002 Keepalived_vrrp[11732]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on enp0s3 for 192.168.20.100
Jan 24 15:43:47 lb002 Keepalived_vrrp[11732]: Sending gratuitous ARP on enp0s3 for 192.168.20.100

 

Nginx Failover with Keepalived


Once Keepalived and Nginx are running on both machines, the VIP is assigned to the master LB (LB001).
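
You can confirm this on LB001: the VIP should now appear on enp0s3 as a secondary address alongside the machine's own IP (the output shown here is indicative):

[root@lb001 somesh]# ip addr show enp0s3 | grep "inet "
    inet 192.168.20.22/24 brd 192.168.20.255 scope global enp0s3
    inet 192.168.20.100/24 scope global secondary enp0s3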

 

 

If we try to access the website using the VIP (192.168.20.100), it shows Web Server 1, because the request is served by the master LB (LB001).
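
A quick check with curl against the VIP (the exact HTML depends on the demo page you created earlier):

[root@lb001 somesh]# curl -s http://192.168.20.100/ | grep "Web Server"
    <h1>Web Server 1 - lb001 (192.168.20.22)</h1>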


 

Now stop Nginx on the master LB (LB001) and check whether the site is still available on the VIP (192.168.20.100). If the site is still available, the failover via Keepalived has worked.
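
For example, stop Nginx on LB001 and watch the log on LB002. Once the chk_health script has failed three times, LB002 should transition to MASTER and claim the VIP (the log lines below are indicative; the exact wording can differ between Keepalived versions):

[root@lb001 somesh]# systemctl stop nginx

[root@lb002 keepalived]# tail -f /var/log/messages
... Keepalived_vrrp[11732]: VRRP_Instance(VI_1) Transition to MASTER STATE
... Keepalived_vrrp[11732]: VRRP_Instance(VI_1) Entering MASTER STATE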

 

In the snapshot below, we can see that the VIP has switched over to the backup LB (LB002) after Nginx was stopped on the master.


 

 

Here we can see that the website is still up and running after stopping Nginx on the master LB001. It's all done .. 🙂 🙂



Code


Code @ GitHub

