20171029 KeepAlived

Author: 哈喽别样 | Published 2017-10-30 23:12

1. High Availability Clusters

(1) The solution for improving system availability: redundancy

  • Working modes

    • active/passive: master/backup
    • active/active: dual master
  • Heartbeat-based announcements

    • active --> HEARTBEAT --> passive
    • active <--> HEARTBEAT <--> active
  • Failure handling

    • failover: when the master node for a resource fails, the resource is moved to another node
    • failback: after the failed master node has been repaired and brought back online, resources previously moved to other nodes are switched back to it

(2) HA cluster implementation options

  • AIS (Application Interface Specification): full-featured, complex HA clusters
    RHCS: Red Hat Cluster Suite
    heartbeat
    corosync

  • VRRP-based: Virtual Router Redundancy Protocol
    keepalived

2. KeepAlived Basics

(1) VRRP (Virtual Router Redundancy Protocol) terminology

  • Virtual router: Virtual Router, multiple physical routers that serve a single IP address to the outside, appearing as one router

    • Virtual Router ID: VRID (1-255), uniquely identifies a virtual router
    • VIP: Virtual IP
    • VMAC: Virtual MAC (00-00-5e-00-01-VRID)
  • Physical router
    master: primary device
    backup: standby device
    priority: priority value

(2) How KeepAlived works

  • Advertisements: heartbeat, priority, etc.; sent periodically

  • Working modes: preemptive, non-preemptive

  • Authentication:

    • no authentication
    • simple string authentication: pre-shared key
    • MD5
  • Deployment models:

    • master/backup: a single virtual router
    • master/master: master/backup (virtual router 1) plus backup/master (virtual router 2)

(3) KeepAlived features

  • Uses the vrrp protocol to move addresses between nodes

  • Generates ipvs rules on the node that holds the VIP (pre-defined in the configuration file)

  • Performs health checks on the RSs of the ipvs cluster

  • Provides a script-call interface: executing user-defined scripts can influence cluster behavior, which is how services such as nginx and haproxy are supported

3. Configuring KeepAlived

(1) HA cluster configuration prerequisites:

  • The clocks on all nodes must be synchronized: ntp service (CentOS 6), chrony (CentOS 7)

    // ntp/chrony cannot correct a large time offset, so synchronize once with ntpdate first, then start the service
    ntpdate ntp_server_ip
    // enable the chronyd service (CentOS 7)
    vim /etc/chrony.conf
    server 172.18.0.1 iburst
    systemctl enable chronyd
    systemctl start chronyd
    // enable the ntp service (CentOS 6)
    vim /etc/ntp.conf
    server 172.18.0.1 iburst
    chkconfig ntpd on
    service ntpd start
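    // verify that synchronization is working (an extra check, not part of the original procedure)
    chronyc sources -v     // CentOS 7
    ntpq -p                // CentOS 6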
    
  • Make sure iptables and SELinux do not get in the way

  • Nodes should be able to reach each other by host name (not strictly required by KA); using the /etc/hosts file is recommended, as sketched below
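
    A minimal sketch, assuming the two-node lab environment used in the experiments below:

    // append to /etc/hosts on every node
    192.168.136.230    node1
    192.168.136.130    node2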

  • The root user on each node should be able to reach the other nodes over ssh using key-based authentication (not strictly required by KA)

    ssh-keygen
    ssh-copy-id destination_ip
    

(2) KeepAlived program environment

  • Main configuration file: /etc/keepalived/keepalived.conf

  • Main program: /usr/sbin/keepalived

  • Unit file: /usr/lib/systemd/system/keepalived.service

  • Environment file for the unit file: /etc/sysconfig/keepalived

(3) Configuration file structure

  • GLOBAL CONFIGURATION: global settings
    Global definitions
    Static routes/addresses

  • VRRPD CONFIGURATION: VRRP settings
    VRRP synchronization group(s): vrrp sync groups
    VRRP instance(s): each instance defines one vrrp virtual router

  • LVS CONFIGURATION: LVS settings
    Virtual server group(s)
    Virtual server(s): the vs and rs of an ipvs cluster
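
    A skeleton keepalived.conf reflecting this layout might look as follows (a sketch only; block contents are elided and the addresses are placeholders):

    global_defs {
        ...                          // global definitions
    }
    vrrp_sync_group VG_1 {           // optional vrrp synchronization group
        group { ... }
    }
    vrrp_instance VI_1 {             // one vrrp virtual router
        ...
    }
    virtual_server 10.0.0.1 80 {     // an LVS virtual server and its real servers
        real_server 10.0.0.11 80 {
            ...
        }
    }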

(4) Configuring a virtual router

  • Syntax:

    vrrp_instance <STRING> {
    ....
    }
    
  • Instance-specific parameters:

    • state MASTER|BACKUP
      The initial state of this node in this virtual router; only one node may be MASTER, all others should be BACKUP
    • interface IFACE_NAME
      The physical interface bound to this virtual router
    • virtual_router_id VRID
      Unique ID of this virtual router, in the range 1-255
    • priority 100
      Priority of this physical node within this virtual router, in the range 1-254
    • advert_int 1
      Interval between vrrp advertisements, 1s by default
    • authentication: authentication settings
    authentication {
    auth_type AH|PASS
    auth_pass <PASSWORD>    // only the first 8 characters are significant
    }

    • virtual_ipaddress: virtual IP
    virtual_ipaddress {
    <IPADDR>/<MASK> brd <IPADDR> dev <STRING> scope <SCOPE> label <LABEL>
    }

    • track_interface: interfaces to monitor; if one of them fails, the node enters the FAULT state and the address is moved away
    track_interface {
    eth0
    eth1
    …
    }

    • nopreempt: use non-preemptive mode (see the sketch after this list)
    • preempt_delay 300: in preemptive mode (the default), the delay before a node that has just come back online triggers a new election
    • Notification scripts:
      notify_master <STRING> | <QUOTED-STRING>:
      script run when this node becomes master
      notify_backup <STRING> | <QUOTED-STRING>:
      script run when this node becomes backup
      notify_fault <STRING> | <QUOTED-STRING>:
      script run when this node enters the fault state
      notify <STRING> | <QUOTED-STRING>:
      generic notification hook; a single script can handle notifications for all three state transitions above
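
    A minimal sketch of a non-preemptive instance (interface and addresses are placeholders; keepalived expects the initial state to be BACKUP on all nodes when nopreempt is used):

    vrrp_instance VI_1 {
        state BACKUP              // all nodes start as BACKUP in non-preemptive mode
        nopreempt
        interface eth0
        virtual_router_id 51
        priority 100
        advert_int 1
        virtual_ipaddress {
            10.0.0.100/24
        }
    }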
  • Experiment 1: a master/backup virtual router

    • Lab environment:
      Physical router 1: ip: 192.168.136.230, hostname: node1, MASTER
      Physical router 2: ip: 192.168.136.130, hostname: node2, BACKUP
      VIP: 192.168.136.100
    // configure physical router 1
    vim /etc/keepalived/keepalived.conf
    global_defs {
       notification_email {
         root@localhost
       }
       notification_email_from node1@localhost
       smtp_server 127.0.0.1
       smtp_connect_timeout 30
       router_id node1                  // this router's host name within vrrp
       vrrp_mcast_group4 224.0.0.58     // multicast group address
    }
    
    vrrp_instance VI_1 {
        state MASTER
        interface ens37
        virtual_router_id 51
        priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass dd73f9d6          // an 8-character hex password generated with: openssl rand -hex 4
        }
        virtual_ipaddress {
            192.168.136.100/24
        }
    }
    
    systemctl start keepalived
    
    // configure physical router 2
    vim /etc/keepalived/keepalived.conf
    global_defs {
       notification_email {
         root@localhost
       }
       notification_email_from node2@localhost
       smtp_server 127.0.0.1
       smtp_connect_timeout 30
       router_id node2
       vrrp_mcast_group4 224.0.0.58
    }
    
    vrrp_instance VI_1 {
        state BACKUP
        interface ens37
        virtual_router_id 51
        priority 90                    // as BACKUP, the priority must be lower than the MASTER's
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass dd73f9d6         // same password as on node1
        }
        virtual_ipaddress {
           192.168.136.100/24
        }
    }
    
    systemctl start keepalived
    
    • Testing
      The VIP now appears among node1's addresses

    Watch the multicast traffic with tcpdump -i ens37 -nn host 224.0.0.58, then stop the keepalived service on node1 with systemctl stop keepalived: node2 takes over automatically and starts advertising that it owns the virtual router's IP

    The VIP has now been taken over by node2
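
    A quick way to confirm which node currently holds the VIP (interface name taken from the configuration above):

    ip addr show dev ens37 | grep 192.168.136.100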

  • Experiment 2: keepalived logging

    vim /etc/sysconfig/keepalived
    KEEPALIVED_OPTIONS="-D -S 3"    // -D: detailed log messages, -S 3: log to syslog facility local3
    vim /etc/rsyslog.conf
    local3.*               /var/log/keepalived.log    // where the log is written
    systemctl restart rsyslog
    systemctl restart keepalived
    tail -f /var/log/keepalived.log
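
    To confirm that rsyslog routes facility local3 as expected (an extra check, not part of the original procedure):

    logger -p local3.info "keepalived log test"
    tail -1 /var/log/keepalived.log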
  • Experiment 3: master/master virtual routers that send email notifications on state changes

    • Lab environment
      Physical router 1: ip: 192.168.136.230, hostname: node1
      Physical router 2: ip: 192.168.136.130, hostname: node2
      Virtual router 1: MASTER: node1, BACKUP: node2, VIP: 192.168.136.100
      Virtual router 2: MASTER: node2, BACKUP: node1, VIP: 192.168.136.200
    // configure physical router 1 (MASTER of virtual router 1, BACKUP of virtual router 2)
    vim /etc/keepalived/keepalived.conf
    global_defs {
       notification_email {
         root@localhost
       }
       notification_email_from node1@localhost
       smtp_server 127.0.0.1
       smtp_connect_timeout 30
       router_id node1
       vrrp_mcast_group4 224.0.0.58
    }
    // settings for virtual router 1
    vrrp_instance VI_1 {
        state MASTER
        interface ens37
        virtual_router_id 51
        priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass dd73f9d6
        }
        virtual_ipaddress {
            192.168.136.100/24
        }
        notify_master "/etc/keepalived/notify.sh master"
        notify_backup "/etc/keepalived/notify.sh backup"
        notify_fault "/etc/keepalived/notify.sh fault"
    }
    // settings for virtual router 2
    vrrp_instance VI_2 {
        state BACKUP
        interface ens37
        virtual_router_id 61
        priority 80
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass a56c19be
        }
        virtual_ipaddress {
            192.168.136.200/24
       }
        notify_master "/etc/keepalived/notify.sh master"
        notify_backup "/etc/keepalived/notify.sh backup"
        notify_fault "/etc/keepalived/notify.sh fault"
    }
    systemctl restart keepalived
    
    // configure physical router 2 (BACKUP of virtual router 1, MASTER of virtual router 2)
    vim /etc/keepalived/keepalived.conf
    global_defs {
       notification_email {
       root@localhost
       }
       notification_email_from node2@localhost
       smtp_server 127.0.0.1
       smtp_connect_timeout 30
       router_id node2
       vrrp_mcast_group4 224.0.0.58
    }
    // settings for virtual router 1
    vrrp_instance VI_1 {
        state BACKUP
        interface ens37
        virtual_router_id 51
        priority 90
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass dd73f9d6
        }
        virtual_ipaddress {
           192.168.136.100/24
        }
        notify_master "/etc/keepalived/notify.sh master"
        notify_backup "/etc/keepalived/notify.sh backup"
        notify_fault "/etc/keepalived/notify.sh fault"
    }
    // settings for virtual router 2
    vrrp_instance VI_2 {
        state MASTER
        interface ens37
        virtual_router_id 61
        priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass a56c19be
        }
        virtual_ipaddress {
            192.168.136.200/24
        }
        notify_master "/etc/keepalived/notify.sh master"
        notify_backup "/etc/keepalived/notify.sh backup"
        notify_fault "/etc/keepalived/notify.sh fault"
    }
    
    // add the notification script on both physical routers
    vim /etc/keepalived/notify.sh
    #! /bin/bash
    
    contact='root@localhost'
    notify() {
            mailsubject="$(hostname) to be $1, vip floating"
            mailbody="$(date +'%F %T'): vrrp transition, $(hostname) changed to be $1"
            echo "$mailbody" | mail -s "$mailsubject" $contact
    }
    
    case $1 in
    master)
            notify master
            ;;
    backup)
            notify backup
            ;;
    fault)
            notify fault
            ;;
    *)
            echo "Usage: $(basename $0) {master|backup|fault}"
            exit 1
            ;;
    esac
    chmod +x /etc/keepalived/notify.sh
    
    • Testing
      Watch the multicast traffic with tcpdump -i ens37 -nn host 224.0.0.58: node1 and node2 can be seen advertising ownership of virtual router 1 (vrid 51) and virtual router 2 (vrid 61) respectively

    Checking the interface addresses on node1 and node2 confirms this

    Now disconnect node1 from the network
    The VIP of virtual router 1 is immediately taken over by node2's interface

    Restore node1's network connection; the corresponding email notifications can be seen on both node1 and node2:
    node1 first reports a fault, soon afterwards reports that it has switched to BACKUP, and after its network connection is restored reports that it is MASTER again;

    node2 reports that it has become MASTER, and after node1's network is restored reports that it has switched back to BACKUP

(5) IPVS support in Keepalived

  • Syntax:

    virtual_server {IP port | fwmark int}
    {
        ...
        real_server {
            ...
        }
        ...
    }
  • Common virtual_server parameters

    • delay_loop <INT>
      interval between health checks of the back-end servers
    • lb_algo rr|wrr|lc|wlc|lblc|sh|dh
      scheduling method
    • lb_kind NAT|DR|TUN
      cluster type
    • persistence_timeout <INT>
      persistent-connection timeout
    • protocol TCP
      service protocol; only TCP is supported
    • sorry_server <IPADDR> <PORT>
      backup server used when all RSs have failed
  • Common real_server <IPADDR> <PORT> parameters

    • weight <INT>
      RS weight
    • notify_up <STRING>|<QUOTED-STRING>
      script run when the RS comes up
    • notify_down <STRING>|<QUOTED-STRING>
      script run when the RS goes down
    • HTTP_GET|SSL_GET|TCP_CHECK|SMTP_CHECK|MISC_CHECK { ... }
      health-check method for this RS
  • HTTP_GET|SSL_GET: application-layer health checks

    HTTP_GET|SSL_GET {
    url {
    path <URL_PATH>               // URL to monitor
    status_code <INT>             // response code treated as healthy
    digest <STRING>               // digest of the response body treated as healthy
    }
    connect_timeout <INTEGER>     // connection timeout
    nb_get_retry <INT>            // number of retries
    delay_before_retry <INT>      // delay before each retry
    connect_ip <IP ADDRESS>       // which IP address of the RS to probe
    connect_port <PORT>           // which port of the RS to probe
    bindto <IP ADDRESS>           // source address used for the probe
    bind_port <PORT>              // source port used for the probe
    }
    
  • TCP_CHECK parameters (a sketch follows this list)

    • connect_ip <IP ADDRESS>
      which IP address of the RS to probe
    • connect_port <PORT>
      which port of the RS to probe
    • bindto <IP ADDRESS>
      source address used for the probe
    • bind_port <PORT>
      source port used for the probe
    • connect_timeout <INTEGER>
      connection timeout
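
    A minimal sketch of a TCP_CHECK-based real_server definition (the address, port and timeout are placeholders):

    real_server 192.168.136.229 80 {
        weight 1
        TCP_CHECK {
            connect_port 80       // probe the service port directly
            connect_timeout 3
        }
    }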
  • Experiment 4: a master/backup IPVS cluster

    • Lab environment:
      LB1(master)/VS: IP: 192.168.136.230
      LB2(backup)/VS: IP: 192.168.136.130
      VIP: 192.168.136.100
      RS1: IP: 192.168.136.229
      RS2: IP: 192.168.136.129
    // configure keepalived on LB1
    vim /etc/keepalived/keepalived.conf
    global_defs {
       notification_email {
         root@localhost
       }
       notification_email_from node1@localhost
       smtp_server 127.0.0.1
       smtp_connect_timeout 30
       router_id node1
       vrrp_mcast_group4 224.0.0.58
    }
    
    vrrp_instance VI_1 {
        state MASTER
        interface ens37
        virtual_router_id 51
        priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass dd73f9d6
        }
        virtual_ipaddress {
            192.168.136.100/24
        }
        notify_master "/etc/keepalived/notify.sh master"
        notify_backup "/etc/keepalived/notify.sh backup"
        notify_fault "/etc/keepalived/notify.sh fault"
    }
    
    virtual_server 192.168.136.100 80 {
        delay_loop 3
        lb_algo wrr
        lb_kind DR
        protocol TCP
        sorry_server 127.0.0.1 80
        real_server 192.168.136.229 80 {
            weight 2
            HTTP_GET {
                url {
                  path /
                  status_code 200
                }
                connect_timeout 1
                nb_get_retry 3
                delay_before_retry 1
            }
        }
        real_server 192.168.136.129 80 {
            weight 1
            HTTP_GET {
                url {
                  path /
                  status_code 200
                }
                connect_timeout 1
                nb_get_retry 3
                delay_before_retry 1
            }
        }
    }
    
    // configure keepalived on LB2
    vim /etc/keepalived/keepalived.conf
    global_defs {
       notification_email {
         root@localhost
       }
       notification_email_from node2@localhost
       smtp_server 127.0.0.1
       smtp_connect_timeout 30
       router_id node2
       vrrp_mcast_group4 224.0.0.58
    }
    
    vrrp_instance VI_1 {
        state BACKUP
        interface ens37
        virtual_router_id 51
        priority 90
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass dd73f9d6
        }
        virtual_ipaddress {
           192.168.136.100/24
        }
        notify_master "/etc/keepalived/notify.sh master"
        notify_backup "/etc/keepalived/notify.sh backup"
        notify_fault "/etc/keepalived/notify.sh fault"
    }
    
    virtual_server 192.168.136.100 80 {
        delay_loop 3
        lb_algo wrr
        lb_kind DR
        protocol TCP
        sorry_server 127.0.0.1 80
        real_server 192.168.136.229 80 {
            weight 2
            HTTP_GET {
                url {
                  path /
                  status_code 200
                }
                connect_timeout 1
                nb_get_retry 3
                delay_before_retry 1
            }
        }
        real_server 192.168.136.129 80 {
            weight 1
            HTTP_GET {
                url {
                  path /
                  status_code 200
                }
                connect_timeout 1
                nb_get_retry 3
                delay_before_retry 1
            }
        }
    }
    
    // configure the sorry server on LB1 and LB2
    echo sorry on LB1 > /var/www/html/index.html     // on LB1
    echo sorry on LB2 > /var/www/html/index.html     // on LB2
    systemctl start httpd
    
    // configure the web service on RS1 and RS2
    echo RS1 homepage > /var/www/html/index.html     // on RS1
    echo RS2 homepage > /var/www/html/index.html     // on RS2
    systemctl start httpd
    
    // script that keeps the RSs from answering ARP requests for the VIP and binds the VIP to an interface
    vim lvs_dr_rs.sh
    #! /bin/bash
    vip='192.168.136.100'
    mask='255.255.255.255'
    dev=lo:1
    rpm -q httpd &> /dev/null || yum -y install httpd &>/dev/null
    service httpd start &> /dev/null && echo "The httpd Server is Ready!"
    
    case $1 in
    start)
        echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore      # reply to ARP only if the target IP is configured on the receiving interface
        echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
        echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce    # advertise only the best local source address in ARP messages
        echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
        ifconfig $dev $vip netmask $mask broadcast $vip up
        echo "The RS Server is Ready!"
        ;;
    stop)
        ifconfig $dev down
        echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
        echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
        echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
        echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
        echo "The RS Server is Canceled!"
        ;;
    *)
        echo "Usage: $(basename $0) start|stop"
        exit 1
        ;;
    esac
    
    chmod +x lvs_dr_rs.sh
    bash lvs_dr_rs.sh start
    
    // start the KeepAlived service on LB1 and LB2 and test
    systemctl start keepalived
    

    Requests to the web service at the VIP (192.168.136.100) work normally

    Stop the web service on RS2; the health check notices and all requests are scheduled to RS1

    Stop the web service on RS1 as well; the health check notices and requests are scheduled to LB1's sorry server

    Stop the KeepAlived service on LB1; LB2 takes over automatically
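
    Scheduling can also be verified from a client and on the active director (an extra check; ipvsadm must be installed on the director):

    // on a client: with wrr weights 2:1, roughly two of every three responses come from RS1
    for i in {1..6}; do curl -s http://192.168.136.100; done
    // on the active director: show the ipvs rules that keepalived generated
    ipvsadm -Ln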

  • Experiment 5: a master/master IPVS cluster

    • Lab environment:
      LB1/VS1: IP: 192.168.136.230, back-end RSs: RS1, RS2
      LB2/VS2: IP: 192.168.136.130, back-end RSs: RS3, RS4
      LB1 VIP: 192.168.136.100
      LB2 VIP: 192.168.136.200
      RS1: IP: 192.168.136.229
      RS2: IP: 192.168.136.129
      RS3: IP: 192.168.136.240
      RS4: IP: 192.168.136.250
      The LBs act as MASTER and BACKUP for each other:
      MASTER: LB1, BACKUP: LB2
      MASTER: LB2, BACKUP: LB1
    // configure keepalived on LB1 and LB2
    vim /etc/keepalived/keepalived.conf
    global_defs {
       notification_email {
         root@localhost
       }
       notification_email_from node1@localhost     // on LB1
       notification_email_from node2@localhost     // on LB2
       smtp_server 127.0.0.1
       smtp_connect_timeout 30
       router_id node1                             // on LB1
       router_id node2                             // on LB2
       vrrp_mcast_group4 224.0.0.58
    }
    vrrp_instance VI_1 {
        state MASTER                               // on LB1
        state BACKUP                               // on LB2
        interface ens37
        virtual_router_id 51
        priority 100                               // on LB1
        priority 90                                // on LB2
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass dd73f9d6
        }
        virtual_ipaddress {
            192.168.136.100/24
        }
        notify_master "/etc/keepalived/notify.sh master"
        notify_backup "/etc/keepalived/notify.sh backup"
        notify_fault "/etc/keepalived/notify.sh fault"
    }
    vrrp_instance VI_2 {
        state BACKUP                               // on LB1
        state MASTER                               // on LB2
        interface ens37
        virtual_router_id 61
        priority 80                                // on LB1
        priority 100                               // on LB2
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass a56c19be
        }
        virtual_ipaddress {
            192.168.136.200/24
        }
        notify_master "/etc/keepalived/notify.sh master"
        notify_backup "/etc/keepalived/notify.sh backup"
        notify_fault "/etc/keepalived/notify.sh fault"
    
    }
    virtual_server 192.168.136.100 80 {
        delay_loop 3
        lb_algo wrr
        lb_kind DR
        protocol TCP
        sorry_server 127.0.0.1 80
        real_server 192.168.136.229 80 {
            weight 2
            HTTP_GET {
                url {
                  path /
                  status_code 200
                }
                connect_timeout 1
                nb_get_retry 3
                delay_before_retry 1
            }
        }
        real_server 192.168.136.129 80 {
            weight 1
            HTTP_GET {
                url {
                  path /
                  status_code 200
                }
                connect_timeout 1
                nb_get_retry 3
                delay_before_retry 1
            }
        }
    }
    virtual_server 192.168.136.200 80 {
        delay_loop 3
        lb_algo wrr
        lb_kind DR
        protocol TCP
        sorry_server 127.0.0.1 80
        real_server 192.168.136.240 80 {
            weight 2
            HTTP_GET {
                url {
                  path /
                  status_code 200
                }
                connect_timeout 1
                nb_get_retry 3
                delay_before_retry 1
            }
        }
        real_server 192.168.136.250 80 {
            weight 1
            HTTP_GET {
                url {
                  path /
                  status_code 200
                }
                connect_timeout 1
                nb_get_retry 3
                delay_before_retry 1
           }
       }
    }
    
    // configure the sorry server on LB1 and LB2
    echo sorry on LB1 > /var/www/html/index.html     // on LB1
    echo sorry on LB2 > /var/www/html/index.html     // on LB2
    systemctl start httpd
    
    // configure the web service on RS1, RS2, RS3 and RS4
    echo RS1 homepage > /var/www/html/index.html     // on RS1
    echo RS2 homepage > /var/www/html/index.html     // on RS2
    echo RS3 homepage > /var/www/html/index.html     // on RS3
    echo RS4 homepage > /var/www/html/index.html     // on RS4
    systemctl start httpd
    
    // script that keeps the RSs from answering ARP requests for the VIP and binds the VIP to an interface
    vim lvs_dr_rs.sh
    #! /bin/bash
    vip='192.168.136.100'                            // on RS1 and RS2
    vip='192.168.136.200'                            // on RS3 and RS4
    mask='255.255.255.255'
    dev=lo:1
    rpm -q httpd &> /dev/null || yum -y install httpd &>/dev/null
    service httpd start &> /dev/null && echo "The httpd Server is Ready!"
    
    case $1 in
    start)
        echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
        echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
        echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
        echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
        ifconfig $dev $vip netmask $mask broadcast $vip up
        echo "The RS Server is Ready!"
        ;;
    stop)
        ifconfig $dev down
        echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
        echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
        echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
        echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
        echo "The RS Server is Canceled!"
        ;;
    *)
        echo "Usage: $(basename $0) start|stop"
        exit 1
        ;;
    esac
    
    chmod +x lvs_dr_rs.sh
    bash lvs_dr_rs.sh start
    
    // start the KeepAlived service on LB1 and LB2 and test
    systemctl start keepalived
    

    Inspect the ipvs scheduling rules with ipvsadm -Ln; they match the KeepAlived configuration

    Requests to the web services at VIP1 and VIP2 (192.168.136.100, 192.168.136.200) work normally

    Stop the web service on RS1; the health check notices and all requests are scheduled to RS2

    Stop the web service on RS2 as well; the health check notices and requests are scheduled to LB1's sorry server

    Stop the KeepAlived service on LB1; LB2 takes over automatically

    Stop the web service on RS3; the health check notices and all requests are scheduled to RS4

    Stop the web service on RS4 as well; the health check notices and requests are scheduled to LB2's sorry server

(6) Keepalived resource monitoring via external scripts

  • keepalived can call external helper scripts to monitor resources and dynamically adjust the priority according to the result

  • vrrp_script: defines a resource-monitoring script whose return value drives the vrrp instances; it is a shared definition placed outside any instance and can be referenced by multiple instances

  • track_script: references a previously defined vrrp_script from inside an instance to monitor a resource

    • Two steps: (1) define the script; (2) reference it
      Format:
    // define the script, outside any instance
    vrrp_script <SCRIPT_NAME> {
        script ""       // the command to run goes inside the quotes
        interval INT    // how often to run it, in seconds
        weight -INT     // amount subtracted from the priority while the script is failing
    }
    // reference the script, inside an instance
    track_script {
        SCRIPT_NAME_1
        SCRIPT_NAME_2
    }
    
  • Experiment 6: highly available master/master nginx reverse proxies

    • Lab environment:
      LB1/VS1: IP: 192.168.136.230, back-end RSs: RS1, RS2
      LB2/VS2: IP: 192.168.136.130, back-end RSs: RS3, RS4
      LB1 VIP: 192.168.136.100
      LB2 VIP: 192.168.136.200
      RS1: IP: 192.168.136.229
      RS2: IP: 192.168.136.129
      RS3: IP: 192.168.136.240
      RS4: IP: 192.168.136.250
      The LBs act as MASTER and BACKUP for each other:
      MASTER: LB1, BACKUP: LB2
      MASTER: LB2, BACKUP: LB1
    // configure KeepAlived on LB1 and LB2
    vim /etc/keepalived/keepalived.conf
    global_defs {
       notification_email {
         root@localhost
       }
       notification_email_from node1@localhost     // on LB1
       notification_email_from node2@localhost     // on LB2
       smtp_server 127.0.0.1
       smtp_connect_timeout 30
       router_id node1                             // on LB1
       router_id node2                             // on LB2
       vrrp_mcast_group4 224.0.0.58
    }
    
    vrrp_script chk_nginx {
            script "killall -0 nginx && exit 0 || exit 1;"
            interval 1
            weight -20
            fall 3
            rise 3
    }
    vrrp_instance VI_1 {
        state MASTER                               // on LB1
        state BACKUP                               // on LB2
        interface ens37
        virtual_router_id 51
        priority 100                               // on LB1
        priority 90                                // on LB2
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass dd73f9d6
        }
        virtual_ipaddress {
            192.168.136.100/24
        }
        notify_master "/etc/keepalived/notify.sh master"
        notify_backup "/etc/keepalived/notify.sh backup"
        notify_fault "/etc/keepalived/notify.sh fault"
    // the track_script reference below appears only in LB1's configuration file
        track_script {
            chk_nginx
        }
    }
    vrrp_instance VI_2 {
        state BACKUP                               // on LB1
        state MASTER                               // on LB2
        interface ens37
        virtual_router_id 61
        priority 90                                // on LB1
        priority 100                               // on LB2
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass a56c19be
        }
        virtual_ipaddress {
            192.168.136.200/24
        }
        notify_master "/etc/keepalived/notify.sh master"
        notify_backup "/etc/keepalived/notify.sh backup"
        notify_fault "/etc/keepalived/notify.sh fault"
    // the track_script reference below appears only in LB2's configuration file
        track_script {
            chk_nginx
        }
    }
    
    // configure the nginx reverse proxy on LB1 and LB2
    vim /etc/nginx/nginx.conf
    http {
        upstream websrvs1 {
            server 192.168.136.229:80 weight=2;
            server 192.168.136.129:80 weight=1;
        }
        upstream websrvs2 {
            server 192.168.136.240:80 weight=2;
            server 192.168.136.250:80 weight=1;
        }
        server {
            listen  192.168.136.100:80;
            location / {
                    proxy_pass http://websrvs1;
            }
        }
        server {
            listen  192.168.136.200:80;
            location / {
                    proxy_pass http://websrvs2;
            }
        }
    }
    nginx -t
    systemctl start nginx
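    // if nginx refuses to start on the node that does not currently hold a VIP
    // (bind error on the non-local listen address), allowing non-local binds may help;
    // this sysctl is an assumption, not part of the original write-up
    sysctl -w net.ipv4.ip_nonlocal_bind=1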
    
    // configure the web service on RS1, RS2, RS3 and RS4
    echo RS1 homepage > /var/www/html/index.html     // on RS1
    echo RS2 homepage > /var/www/html/index.html     // on RS2
    echo RS3 homepage > /var/www/html/index.html     // on RS3
    echo RS4 homepage > /var/www/html/index.html     // on RS4
    systemctl start httpd
    
    // start the KeepAlived service on LB1 and LB2 and test
    systemctl start keepalived
    

    Requests to the web services at 192.168.136.100 and 192.168.136.200 are scheduled exactly as configured

    Stop httpd on RS2; all requests are scheduled to RS1

    Stop httpd on RS3; all requests are scheduled to RS4

    Stop the nginx reverse proxy on LB2 and watch the multicast traffic with tcpdump -i ens37 -nn host 224.0.0.58. The capture shows, in sequence:
    (1) the multicast state before nginx was stopped
    (2) after nginx is stopped, LB2's priority for vrid 61 drops by 20 from 100 to 80, while LB1's priority for vrid 61 is 90
    (3) since LB1 now has the higher priority, it takes over VIP2

    Stop the nginx reverse proxy on LB1 and watch the multicast traffic with tcpdump -i ens37 -nn host 224.0.0.58. The capture shows, in sequence:
    (1) the multicast state before nginx was stopped
    (2) after nginx is stopped, LB1's priority for vrid 51 drops by 20 from 100 to 80, while LB2's priority for vrid 51 is 90
    (3) since LB2 now has the higher priority, it takes over VIP1

    With both nginx reverse proxies stopped, requests to the web services at 192.168.136.100 and 192.168.136.200 all fail

    Start the nginx reverse proxy on LB2 again and watch the multicast traffic with tcpdump -i ens37 -nn host 224.0.0.58. The capture shows, in sequence:
    (1) the multicast state before nginx was started
    (2) after nginx starts, LB2's priority for vrid 61 rises by 20 back to 100, while LB1's priority for vrid 61 is 90
    (3) since LB2 now has the higher priority, it takes VIP2 back

    Both VIP1 and VIP2 are now proxied by nginx on LB2, and the web services at 192.168.136.100 and 192.168.136.200 are available again

(7) Keepalived synchronization groups

  • In the LVS NAT model the VIP and the DIP must fail over together, which requires a synchronization group

  • Format:

    vrrp_sync_group VG_1 {
      group {
          VI_1 # name of a vrrp_instance (below)
          VI_2 # one entry for each movable IP
      }
    }
    vrrp_instance VI_1 {
      eth0    # interface and address carrying the VIP
      vip
    }
    vrrp_instance VI_2 {
      eth1    # interface and address carrying the DIP
      dip
    }
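
    A more concrete sketch for the NAT case (interfaces, VRIDs and addresses are assumptions, not taken from the experiments above):

    vrrp_sync_group VG_1 {
        group {
            VI_VIP
            VI_DIP
        }
    }
    vrrp_instance VI_VIP {           // client-facing VIP
        state MASTER
        interface eth0
        virtual_router_id 71
        priority 100
        virtual_ipaddress {
            172.16.0.100/16
        }
    }
    vrrp_instance VI_DIP {           // director IP on the internal network, moved together with the VIP
        state MASTER
        interface eth1
        virtual_router_id 72
        priority 100
        virtual_ipaddress {
            192.168.10.100/24
        }
    }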
    
