Ceph RGW: High-Availability Load-Balanced Deployment

Author: 圣地亚哥_SVIP | Published 2019-07-19 18:45

    RGW multi-gateway high-availability deployment


    The previous article covered building the Ceph cluster; this one builds a highly available, load-balanced RGW layer on top of that cluster.
    RGW involves realms, zonegroups, and zones:

    • zone: corresponds to an independent backend Ceph cluster and contains one or more RGW instances. Within the same zonegroup, zones run active-active and synchronize data with each other, providing disaster recovery;
    • zonegroup: made up of one or more zones; within a zonegroup, all objects share a single namespace, so each object has a unique ID;
    • realm: a globally unique namespace made up of one or more zonegroups.

    Configuring pools:
    Data layout:

    • .rgw.root: stores the cluster's namespace (realm/zonegroup/zone) information
    • {zone}.rgw.control: watch/notify objects; RGWCache, for example, uses the objects here for synchronization
    • {zone}.rgw.meta: stores user keys, email addresses, accounts, and similar metadata
    • {zone}.rgw.log: operation log information
    • {zone}.rgw.buckets.index: object index information; for every bucket, RADOS creates one object named .dir.{bucket_id}, whose content is a key-value structure where the key is the object name and the value is that object's index information
    • {zone}.rgw.buckets.data: object data

    Note: buckets.index is usually placed on a high-performance pool.
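
    As a minimal sketch (assuming the OSDs expose an ssd device class, which this article does not show), the index pool created below could be pinned to faster media with a dedicated CRUSH rule:

    #ceph osd crush rule create-replicated ssd-rule default host ssd
    #ceph osd pool set upc.rgw.buckets.index crush_rule ssd-rule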

    Now deploy the RGW gateways:
    Plan the realm, zonegroup, and zone: (sh, pd, upc)
    

    [Figure: RGW topology (realm sh, zonegroup pd, zone upc)]

    Create the pools:

    #ceph daemon mon.luminous1 config set  mon_max_pg_per_osd 6000
    #ceph osd pool create .rgw.root 32 32
    #ceph osd pool create upc.rgw.control 32 32
    #ceph osd pool create upc.rgw.meta 32 32
    #ceph osd pool create upc.rgw.log 32 32
    #ceph osd pool create upc.rgw.buckets.index 32 32
    #ceph osd pool create upc.rgw.buckets.data 32 32
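
    A quick sanity check that all pools exist, with their PG counts:

    #ceph osd lspools
    #ceph osd pool ls detail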
    

    Enable the rgw application on the pools:

    #ceph osd pool application enable .rgw.root rgw
    #ceph osd pool application enable upc.rgw.control rgw
    #ceph osd pool application enable upc.rgw.meta rgw
    #ceph osd pool application enable upc.rgw.log rgw
    #ceph osd pool application enable upc.rgw.buckets.index rgw
    #ceph osd pool application enable upc.rgw.buckets.data rgw
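
    Each pool should now report rgw as its enabled application:

    #ceph osd pool application get upc.rgw.buckets.data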
    

    Set up the realm, zonegroup, and zone:

    #radosgw-admin realm create --rgw-realm=sh --default
    #radosgw-admin zonegroup create --rgw-zonegroup=pd --rgw-realm=sh --master --default
    #radosgw-admin zone create --rgw-zonegroup=pd --rgw-zone=upc --master --default
    #radosgw-admin zone default --rgw-zonegroup=pd --rgw-zone=upc --rgw-realm=sh
    #radosgw-admin period update --commit
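
    The resulting layout can be verified before moving on:

    #radosgw-admin realm list
    #radosgw-admin zonegroup list
    #radosgw-admin zone list
    #radosgw-admin period get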
    

    Firewall configuration:

    RGW listens on port 80, HAProxy listens on port 443, and the HAProxy stats interface uses port 8000:

    #ansible ceph -m command -a "iptables -I INPUT -p tcp --dport 80 -j ACCEPT" -l luminous1,luminous2,luminous3
    #ansible ceph -m command -a "iptables -I INPUT -p tcp --dport 443 -j ACCEPT" -l luminous1,luminous2,luminous3
    #ansible ceph -m command -a "iptables -I INPUT -p tcp --dport 8000 -j ACCEPT" -l luminous1,luminous2,luminous3
    #ansible ceph -m command -a "service iptables save" -l luminous1,luminous2,luminous3
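
    Optionally, confirm that the rules are in place on each node:

    #ansible ceph -m command -a "iptables -nL INPUT" -l luminous1,luminous2,luminous3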
    

    Deploy RGW instances on nodes luminous1, luminous2, and luminous3:

    Create the gateway users and keyring files:

    Generate the keyring file
    #ansible ceph -m command -a "ceph-authtool --create-keyring /etc/ceph/ceph.client.rgw.keyring" -l luminous1
    Generate the key and user
    #ansible ceph -m command -a "ceph-authtool /etc/ceph/ceph.client.rgw.keyring -n client.rgw.luminous1 --gen-key" -l luminous1
    Bind the user to the key and add its capabilities
    #ansible ceph -m command -a "ceph-authtool -n client.rgw.luminous1 --cap osd 'allow rwx' --cap mon 'allow rwx' /etc/ceph/ceph.client.rgw.keyring" -l luminous1
    Add the user, its key, and its capabilities to the cluster
    #ansible ceph -m command -a "ceph auth add client.rgw.luminous1 -i /etc/ceph/ceph.client.rgw.keyring" -l luminous1
    
    #ansible ceph -m command -a "ceph-authtool --create-keyring /etc/ceph/ceph.client.rgw.keyring" -l luminous2
    #ansible ceph -m command -a "ceph-authtool /etc/ceph/ceph.client.rgw.keyring -n client.rgw.luminous2 --gen-key" -l luminous2
    #ansible ceph -m command -a "ceph-authtool -n client.rgw.luminous2 --cap osd 'allow rwx' --cap mon 'allow rwx' /etc/ceph/ceph.client.rgw.keyring" -l luminous2
    #ansible ceph -m command -a "ceph auth add client.rgw.luminous2 -i /etc/ceph/ceph.client.rgw.keyring" -l luminous2
    
    #ansible ceph -m command -a "ceph-authtool --create-keyring /etc/ceph/ceph.client.rgw.keyring" -l luminous3
    #ansible ceph -m command -a "ceph-authtool /etc/ceph/ceph.client.rgw.keyring -n client.rgw.luminous3 --gen-key" -l luminous3
    #ansible ceph -m command -a "ceph-authtool -n client.rgw.luminous3 --cap osd 'allow rwx' --cap mon 'allow rwx' /etc/ceph/ceph.client.rgw.keyring" -l luminous3
    #ansible ceph -m command -a "ceph auth add client.rgw.luminous3 -i /etc/ceph/ceph.client.rgw.keyring" -l luminous3
    
    #ansible ceph -m file -a "path=/etc/ceph/ceph.client.rgw.keyring owner=ceph group=ceph mode=0600"
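
    The registered caps can be checked against the cluster afterwards:

    #ceph auth get client.rgw.luminous1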
    

    Edit the configuration file as follows:

    [global]
    fsid = e87cd2a8-3a98-4c60-b2f2-cb4f88c845a0
    public_network = 192.168.30.110/24
    cluster_network = 192.168.130.142/24
    mon_initial_members = luminous1, luminous2, luminous3
    mon_host = 192.168.30.110,192.168.30.103,192.168.30.202
    auth_cluster_required = cephx
    auth_service_required = cephx
    auth_client_required = cephx
    mon_allow_pool_delete = true
    
    [client.rgw.luminous1]
    host=luminous1
    keyring = /etc/ceph/ceph.client.rgw.keyring
    rgw_zone=upc
    log file = /var/log/ceph/client.rgw.uprgw.log
    rgw frontends = civetweb port=80 num_threads=500
    
    [client.rgw.luminous2]
    host=luminous2
    keyring = /etc/ceph/ceph.client.rgw.keyring
    rgw_zone=upc
    log file = /var/log/ceph/client.rgw.uprgw.log
    rgw frontends = civetweb port=80 num_threads=500
    
    [client.rgw.luminous3]
    host=luminous3
    keyring = /etc/ceph/ceph.client.rgw.keyring
    rgw_zone=upc
    log file = /var/log/ceph/client.rgw.uprgw.log
    rgw frontends = civetweb port=80 num_threads=500
    

    Push the configuration file:

    #ceph-deploy --overwrite-conf config push luminous1 luminous2 luminous3
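
    A quick check that the pushed file actually reached every node:

    #ansible ceph -m command -a "grep -A5 client.rgw /etc/ceph/ceph.conf" -l luminous1,luminous2,luminous3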
    

    Start the services:

    #ansible ceph -m command -a "systemctl start ceph-radosgw@rgw.luminous1" -l luminous1
    #ansible ceph -m command -a "systemctl start ceph-radosgw@rgw.luminous2" -l luminous2
    #ansible ceph -m command -a "systemctl start ceph-radosgw@rgw.luminous3" -l luminous3
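
    Each gateway should now answer anonymous S3 requests with a ListAllMyBucketsResult XML document, and the cluster status should show the rgw daemons:

    #curl http://192.168.30.110
    #curl http://192.168.30.103
    #curl http://192.168.30.202
    #ceph -s | grep rgw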
    

    Testing:

    Create an S3 user:
    #radosgw-admin user create --uid=ups3 --display-name="S3 User"
        {
            "user_id": "ups3",
            "display_name": "S3 User",
            "email": "",
            "suspended": 0,
            "max_buckets": 1000,
            "auid": 0,
            "subusers": [],
            "keys": [
                {
                    "user": "ups3",
                    "access_key": "035WRIE70VNP1OICPPGL",
                    "secret_key": "F35Cd4MmsdAtolb8jQsVLtc1XjEfR7Xzjel7gP8l"
                }
            ],
            "swift_keys": [],
            "caps": [],
            "op_mask": "read, write, delete",
            "default_placement": "",
            "placement_tags": [],
            "bucket_quota": {
                "enabled": false,
                "check_on_raw": false,
                "max_size": -1,
                "max_size_kb": 0,
                "max_objects": -1
            },
            "user_quota": {
                "enabled": false,
                "check_on_raw": false,
                "max_size": -1,
                "max_size_kb": 0,
                "max_objects": -1
            },
            "temp_url_keys": [],
            "type": "rgw"
        }
    
    Install the S3 client:
    #yum install s3cmd -y
    #s3cmd --configure
    Set:
        access-key =
        secret-key = 
        
        host_base = 192.168.30.110
        host_bucket = 192.168.30.110/%(bucket)
    # s3cmd mb s3://first
    Bucket 's3://first/' created
    
    # s3cmd ls
    2019-07-18 06:52  s3://first
    
    # s3cmd put ceph.conf s3://first
    upload: 'ceph.conf' -> 's3://first/ceph.conf'  [1 of 1]
     926 of 926   100% in    0s    26.59 kB/s  done
     
    # s3cmd del s3://first/ceph.conf
    delete: 's3://first/ceph.conf'
    
    # s3cmd rb s3://first
    Bucket 's3://first/' removed
    

    Test the other two gateways the same way, as sketched below.
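
    For example, reusing the same credentials, s3cmd can be pointed at another gateway directly on the command line:

    # s3cmd --host=192.168.30.103 --host-bucket='192.168.30.103/%(bucket)' ls
    # s3cmd --host=192.168.30.202 --host-bucket='192.168.30.202/%(bucket)' ls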

    This completes the RGW gateway deployment.

    Deploying HAProxy and Keepalived

    Principle: HAProxy load-balances across the multiple RGW instances, while Keepalived keeps the VIP highly available and ensures the VIP and HAProxy always run on the same node.

    VIP: 192.168.30.10, Port: 443

    Install HAProxy and Keepalived:

    #ansible ceph -m yum -a "name=haproxy,keepalived state=installed" -l luminous1,luminous2,luminous3
    

    Kernel parameter settings, mainly IP forwarding and binding to non-local IPs:

    #ansible ceph -m shell -a "echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf" -l luminous1,luminous2,luminous3
    #ansible ceph -m shell -a "echo 'net.ipv4.ip_nonlocal_bind = 1' >> /etc/sysctl.conf" -l luminous1,luminous2,luminous3
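
    The new values still have to be loaded into the running kernel:

    #ansible ceph -m command -a "sysctl -p" -l luminous1,luminous2,luminous3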
    

    Configure HAProxy:

    #stats/admin endpoint
    listen admin_stats
        stats   enable
        bind    *:8000
        mode    http 
        option  httplog
        log     global
        maxconn 10
        stats   refresh 30s
        stats   uri /admin
        stats   realm haproxy
        stats   auth admin:admin
        stats   hide-version 
        stats   admin if TRUE
    
    frontend  main *:443
        default_backend  rgw
    
    backend rgw
        balance     roundrobin
        server  rgw1 192.168.30.110:80 check inter 2000 rise 2 fall 5
        server  rgw2 192.168.30.103:80 check inter 2000 rise 2 fall 5 
        server  rgw3 192.168.30.202:80 check inter 2000 rise 2 fall 5 
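
    Only the proxy sections are shown above; assuming the package's stock global and defaults sections are kept, the full file can be validated with:

    #haproxy -c -f /etc/haproxy/haproxy.cfg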
    

    Configure Keepalived:

    • Configure the HAProxy start/stop script;
    • Configure the VIP and the interface it binds to;
    • Configure manual master/backup switchover;

    Firewall:

    Keepalived uses the VRRP protocol, so VRRP traffic must be allowed through the firewall:
    #ansible ceph -m command -a "iptables -I INPUT -p vrrp -j ACCEPT" -l luminous1,luminous2,luminous3
    #ansible ceph -m command -a "service iptables save" -l luminous1,luminous2,luminous3
    

    Keepalived configuration:

    Master node, keepalived.conf:

    ! Configuration File for keepalived
    
    global_defs {
    #email-related settings are commented out
    #   notification_email {
    #     acassen@firewall.loc
    #     failover@firewall.loc
    #     sysadmin@firewall.loc
    #   }
    #   notification_email_from Alexandre.Cassen@firewall.loc
    #   smtp_server 192.168.200.1
    #   smtp_connect_timeout 30
       router_id LVS_DEVEL
       vrrp_skip_check_adv_addr
       vrrp_strict
       vrrp_garp_interval 0
       vrrp_gna_interval 0
    }
    #health-check script, used to manually switch the master
    vrrp_script check_keepalived {
       script "/usr/bin/bash -c '[[ -f /etc/keepalived/down ]] && exit 1 || exit 0'"
       interval 1
       weight -20
    }
    #VRRP instance configuration
    vrrp_instance VI_1 {
        state MASTER
        #interface the VIP binds to
        interface eth0
        #keepalived cluster (virtual router) id
        virtual_router_id 51
        #master node priority, used for VIP preemption
        priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass 1111
        }
        #VIP
        virtual_ipaddress {
            192.168.30.10
        }
        #attach the health-check script
        track_script {
          check_keepalived
        }
        #tie HAProxy to the VIP so that the VIP and HAProxy only run on one and the same node.
        notify_master "/etc/keepalived/haproxy.sh start"
        notify_backup "/etc/keepalived/haproxy.sh stop"
    }
    

    Backup node, keepalived.conf:

    ! Configuration File for keepalived
    
    global_defs {
    #   notification_email {
    #     acassen@firewall.loc
    #     failover@firewall.loc
    #     sysadmin@firewall.loc
    #   }
    #   notification_email_from Alexandre.Cassen@firewall.loc
    #   smtp_server 192.168.200.1
    #   smtp_connect_timeout 30
       router_id LVS_DEVEL
       vrrp_skip_check_adv_addr
       vrrp_strict
       vrrp_garp_interval 0
       vrrp_gna_interval 0
    }
    
    vrrp_instance VI_1 {
        #node role
        state BACKUP
        #interface the VIP binds to
        interface eth1
        #keepalived cluster (virtual router) id
        virtual_router_id 51
        #backup node priority
        priority 90
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass 1111
        }
        virtual_ipaddress {
            192.168.30.10
        }
        notify_master "/etc/keepalived/haproxy.sh start"
        notify_backup "/etc/keepalived/haproxy.sh stop"
    }
    

    haproxy.sh:

    #!/bin/bash
    case "$1" in
      start)
        systemctl start haproxy
      ;;
      stop)
        systemctl stop haproxy
      ;;
      restart)
        systemctl restart haproxy
      ;;
      *)
        echo "Usage: $0 start|stop|restart"
      ;;
    esac
    

    luminous1 is the master and luminous2/luminous3 are backups; copy the config files and start the keepalived service:

    #ansible ceph -m copy -a "src=/root/ceph/ansible/keepalived_master.conf dest=/etc/keepalived/keepalived.conf mode=0644" -l luminous1
    #ansible ceph -m copy -a "src=/root/ceph/ansible/keepalived_backup.conf dest=/etc/keepalived/keepalived.conf mode=0644" -l luminous2,luminous3
    #ansible ceph -m copy -a "src=/root/ceph/ansible/haproxy.sh dest=/etc/keepalived/ mode=0755" -l luminous1,luminous2,luminous3
    #ansible ceph -m command -a "systemctl start keepalived" -l luminous1,luminous2,luminous3
    #ansible ceph -m command -a "systemctl enable keepalived" -l luminous1,luminous2,luminous3
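
    At this point keepalived should be active on all three nodes:

    #ansible ceph -m command -a "systemctl is-active keepalived" -l luminous1,luminous2,luminous3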
    

    Test the cluster:


    Master node:

    #ip a
    3: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
        link/ether 52:54:00:e6:00:7c brd ff:ff:ff:ff:ff:ff
        inet 192.168.30.110/24 brd 192.168.30.255 scope global noprefixroute dynamic eth0
           valid_lft 2376sec preferred_lft 2376sec
        inet 192.168.30.10/32 scope global eth0
           valid_lft forever preferred_lft forever
        inet6 fe80::e88c:c9cf:9bc0:5b9f/64 scope link noprefixroute 
           valid_lft forever preferred_lft forever
    

    Manual switchover:

    [root@luminous1 ansible]#touch /etc/keepalived/down
    [root@luminous1 ansible]#ip a
    3: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
        link/ether 52:54:00:e6:00:7c brd ff:ff:ff:ff:ff:ff
        inet 192.168.30.110/24 brd 192.168.30.255 scope global noprefixroute dynamic eth0
           valid_lft 2313sec preferred_lft 2313sec
        inet6 fe80::e88c:c9cf:9bc0:5b9f/64 scope link noprefixroute 
           valid_lft forever preferred_lft forever
    [root@luminous1 ansible]# systemctl status haproxy
    ● haproxy.service - HAProxy Load Balancer
       Loaded: loaded (/usr/lib/systemd/system/haproxy.service; disabled; vendor preset: disabled)
       Active: inactive (dead)
    
    The VIP has moved away from this node and HAProxy has been stopped here
    

    On node luminous3: the VIP has failed over here and HAProxy has been started on this node:

    [root@luminous3 ~]# ip a
    3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
        link/ether 52:54:00:71:dc:59 brd ff:ff:ff:ff:ff:ff
        inet 192.168.30.202/24 brd 192.168.30.255 scope global noprefixroute dynamic eth1
           valid_lft 2409sec preferred_lft 2409sec
        inet 192.168.30.10/32 scope global eth1
           valid_lft forever preferred_lft forever
        inet6 fe80::f7b2:702e:19da:f6c0/64 scope link noprefixroute 
           valid_lft forever preferred_lft forever
    [root@luminous3 ~]# systemctl status haproxy
    ● haproxy.service - HAProxy Load Balancer
       Loaded: loaded (/usr/lib/systemd/system/haproxy.service; disabled; vendor preset: disabled)
       Active: active (running) since Fri 2019-07-19 15:41:22 CST; 5min ago
     Main PID: 18576 (haproxy-systemd)
       CGroup: /system.slice/haproxy.service
               ├─18576 /usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
               ├─18577 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
               └─18578 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
    

    The VIP fails back to node luminous1:

    [root@luminous1 ansible]# rm -rf /etc/keepalived/down 
    [root@luminous1 ansible]# ip a
    3: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
        link/ether 52:54:00:e6:00:7c brd ff:ff:ff:ff:ff:ff
        inet 192.168.30.110/24 brd 192.168.30.255 scope global noprefixroute dynamic eth0
           valid_lft 3282sec preferred_lft 3282sec
        inet 192.168.30.10/32 scope global eth0
           valid_lft forever preferred_lft forever
        inet6 fe80::e88c:c9cf:9bc0:5b9f/64 scope link noprefixroute 
           valid_lft forever preferred_lft forever
    

    Test S3 through the VIP; change the following entries in /root/.s3cfg to point at the VIP:

    host_base = 192.168.30.10:443
    host_bucket = 192.168.30.10:443/%(bucket)
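
    Because the HAProxy frontend above listens on 443 without TLS, .s3cfg presumably also needs use_https = False (the article does not show this setting). With that assumption, a quick end-to-end check through the VIP:

    # s3cmd mb s3://ha-test
    # s3cmd ls
    # s3cmd rb s3://ha-test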
    

    All s3cmd commands work as expected.

    This completes the Ceph RGW high-availability load-balancing deployment.
