Deploying a Cassandra Cluster on Docker Swarm

Author: 野生DBNull | Published 2022-12-06 12:23

    Introducing Docker Swarm

    The official docs are the best teacher: https://docs.docker.com/engine/swarm/

    Introducing Cassandra

    The official docs are the best teacher: https://cassandra.apache.org/_/index.html

    Deploying a Cassandra cluster on Docker Swarm

    Without a stable, reliable distributed file system this setup is quite constrained, and running Cassandra on Docker Swarm is of limited value, unless, like us, you have to deliver systems to customers who run no operations team of their own and you need Swarm's multi-server orchestration.

    NOTE!!! Please read the description of the major pitfalls below carefully before proceeding!!!

    There are plenty of solutions online; I have read nearly all of them, and not one is fit for production. The main problem is the choice of network. After falling into these traps repeatedly, we settled on the host network. If you object to using the host network, feel free to try the other tutorials first; come back to the material below once the strange problems start appearing.

    Pitfalls you will hit without the host network
    • An overlay network cannot pin IP addresses, but CASSANDRA_SEEDS needs an IP (you can write a hostname, but the startup script resolves it to an IP anyway), so after a restart the whole cluster becomes unusable.
    • Clients must resolve the Cassandra hostname to an IP before connecting; when that IP is not fixed, connections fail, and unless you watch for hostname changes the client-side wrapper code becomes considerably more complex.
    • When a Cassandra task is rescheduled onto another node, the internal CASSANDRA_SEEDS become unreachable and cluster data synchronization breaks in various ways; in the worst case you end up with three independent Cassandra instances.

    Deploying the Cassandra cluster

    Label the Swarm nodes
    docker node ls
    docker node update --label-add cassandra1=true node1 # pin cassandra1 to this node
    docker node update --label-add cassandra2=true node2 # pin cassandra2 to this node
    docker node update --label-add cassandra3=true node3 # pin cassandra3 to this node
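    The three label commands above can also be generated with a small loop. The node names node1..node3 are hypothetical; substitute whatever `docker node ls` shows. This sketch only prints the commands (a dry run) so you can review them before executing:

```shell
#!/bin/sh
# Dry run: print one label command per (hypothetical) node name.
i=1
for node in node1 node2 node3; do
    cmd="docker node update --label-add cassandra${i}=true ${node}"
    echo "${cmd}"
    i=$((i + 1))
done
```

    Pipe the output into `sh` once the hostnames look right.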
    
    Create the bind-mount directory on all three machines

    Skipping this step will break the deployment: the local volumes defined below bind to /data/cassandra/_data, which must already exist.

    sh ./gendir.sh
    
    #! /bin/bash
    # Create the data directory required by the bind mounts in the compose file.
    # mkdir -p is idempotent, so the script is safe to re-run.
    mkdir -p /data/cassandra/_data
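    gendir.sh must run on every node that can host a Cassandra task. A sketch of pushing it out over ssh; the hostnames node1..node3 and the root user are assumptions, adjust them to your environment. The commands are echoed rather than executed so this is a dry run:

```shell
#!/bin/sh
# Dry run: print the ssh invocation for each (hypothetical) node.
for host in node1 node2 node3; do
    run="ssh root@${host} 'mkdir -p /data/cassandra/_data'"
    echo "${run}"
done
```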
    
    Write the orchestration file

    Name the file docker-compose.template.yml, because later we will use a shell script to inject the hosts' real IPs into it. You could do this with a Makefile instead, but I prefer a simple shell script for this step.

    cat docker-compose.template.yml
    
    version: '3.8'
    services:
      # Cassandra
      cassandra1:
        image: cassandra:4.0.0
        hostname: cassandra1
        cap_add:
          - SYS_NICE
        environment:
          - CASSANDRA_CLUSTER_NAME=cassandra
          - CASSANDRA_SEEDS=${CASSANDRA_SEEDS}
          - JVM_OPTS=-Xmx6144m -Xms2048m # cap the JVM heap
        networks:
          host_network:
        volumes:
          - cassandra1_data:/var/lib/cassandra
        deploy:
          mode: replicated
          replicas: 1
          resources:
            limits:
              memory: 6G
            reservations:
              memory: 2G
          placement:
            max_replicas_per_node: 1
            constraints:
              - node.labels.cassandra1==true
          restart_policy:
            condition: on-failure
            delay: 5s
            # max_attempts: 3
            window: 120s
      cassandra2:
        image: cassandra:4.0.0
        hostname: cassandra2
        cap_add:
          - SYS_NICE
        environment:
          - CASSANDRA_CLUSTER_NAME=cassandra
          - CASSANDRA_SEEDS=${CASSANDRA_SEEDS}
          - JVM_OPTS=-Xmx6144m -Xms2048m # cap the JVM heap
        networks:
          host_network:
        volumes:
          - cassandra2_data:/var/lib/cassandra
        deploy:
          mode: replicated
          replicas: 1
          resources:
            limits:
              memory: 6G
            reservations:
              memory: 2G
          placement:
            max_replicas_per_node: 1
            constraints:
              - node.labels.cassandra2==true
          restart_policy:
            condition: on-failure
            delay: 5s
            # max_attempts: 3
            window: 120s
      cassandra3:
        image: cassandra:4.0.0
        hostname: cassandra3
        cap_add:
          - SYS_NICE
        environment:
          - CASSANDRA_CLUSTER_NAME=cassandra
          - CASSANDRA_SEEDS=${CASSANDRA_SEEDS}
          - JVM_OPTS=-Xmx6144m -Xms2048m # cap the JVM heap
        networks:
          host_network:
        volumes:
          - cassandra3_data:/var/lib/cassandra
        deploy:
          mode: replicated
          replicas: 1
          resources:
            limits:
              memory: 6G
            reservations:
              memory: 2G
          placement:
            max_replicas_per_node: 1
            constraints:
              - node.labels.cassandra3==true
          restart_policy:
            condition: on-failure
            delay: 5s
            # max_attempts: 3
            window: 120s
    
    networks:
      host_network:
        name: host
        external: true
        attachable: true
    
    volumes:
      cassandra1_data:
        driver: local
        driver_opts:
          type: none
          o: bind
          device: /data/cassandra/_data
      cassandra2_data:
        driver: local
        driver_opts:
          type: none
          o: bind
          device: /data/cassandra/_data
      cassandra3_data:
        driver: local
        driver_opts:
          type: none
          o: bind
          device: /data/cassandra/_data
    
    Write the control script
    cat ./app.sh
    
    #! /bin/bash
    
    p_name="cassandra"
    script=${1:-"status"}
    
    get_label_addr() {
        # Collect the address of every node whose labels match ${1} and join them with commas.
        nodes=$(docker node ls -q | xargs docker node inspect -f '{{ .Description.Hostname }}:{{ .Status.Addr}}:{{ range $k, $v := .Spec.Labels }}{{ $k }}={{ $v }} {{end}}' | grep cassandra | grep ${1} | awk -F ":" '{print $2}')
        ip_addr=''
        for node in ${nodes[@]}; do
            # tmp=$(ping ${node} -c 1 | sed '1{s/[^(]*(//;s/).*//;q}')
            tmp=${node}
            if [ ! -z ${tmp} ]; then
                if [ ! -z ${ip_addr} ]; then
                    ip_addr="${ip_addr},${tmp}"
                else
                    ip_addr=${tmp}
                fi
            fi
        done
        echo ${ip_addr}
        return $?
    }
    
    create() {
        cp ./docker-compose.template.yml ./docker-compose.yml
        cassandra_ip=$(get_label_addr cassandra)
        echo "$(sed "s/\${CASSANDRA_SEEDS}/${cassandra_ip}/" ./docker-compose.yml)" >./docker-compose.yml
    }
    
    start() {
        docker stack deploy -c ./docker-compose.yml --with-registry-auth ${p_name}
    }
    
    stop() {
        docker stack rm ${p_name}
    }
    
    status() {
        docker stack services ${p_name}
    }
    
    ps() {
        docker stack ps --no-trunc ${p_name}
    }
    
    update() {
        docker stack deploy --prune -c ./docker-compose.yml --with-registry-auth ${p_name}
    }
    
    main() {
        if [ ${script} == "start" ]; then
            start
        elif [ ${script} == "stop" ]; then
            stop
        elif [ ${script} == "update" ]; then
            update
        elif [ ${script} == "status" ]; then
            status
        elif [ ${script} == "ps" ]; then
            ps
        elif [ ${script} == "create" ]; then
            create
        else
            echo 'Instruction does not exist'
        fi
    }
    
    main
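
    What create does, in isolation: sed replaces the literal ${CASSANDRA_SEEDS} placeholder in the template with the comma-joined node addresses. A self-contained demonstration of that substitution, with made-up IPs:

```shell
#!/bin/sh
# Simulate the substitution create() performs (IPs are made up).
seeds="192.168.1.11,192.168.1.12,192.168.1.13"
line='      - CASSANDRA_SEEDS=${CASSANDRA_SEEDS}'
result=$(echo "${line}" | sed "s/\${CASSANDRA_SEEDS}/${seeds}/")
echo "${result}"
# → - CASSANDRA_SEEDS=192.168.1.11,192.168.1.12,192.168.1.13
```

    Every Cassandra container receives the same seed list, which is what lets the three nodes gossip and form one ring.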
    
    
    Open the firewall

    Because we use the host network, firewall rules are your own responsibility.
    Create the firewall script open-firewall.sh

    cat ./open-firewall.sh
    
    #! /bin/bash
    
    script=${1:-"status"}
    directional=${2}
    
    openports=("7000" "9042")
    
    open_firewall_ip() {
        directionalArr=($(echo ${directional} | tr ',' ' '))
        for port in ${openports[@]}; do
            for d in ${directionalArr[@]}; do
                echo "Open firewall -> [${d}:${port}]"
            firewall-cmd --permanent --add-rich-rule="rule family=\"ipv4\" source address=\"${d}\" port protocol=\"tcp\" port=\"${port}\" accept"
            done
        done
        firewall-cmd --reload
    }
    
    close_firewall_ip() {
        directionalArr=($(echo ${directional} | tr ',' ' '))
        for port in ${openports[@]}; do
            for d in ${directionalArr[@]}; do
                echo "Close firewall -> [${d}:${port}]"
            firewall-cmd --permanent --remove-rich-rule="rule family=\"ipv4\" source address=\"${d}\" port protocol=\"tcp\" port=\"${port}\" accept"
            done
        done
        firewall-cmd --reload
    }
    
    open_firewall() {
        for port in ${openports[@]}; do
            echo "Open firewall -> [0.0.0.0:${port}]"
            firewall-cmd --zone=public --add-port=${port}/tcp --permanent
        done
        firewall-cmd --reload
    }
    
    close_firewall() {
        for port in ${openports[@]}; do
            echo "Close firewall -> [0.0.0.0:${port}]"
            firewall-cmd --zone=public --remove-port=${port}/tcp --permanent
        done
        firewall-cmd --reload
    }
    
    main() {
        if [ ${script} == "close" ]; then
            if [ ! -z ${directional} ]; then
                close_firewall_ip
            else
                close_firewall
            fi
        elif [ ${script} == "open" ]; then
            if [ ! -z ${directional} ]; then
                open_firewall_ip
            else
                open_firewall
            fi
        elif [ ${script} == "status" ]; then
            firewall-cmd --list-rich-rules
            firewall-cmd --list-ports
        else
            echo 'Instruction does not exist'
        fi
    }
    
    main
    
    

    Run the commands

    sh ./open-firewall.sh # list the currently open ports
    
    sh ./open-firewall.sh open # open the ports to everyone
    sh ./open-firewall.sh open 192.168.1.11 # open the ports to one IP
    sh ./open-firewall.sh open 192.168.1.11,192.168.1.12 # open the ports to several IPs
    
    sh ./open-firewall.sh close # close the ports
    sh ./open-firewall.sh close 192.168.1.142 # close the ports for one IP
    sh ./open-firewall.sh close 192.168.1.142,0.0.0.0 # close the ports for several IPs
    
    Start Cassandra
    sh ./app.sh create # generate docker-compose.yml from the template
    sh ./app.sh start # start the stack
    sh ./app.sh stop # stop the stack
    sh ./app.sh status # show service status
    sh ./app.sh ps # show task details
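
    After start, it is worth confirming that the three nodes actually formed one ring rather than three independent instances (the worst-case pitfall above). A hedged sketch: cassandra_cassandra1 is the container-name prefix Swarm derives from the stack and service names above. The command is only printed here, since it has to run on whichever node hosts that task:

```shell
#!/bin/sh
# Print the verification command (run it on the node hosting cassandra1).
svc="cassandra_cassandra1"
check="docker exec \$(docker ps -q -f name=${svc}) nodetool status"
echo "${check}"
```

    A healthy cluster shows all three nodes as UN (Up/Normal) in the nodetool status output.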
    

    Original article: https://www.haomeiwen.com/subject/iifmnktx.html