4. Implementing a MySQL One-Master-Multi-Slave Cluster with StatefulSet

Author: 哆啦A梦_ca52 | Published 2019-12-04 22:53
    Edit the build address in the build script:
    root@master:/opt/k8s-data/dockerfile/linux37/redis# vim build-command.sh 
    #!/bin/bash
    TAG=$1
    docker build -t harbor.wyh.net/linux37/redis:${TAG} .
    sleep 3
    docker push  harbor.wyh.net/linux37/redis:${TAG}
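As a side note, the script above will happily run with an empty tag and will push even if the build failed. A slightly hardened sketch (not the original script; same image path assumed) that aborts on errors and supports a dry run:

```shell
#!/bin/bash
# Hardened variant of build-command.sh: abort on any error and require
# a tag argument. DRY_RUN=1 only prints the commands, which makes the
# script easy to sanity-check without a docker daemon.
set -euo pipefail

build_and_push() {
  local tag=${1:?usage: build_and_push <tag>}
  local image="harbor.wyh.net/linux37/redis:${tag}"
  local run
  if [ "${DRY_RUN:-0}" = "1" ]; then run="echo docker"; else run="docker"; fi
  ${run} build -t "${image}" .
  ${run} push "${image}"
}
```

Invoked the same way as the original: `build_and_push v4.0.14` (set `DRY_RUN=1` first to preview the commands).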
    
    Edit the base image address in the Dockerfile:
    root@master:/opt/k8s-data/dockerfile/linux37/redis# vim Dockerfile 
    #JDK Base Image
    FROM harbor.wyh.net/baseimages/centos:7.6.18102
    MAINTAINER zhangshijie "zhangshijie@magedu.net"
    ADD redis-4.0.14.tar.gz /usr/local/src
    RUN ln -sv /usr/local/src/redis-4.0.14 /usr/local/redis && cd /usr/local/redis && make && cp src/redis-cli /usr/sbin/ && cp src/redis-server  /usr/sbin/ && mkdir -pv /data/redis-data
    ADD redis.conf /usr/local/redis/redis.conf
    ADD run_redis.sh /usr/local/redis/run_redis.sh
    EXPOSE 6379
    CMD ["/usr/local/redis/run_redis.sh"]
    
    root@master:/opt/k8s-data/dockerfile/linux37/redis# bash build-command.sh v4.0.14
    Start redis to test the image:
    root@master:/opt/k8s-data/dockerfile/linux37/redis# docker run -it --rm harbor.wyh.net/linux37/redis:v4.0.14
    Create the redis data directory on the NFS server:
    root@haproxy1:~# mkdir /data/k8sdata/linux37/redis-datadir-1
    Edit the redis PV's NFS address and data storage path:
    root@master:/opt/k8s-data/yaml/linux37/redis/pv# vim redis-persistentvolume.yaml 
    
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: redis-datadir-pv-1
      namespace: linux37
    spec:
      capacity:
        storage: 10Gi
      accessModes:
        - ReadWriteOnce
      nfs:
        path: /data/k8sdata/linux37/redis-datadir-1
        server: 192.168.200.201
    Define the PVC:
    root@master:/opt/k8s-data/yaml/linux37/redis/pv# vim redis-persistentvolumeclaim.yaml 
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: redis-datadir-pvc-1
      namespace: linux37
    spec:
      volumeName: redis-datadir-pv-1
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
    Create the PV:
    root@master:/opt/k8s-data/yaml/linux37/redis/pv# kubectl apply -f redis-persistentvolume.yaml 
    Check the PV; it has not yet been bound to a PVC:
    root@master:/opt/k8s-data/yaml/linux37/redis/pv# kubectl get pv -n linux37
    NAME                     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                             STORAGECLASS   REASON   AGE
    redis-datadir-pv-1       10Gi       RWO            Retain           Available                                                             19s
    Create the PVC:
    root@master:/opt/k8s-data/yaml/linux37/redis/pv# kubectl apply -f redis-persistentvolumeclaim.yaml 
    Check the newly created PVC:
    root@master:/opt/k8s-data/yaml/linux37/redis/pv# kubectl get pvc -n linux37
    NAME                      STATUS   VOLUME                   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    redis-datadir-pvc-1       Bound    redis-datadir-pv-1       10Gi       RWO                           34s
    Confirm the PV is now bound to the PVC:
    root@master:/opt/k8s-data/yaml/linux37/redis/pv# kubectl get pv -n linux37
    
    NAME                     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                             STORAGECLASS   REASON   AGE
    redis-datadir-pv-1       10Gi       RWO            Retain           Bound    linux37/redis-datadir-pvc-1                               2m29s
    Edit the image address in redis.yaml:
    root@master:/opt/k8s-data/yaml/linux37/redis# vim redis.yaml 
              image: harbor.wyh.net/linux37/redis:v4.0.14
    Create the service:
    root@master:/opt/k8s-data/yaml/linux37/redis# kubectl apply -f redis.yaml 
    Check which node the pod is running on; use that node's mapped IP address to access the service:
    root@master:/opt/k8s-data/yaml/linux37/redis# kubectl get pod -n linux37 -o wide
    NAME                                              READY   STATUS        RESTARTS   AGE   IP               NODE              NOMINATED NODE   READINESS GATES
    deploy-devops-redis-d567d7694-sb2fm               1/1     Running       0          25m   172.31.167.112   192.168.200.206   <none>           <none>
    
    
    Enter the container to verify the changes took effect:
    root@master:/opt/k8s-data/yaml/linux37/redis# kubectl exec -it deploy-devops-redis-d567d7694-sb2fm bash -n linux37
    Add a key and confirm it was stored:
    [root@deploy-devops-redis-d567d7694-sb2fm /]# redis-cli 
    127.0.0.1:6379> AUTH 123456
    OK
    127.0.0.1:6379> keys *
    1) "name"
    127.0.0.1:6379> get name
    "zhangsan"
    Inspect the persisted redis data on the NFS server:
    root@haproxy1:/data/k8sdata/linux37/redis-datadir-1# cat appendonly.aof 
    *2
    $6
    SELECT
    $1
    0
    *3
    $3
    SET
    $4
    name
    $8
    zhangsan
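The file shown above is in RESP (REdis Serialization Protocol) form: `*N` opens a command with N arguments, and each `$L` gives the byte length of the argument on the next line, so this dump encodes `SELECT 0` followed by `SET name zhangsan`. A small decoding sketch (illustration only, not part of the original setup; naive about values that themselves look like RESP headers):

```shell
#!/bin/bash
# Decode a RESP-encoded AOF stream into one plain command per line.
decode_aof() {
  awk '
    /^\*[0-9]+$/ { if (cmd != "") print cmd; cmd = ""; expect = 0; next }  # new command
    /^\$[0-9]+$/ { expect = 1; next }              # length header: next line is data
    expect       { cmd = (cmd == "" ? $0 : cmd " " $0); expect = 0 }
    END          { if (cmd != "") print cmd }
  ' "$@"
}
```

Running `decode_aof appendonly.aof` on the dump above prints `SELECT 0` and `SET name zhangsan`.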
    
    

    Delete the pod, then check whether the data comes back:

    root@master:~# kubectl delete pod deploy-devops-redis-d567d7694-sb2fm -n linux37
    
    

    Verifying Redis data availability:
    Delete the redis pod and let a new pod be created, then verify whether the new pod still holds the earlier data. There is some chance of data loss, depending on whether AOF or RDB dump persistence is enabled and how it is configured.

    Check again: because the data lives on backend storage, nothing was lost. Even though the pod was deleted, the new pod reads the same data back from the backend storage.

    root@master:~# kubectl exec -it deploy-devops-redis-d567d7694-wd7p5 bash -n linux37
    [root@deploy-devops-redis-d567d7694-wd7p5 /]# redis-cli 
    127.0.0.1:6379> auth 123456
    OK
    127.0.0.1:6379> get key
    (nil)
    127.0.0.1:6379> keys *
    1) "name"
    127.0.0.1:6379> get name
    "zhangsan"
    
    
    StatefulSet
    
    StatefulSet is designed to solve the problem of stateful services (whereas Deployments and ReplicaSets are designed for stateless services). Its use cases include:
    •Stable persistent storage: after a Pod is rescheduled it can still access the same persisted data, implemented with PVCs
    •Stable network identity: after a Pod is rescheduled its PodName and HostName stay the same, implemented with a Headless Service (a Service without a Cluster IP)
    •Ordered deployment and ordered scaling: Pods are ordered, and during deployment or scale-up they are created strictly in the defined order (from 0 to N-1; all earlier Pods must be Running and Ready before the next Pod starts), implemented with init containers
    •Ordered shrinking and ordered deletion (from N-1 down to 0)
    
    From these use cases, a StatefulSet is made up of:
    •A Headless Service that defines the network identity (DNS domain)
    •volumeClaimTemplates that create the PersistentVolumeClaims
    •The StatefulSet itself, which defines the application
    
    Each Pod in a StatefulSet gets a DNS name of the form statefulSetName-{0..N-1}.serviceName.namespace.svc.cluster.local, where
    •serviceName is the name of the Headless Service
    •0..N-1 is the Pod's ordinal, running from 0 to N-1
    •statefulSetName is the name of the StatefulSet
    •namespace is the namespace the service lives in; the Headless Service and the StatefulSet must be in the same namespace
    •.cluster.local is the Cluster Domain
    For details see
    https://www.kubernetes.org.cn/statefulset
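Applied to the 3-replica mysql StatefulSet built later in this article (headless service `mysql`; its manifest sets no namespace, so `default` is assumed here), the naming rule gives:

```shell
#!/bin/bash
# Stable per-pod DNS name of a StatefulSet pod:
# <statefulSetName>-<ordinal>.<serviceName>.<namespace>.svc.cluster.local
pod_fqdn() {
  local sts=$1 svc=$2 ns=$3 ordinal=$4
  echo "${sts}-${ordinal}.${svc}.${ns}.svc.cluster.local"
}

for i in 0 1 2; do
  pod_fqdn mysql mysql default "$i"
done
# -> mysql-0.mysql.default.svc.cluster.local, mysql-1..., mysql-2...
# Within the same namespace the short form also resolves, which is why the
# replication setup below can simply use mysql-0.mysql as the master host.
```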
    

    Hands-on case: MySQL master/slave architecture:
    https://www.kubernetes.org.cn/statefulset
    Implemented with StatefulSet:
    If an application needs no stable identity and no ordered deployment, deletion, or scaling when its Pods are scheduled, it should be deployed with a stateless-replica controller; Deployment or ReplicaSet better fit stateless workloads, while StatefulSet is suited to managing stateful services such as MySQL or MongoDB clusters.
    A MySQL one-master, multi-slave architecture built on StatefulSet:

    A StatefulSet is essentially a variant of Deployment; it reached GA in v1.9. It exists to solve the stateful-service problem: the Pods it manages have fixed Pod names and a fixed start/stop order, the Pod name serves as the network identity (hostname), and shared storage is required.
    Where a Deployment is paired with a Service, a StatefulSet is paired with a headless service. A headless service is a Service without a Cluster IP: resolving its name returns the Endpoint list of all Pods behind that Headless Service.
    StatefulSet characteristics:
    -> each pod gets a fixed and unique network identifier
    -> each pod gets fixed and persistent external storage
    -> pods are deployed and scaled up in order
    -> pods are deleted and terminated in order
    -> pods are rolling-updated automatically in order
    
    Components of a StatefulSet:
    Headless Service: defines the Pods' network identity (DNS domain).
    StatefulSet: defines the application itself and how many Pod replicas it has, and gives each Pod a domain name.
    volumeClaimTemplates: a storage claim template; specify the PVC name and size and the PVCs are created automatically, supplied by a storage class.
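The headless Service itself is not shown in this post (the later `kubectl apply -f .` picks it up from another file in the same directory). A minimal sketch of what such a manifest looks like, emitted by a shell function so the defining field, `clusterIP: None`, stands out; the field values here are assumptions, not the author's actual file:

```shell
#!/bin/bash
# Emit a minimal headless-Service manifest. clusterIP: None is what makes
# the Service headless: its DNS name resolves to the pod endpoints instead
# of a single virtual IP.
headless_service() {
  local name=$1 port=$2
  cat <<EOF
apiVersion: v1
kind: Service
metadata:
  name: ${name}
spec:
  clusterIP: None
  selector:
    app: ${name}
  ports:
  - name: ${name}
    port: ${port}
EOF
}

headless_service mysql 3306   # e.g. pipe into: kubectl apply -f -
```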
    
    Create the mysql data directories on the NFS server:
    root@haproxy1:~# mkdir -p /data/linux37/mysql-datadir-1
    root@haproxy1:~# mkdir -p /data/linux37/mysql-datadir-2
    root@haproxy1:~# mkdir -p /data/linux37/mysql-datadir-3
    root@haproxy1:~# mkdir -p /data/linux37/mysql-datadir-4
    root@haproxy1:~# mkdir -p /data/linux37/mysql-datadir-5
    
    
    Edit the NFS address in the PV manifests:
    root@master:/opt/k8s-data/yaml/linux37/mysql/pv# cat mysql-persistentvolume.yaml 
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: mysql-datadir-1
      namespace: linux37
    spec:
      capacity:
        storage: 50Gi
      accessModes:
        - ReadWriteOnce
      nfs:
        path: /data/linux37/mysql-datadir-1
        server: 192.168.200.201 
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: mysql-datadir-2
      namespace: linux37
    spec:
      capacity:
        storage: 50Gi
      accessModes:
        - ReadWriteOnce
      nfs:
        path: /data/linux37/mysql-datadir-2
        server: 192.168.200.201 
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: mysql-datadir-3
      namespace: linux37
    spec:
      capacity:
        storage: 50Gi
      accessModes:
        - ReadWriteOnce
      nfs:
        path: /data/linux37/mysql-datadir-3
        server: 192.168.200.201
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: mysql-datadir-4
      namespace: linux37
    spec:
      capacity:
        storage: 50Gi
      accessModes:
        - ReadWriteOnce
      nfs:
        path: /data/linux37/mysql-datadir-4
        server: 192.168.200.201
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: mysql-datadir-5
      namespace: linux37
    spec:
      capacity:
        storage: 50Gi
      accessModes:
        - ReadWriteOnce
      nfs:
        path: /data/linux37/mysql-datadir-5
        server: 192.168.200.201
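The five PV definitions above differ only in their index, so a loop can generate the same file (a convenience sketch; the output is equivalent to the hand-written manifest above):

```shell
#!/bin/bash
# Generate the five near-identical mysql PV manifests, one per NFS directory.
gen_mysql_pvs() {
  local i
  for i in 1 2 3 4 5; do
    cat <<EOF
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-datadir-${i}
  namespace: linux37
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    path: /data/linux37/mysql-datadir-${i}
    server: 192.168.200.201
EOF
  done
}

gen_mysql_pvs   # redirect into mysql-persistentvolume.yaml
```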
    Create the PVs:
    root@master:/opt/k8s-data/yaml/linux37/mysql/pv# kubectl apply -f mysql-persistentvolume.yaml 
    
    root@master:/opt/k8s-data/yaml/linux37/mysql/pv# kubectl get pv -n linux37 | grep mysql | wc -l
    5
    Pull the mysql 5.7 image:
    root@master:/opt/k8s-data/dockerfile# docker pull mysql:5.7
    root@master:/opt/k8s-data/dockerfile# docker run -it --rm mysql:5.7 bash 
    Check the mysql version:
    root@9589a9780c7e:/# mysql -V
    mysql  Ver 14.14 Distrib 5.7.28, for Linux (x86_64) using  EditLine wrapper
    Tag the image:
    root@master:~# docker tag mysql:5.7 harbor.wyh.net/linux37/mysql:v5.7.27
    Push the image:
    root@master:~# docker push harbor.wyh.net/linux37/mysql:v5.7.27
    
    Prepare the xtrabackup image:
    root@master:~# docker pull registry.cn-hangzhou.aliyuncs.com/hxpdocker/xtrabackup:1.0
    root@master:~# docker tag registry.cn-hangzhou.aliyuncs.com/hxpdocker/xtrabackup:1.0 harbor.wyh.net/linux37/xtrabackup:1.0
    root@master:~# docker push harbor.wyh.net/linux37/xtrabackup:1.0
    Edit the image addresses in the StatefulSet manifest:
    root@master:/opt/k8s-data/yaml/linux37/mysql# cat mysql-statefulset.yaml | grep ^[^#]
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: mysql
    spec:
      selector:
        matchLabels:
          app: mysql
      serviceName: mysql
      replicas: 3
      template:
        metadata:
          labels:
            app: mysql
        spec:
          initContainers:
          - name: init-mysql
            image: harbor.wyh.net/linux37/mysql:v5.7.27 
            command:
            - bash
            - "-c"
            - |
              set -ex
              # Generate mysql server-id from pod ordinal index.
              [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
              ordinal=${BASH_REMATCH[1]}
              echo [mysqld] > /mnt/conf.d/server-id.cnf
              # Add an offset to avoid reserved server-id=0 value.
              echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf
              # Copy appropriate conf.d files from config-map to emptyDir.
              if [[ $ordinal -eq 0 ]]; then
                cp /mnt/config-map/master.cnf /mnt/conf.d/
              else
                cp /mnt/config-map/slave.cnf /mnt/conf.d/
              fi
            volumeMounts:
            - name: conf
              mountPath: /mnt/conf.d
            - name: config-map
              mountPath: /mnt/config-map
          - name: clone-mysql
            image: harbor.wyh.net/linux37/xtrabackup:1.0 
            command:
            - bash
            - "-c"
            - |
              set -ex
              # Skip the clone if data already exists.
              [[ -d /var/lib/mysql/mysql ]] && exit 0
              # Skip the clone on master (ordinal index 0).
              [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
              ordinal=${BASH_REMATCH[1]}
              [[ $ordinal -eq 0 ]] && exit 0
              # Clone data from previous peer.
              ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql
              # Prepare the backup.
              xtrabackup --prepare --target-dir=/var/lib/mysql
            volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
              subPath: mysql
            - name: conf
              mountPath: /etc/mysql/conf.d
          containers:
          - name: mysql
            image: harbor.wyh.net/linux37/mysql:v5.7.27 
            env:
            - name: MYSQL_ALLOW_EMPTY_PASSWORD
              value: "1"
            ports:
            - name: mysql
              containerPort: 3306
            volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
              subPath: mysql
            - name: conf
              mountPath: /etc/mysql/conf.d
            resources:
              requests:
                cpu: 500m
                memory: 1Gi
            livenessProbe:
              exec:
                command: ["mysqladmin", "ping"]
              initialDelaySeconds: 30
              periodSeconds: 10
              timeoutSeconds: 5
            readinessProbe:
              exec:
                # Check we can execute queries over TCP (skip-networking is off).
                command: ["mysql", "-h", "127.0.0.1", "-e", "SELECT 1"]
              initialDelaySeconds: 5
              periodSeconds: 2
              timeoutSeconds: 1
          - name: xtrabackup
            image: harbor.wyh.net/linux37/xtrabackup:1.0 
            ports:
            - name: xtrabackup
              containerPort: 3307
            command:
            - bash
            - "-c"
            - |
              set -ex
              cd /var/lib/mysql
              # Determine binlog position of cloned data, if any.
              if [[ -f xtrabackup_slave_info ]]; then
                # XtraBackup already generated a partial "CHANGE MASTER TO" query
                # because we're cloning from an existing slave.
                mv xtrabackup_slave_info change_master_to.sql.in
                # Ignore xtrabackup_binlog_info in this case (it's useless).
                rm -f xtrabackup_binlog_info
              elif [[ -f xtrabackup_binlog_info ]]; then
                # We're cloning directly from master. Parse binlog position.
                [[ `cat xtrabackup_binlog_info` =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1
                rm xtrabackup_binlog_info
                echo "CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',\
                      MASTER_LOG_POS=${BASH_REMATCH[2]}" > change_master_to.sql.in
              fi
              # Check if we need to complete a clone by starting replication.
              if [[ -f change_master_to.sql.in ]]; then
                echo "Waiting for mysqld to be ready (accepting connections)"
                until mysql -h 127.0.0.1 -e "SELECT 1"; do sleep 1; done
                echo "Initializing replication from clone position"
                # In case of container restart, attempt this at-most-once.
                mv change_master_to.sql.in change_master_to.sql.orig
                mysql -h 127.0.0.1 <<EOF
              $(<change_master_to.sql.orig),
                MASTER_HOST='mysql-0.mysql',
                MASTER_USER='root',
                MASTER_PASSWORD='',
                MASTER_CONNECT_RETRY=10;
              START SLAVE;
              EOF
              fi
              # Start a server to send backups when requested by peers.
              exec ncat --listen --keep-open --send-only --max-conns=1 3307 -c \
                "xtrabackup --backup --slave-info --stream=xbstream --host=127.0.0.1 --user=root"
            volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
              subPath: mysql
            - name: conf
              mountPath: /etc/mysql/conf.d
            resources:
              requests:
                cpu: 100m
                memory: 100Mi
          volumes:
          - name: conf
            emptyDir: {}
          - name: config-map
            configMap:
              name: mysql
      volumeClaimTemplates:
      - metadata:
          name: data
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 10Gi
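All of the per-pod decisions in the manifest above hang off the pod's ordinal, extracted from the hostname. The same logic as a standalone sketch (hostnames assumed to follow the StatefulSet `<name>-<ordinal>` pattern):

```shell
#!/bin/bash
# Reproduce the init-mysql / clone-mysql decisions for a given hostname:
# - server-id = 100 + ordinal (the offset avoids the reserved server-id 0)
# - ordinal 0 becomes the master (master.cnf), the others slaves (slave.cnf)
# - each slave clones its data from the previous peer on port 3307
mysql_role() {
  local host=$1
  [[ $host =~ -([0-9]+)$ ]] || return 1       # hostname must end in -<ordinal>
  local ordinal=${BASH_REMATCH[1]}
  local server_id=$((100 + ordinal))
  if [[ $ordinal -eq 0 ]]; then
    echo "server-id=${server_id} role=master"
  else
    echo "server-id=${server_id} role=slave clone-from=mysql-$((ordinal - 1)).mysql:3307"
  fi
}

mysql_role mysql-0   # -> server-id=100 role=master
mysql_role mysql-2   # -> server-id=102 role=slave clone-from=mysql-1.mysql:3307
```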
    
    root@master:/opt/k8s-data/yaml/linux37/mysql# kubectl apply -f .
    

    Image sources: https://github.com/docker-library/  # github download address

    Enter the master container (mysql-0):
    root@master:/opt/k8s-data/yaml/linux37/mysql# kubectl exec -it mysql-0 bash
    Create a database:
    mysql> create database linux37;
    Check the container's mounted volume:
    root@mysql-1:/# df -h |grep 19
    192.168.200.201:/data/linux37/mysql-datadir-5/mysql   98G  7.2G   86G   8% /var/lib/mysql
    
    Enter another container and check whether the data has replicated; this is one master with two slaves.

    Original article: https://www.haomeiwen.com/subject/utnkgctx.html