k8s

Author: 藏鋒1013 | Published 2019-08-03 10:41

    k8s container orchestration

    1. Installing a k8s cluster

    1.1 k8s architecture

    (Architecture diagram: the master runs kube-apiserver, kube-scheduler, kube-controller-manager, and etcd; each node runs kubelet, kube-proxy, and docker.)

    Besides the core components, there are some recommended add-ons:

    Component               Description
    kube-dns                Provides DNS for the whole cluster
    Ingress Controller      Provides an external entry point for microservices
    Heapster                Provides resource monitoring
    Dashboard               Provides a GUI
    Federation              Provides clusters spanning availability zones
    Fluentd-elasticsearch   Provides cluster log collection, storage, and querying

    1.2 Set IP addresses, hostnames, and hosts-file resolution

    IP address   Hostname
    10.0.0.11    k8s-master
    10.0.0.12    k8s-node-1
    10.0.0.13    k8s-node-2
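
    On each machine, something along these lines (hostnamectl per node; the hosts block is the same everywhere):

    hostnamectl set-hostname k8s-master   # k8s-node-1 / k8s-node-2 on the nodes
    cat >> /etc/hosts <<'EOF'
    10.0.0.11  k8s-master
    10.0.0.12  k8s-node-1
    10.0.0.13  k8s-node-2
    EOF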

    1.3 Install etcd on the master node

    Add a hosts entry for the local yum mirror:
    echo  '192.168.12.201  mirrors.aliyun.com' >>/etc/hosts
    Switch to the aliyun yum repo:
    curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
    
    yum install etcd -y
    vim /etc/etcd/etcd.conf
    
    line 6:  ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
    line 21: ETCD_ADVERTISE_CLIENT_URLS="http://10.0.0.11:2379"
    
    systemctl start etcd.service
    systemctl enable etcd.service
    
    etcdctl set testdir/testkey0 0
    etcdctl get testdir/testkey0
    
    etcdctl -C http://10.0.0.11:2379 cluster-health
    

    1.4 Install kubernetes on the master node

    yum install kubernetes-master.x86_64 -y
    
    vim /etc/kubernetes/apiserver 
    line 8:  KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
    line 11: KUBE_API_PORT="--port=8080"
    line 17: KUBE_ETCD_SERVERS="--etcd-servers=http://10.0.0.11:2379"
    line 23: KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
    
    vim /etc/kubernetes/config
    line 22: KUBE_MASTER="--master=http://10.0.0.11:8080"
    
    systemctl enable kube-apiserver.service
    systemctl restart kube-apiserver.service
    systemctl enable kube-controller-manager.service
    systemctl restart kube-controller-manager.service
    systemctl enable kube-scheduler.service
    systemctl restart kube-scheduler.service
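
    A quick sanity check that the master components are healthy (run on the master, where kubectl talks to localhost:8080 by default):

    kubectl get componentstatuses   # scheduler, controller-manager and etcd-0 should report Healthy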
    

    1.5 Install kubernetes on the node machines

    Run on both nodes:
    yum install kubernetes-node.x86_64 -y
    
    vim /etc/kubernetes/config 
    line 22: KUBE_MASTER="--master=http://10.0.0.11:8080"
    
    vim /etc/kubernetes/kubelet
    line 5:  KUBELET_ADDRESS="--address=0.0.0.0"
    line 8:  KUBELET_PORT="--port=10250"
    line 11: KUBELET_HOSTNAME="--hostname-override=10.0.0.12"   (this machine's IP)
    line 14: KUBELET_API_SERVER="--api-servers=http://10.0.0.11:8080"
    
    systemctl enable kubelet.service
    systemctl start kubelet.service
    systemctl enable kube-proxy.service
    systemctl start kube-proxy.service
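
    Back on the master, both nodes should now register:

    kubectl get nodes   # 10.0.0.12 and 10.0.0.13 should appear and go Ready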
    

    1.6 Configure the flannel network on all nodes (master included)

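    A sketch of the usual yum-based flannel setup for this topology; the /atomic.io/network key is the flanneld package default and the subnet is an assumption, adjust as needed:

    # all machines, master included
    yum install flannel -y
    sed -i 's#http://127.0.0.1:2379#http://10.0.0.11:2379#' /etc/sysconfig/flanneld

    # master only: store the overlay network config where flanneld reads it
    etcdctl mk /atomic.io/network/config '{ "Network": "172.18.0.0/16" }'

    # every machine: restart flanneld first, then docker so it picks up the flannel subnet
    systemctl enable flanneld.service
    systemctl restart flanneld.service
    systemctl restart docker

    # nodes additionally restart the k8s agents
    systemctl restart kubelet.service kube-proxy.service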
    Test:
    iptables -P FORWARD ACCEPT
    wget http://192.168.12.201/docker_image/docker_busybox.tar.gz
    docker  load -i  docker_busybox.tar.gz
    docker  run  -it  docker.io/busybox:latest
    
    [root@k8s-master ~]# vim /usr/lib/systemd/system/docker.service
    After line 17, add:
    ExecStartPost=/usr/sbin/iptables  -P FORWARD ACCEPT
    systemctl daemon-reload
    systemctl restart docker
    

    1.7 Configure the master as an image registry

    #master node
    vim /etc/sysconfig/docker
    OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false --registry-mirror=https://registry.docker-cn.com --insecure-registry=10.0.0.11:5000'
    
    systemctl restart docker
    
    docker run -d -p 5000:5000 --restart=always --name registry -v /opt/myregistry:/var/lib/registry  registry
    
    #node nodes
    vim /etc/sysconfig/docker
    OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false --insecure-registry=10.0.0.11:5000'
    
    systemctl restart docker
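
    A quick end-to-end check from a node, reusing the busybox image loaded earlier (assuming the registry container speaks the v2 API):

    docker tag docker.io/busybox:latest 10.0.0.11:5000/busybox:latest
    docker push 10.0.0.11:5000/busybox:latest
    curl http://10.0.0.11:5000/v2/_catalog   # the pushed repository should be listed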
    

    2. What is k8s, and what can it do?

    k8s is a management tool for docker clusters.

    2.1 Core features of k8s

    Self-healing: restarts failed containers, replaces and reschedules containers when a node becomes unavailable, kills containers that fail a user-defined health check, and does not advertise a container to clients until it is ready to serve.

    Elastic scaling: monitors container CPU load; if the average rises above 80% the number of containers is increased, and if it falls below 10% the number is reduced.

    Service discovery and load balancing: no need to modify your application to use an unfamiliar service-discovery mechanism; Kubernetes gives containers their own IP addresses and a single DNS name for a set of containers, and can load-balance across them.

    Rolling updates and one-click rollback: Kubernetes rolls out changes to an application or its configuration gradually while monitoring application health, making sure it never kills all instances at once. If something goes wrong, Kubernetes rolls the change back for you, drawing on a growing ecosystem of deployment solutions.

    2.2 History of k8s

    2.3 Installing k8s

    Installation method   Notes
    yum                   version 1.5
    build from source     hardest; can install the latest version
    binary                tedious steps; can install the latest version; automate with shell, ansible, or saltstack
    kubeadm               easiest; needs internet access; can install the latest version
    minikube              for developers trying out k8s; needs internet access

    2.4 Where k8s fits

    k8s is best suited to running microservice projects!

    3. Common k8s resources

    3.1 Creating a pod

    Main parts of a k8s yaml:

    apiVersion: v1   # api version
    kind: Pod        # resource type
    metadata:        # attributes
    spec:            # details
    
    k8s_pod.yaml
    
    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: 10.0.0.11:5000/nginx:1.13
          ports:
            - containerPort: 80
    

    A pod always contains at least two containers: the pod infrastructure container plus the business container(s).
    The pod is the smallest resource unit in k8s.

    3.2 The ReplicationController resource

    rc: keeps the specified number of pods alive at all times; the rc finds its pods through a label selector.

    Common operations on k8s resources:

    kubectl create -f xxx.yaml
    kubectl get pod|rc
    kubectl describe pod nginx
    kubectl delete pod nginx    (or: kubectl delete -f xxx.yaml)
    kubectl edit pod nginx
    

    Create an rc:

    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: nginx
    spec:
      replicas: 5
      selector:
        app: myweb
      template:
        metadata:
          labels:
            app: myweb
        spec:
          containers:
          - name: myweb
            image: 10.0.0.11:5000/nginx:1.13
            ports:
            - containerPort: 80
    

    Rolling upgrade with an rc:

    Create a new nginx-rc1.15.yaml:

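    A sketch of its likely contents; rolling-update requires a new rc name and at least one changed selector label, so the names here are assumptions (chosen to match the nginx2/myweb2 names used below):

    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: nginx2
    spec:
      replicas: 5
      selector:
        app: myweb2
      template:
        metadata:
          labels:
            app: myweb2
        spec:
          containers:
          - name: myweb
            image: 10.0.0.11:5000/nginx:1.15
            ports:
            - containerPort: 80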
    Upgrade:
    kubectl rolling-update nginx -f nginx-rc1.15.yaml --update-period=10s

    Roll back:
    kubectl rolling-update nginx2 -f k8s_rc.yml --update-period=1s
    Check whether the upgrade/rollback took effect (look for the image tag):
    kubectl describe pod|grep 1.13
    

    3.3 The service resource

    A service exposes pods' ports to the outside world.

    1. Write the service yaml
    [root@k8s-master svc]# cat k8s_svc.yml 
    apiVersion: v1
    kind: Service
    metadata:
      name: myweb
    spec:
      type: NodePort
      ports:
        - port: 80         # clusterIP port
          nodePort: 30000  # node port
          targetPort: 80   # pod port
      selector:
        app: myweb2        # mismatch with the rc's app: myweb label; fixed with kubectl edit below
    2. Create it
    [root@k8s-master svc]# kubectl create -f k8s_svc.yml 
    service "myweb" created
    3. Edit it in place with kubectl edit myweb
     [root@k8s-master svc]# kubectl edit svc myweb
    # Please edit the object below. Lines beginning with a '#' will be ignored,
    # and an empty file will abort the edit. If an error occurs while saving this file will be
    # reopened with the relevant failures.
    #
    apiVersion: v1
    kind: Service
    metadata:
      creationTimestamp: 2019-07-26T10:00:21Z
      name: myweb
      namespace: default
      resourceVersion: "40169"
      selfLink: /api/v1/namespaces/default/services/myweb
      uid: 2dfb93c0-af8c-11e9-a5d2-000c29c93def
    spec:
      clusterIP: 10.254.214.106
      ports:
      - nodePort: 30000
        port: 80
        protocol: TCP
        targetPort: 80
      selector:
        app: myweb
      sessionAffinity: None
      type: NodePort
    status:
      loadBalancer: {}
    

    Browse to 10.0.0.12:30000 or 10.0.0.13:30000; the nginx welcome page should appear.

    Changing the nodePort range

    vim /etc/kubernetes/apiserver
    KUBE_API_ARGS="--service-node-port-range=3000-50000"
    systemctl restart kube-apiserver.service   # restart so the new range takes effect
    By default a service load-balances with iptables; newer versions recommend lvs (layer-4 load balancing) instead.
    

    3.4 The deployment resource

    An rc rolling upgrade interrupts access to the service (the new pods carry different labels, so the service's selector briefly matches nothing), so k8s introduced the deployment resource.

    Create a deployment:

    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: nginx-deployment
    spec:
      replicas: 3
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: 10.0.0.11:5000/nginx:1.13
            ports:
            - containerPort: 80
            resources:
              limits:
                cpu: 100m
              requests:
                cpu: 100m
    

    Deployment upgrade and rollback

    Create a deployment from the command line:
    kubectl run nginx --image=10.0.0.11:5000/nginx:1.13 --replicas=3 --record
    Upgrade the image from the command line:
    kubectl set image deploy nginx nginx=10.0.0.11:5000/nginx:1.15
    View all revisions of the deployment:
    kubectl rollout history deployment nginx
    Roll back to the previous revision:
    kubectl rollout undo deployment nginx
    Roll back to a specific revision:
    kubectl rollout undo deployment nginx --to-revision=2
    

    3.5 Exercise: tomcat + mysql

    In k8s, containers reach each other through the service VIP (cluster IP)!

    Prepare the environment:

    [root@k8s-node02 ~]# ll
    total 1449636
    -rw-r--r-- 1 root root 202872320 Jul 22 18:50 docker_centos6.9.tar.gz
    -rw-r--r-- 1 root root 392823296 Jul 22 18:50 docker-mysql-5.7.tar.gz
    drwxr-xr-x 2 root root        25 Jul 26 21:32 k8s
    -rw------- 1 root root 519031808 Jul 26 20:59 kod.tar.gz
    -rw-r--r-- 1 root root 369691136 Jul 22 18:50 tomcat-app-v2.tar.gz
    [root@k8s-node02 ~]# docker load -i docker-mysql-5.7.tar.gz
    [root@k8s-node02 ~]# docker load -i tomcat-app-v2.tar.gz 
    [root@k8s-node02 ~]# docker tag docker.io/mysql:5.7 10.0.0.11:5000/mysql:5.7
    [root@k8s-node02 ~]# docker tag docker.io/kubeguide/tomcat-app:v2 10.0.0.11:5000/tomcat-app:v2
    

    Prepare on the master node:

    [root@k8s-master tomcat_demo]# ll
    total 16
    -rw-r--r-- 1 root root 416 Jul 22 18:43 mysql-rc.yml
    -rw-r--r-- 1 root root 145 Jul 22 18:43 mysql-svc.yml
    -rw-r--r-- 1 root root 490 Jul 29 11:54 tomcat-rc.yml
    -rw-r--r-- 1 root root 162 Jul 22 18:43 tomcat-svc.yml
    [root@k8s-master tomcat_demo]# cat mysql-rc.yml 
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: mysql
    spec:
      replicas: 1
      selector:
        app: mysql
      template:
        metadata:
          labels:
            app: mysql
        spec:
          containers:
            - name: mysql
              image: 10.0.0.11:5000/mysql:5.7
              ports:
              - containerPort: 3306
              env:
              - name: MYSQL_ROOT_PASSWORD
                value: '123456'
    [root@k8s-master tomcat_demo]# cat mysql-svc.yml 
    apiVersion: v1
    kind: Service
    metadata:
      name: mysql
    spec:
      ports:
        - port: 3306
          targetPort: 3306
      selector:
        app: mysql
    [root@k8s-master tomcat_demo]# cat tomcat-rc.yml 
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: myweb
    spec:
      replicas: 1
      selector:
        app: myweb
      template:
        metadata:
          labels:
            app: myweb
        spec:
          containers:
            - name: myweb
              image: 10.0.0.11:5000/tomcat-app:v2
              ports:
              - containerPort: 8080
              env:
              - name: MYSQL_SERVICE_HOST
                value: '10.254.61.36'   # adjust to your mysql service's cluster IP
              - name: MYSQL_SERVICE_PORT
                value: '3306'
    [root@k8s-master tomcat_demo]# cat tomcat-svc.yml 
    apiVersion: v1
    kind: Service
    metadata:
      name: myweb
    spec:
      type: NodePort
      ports:
        - port: 8080
          nodePort: 30008
      selector:
        app: myweb
    

    Steps:

    ## delete any existing myweb service first to avoid a name clash
    kubectl delete svc myweb
    ## create:
    kubectl create -f mysql-rc.yml
    kubectl create -f mysql-svc.yml

    vim tomcat-rc.yml
    value: '10.254.61.36'   # set this to the mysql cluster IP shown by kubectl get all
    
    kubectl create -f tomcat-rc.yml 
    kubectl create -f tomcat-svc.yml
    

    Browse to 10.0.0.13:30008/demo (check the port with kubectl get all).

    The demo application's page should appear.

    4. k8s add-ons

    4.1 The DNS service

    Install the DNS service:

    1. Download the dns docker image bundle
    wget http://192.168.21.201/docker_k8s_dns.tar.gz
    2. Load the image bundle (on node2)
    docker load -i docker_k8s_dns.tar.gz
    3. Edit skydns-rc.yaml to pin the pod to node2
    spec:
      nodeSelector:
        kubernetes.io/hostname: k8s-node-2
      containers:
    4. Create the dns service
    kubectl create -f skydns-rc.yaml
    5. Check
    kubectl get all --namespace=kube-system
    6. Edit the kubelet config on every node
    vim /etc/kubernetes/kubelet
    
    KUBELET_ARGS="--cluster_dns=10.254.230.254 --cluster_domain=cluster.local"
    
    systemctl   restart kubelet
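
    To verify resolution, exec into any running pod whose image has nslookup (busybox does) and look up a service name; the pod name is whatever kubectl get pods shows:

    kubectl get pods
    kubectl exec <pod-name> -- nslookup mysql   # should answer from 10.254.230.254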
    

    4.2 Namespaces

    Namespaces provide resource isolation.

    Where: add a namespace field to every yaml file,
    in metadata, after name and before spec.

    For example: namespace: oldwang
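
    The namespace itself has to exist before anything can be created in it:

    kubectl create namespace oldwang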

    [root@k8s-master tomcat_demo]# cat tomcat-rc.yml 
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: myweb
      namespace: oldwang
    spec:
      replicas: 1
      selector:
        app: myweb
      template:
        metadata:
          labels:
            app: myweb
        spec:
          containers:
            - name: myweb
              image: 10.0.0.11:5000/tomcat-app:v2
              ports:
              - containerPort: 8080
              env:
              - name: MYSQL_SERVICE_HOST
                value: 'mysql'
              - name: MYSQL_SERVICE_PORT
                value: '3306'
    
    [root@k8s-master tomcat_demo]# cat tomcat-svc.yml 
    apiVersion: v1
    kind: Service
    metadata:
      name: myweb
      namespace: oldwang
    spec:
      type: NodePort
      ports:
        - port: 8080
          nodePort: 30008
      selector:
        app: myweb
    

    4.3 Health checks

    4.3.1 Probe types

    Probe type       Description
    livenessProbe    Liveness check: periodically checks whether the service is alive; on failure the container is restarted
    readinessProbe   Readiness check: periodically checks whether the service is usable; if not, the pod is removed from the service's endpoints

    4.3.2 Probe methods

    Method      Description
    exec        Runs a command
    httpGet     Checks the status code of an HTTP request
    tcpSocket   Tests whether a port accepts connections

    Using a liveness probe with exec

    vi nginx_pod_exec.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: exec
    spec:
      containers:
      - name: nginx
        image: 10.0.0.11:5000/nginx:1.13
        ports:
        - containerPort: 80
        args:
        - /bin/sh
        - -c
        - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
        livenessProbe:
          exec:
            command:
            - cat
            - /tmp/healthy
          initialDelaySeconds: 5
          periodSeconds: 5
    

    Using a liveness probe with httpGet

    vi nginx_pod_httpGet.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: httpget
    spec:
      containers:
      - name: nginx
        image: 10.0.0.11:5000/nginx:1.13
        ports:
        - containerPort: 80
        livenessProbe:
          httpGet:
            path: /index.html
            port: 80
          initialDelaySeconds: 3
          periodSeconds: 3
    

    Using a liveness probe with tcpSocket

    vi nginx_pod_tcpSocket.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: tcpsocket
    spec:
      containers:
      - name: nginx
        image: 10.0.0.11:5000/nginx:1.13
        ports:
        - containerPort: 80
        livenessProbe:
          tcpSocket:
            port: 80
          initialDelaySeconds: 3
          periodSeconds: 3

    Using a readiness probe with httpGet

    vi nginx-rc-httpGet.yaml
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: readiness
    spec:
      replicas: 2
      selector:
        app: readiness
      template:
        metadata:
          labels:
            app: readiness
        spec:
          containers:
          - name: readiness
            image: 10.0.0.11:5000/nginx:1.13
            ports:
            - containerPort: 80
            readinessProbe:
              httpGet:
                path: /qiangge.html
                port: 80
              initialDelaySeconds: 3
              periodSeconds: 3
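
    To watch a liveness probe act, create the exec pod above and follow its restart count (a quick check; the exact output varies):

    kubectl create -f nginx_pod_exec.yaml
    kubectl get pod exec -w        # RESTARTS climbs once /tmp/healthy is removed
    kubectl describe pod exec      # the events show the failing 'cat /tmp/healthy'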
    

    4.4 The dashboard service

    4.4.1. Upload and load the image, then tag it

    ## upload the image:
    [root@k8s-node02 ~]# ll 
    -rw-r--r--  1 root root  86984704 Jul 22 18:50 kubernetes-dashboard-amd64_v1.4.1.tar.gz
    ## load the image:
    [root@k8s-node02 ~]# docker load -i kubernetes-dashboard-amd64_v1.4.1.tar.gz
    ## tag it:
    [root@k8s-node02 ~]# docker tag index.tenxcloud.com/google_containers/kubernetes-dashboard-amd64:v1.4.1 10.0.0.11:5000/kubernetes-dashboard-amd64:v1.4.1
    

    4.4.2. Create the dashboard deployment and service

    [root@k8s-master deploment]# cat dashboard-svc.yaml 
    apiVersion: v1
    kind: Service
    metadata:
      name: kubernetes-dashboard
      namespace: kube-system
      labels:
        k8s-app: kubernetes-dashboard
        kubernetes.io/cluster-service: "true"
    spec:
      selector:
        k8s-app: kubernetes-dashboard
      ports:
      - port: 80
        targetPort: 9090
    [root@k8s-master deploment]# cat dashboard.yaml 
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
    # Keep the name in sync with image version and
    # gce/coreos/kube-manifests/addons/dashboard counterparts
      name: kubernetes-dashboard-latest
      namespace: kube-system
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            k8s-app: kubernetes-dashboard
            version: latest
            kubernetes.io/cluster-service: "true"
        spec:
          containers:
          - name: kubernetes-dashboard
            image: 10.0.0.11:5000/kubernetes-dashboard-amd64:v1.4.1
            resources:
              # keep request = limit to keep this container in guaranteed class
              limits:
                cpu: 100m
                memory: 50Mi
              requests:
                cpu: 100m
                memory: 50Mi
            ports:
            - containerPort: 9090
            args:
            - --apiserver-host=http://10.0.0.11:8080
            livenessProbe:
              httpGet:
                path: /
                port: 9090
              initialDelaySeconds: 30
              timeoutSeconds: 30
    
    [root@k8s-master deploment]# kubectl create -f .
    

    4.4.3. Visit http://10.0.0.11:8080/ui/


    4.5 Reverse proxying through the apiserver

    Option 1: NodePort

    type: NodePort
    ports:
    - port: 80
      targetPort: 80
      nodePort: 30008
    

    Option 2: ClusterIP

    type: ClusterIP
    ports:
    - port: 80
      targetPort: 80
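
    A ClusterIP service is still reachable from outside through the apiserver's proxy; on a 1.5-era cluster the path looks roughly like this (treat the exact path as an assumption):

    curl http://10.0.0.11:8080/api/v1/proxy/namespaces/default/services/myweb/

    The dashboard URL http://10.0.0.11:8080/ui/ in section 4.4 is a shortcut to exactly this mechanism.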
    

    5. k8s autoscaling

    Autoscaling in k8s requires the heapster monitoring add-on.

    5.1 Install heapster monitoring

    5.1.1. Upload and load the images, then tag them

    [root@k8s-node02 ~]# ls *.tar.gz
    [root@k8s-node02 ~]# for n in `ls *.tar.gz`;do docker load -i $n ;done
    [root@k8s-node02 ~]# docker tag docker.io/kubernetes/heapster_grafana:v2.6.0 10.0.0.11:5000/heapster_grafana:v2.6.0
    [root@k8s-node02 ~]# docker tag  docker.io/kubernetes/heapster_influxdb:v0.5 10.0.0.11:5000/heapster_influxdb:v0.5
    [root@k8s-node02 ~]# docker tag docker.io/kubernetes/heapster:canary 10.0.0.11:5000/heapster:canary
    

    5.1.2. Upload the config files

    [root@k8s-master heapster]# ll
    total 20
    -rw-r--r-- 1 root root  414 Jul 22 18:43 grafana-service.yaml
    -rw-r--r-- 1 root root  622 Jul 30 12:16 heapster-controller.yaml
    -rw-r--r-- 1 root root  249 Jul 22 18:43 heapster-service.yaml
    -rw-r--r-- 1 root root 1473 Jul 22 18:43 influxdb-grafana-controller.yaml
    -rw-r--r-- 1 root root  259 Jul 22 18:43 influxdb-service.yaml
    [root@k8s-master heapster]# kubectl create -f .
    

    5.1.3. Open the dashboard to verify

    Browse to http://10.0.0.11:8080/ui/; the dashboard should now show per-pod CPU and memory usage graphs.

    5.2 Autoscaling

    5.2.1 Add resource limits to the rc config

    containers:
    - name: myweb
      image: 10.0.0.11:5000/nginx:1.13
      ports:
      - containerPort: 80
      resources:
        limits:
          cpu: 100m
        requests:
          cpu: 100m
    

    5.2.2 Create the autoscale rule

    kubectl autoscale -n qiangge replicationcontroller myweb --max=8 --min=1 --cpu-percent=8
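
    Confirm the rule was created (hpa is the short resource name):

    kubectl -n qiangge get hpa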
    

    5.2.3 Test scaling up

    yum install httpd-tools.x86_64 -y 
    ab -n 1000000 -c 30 http://172.16.99.2/index.html   # 172.16.99.2 is the pod IP under test; substitute your own
    

    Under sustained load the replica count climbs toward --max (scale-up); once the load stops it falls back toward --min (scale-down).

    6. Persistent storage

    pv: persistent volume, a cluster-wide (global) resource, like node
    pvc: persistent volume claim, a namespaced (local) resource, like pod, rc, svc

    6.1 Install the NFS server (10.0.0.11)

    yum install nfs-utils.x86_64 -y
    mkdir /data
    vim /etc/exports
    /data  10.0.0.0/24(rw,async,no_root_squash,no_all_squash)
    systemctl start rpcbind
    systemctl start nfs
    systemctl enable rpcbind
    systemctl enable nfs
    

    6.2 Install the NFS client on the node machines

    yum install nfs-utils.x86_64 -y
    showmount -e 10.0.0.11
    

    6.3 Create the pv and pvc

    Upload the yaml config files and create the pv and pvc:
    [root@k8s-master volume]# ll
    total 20
    -rw-r--r-- 1 root root 292 Jul 30 15:33 mysql_pv2.yaml
    -rw-r--r-- 1 root root 162 Jul 30 15:32 mysql_pvc.yaml
    -rw-r--r-- 1 root root 291 Jul 30 15:29 mysql_pv.yaml
    -rw-r--r-- 1 root root 581 Jul 30 15:38 mysql-rc-pvc.yml
    -rw-r--r-- 1 root root 145 Jul 30 16:33 mysql-svc.yml
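
    A minimal sketch of what mysql_pv.yaml and mysql_pvc.yaml can look like against the NFS share from 6.1 (capacity and sizes are assumptions; the claim name matches what the rc mounts below):

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: mysql
    spec:
      capacity:
        storage: 10Gi
      accessModes:
        - ReadWriteMany
      nfs:
        path: /data
        server: 10.0.0.11
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: tomcat-mysql
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 1Gi

    kubectl create -f mysql_pv.yaml -f mysql_pvc.yaml
    kubectl get pv,pvc   # the claim should show Bound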
    

    6.4 Create the mysql rc, using the volume in the pod template

          volumeMounts:
            - name: mysql
              mountPath: /var/lib/mysql
        volumes:
        - name: mysql
          persistentVolumeClaim:
            claimName: tomcat-mysql
    

    6.5 Verify persistence

    Method 1: delete the mysql pod; the database survives.

    kubectl delete pod mysql-gt054



    Method 2: check the NFS server for mysql's data files.

    If the HPE_APP database directory is present, persistence works.



    6.6 Distributed storage: glusterfs

    (1) Prepare the environment:

    Hostname      IP address   Environment
    glusterfs01   10.0.0.14    CentOS 7.6, 512M RAM, two extra 10G disks, hosts entries
    glusterfs02   10.0.0.15    CentOS 7.6, 512M RAM, two extra 10G disks, hosts entries
    glusterfs03   10.0.0.16    CentOS 7.6, 512M RAM, two extra 10G disks, hosts entries

    Also add the glusterfs hosts entries on the k8s nodes.

    (2) What is glusterfs

    Glusterfs is an open-source distributed file system with strong scale-out ability; it can support petabytes of storage and thousands of clients, joining machines over the network into a single parallel network file system. It is scalable, high-performance, and highly available.

    (3) Install glusterfs on the nodes

    On all nodes:

    yum install centos-release-gluster -y
    yum install glusterfs-server -y
    systemctl start glusterd.service
    systemctl enable glusterd.service
    mkdir -p /gfs/test1
    mkdir -p /gfs/test2
    mkfs.xfs /dev/sdb
    mkfs.xfs /dev/sdc
    blkid
    vim /etc/fstab
    # add mount entries, e.g. (using the UUIDs from blkid is safer):
    # /dev/sdb  /gfs/test1  xfs  defaults  0 0
    # /dev/sdc  /gfs/test2  xfs  defaults  0 0
    mount -a
    

    (4) Build the storage pool

    [root@glusterfs01 ~]# gluster pool list
    UUID                                    Hostname     State
    1de813d6-2ae6-40bd-a566-a64df0a6cceb    localhost    Connected 
    [root@glusterfs01 ~]# gluster peer probe glusterfs02
    [root@glusterfs01 ~]# gluster peer probe glusterfs03
    [root@glusterfs01 ~]# gluster pool list
    UUID                                    Hostname     State
    f6e62dbd-dc37-4bc0-9610-8b0f12f964e7    glusterfs02  Connected 
    1abe8071-b8f1-48a4-83af-1aa94ad860fe    glusterfs03  Connected 
    1de813d6-2ae6-40bd-a566-a64df0a6cceb    localhost    Connected 
    [root@glusterfs01 ~]#
    

    (5) Gluster volume management

    Create a distributed replicated volume:
    gluster volume create oldwang replica 2 glusterfs01:/gfs/test1 \
    glusterfs01:/gfs/test2 glusterfs02:/gfs/test1 glusterfs02:/gfs/test2 force
    Start the volume:
    gluster volume start oldwang
    Inspect the volume:
    gluster volume info oldwang
    Mount the volume:
    mount -t glusterfs 10.0.0.14:/oldwang /mnt
    

    (6) The distributed replicated volume, illustrated

    (Figure: with replica 2, consecutive bricks pair up as mirrors and files are distributed across the pairs. As created above, both bricks of each pair sit on the same host (which is why force was required); spreading each pair across hosts is the fault-tolerant layout.)

    (7) Expanding the distributed replicated volume

    Check capacity before expanding:
    df -h
    Expand:
    gluster volume add-brick oldwang glusterfs03:/gfs/test1 glusterfs03:/gfs/test2 force
    Check capacity afterwards:
    df -h
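
    To consume the oldwang volume from k8s, the usual pattern on this version is a hand-maintained Endpoints object plus a glusterfs-type pv; a sketch, with the names and port assumed:

    apiVersion: v1
    kind: Endpoints
    metadata:
      name: glusterfs
    subsets:
    - addresses:
      - ip: 10.0.0.14
      - ip: 10.0.0.15
      - ip: 10.0.0.16
      ports:
      - port: 49152
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: gluster
    spec:
      capacity:
        storage: 20Gi
      accessModes:
        - ReadWriteMany
      glusterfs:
        endpoints: glusterfs
        path: oldwang
        readOnly: false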
    

    7. CI/CD with jenkins

    IP address   Services                              Memory
    10.0.0.11    kube-apiserver :8080                  1G
    10.0.0.13    k8s-node2                             2G
    10.0.0.14    jenkins (tomcat+jdk) :8080, kubectl   1G
    10.0.0.15    gitlab :8080, :80                     2G

    7.1 Install gitlab and push the code

    7.1.1 Install:

    wget https://mirrors.tuna.tsinghua.edu.cn/gitlab-ce/yum/el7/gitlab-ce-11.9.11-ce.0.el7.x86_64.rpm
    yum localinstall gitlab-ce-11.9.11-ce.0.el7.x86_64.rpm -y
    

    7.1.2 Configure:

    vim /etc/gitlab/gitlab.rb
    external_url 'http://10.0.0.15'
    prometheus_monitoring['enable'] = false
    

    7.1.3 Apply the config and start the service:

    gitlab-ctl reconfigure
    

    7.1.4 Browse to http://10.0.0.15, change the root password, and create a project


    7.1.5 Push the code to the git repository

    [root@glusterfs02 ~]# cd /srv/
    rz -E
    [root@glusterfs02 srv]# ll
    total 232
    -rw-r--r-- 1 root root 91014 Jul 31 15:25 xiaoniaofeifei.zip
    
    [root@glusterfs02 srv]# unzip xiaoniaofeifei.zip 
    [root@glusterfs02 srv]# rm -fr xiaoniaofeifei.zip 
    
    [root@glusterfs02 srv]# git config --global user.name "Administrator"
    [root@glusterfs02 srv]# git config --global user.email "admin@example.com"
    [root@glusterfs02 srv]# git init
    [root@glusterfs02 srv]# git remote add origin http://10.0.0.15/root/xiaoniao.git
    [root@glusterfs02 srv]# git add .
    [root@glusterfs02 srv]# git commit -m "Initial commit"
    [root@glusterfs02 srv]# git push -u origin master
    
    [root@glusterfs02 srv]# yum install docker -y
    [root@glusterfs02 srv]# vim dockerfile
    FROM  10.0.0.11:5000/nginx:1.13
    ADD    .  /usr/share/nginx/html
    [root@glusterfs02 srv]# git add .
    [root@glusterfs02 srv]# git commit -m "add dockerfile"
    [root@glusterfs02 srv]# git push -u origin master
    

    7.2 Install jenkins and auto-build docker images

    7.2.1 Install jenkins

    cd /opt/
    rz -E
    [root@glusterfs01 opt]# ll
    total 814604
    -rw-r--r-- 1 root root   9128610 Jul 31 15:26 apache-tomcat-8.0.27.tar.gz
    -rw-r--r-- 1 root root 569408968 Jul 31 15:26 gitlab-ce-11.9.11-ce.0.el7.x86_64.rpm
    -rw-r--r-- 1 root root 166044032 Jul 31 15:26 jdk-8u102-linux-x64.rpm
    -rw-r--r-- 1 root root  89566714 Jul 31 15:26 jenkin-data.tar.gz
    -rw-r--r-- 1 root root  89566714 Jul 31 15:26 jenkins.war
    
    rpm -ivh jdk-8u102-linux-x64.rpm 
    mkdir /app
    tar xf apache-tomcat-8.0.27.tar.gz -C /app
    rm -fr /app/apache-tomcat-8.0.27/webapps/*
    mv jenkins.war /app/apache-tomcat-8.0.27/webapps/ROOT.war
    tar xf jenkin-data.tar.gz -C /root
    /app/apache-tomcat-8.0.27/bin/startup.sh 
    netstat -lntup
    

    7.2.2 Access jenkins

    Visit http://10.0.0.14:8080/; the default credentials are admin:123456.



    7.2.3 Configure the jenkins credential for pulling code from gitlab

    a. Generate a key pair on the jenkins host

    [root@glusterfs02 ~]# ssh-keygen
    

    b. Paste the public key into gitlab


    c. Create a global credential in jenkins from the private key


    7.2.4 Test pulling the code

    Point the job at the gitlab repository using the credential and run it; a successful build log confirms the pull works.

    7.2.5 Write the dockerfile and test it

    vim dockerfile
    
    FROM 10.0.0.11:5000/nginx:1.13
    ADD  .  /usr/share/nginx/html
    
    List the files that docker build should not ADD:
    vim .dockerignore
    dockerfile
    
    docker build -t xiaoniao:v1 .
    docker run -d -p 88:80 xiaoniao:v1
    
    Open a browser and test the xiaoniaofeifei project on port 88.

    7.2.6 Push the dockerfile and .dockerignore to the repository

    git add dockerfile .dockerignore
    git commit -m "first commit"
    git push -u origin master
    

    7.2.7 Click Build Now in jenkins to auto-build the image and push it to the private registry

    Edit the jenkins job's shell build step ($BUILD_ID is jenkins' built-in build number):
    
    docker  build  -t  10.0.0.11:5000/test:v$BUILD_ID  .
    docker  push 10.0.0.11:5000/test:v$BUILD_ID
    

    7.3 Jenkins auto-deploys the app to k8s

    Test that kubectl on the jenkins host can reach the apiserver:

    [root@glusterfs01 opt]# kubectl -s 10.0.0.11:8080 get nodes
    NAME        STATUS     AGE
    10.0.0.13   NotReady   5d
    
    Then make the build step deploy on first run and update afterwards, keyed on a lock file:
    
    if [ -f /tmp/xiaoniao.lock ];then
        docker build -t 10.0.0.11:5000/xiaoniao:v$BUILD_ID .
        docker push 10.0.0.11:5000/xiaoniao:v$BUILD_ID
        kubectl -s 10.0.0.11:8080 set image -n xiaoniao deploy xiaoniao xiaoniao=10.0.0.11:5000/xiaoniao:v$BUILD_ID
        echo "update succeeded"
    else
        docker build -t 10.0.0.11:5000/xiaoniao:v$BUILD_ID .
        docker push 10.0.0.11:5000/xiaoniao:v$BUILD_ID
        kubectl -s 10.0.0.11:8080 create namespace xiaoniao
        kubectl -s 10.0.0.11:8080 run xiaoniao -n xiaoniao --image=10.0.0.11:5000/xiaoniao:v$BUILD_ID --replicas=3 --record
        kubectl -s 10.0.0.11:8080 expose -n xiaoniao deployment xiaoniao --port=80 --type=NodePort
        port=`kubectl -s 10.0.0.11:8080 get svc -n xiaoniao|grep -oP '(?<=80:)\d+'`
        echo "the project is available at http://10.0.0.13:$port"
        touch /tmp/xiaoniao.lock
    fi
    

    One-click rollback from jenkins:

    kubectl -s 10.0.0.11:8080 rollout undo -n xiaoniao deployment xiaoniao
    
