Learning Kubernetes (1): Deploying a Flannel-Based Cluster with kubeasz

Author: ljyfree | Published 2019-03-28 15:34

    Reference: http://aspirer.wang/?p=1205. This walkthrough uses kubeasz to deploy Kubernetes, and covers only container-to-container communication between nodes.

    Test environment

    Prepare three virtual machines (CentOS 7.5):

    k8s-master:10.25.151.100
    k8s-node-1:10.25.151.103
    k8s-node-2:10.25.151.104
    

    Preparation (performed on the master node)

    • Install the required software
    # yum install git python-pip -y
    # pip install pip --upgrade -i http://mirrors.aliyun.com/pypi/simple/ --trusted-host mirrors.aliyun.com
    # pip install --no-cache-dir ansible -i http://mirrors.aliyun.com/pypi/simple/ --trusted-host mirrors.aliyun.com
    
    • Generate an SSH key and copy it to each node, entering that node's root password when prompted
    # ssh-keygen -t rsa -b 2048 (press Enter three times)
    # ssh-copy-id 10.25.151.100
    # ssh-copy-id 10.25.151.103
    # ssh-copy-id 10.25.151.104
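
    To copy the key to all three nodes in one pass, a minimal loop sketch (assuming root login to the node IPs above):
    # for ip in 10.25.151.100 10.25.151.103 10.25.151.104; do ssh-copy-id root@$ip; done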
    
    • Fetch kubeasz (the k8s.1-13-4.tar.gz binary bundle listed below was downloaded separately, following the kubeasz docs)
    # git clone https://github.com/gjmzj/kubeasz.git
    # ll
    total 213968
    -rw-------. 1 root root      1518 Mar 26 02:18 anaconda-ks.cfg
    drwxr-xr-x. 2 root root      4096 Mar 11 09:02 bin
    -rw-r--r--. 1 root root 219093842 Mar 27  2019 k8s.1-13-4.tar.gz
    drwxr-xr-x. 3 root root        36 Mar 26 04:29 kubeasz
    
    
    • Move the files into place
    # mkdir -p /etc/ansible
    # mv kubeasz/* /etc/ansible
    
    # tar zxvf k8s.1-13-4.tar.gz 
    bin/
    bin/loopback
    bin/kubelet
    bin/docker-init
    bin/docker-compose
    bin/docker-proxy
    bin/portmap
    bin/containerd-shim
    bin/etcd
    bin/containerd
    bin/helm
    bin/cfssl-certinfo
    bin/kube-proxy
    bin/kube-controller-manager
    bin/cfssljson
    bin/bridge
    bin/ctr
    bin/kube-apiserver
    bin/docker
    bin/etcdctl
    bin/kubectl
    bin/dockerd
    bin/cfssl
    bin/calicoctl
    bin/readme.md
    bin/host-local
    bin/kube-scheduler
    bin/runc
    bin/flannel
    #
    # mkdir -p /etc/ansible
    # mv bin/* /etc/ansible/bin
    mv: overwrite ‘/etc/ansible/bin/readme.md’? y
    #
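
    As an optional sanity check, the unpacked client binary should report the bundled version (v1.13.4):
    # /etc/ansible/bin/kubectl version --client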
    
    • Configure the ansible hosts file
    # cd /etc/ansible
    # cp example/hosts.allinone.example hosts
    After editing:
    # cat hosts 
    # Cluster deploy node: usually the node that runs the ansible scripts
    # Variable NTP_ENABLED (=yes/no) sets whether chrony time sync is installed on the cluster
    [deploy]
    10.25.151.100 NTP_ENABLED=no
    
    # Provide NODE_NAME for the etcd cluster as below; note etcd must have an odd number of nodes: 1,3,5,7...
    [etcd]
    10.25.151.100 NODE_NAME=etcd1
    
    [kube-master]
    10.25.151.100
    
    [kube-node]
    10.25.151.103
    10.25.151.104
    
    # Parameter NEW_INSTALL: yes = install a new harbor, no = use an existing harbor server
    # If no domain name is used, set HARBOR_DOMAIN=""
    [harbor]
    #192.168.1.8 HARBOR_DOMAIN="harbor.yourdomain.com" NEW_INSTALL=no
    
    # [Optional] external load balancer, e.g. to forward services exposed via NodePort in your own environment
    [ex-lb]
    #192.168.1.6 LB_ROLE=backup EX_VIP=192.168.1.250
    #192.168.1.7 LB_ROLE=master EX_VIP=192.168.1.250
    
    [all:vars]
    # ---------main cluster parameters---------------
    # Cluster deploy mode: allinone, single-master, multi-master
    DEPLOY_MODE=allinone
    
    # Cluster MASTER IP, generated automatically
    MASTER_IP="{{ groups['kube-master'][0] }}"
    KUBE_APISERVER="https://{{ MASTER_IP }}:6443"
    
    # Cluster network plugin; currently supports calico, flannel, kube-router, cilium
    CLUSTER_NETWORK="flannel"
    
    # Service CIDR; must not conflict with any existing internal network
    SERVICE_CIDR="10.68.0.0/16"
    
    # Pod network (Cluster CIDR); must not conflict with any existing internal network
    CLUSTER_CIDR="172.20.0.0/16"
    
    # Service port range (NodePort Range)
    NODE_PORT_RANGE="20000-40000"
    
    # kubernetes service IP (pre-allocated, usually the first IP in SERVICE_CIDR)
    CLUSTER_KUBERNETES_SVC_IP="10.68.0.1"
    
    # Cluster DNS service IP (pre-allocated from SERVICE_CIDR)
    CLUSTER_DNS_SVC_IP="10.68.0.2"
    
    # Cluster DNS domain
    CLUSTER_DNS_DOMAIN="cluster.local."
    
    # ---------additional parameters--------------------
    # Default binary directory
    bin_dir="/opt/kube/bin"
    
    # Certificate directory
    ca_dir="/etc/kubernetes/ssl"
    
    # Deploy directory, i.e. the ansible working directory
    base_dir="/etc/ansible"
    #
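
    Before running the playbooks, it is worth confirming that SERVICE_CIDR and CLUSTER_CIDR above do not collide with routes already present on the hosts; a quick check (no output means no conflict):
    # ip route | grep -E '10\.68\.|172\.20\.'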
    
    • Verify that every node is reachable
    # ansible all -m ping
    10.25.151.104 | SUCCESS => {
        "changed": false, 
        "ping": "pong"
    }
    10.25.151.100 | SUCCESS => {
        "changed": false, 
        "ping": "pong"
    }
    10.25.151.103 | SUCCESS => {
        "changed": false, 
        "ping": "pong"
    }
    
    • Review the playbooks
    # ll
    total 88
    -rw-r--r--.  1 root root   499 Mar 26 04:19 01.prepare.yml
    -rw-r--r--.  1 root root    58 Mar 26 04:19 02.etcd.yml
    -rw-r--r--.  1 root root    87 Mar 26 04:19 03.docker.yml
    -rw-r--r--.  1 root root   532 Mar 26 04:19 04.kube-master.yml
    -rw-r--r--.  1 root root    72 Mar 26 04:19 05.kube-node.yml
    -rw-r--r--.  1 root root   346 Mar 26 04:19 06.network.yml
    -rw-r--r--.  1 root root    77 Mar 26 04:19 07.cluster-addon.yml
    -rw-r--r--.  1 root root  1521 Mar 26 04:19 11.harbor.yml
    -rw-r--r--.  1 root root   411 Mar 26 04:19 22.upgrade.yml
    -rw-r--r--.  1 root root  1394 Mar 26 04:19 23.backup.yml
    -rw-r--r--.  1 root root  1391 Mar 26 04:19 24.restore.yml
    -rw-r--r--.  1 root root  1723 Mar 26 04:19 90.setup.yml
    -rw-r--r--.  1 root root  5941 Mar 26 04:19 99.clean.yml
    -rw-r--r--.  1 root root 10283 Mar 26 04:19 ansible.cfg
    drwxr-xr-x.  2 root root  4096 Mar 26 04:32 bin
    drwxr-xr-x.  4 root root    36 Mar 26 04:19 dockerfiles
    drwxr-xr-x.  8 root root    92 Mar 26 04:19 docs
    drwxr-xr-x.  2 root root    47 Mar 26 04:19 down
    drwxr-xr-x.  2 root root   254 Mar 26 04:19 example
    -rw-r--r--.  1 root root  1884 Mar 26 04:34 hosts
    drwxr-xr-x. 14 root root   218 Mar 26 04:19 manifests
    drwxr-xr-x.  2 root root   245 Mar 26 04:19 pics
    -rw-r--r--.  1 root root  5056 Mar 26 04:19 README.md
    drwxr-xr-x. 22 root root  4096 Mar 26 04:19 roles
    drwxr-xr-x.  2 root root   272 Mar 26 04:19 tools
    [root@k8s-master ansible]# 
    
    

    Installing the cluster with ansible

    • Install step by step (a failed step can simply be re-run):
    # ansible-playbook 01.prepare.yml
    # ansible-playbook 02.etcd.yml
    # ansible-playbook 03.docker.yml
    # ansible-playbook 04.kube-master.yml
    # ansible-playbook 05.kube-node.yml
    # ansible-playbook 06.network.yml
    
    • Or install everything in one step
    # ansible-playbook 90.setup.yml
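
    If a step fails, it can also be re-run against a single host using standard ansible-playbook options, e.g. (assuming node 10.25.151.103 is the one being retried):
    # ansible-playbook 05.kube-node.yml --limit 10.25.151.103 -vv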
    
    • After the installation finishes, reconnect or open a new terminal so that the shortcut commands work (no full paths needed)
    • Check the cluster state
    [root@localhost ~]# kubectl get node
    NAME            STATUS   ROLES    AGE   VERSION
    10.25.151.100   Ready    master   16h   v1.13.4
    10.25.151.103   Ready    node     16h   v1.13.4
    10.25.151.104   Ready    node     16h   v1.13.4
    [root@localhost ~]# 
    [root@localhost ~]# kubectl get componentstatus
    NAME                 STATUS    MESSAGE              ERROR
    scheduler            Healthy   ok                   
    controller-manager   Healthy   ok                   
    etcd-0               Healthy   {"health": "true"}   
    [root@localhost ~]# kubectl cluster-info
    Kubernetes master is running at https://10.25.151.100:6443
    
    To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
    [root@localhost ~]# 
    [root@localhost ~]# kubectl get pod --all-namespaces
    NAMESPACE     NAME                          READY   STATUS    RESTARTS   AGE
    kube-system   kube-flannel-ds-amd64-dtsgm   1/1     Running   0          54m
    kube-system   kube-flannel-ds-amd64-hfnr6   1/1     Running   0          54m
    kube-system   kube-flannel-ds-amd64-pnh4m   1/1     Running   0          54m
    [root@localhost ~]# 
    [root@localhost ~]# kubectl get svc --all-namespaces
    NAMESPACE   NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
    default     kubernetes   ClusterIP   10.68.0.1    <none>        443/TCP   16h
    [root@localhost ~]# 
    
    • View the flannel subnet information (using the master as the example)
    # cat /run/flannel/subnet.env 
    FLANNEL_NETWORK=172.20.0.0/16
    FLANNEL_SUBNET=172.20.0.1/24
    FLANNEL_MTU=1450
    FLANNEL_IPMASQ=true
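
    FLANNEL_MTU=1450 is the NIC's 1500-byte MTU minus the 50-byte VXLAN overhead (20-byte outer IP + 8-byte UDP + 8-byte VXLAN header + 14-byte inner Ethernet). Since the file is plain shell, scripts can source it directly:
    # . /run/flannel/subnet.env && echo "subnet=$FLANNEL_SUBNET mtu=$FLANNEL_MTU"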
    
    • Confirm the docker0/flannel.1 IP addresses on each node
    On 100:
    docker0 172.17.0.1/16
    flannel.1 172.20.0.0/32
    
    On 103:
    docker0 172.17.0.1/16
    flannel.1 172.20.1.0/32
    
    On 104:
    docker0 172.17.0.1/16
    flannel.1 172.20.2.0/32
    
    • Path to the flannel usage documentation
    /etc/ansible/docs/setup/network-plugin/flannel.md
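
    The VXLAN parameters behind flannel.1 (the VNI and UDP port 8472 seen in the capture further below, plus the per-node forwarding entries) can be inspected with standard iproute2 commands:
    # ip -d link show flannel.1
    # bridge fdb show dev flannel.1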
    

    Verifying the network

    Starting pods on the nodes

    • Pull the small busybox image on any node
    # docker pull busybox
    Using default tag: latest
    latest: Pulling from library/busybox
    697743189b6d: Pull complete 
    Digest: sha256:061ca9704a714ee3e8b80523ec720c64f6209ad3f97c0ff7cb9ec7d19f15149f
    Status: Downloaded newer image for busybox:latest
    [root@localhost ~]# 
    [root@localhost ~]# docker images
    REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
    busybox                              latest              d8233ab899d4        5 weeks ago         1.2MB
    jmgao1983/flannel                    v0.11.0-amd64       ff281650a721        8 weeks ago         52.6MB
    mirrorgooglecontainers/pause-amd64   3.1                 da86e6ba6ca1        15 months ago       742kB
    [root@localhost ~]#
    
    • Create three busybox pods
    # kubectl run test --image=busybox --replicas=3 sleep 30000
    kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
    deployment.apps/test created
    # 
    # kubectl get pod --all-namespaces -o wide|head -n 4
    NAMESPACE     NAME                          READY   STATUS    RESTARTS   AGE   IP              NODE            NOMINATED NODE   READINESS GATES
    default       test-568866f478-6z87m         1/1     Running   0          29s   172.20.2.2      10.25.151.104   <none>           <none>
    default       test-568866f478-q7fft         1/1     Running   0          29s   172.20.1.2      10.25.151.103   <none>           <none>
    default       test-568866f478-sgb5l         1/1     Running   0          29s   172.20.0.2      10.25.151.100   <none>           <none>
    

    As shown above, the three pods were scheduled onto different nodes.
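
    These pods can also be entered directly from the master with kubectl exec (pod name taken from the listing above), as an alternative to the docker exec used in the next section; a minimal sketch:
    # kubectl exec -ti test-568866f478-sgb5l -- /bin/sh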

    By default, cross-node container traffic is encapsulated in UDP

    • Enter the pod on the local node and inspect its IP address and routes
    [root@localhost ~]# docker ps | grep busybox
    cac2bc7afd61        busybox                                  "sleep 30000"            19 minutes ago      Up 19 minutes                           k8s_test_test-568866f478-sgb5l_default_3c4e60ff-5104-11e9-b02f-005056a921d2_0
    [root@localhost ~]# 
    [root@localhost ~]# 
    [root@localhost ~]# docker exec -ti  cac2bc7afd61   /bin/sh
    / # 
    / # ip addr
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
    3: eth0@if6: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue 
        link/ether 9e:c3:f5:14:6e:ac brd ff:ff:ff:ff:ff:ff
        inet 172.20.0.2/24 scope global eth0
           valid_lft forever preferred_lft forever
    / # 
    / # ip route
    default via 172.20.0.1 dev eth0 
    172.20.0.0/24 dev eth0 scope link  src 172.20.0.2 
    172.20.0.0/16 via 172.20.0.1 dev eth0
    / # 
    

    The gateway 172.20.0.1 is the IP address of cni0.
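
    On the host side, flannel programs one route per remote pod subnet through flannel.1; a quick way to list them:
    # ip route show | grep flannel.1
    The "via" addresses should match the remote flannel.1 IPs noted earlier (172.20.1.0 and 172.20.2.0).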

    • Pinging a container on another node from this container succeeds
    / # ifconfig | grep 172
              inet addr:172.20.0.2  Bcast:0.0.0.0  Mask:255.255.255.0
    / # 
    / # ping 172.20.2.2 -s 1200
    PING 172.20.2.2 (172.20.2.2): 1200 data bytes
    1208 bytes from 172.20.2.2: seq=0 ttl=62 time=0.973 ms
    1208 bytes from 172.20.2.2: seq=1 ttl=62 time=0.581 ms
    
    • A traceroute shows the path container(master) -> cni0(master) -> flannel.1(node-2) -> container(node-2)
    / # traceroute 172.20.2.2
    traceroute to 172.20.2.2 (172.20.2.2), 30 hops max, 46 byte packets
     1  172.20.0.1 (172.20.0.1)  0.017 ms  0.120 ms  0.009 ms
     2  172.20.2.0 (172.20.2.0)  0.970 ms  1.123 ms  0.339 ms
     3  172.20.2.2 (172.20.2.2)  0.411 ms  3.236 ms  2.842 ms
    / # 
    
    • Capturing on the master's physical NIC shows the ICMP packets encapsulated in UDP (port 8472)
    # tcpdump -i ens160 -enn 'ip[2:2] > 1200 and  ip[2:2] < 1500'
    tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
    listening on ens160, link-type EN10MB (Ethernet), capture size 262144 bytes
    02:28:05.870909 00:50:56:a9:21:d2 > 00:50:56:a9:5e:81, ethertype IPv4 (0x0800), length 1292: 10.25.151.100.40238 > 10.25.151.104.8472: OTV, flags [I] (0x08), overlay 0, instance 1
    e2:70:a7:b2:ef:fc > 5a:b8:85:c1:65:2f, ethertype IPv4 (0x0800), length 1242: 172.20.0.2 > 172.20.2.2: ICMP echo request, id 6400, seq 888, length 1208
    02:28:05.871346 00:50:56:a9:5e:81 > 00:50:56:a9:21:d2, ethertype IPv4 (0x0800), length 1292: 10.25.151.104.34945 > 10.25.151.100.8472: OTV, flags [I] (0x08), overlay 0, instance 1
    5a:b8:85:c1:65:2f > e2:70:a7:b2:ef:fc, ethertype IPv4 (0x0800), length 1242: 172.20.2.2 > 172.20.0.2: ICMP echo reply, id 6400, seq 888, length 1208
    02:28:06.871170 00:50:56:a9:21:d2 > 00:50:56:a9:5e:81, ethertype IPv4 (0x0800), length 1292: 10.25.151.100.40238 > 10.25.151.104.8472: OTV, flags [I] (0x08), overlay 0, instance 1
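
    Note: tcpdump decodes the port-8472 payload as OTV by default; a tcpdump build with VXLAN support can be told to decode it explicitly:
    # tcpdump -i ens160 -T vxlan udp port 8472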
    
    • Opening the capture in Wireshark shows the original IP packet encapsulated in UDP, with the node IPs as the outer addresses
    (figure: flannel_udp.png)
