k8s Container Orchestration


Author: 驮着集装箱的鲸鱼 | Published 2019-07-29 22:43

    1. The k8s architecture

    [Architecture diagram omitted; image credit: 强哥, cloud-computing instructor at 老男孩]
    Besides the core components, there are several recommended add-ons:
    Component               Description
    kube-dns                Provides DNS for the whole cluster
    Ingress Controller      Provides an external entry point for services
    Heapster                Provides resource monitoring
    Dashboard               Provides a GUI
    Federation              Provides clusters spanning availability zones
    Fluentd-elasticsearch   Provides cluster log collection, storage, and querying

    2. Installing the k8s cluster

    2.1. Configure IP addresses, hostnames, and hosts resolution

    IP          Hostname
    10.0.0.11 k8s-master
    10.0.0.12 k8s-node-1
    10.0.0.13 k8s-node-2
    cat >> /etc/hosts << EOF
    10.0.0.11 k8s-master
    10.0.0.12 k8s-node-1
    10.0.0.13 k8s-node-2
    EOF
    
    scp /etc/hosts 10.0.0.12:/etc/hosts
    scp /etc/hosts 10.0.0.13:/etc/hosts
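
    To confirm the names resolve on every machine, a quick check (assuming the three hostnames above):
    for h in k8s-master k8s-node-1 k8s-node-2; do ping -c1 $h; done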
    

    2.2. Install etcd on the master node

    yum install etcd -y
    
    vim /etc/etcd/etcd.conf
    Line 6:  ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
    Line 21: ETCD_ADVERTISE_CLIENT_URLS="http://10.0.0.11:2379"
    (sed -ri "6s#(.*)localhost(.*)#\10.0.0.0\2#g" /etc/etcd/etcd.conf)
    (sed -ri "21s#(.*)localhost(.*)#\110.0.0.11\2#g" /etc/etcd/etcd.conf)
    
    systemctl start etcd.service
    systemctl enable etcd.service
    
    etcdctl set testdir/testkey0 0
    etcdctl get testdir/testkey0
    
    etcdctl -C http://10.0.0.11:2379 cluster-health
    

    2.3. Install kubernetes on the master node

    yum install kubernetes-master.x86_64 -y
    
    vim /etc/kubernetes/apiserver
    Line 8:  KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
    (sed -ri "8s#(.*)127.0.0.1(.*)#\10.0.0.0\2#g" /etc/kubernetes/apiserver)

    Line 11: KUBE_API_PORT="--port=8080"  (uncomment this line)
    (sed -ri "11s#\#(.*)#\1#g" /etc/kubernetes/apiserver)

    Line 17: KUBE_ETCD_SERVERS="--etcd-servers=http://10.0.0.11:2379"
    (sed -ri "17s#(.*)127.0.0.1(.*)#\110.0.0.11\2#g" /etc/kubernetes/apiserver)

    Line 23: KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"  (ServiceAccount removed)
    (sed -ri "23s#(.*)ServiceAccount,(.*)#\1\2#g" /etc/kubernetes/apiserver)

    vim /etc/kubernetes/config
    Line 22: KUBE_MASTER="--master=http://10.0.0.11:8080"
    (sed -ri "22s#(.*)127.0.0.1(.*)#\110.0.0.11\2#g" /etc/kubernetes/config)
    
    systemctl enable kube-apiserver.service
    systemctl restart kube-apiserver.service
    systemctl enable kube-controller-manager.service
    systemctl restart kube-controller-manager.service
    systemctl enable kube-scheduler.service
    systemctl restart kube-scheduler.service
    
    Check component status:
    [root@k8s-master ~]# kubectl get componentstatus 
    NAME                 STATUS    MESSAGE             ERROR
    scheduler            Healthy   ok                  
    controller-manager   Healthy   ok                  
    etcd-0               Healthy   {"health":"true"} 
    

    2.4. Install kubernetes on the nodes

    yum install kubernetes-node.x86_64 -y
    
    vim /etc/kubernetes/config
    Line 22: KUBE_MASTER="--master=http://10.0.0.11:8080"
    (sed -ri "22s#(.*)127.0.0.1(.*)#\110.0.0.11\2#g" /etc/kubernetes/config)

    vim /etc/kubernetes/kubelet
    Line 5:  KUBELET_ADDRESS="--address=0.0.0.0"
    (sed -ri "5s#(.*)127.0.0.1(.*)#\10.0.0.0\2#g" /etc/kubernetes/kubelet)

    Line 8:  KUBELET_PORT="--port=10250"
    Line 11: KUBELET_HOSTNAME="--hostname-override=10.0.0.12"  (use 10.0.0.13 on k8s-node-2)
    Line 14: KUBELET_API_SERVER="--api-servers=http://10.0.0.11:8080"
    
    systemctl enable kubelet.service
    systemctl restart kubelet.service
    systemctl enable kube-proxy.service
    systemctl restart kube-proxy.service
    
    Check from the master:
    [root@k8s-master ~]# kubectl get nodes
    
    NAME        STATUS     AGE
    10.0.0.12   Ready      2m
    10.0.0.13   NotReady   9s
    

    2.5. Configure the flannel network on all nodes so containers on different hosts can communicate

    (1) Install the package on all nodes
    yum install flannel -y
    
    (2) Edit the config file on all nodes
    sed -i 's#http://127.0.0.1:2379#http://10.0.0.11:2379#g' /etc/sysconfig/flanneld
    

    On the master node:

    (1) Create the network key on the master
    [root@k8s-master ~]# etcdctl mk /atomic.io/network/config  '{ "Network": "172.16.0.0/16" }'
    Verify:
    [root@k8s-master ~]# etcdctl get /atomic.io/network/config
    { "Network": "172.16.0.0/16" }  # the flannel network depends on this key
    
    (2) Install docker (the master will also host the image registry)
    [root@k8s-master ~]# yum install docker -y
    
    (3) Start flannel and restart the dependent services
    [root@k8s-master ~]# systemctl enable flanneld.service 
    [root@k8s-master ~]# systemctl restart flanneld.service 
    
    [root@k8s-master ~]# systemctl start docker
    [root@k8s-master ~]# systemctl enable docker
    
    [root@k8s-master ~]# systemctl restart kube-apiserver.service
    [root@k8s-master ~]# systemctl restart kube-controller-manager.service
    [root@k8s-master ~]# systemctl restart kube-scheduler.service
    

    On the nodes:

    systemctl enable flanneld.service 
    systemctl restart flanneld.service 
    service docker restart
    systemctl restart kubelet.service
    systemctl restart kube-proxy.service
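
    To confirm flannel picked up a per-host subnet, a quick check on any host (a hedged check: the file path assumes the stock flanneld unit, and the interface may be flannel0 or flannel.1 depending on the backend):
    cat /run/flannel/subnet.env
    ip addr show flannel0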
    

    Post-deployment test: verify cross-host container networking:

    (1) Download and import the image on all nodes
    wget http://192.168.12.201/docker_image/docker_busybox.tar.gz
    
    docker load -i docker_busybox.tar.gz 
    
    (2) Start a container on every node
    docker run -it docker.io/busybox:latest
    
    (3) Check the IP address inside each container
    ifconfig
    
    (4) Test connectivity between the containers (by default it fails)
    / # ping 172.16.101.2 -c2
    PING 172.16.101.2 (172.16.101.2): 56 data bytes
    
    --- 172.16.101.2 ping statistics ---
    2 packets transmitted, 0 packets received, 100% packet loss
    / # ping 172.16.43.2 -c2
    PING 172.16.43.2 (172.16.43.2): 56 data bytes
    
    --- 172.16.43.2 ping statistics ---
    2 packets transmitted, 0 packets received, 100% packet loss
    
    Problem: docker 1.13 changed the default iptables FORWARD policy to DROP, which is why the pings fail.
    
    (5) Fix the forwarding failure on all nodes
    Temporary fix (all nodes):
    iptables -P FORWARD ACCEPT

    Permanent fix (all nodes):
    vim /usr/lib/systemd/system/docker.service
    Add the following line above line 18 of the file:
    ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT  # reset the FORWARD policy every time docker starts
    
    After the change:
    systemctl daemon-reload
    systemctl restart docker
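
    A quick check that the policy stuck (the first line of output should read "-P FORWARD ACCEPT"):
    iptables -S FORWARD | head -1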
    
    (6) Test connectivity again
    / # ping 172.16.101.2 -c1
    PING 172.16.101.2 (172.16.101.2): 56 data bytes
    64 bytes from 172.16.101.2: seq=0 ttl=60 time=2.247 ms
    
    --- 172.16.101.2 ping statistics ---
    1 packets transmitted, 1 packets received, 0% packet loss
    round-trip min/avg/max = 2.247/2.247/2.247 ms
    / # ping 172.16.43.2 -c1
    PING 172.16.43.2 (172.16.43.2): 56 data bytes
    64 bytes from 172.16.43.2: seq=0 ttl=60 time=0.818 ms
    
    --- 172.16.43.2 ping statistics ---
    1 packets transmitted, 1 packets received, 0% packet loss
    round-trip min/avg/max = 0.818/0.818/0.818 ms
    

    2.6. Configure the master as the cluster's image registry

    On all nodes

    (1) Edit the config file to add a registry mirror and the private registry
    vim /etc/sysconfig/docker
    Delete line 4 of the file: OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false'

    Then add this line in its place:
    OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false --registry-mirror=https://registry.docker-cn.com --insecure-registry=10.0.0.11:5000'  # mirror for faster pulls, plus our insecure private registry
    
    (2) Restart docker after the change
    systemctl restart docker
    

    On the master node (start the private registry)

    (1) Download (or upload via rz) the registry image tarball
    wget http://192.168.12.201/docker_image/registry.tar.gz
    
    (2) Import the image
    docker load -i registry.tar.gz
    
    (3) Start the private registry
    docker run -d -p 5000:5000 --restart=always --name registry -v /opt/myregistry:/var/lib/registry  registry
    
    (4) Check that the registry is running
    [root@k8s-master ~]# docker ps
    CONTAINER ID        IMAGE               COMMAND                  CREATED              STATUS              PORTS                    NAMES
    cbf413494565        registry            "/entrypoint.sh /e..."   About a minute ago   Up About a minute   0.0.0.0:5000->5000/tcp   registry
    

    Test that the registry works (push an image)

    On any one of the node machines:
    [root@k8s-node-2 ~]# docker tag docker.io/busybox 10.0.0.11:5000/busybox:latest
    [root@k8s-node-2 ~]# docker push 10.0.0.11:5000/busybox
    The push refers to a repository [10.0.0.11:5000/busybox]
    adab5d09ba79: Pushed 
    latest: digest: sha256:4415a904b1aca178c2450fd54928ab362825e863c0ad5452fd020e92f7a6a47e size: 527
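
    The registry's v2 HTTP API can confirm the push landed (output is JSON):
    curl http://10.0.0.11:5000/v2/_catalog
    curl http://10.0.0.11:5000/v2/busybox/tags/list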
    

    3. What is k8s, and what can it do?

    k8s is a management tool for docker clusters.
    

    3.1 Core features of k8s

    Official site: kubernetes.io

    A few of the most important features:
    (1) Self-healing: restarts failed containers, replaces and reschedules containers when a node becomes unavailable, kills containers that fail a user-defined health check, and does not advertise a container to clients until it is ready to serve. (A sketch of such a health check follows this list.)

    (2) Elastic scaling: by monitoring container CPU load, e.g. add containers when the average exceeds 80% and remove them when it drops below 10%.

    (3) Automatic service discovery and load balancing: no need to modify your application to use an unfamiliar service discovery mechanism; Kubernetes gives containers their own IP addresses and gives a group of containers a single DNS name, and can load-balance across them.

    (4) Rolling upgrades and one-click rollback: Kubernetes rolls out changes to an application or its configuration progressively while monitoring its health, ensuring it never kills all instances at the same time. If something goes wrong, Kubernetes rolls the change back for you, drawing on a growing ecosystem of deployment solutions.
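
    A minimal sketch of the user-defined health check mentioned in (1), written as a livenessProbe on a container (the path and timings are illustrative, not taken from this cluster):
    apiVersion: v1
    kind: Pod
    metadata:
      name: web-with-probe
    spec:
      containers:
        - name: web
          image: 10.0.0.11:5000/nginx:1.13
          livenessProbe:            # kubelet restarts the container when this check fails
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 5  # wait before the first probe
            periodSeconds: 10       # probe interval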
    

    3.2 A brief history of k8s

    (1) In 2014, Google announced it would release a management platform for docker, i.e. a container management tool.

    (2) In July 2015, Kubernetes 1.0 was officially released.

    (3) From 2016 on, four versions have been released each year; the four 2016 releases were 1.2, 1.3, 1.4, and 1.5.

    (4) By 2019: 1.13, 1.14, and 1.15.
    

    3.3 Ways to install k8s

    (1) yum install: can only install version 1.5, but has the highest success rate and is the best fit for learning; all of k8s's core features are already present in 1.5.

    (2) Build from source: takes the longest and is the hardest, but can install the latest version.

    (3) Binary install: tedious and not recommended for beginners; can install the latest version, and one-click deployments exist (shell, ansible, saltstack).

    (4) kubeadm: the easiest install, but needs network access; can install the latest version.

    (5) minikube: a single-machine k8s, good for developers who want to try it out.
    

    3.4 Where k8s fits

    k8s is best suited to running microservice projects!
    So what is a microservice? Before microservice architectures existed, the monolithic MVC architecture was the most common: all features lived on one site, under one domain, one codebase, one database, no matter how many features there were. Sooner or later the database could no longer take the load.

    Microservices:
    One feature per domain, per codebase, per database.
    This supports higher concurrency, shortens release and update times, lowers development difficulty, and makes the cluster more robust.
    

    4. Commonly used k8s resources

    k8s has many resource types; the smallest resource unit among them is the pod.
    In k8s, any resource can be created from a yaml file.
    

    4.1 Creating a pod resource

    The main parts of a k8s yaml file

    (1) apiVersion: v1   the API version
    (2) kind: Pod        the resource type
    (3) metadata:        metadata
    (4) spec:            the details

    A full, annotated example, k8s_pod.yaml:

    apiVersion: v1   # version
    kind: Pod  # resource type
    metadata:  # metadata
      name: nginx  # the name the resource will be created with
      labels:  # labels; you can define several
        app: web  # label value
    spec:  # details of the pod
      containers:  # containers; a Pod can run several, at least one
        - name: nginx  # container name
          image: 10.0.0.11:5000/nginx:1.13  # image the container uses
          ports:  # ports
            - containerPort: 80  # the exposed container port
    

    Exercise: create a yaml file on the master node

    (1) Directory layout
    [root@k8s-master ~]# mkdir -p k8s/pod
    [root@k8s-master ~]# cd k8s/pod
    
    (2) Write the yaml file
    [root@k8s-master ~/k8s/pod]# vim k8s_pod.yaml
    apiVersion: v1   # version
    kind: Pod  # resource type
    metadata:  # metadata
      name: nginx  # the name the resource will be created with
      labels:  # labels; you can define several
        app: web  # label value
    spec:  # details of the pod
      containers:  # containers; a Pod can run several, at least one
        - name: nginx  # container name
          image: 10.0.0.11:5000/nginx:1.13  # image the container uses
          ports:  # ports
            - containerPort: 80  # the exposed container port
    
    (3) Push the image the yaml needs (on any node)
    Download the image
    [root@k8s-node-2 ~]# wget http://192.168.12.201/docker_image/docker_nginx1.13.tar.gz

    Import the image
    [root@k8s-node-2 ~]# docker load -i docker_nginx1.13.tar.gz

    Tag it
    [root@k8s-node-2 ~]# docker tag docker.io/nginx:1.13 10.0.0.11:5000/nginx:1.13

    Push it to the private registry
    [root@k8s-node-2 ~]# docker push 10.0.0.11:5000/nginx:1.13
    
    
    (4) Create the pod from the yaml (on the master)
    [root@k8s-master ~/k8s/pod]# kubectl create -f k8s_pod.yaml 
    pod "nginx" created
    
    (5) List the pod resources
    [root@k8s-master ~/k8s/pod]# kubectl get pod
    NAME      READY     STATUS              RESTARTS   AGE
    nginx     0/1       ContainerCreating   0          49s  # the nginx pod we just created from the yaml
    
    (6) Troubleshooting
    The new nginx pod is stuck in ContainerCreating, so we turn to the most-used troubleshooting command in k8s:
    [root@k8s-master ~/k8s/pod]# kubectl describe pod nginx
    Name:       nginx
    Namespace:  default
    Node:       10.0.0.13/10.0.0.13
    Start Time: Mon, 29 Jul 2019 17:54:05 +0800
    Labels:     app=web
    Status:     Pending
    IP:     
    Controllers:    <none>
    Containers:
      nginx:
        Container ID:       
        Image:          10.0.0.11:5000/nginx:1.13  # image in use
        Image ID:           
        Port:           80/TCP  # port in use
        State:          Waiting
          Reason:           ContainerCreating
        Ready:          False
        Restart Count:      0
        Volume Mounts:      <none>
        Environment Variables:  <none>
    Conditions:
      Type      Status
      Initialized   True 
      Ready     False 
      PodScheduled  True 
    No volumes.
    QoS Class:  BestEffort
    Tolerations:    <none>
    Events:   # the Events section is what matters
      FirstSeen LastSeen    Count   From            SubObjectPath   Type        Reason      Message
      --------- --------    -----   ----            -------------   --------    ------      -------
      3m        3m      1   {default-scheduler }            Normal      Scheduled   Successfully assigned nginx to 10.0.0.13  # the node it was scheduled to
      3m        58s     5   {kubelet 10.0.0.13}         Warning     FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "POD" with ErrImagePull: "image pull failed for registry.access.redhat.com/rhel7/pod-infrastructure:latest, this may be because there are no credentials on this request.  details: (open /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt: no such file or directory)"
    
      3m    3s  12  {kubelet 10.0.0.13}     Warning FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "POD" with ImagePullBackOff: "Back-off pulling image \"registry.access.redhat.com/rhel7/pod-infrastructure:latest\""
    
    (7) Cause of the error
    Line 17 of /etc/kubernetes/kubelet points the pod-infrastructure image at Red Hat's official registry, which cannot be pulled from here, hence the error. The fix is to change the image address.
    
    (8) Upload the pod-infrastructure-latest.tar.gz tarball to the node and import it
    [root@k8s-node-2 ~]# rz
    [root@k8s-node-2 ~]# docker load -i pod-infrastructure-latest.tar.gz
    
    (9) Tag it and push it to the private registry
    [root@k8s-node-2 ~]# docker tag docker.io/tianyebj/pod-infrastructure 10.0.0.11:5000/pod-infrastructure:latest
    [root@k8s-node-2 ~]# docker push 10.0.0.11:5000/pod-infrastructure:latest
    
    (10) Change the node config file
    vim /etc/kubernetes/kubelet
    Change line 17 of the file from:
    KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
    to:
    KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=10.0.0.11:5000/pod-infrastructure:latest"
    
    (11) Restart the kubelet service
    [root@k8s-node-2 ~]# systemctl restart kubelet.service
    
    (12) Check the pod again from the master
    [root@k8s-master ~]# kubectl describe pod nginx
    ... (output truncated)
    Events:
      FirstSeen LastSeen    Count   From            SubObjectPath   Type       Reason       Message
      --------- --------    -----   ----            -------------   --------   ------       -------
      2h        2m      20  {kubelet 10.0.0.13}         Warning    FailedSync   Error syncing pod, skipping: failed to "StartContainer" for "POD" with ErrImagePull: "image pull failed for registry.access.redhat.com/rhel7/pod-infrastructure:latest, this may be because there are no credentials on this request.  details: (open /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt: no such file or directory)"
    
      2h    2m  248 {kubelet 10.0.0.13}     Warning FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "POD" with ImagePullBackOff: "Back-off pulling image \"registry.access.redhat.com/rhel7/pod-infrastructure:latest\""
    
      2m    2m  2   {kubelet 10.0.0.13}             Warning MissingClusterDNS   kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to DNSDefault policy.
      2m    2m  1   {kubelet 10.0.0.13} spec.containers{nginx}  Normal  Pulled     Container image "10.0.0.11:5000/nginx:1.13" already present on machine
      2m    2m  1   {kubelet 10.0.0.13} spec.containers{nginx}  Normal  Created    Created container with docker id 3fd665497232; Security:[seccomp=unconfined]
      2m    2m  1   {kubelet 10.0.0.13} spec.containers{nginx}  Normal  Started    Started container with docker id 3fd665497232
    The Pulled/Created/Started events show the pod is now healthy; the older FailedSync warnings are history from before the fix.
    
    [root@k8s-master ~]# kubectl get pod
    NAME      READY     STATUS    RESTARTS   AGE
    nginx     1/1       Running   0          2h
    It shows Running now.
    

    5. What is a Pod resource

    A pod resource consists of at least two containers: the pod infrastructure container plus the business container. The pod is the smallest resource unit in k8s.

    View the pod details
    [root@k8s-master ~]# kubectl get pod -o wide
    NAME      READY     STATUS    RESTARTS   AGE       IP            NODE
    nginx     1/1       Running   0          2h        172.16.43.2 (container IP)   10.0.0.13 (node IP)
    
    Test access
    [root@k8s-master ~]# curl -I 172.16.43.2
    HTTP/1.1 200 OK  # access succeeded
    ... (output truncated)
    
    Count the containers running on the node
    [root@k8s-node-2 ~]# docker ps
    CONTAINER ID        IMAGE                                      COMMAND                  CREATED             STATUS              PORTS               NAMES
    3fd665497232        10.0.0.11:5000/nginx:1.13                  "nginx -g 'daemon ..."   9 minutes ago       Up 9 minutes                            k8s_nginx.91390390_nginx_default_cceed9ed-b1e6-11e9-94aa-000c2958ddcf_05f79a8b
    bde8306b26f3        10.0.0.11:5000/pod-infrastructure:latest   "/pod"                   9 minutes ago       Up 9 minutes                            k8s_POD.177f01b0_nginx_default_cceed9ed-b1e6-11e9-94aa-000c2958ddcf_65e2d5fc
    # Two containers are running: a pod consists of at least the pod infrastructure container plus the business container, and one Pod can run multiple containers.
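
    The business container shares the infrastructure container's network namespace, which is why the pod has a single IP; a quick check (substitute the container IDs from your own docker ps output):
    docker inspect -f '{{.HostConfig.NetworkMode}}' 3fd665497232
    # expected output: container:<full id of the pod-infrastructure container>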
    

    6. The ReplicationController (rc) resource

    What rc does: it starts multiple identical pods and keeps the specified number of pods alive at all times; the rc is associated with its pods through a label selector.
    

    Common operations on k8s resources:

    (1) kubectl create -f xxx.yaml      # create resources from a file
    (2) kubectl get pod|rc              # list resources of a type (pod|rc)
    (3) kubectl describe pod nginx     # show a detailed description
    (4) kubectl delete pod nginx        # delete a resource by name
    (5) kubectl delete -f xxx.yaml      # delete the resources defined in a file
    (6) kubectl edit pod nginx          # edit a resource's live configuration
    

    Exercise: create an rc on the master node

    (1) Write the yaml file
    [root@k8s-master ~/k8s/pod]# mkdir rc
    [root@k8s-master ~/k8s/pod]# cd rc
    [root@k8s-master ~/k8s/pod/rc]# vim k8s_rc.yaml
    apiVersion: v1  # version
    kind: ReplicationController  # resource type: rc
    metadata:  # metadata
      name: nginx  # rc name
    spec:  # details
      replicas: 5  # keep 5 pods running
      selector:  # label selector
        app: myweb  # pod label to match
      template:  # pod template
        metadata:
          labels:
            app: myweb
        spec:
          containers:
          - name: myweb
            image: 10.0.0.11:5000/nginx:1.13
            ports:
            - containerPort: 80
    
    (2) Create the rc
    [root@k8s-master ~/k8s/pod/rc]# kubectl create -f k8s_rc.yaml 
    replicationcontroller "nginx" created
    
    (3) Check the rc
    [root@k8s-master ~/k8s/pod/rc]# kubectl get rc
    NAME      DESIRED   CURRENT   READY     AGE
    nginx     5         5         3         45s
    
    NAME: rc name
    DESIRED: desired number of pods
    CURRENT: number of pods currently started
    READY: number of pods ready
    AGE: time since creation
    
    (4) List the pods
    [root@k8s-master ~/k8s/pod/rc]# kubectl get pod
    NAME          READY     STATUS              RESTARTS   AGE
    nginx         0/1       ContainerCreating   0          9m
    nginx-c8bs1   0/1       ContainerCreating   0          3m
    nginx-d4wnt   1/1       Running             0          3m
    nginx-mcx08   1/1       Running             0          3m
    nginx-nr0gl   1/1       Running             0          3m
    nginx-q7m8f   0/1       ContainerCreating   0          3m
    
    (5) Check how the pods are distributed
    [root@k8s-master ~/k8s/pod/rc]# kubectl get pod -o wide
    NAME          READY     STATUS              RESTARTS   AGE       IP            NODE
    nginx         0/1       ContainerCreating   0          10m       <none>        10.0.0.12
    nginx-c8bs1   0/1       ContainerCreating   0          4m        <none>        10.0.0.12
    nginx-d4wnt   1/1       Running             0          4m        172.16.43.3   10.0.0.13
    nginx-mcx08   1/1       Running             0          4m        172.16.43.2   10.0.0.13
    nginx-nr0gl   1/1       Running             0          4m        172.16.43.4   10.0.0.13
    nginx-q7m8f   0/1       ContainerCreating   0          4m        <none>        10.0.0.12
    The pods on node 10.0.0.12 show ContainerCreating because that node's kubelet config has not been changed yet.
    
    (6) Change the config file on node 10.0.0.12
    vim /etc/kubernetes/kubelet
    Change line 17 of the file from:
    KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
    to:
    KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=10.0.0.11:5000/pod-infrastructure:latest"
    
    (7) Restart the kubelet service (on 10.0.0.12, i.e. k8s-node-1)
    [root@k8s-node-1 ~]# systemctl restart kubelet.service
    
    (8) Check the pod distribution again
    [root@k8s-master ~/k8s/pod/rc]# kubectl get pod -o wide
    NAME          READY     STATUS    RESTARTS   AGE       IP             NODE
    nginx         1/1       Running   0          13m       172.16.101.3   10.0.0.12
    nginx-c8bs1   1/1       Running   0          7m        172.16.101.4   10.0.0.12
    nginx-d4wnt   1/1       Running   0          7m        172.16.43.3    10.0.0.13
    nginx-mcx08   1/1       Running   0          7m        172.16.43.2    10.0.0.13
    nginx-nr0gl   1/1       Running   0          7m        172.16.43.4    10.0.0.13
    nginx-q7m8f   1/1       Running   0          7m        172.16.101.2   10.0.0.12
    
    Note: the yaml set the pod count to 5, so even if you delete a pod, a replacement is created and started right away; if there is one pod too many, the rc deletes the surplus pod. Likewise, if a node goes down, the rc moves all of its pods onto the remaining nodes. A quick demo follows.
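
    Pod names carry random suffixes, so substitute one from your own listing:
    kubectl delete pod nginx-d4wnt   # kill one replica
    kubectl get pod                  # a replacement appears almost immediately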
    

    7. Rolling upgrade of the rc: nginx 1.13 to 1.15

    (1) Create a new rc yaml for nginx 1.15

    [root@k8s-master ~/k8s/pod/rc]# cp k8s_rc.yaml k8s_rc2.yaml
    [root@k8s-master ~/k8s/pod/rc]# vim k8s_rc2.yaml
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: nginx2
    spec:
      replicas: 5
      selector:
        app: myweb2
      template:
        metadata:
          labels:
            app: myweb2
        spec:
          containers:
          - name: myweb2
            image: 10.0.0.11:5000/nginx:1.15
            ports:
            - containerPort: 80
    
    (2) Upload the nginx 1.15 image to any node
    [root@k8s-node-2 ~]# rz
    
    (3) Import the image and push it to the private registry
    [root@k8s-node-2 ~]# docker load -i docker_nginx1.15.tar.gz 
    [root@k8s-node-2 ~]# docker tag docker.io/nginx:latest 10.0.0.11:5000/nginx:1.15
    [root@k8s-node-2 ~]# docker push 10.0.0.11:5000/nginx:1.15
    
    (4) Run the rolling upgrade (on the master)
    [root@k8s-master ~/k8s/pod/rc]# kubectl rolling-update nginx -f k8s_rc2.yaml --update-period=10s
    Created nginx2
    Scaling up nginx2 from 0 to 5, scaling down nginx from 5 to 0 (keep 5 pods available, don't exceed 6 pods)
    Scaling nginx2 up to 1
    Scaling nginx down to 4
    Scaling nginx2 up to 2
    Scaling nginx down to 3
    Scaling nginx2 up to 3
    Scaling nginx down to 2
    Scaling nginx2 up to 4
    Scaling nginx down to 1
    Scaling nginx2 up to 5
    Scaling nginx down to 0
    Update succeeded. Deleting nginx
    replicationcontroller "nginx" rolling updated to "nginx2"
    
    (5) Check the pod IPs
    [root@k8s-master ~/k8s/pod/rc]# kubectl get pod -o wide
    NAME           READY     STATUS    RESTARTS   AGE       IP             NODE
    nginx          1/1       Running   0          37m       172.16.101.3   10.0.0.12
    nginx2-4hvlw   1/1       Running   0          1m        172.16.43.5    10.0.0.13
    nginx2-6hldz   1/1       Running   0          2m        172.16.101.5   10.0.0.12
    nginx2-bwq2p   1/1       Running   0          1m        172.16.43.6    10.0.0.13
    nginx2-m0xp0   1/1       Running   0          1m        172.16.101.4   10.0.0.12
    nginx2-xh754   1/1       Running   0          1m        172.16.101.2   10.0.0.12
    
    (6) Verify the nginx version upgrade succeeded
    [root@k8s-master ~/k8s/pod/rc]#  curl -I 172.16.43.5
    HTTP/1.1 200 OK
    Server: nginx/1.15.5
    ... (output truncated)
    
    (7) Roll back: downgrade nginx 1.15 to 1.13
    [root@k8s-master ~/k8s/pod/rc]# kubectl rolling-update nginx2 -f k8s_rc.yaml --update-period=1s
    Created nginx
    Scaling up nginx from 0 to 5, scaling down nginx2 from 5 to 0 (keep 5 pods available, don't exceed 6 pods)
    Scaling nginx up to 1
    Scaling nginx2 down to 4
    Scaling nginx up to 2
    Scaling nginx2 down to 3
    Scaling nginx up to 3
    Scaling nginx2 down to 2
    Scaling nginx up to 4
    Scaling nginx2 down to 1
    Scaling nginx up to 5
    Scaling nginx2 down to 0
    Update succeeded. Deleting nginx2
    replicationcontroller "nginx2" rolling updated to "nginx"
    
    (8) Check the rollback
    [root@k8s-master ~/k8s/pod/rc]# kubectl get pod -o wide
    NAME          READY     STATUS    RESTARTS   AGE       IP             NODE
    nginx         1/1       Running   0          42m       172.16.101.3   10.0.0.12
    nginx-43tsz   1/1       Running   0          59s       172.16.101.2   10.0.0.12
    nginx-n3l28   1/1       Running   0          57s       172.16.43.4    10.0.0.13
    nginx-rk8xb   1/1       Running   0          1m        172.16.43.2    10.0.0.13
    nginx-wcp12   1/1       Running   0          1m        172.16.101.6   10.0.0.12
    nginx-xgjk3   1/1       Running   0          1m        172.16.43.3    10.0.0.13
    
    [root@k8s-master ~/k8s/pod/rc]# curl -I 172.16.101.2
    HTTP/1.1 200 OK
    Server: nginx/1.13.12  # rollback succeeded
    ... (output truncated)
    

    8. The service resource

    The rc created above cannot be reached from outside the cluster, so we create a service to expose the pods' port.

    (1) Create a service
    [root@k8s-master ~/k8s/pod/rc]# cd ../../
    [root@k8s-master ~/k8s]# mkdir svc
    [root@k8s-master ~/k8s]# cd svc
    [root@k8s-master ~/k8s/svc]# vim nginx_svc.yaml
    apiVersion: v1  # version
    kind: Service  # resource type
    metadata:
      name: myweb  # svc name
    spec:  # details
      type: NodePort  # port-mapping type
      ports:
        - port: 80  # clusterIP (VIP) port
          nodePort: 30000  # port on the host (node)
          targetPort: 80  # port on the pod/container
      selector:
        app: myweb2
    
    (2) Create it
    [root@k8s-master ~/k8s/svc]# kubectl create -f nginx_svc.yaml 
    service "myweb" created
    
    (3) Check everything that has been created
    [root@k8s-master ~/k8s/svc]# kubectl get all
    NAME       DESIRED   CURRENT   READY     AGE
    rc/nginx   5         5         5         18m
    
    NAME             CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
    svc/kubernetes   10.254.0.1      <none>        443/TCP        8h
    svc/myweb        10.254.191.87   <nodes>       80:30000/TCP   1m
    
    NAME             READY     STATUS    RESTARTS   AGE
    po/nginx         1/1       Running   0          59m
    po/nginx-43tsz   1/1       Running   0          18m
    po/nginx-n3l28   1/1       Running   0          18m
    po/nginx-rk8xb   1/1       Running   0          18m
    po/nginx-wcp12   1/1       Running   0          18m
    po/nginx-xgjk3   1/1       Running   0          18m
    
    [root@k8s-master ~/k8s/svc]# kubectl get all -o wide
    NAME       DESIRED   CURRENT   READY     AGE       CONTAINER(S)   IMAGE(S)                    SELECTOR
    rc/nginx   5         5         5         18m       myweb          10.0.0.11:5000/nginx:1.13   app=myweb
    
    NAME             CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE       SELECTOR
    svc/kubernetes   10.254.0.1      <none>        443/TCP        8h        <none>
    svc/myweb        10.254.191.87   <nodes>       80:30000/TCP   1m        app=myweb2
    
    NAME             READY     STATUS    RESTARTS   AGE       IP             NODE
    po/nginx         1/1       Running   0          59m       172.16.101.3   10.0.0.12
    po/nginx-43tsz   1/1       Running   0          18m       172.16.101.2   10.0.0.12
    po/nginx-n3l28   1/1       Running   0          18m       172.16.43.4    10.0.0.13
    po/nginx-rk8xb   1/1       Running   0          18m       172.16.43.2    10.0.0.13
    po/nginx-wcp12   1/1       Running   0          18m       172.16.101.6   10.0.0.12
    po/nginx-xgjk3   1/1       Running   0          18m       172.16.43.3    10.0.0.13
    
    (4) View the svc details
    [root@k8s-master ~/k8s/svc]# kubectl describe svc myweb 
    Name:           myweb
    Namespace:      default
    Labels:         <none>
    Selector:       app=myweb2
    Type:           NodePort
    IP:         10.254.191.87
    Port:           <unset> 80/TCP
    NodePort:       <unset> 30000/TCP
    Endpoints:      <none>  # <none> because the selector above is app=myweb2, which matches no pod (our pods carry app=myweb)
    Session Affinity:   None
    No events.
    
    (5) Edit the myweb service
    [root@k8s-master ~/k8s/svc]# kubectl edit svc myweb
    Change myweb2 on line 22 to myweb.
    
    (6) Check the service again
    [root@k8s-master ~/k8s/svc]# kubectl describe svc myweb 
    ... (output truncated)
    Endpoints:      172.16.101.2:80,172.16.101.6:80,172.16.43.2:80 + 2 more...  # these are the 5 pods
    ... (output truncated)
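
    The endpoints object can also be listed directly:
    kubectl get endpoints myweb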
    
    (7) Test access through any node (not the master)
    [root@k8s-master ~/k8s/svc]# curl -I 10.0.0.12:30000
    HTTP/1.1 200 OK
    
    [root@k8s-master ~/k8s/svc]# curl -I 10.0.0.13:30000
    HTTP/1.1 200 OK
    The application running in k8s can now be reached from outside!
    

    9. More on the svc resource

    The default nodePort range is 30000-32767.
    The service (cluster) IP range is set in the /etc/kubernetes/apiserver file on the master node.

    (1) Change the nodePort range (on the master)
    [root@k8s-master ~/k8s/svc]# vim /etc/kubernetes/apiserver
    Change line 26 from KUBE_API_ARGS=""
    to:
    KUBE_API_ARGS="--service-node-port-range=3000-50000"
    
    (2) Restart the service for the change to take effect
    [root@k8s-master ~/k8s/svc]# systemctl restart kube-apiserver.service
    

    Summary:

    In k8s, a service uses iptables for load balancing by default; starting with k8s 1.8, lvs/ipvs (layer-4 load balancing) is the recommended mode.
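
    To peek at the NAT rules kube-proxy generates for the NodePort service above (a hedged check; chain names can vary across versions):
    iptables -t nat -S KUBE-NODEPORTS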
