2. Kubernetes Environment Setup

Author: Suny____ | Published 2020-03-23 23:40

    1. Kubernetes Installation Options

    There are many ways to install Kubernetes, ranging from extremely complex to relatively simple; the simplest ones, however, are paid enterprise solutions. This article walks through a few ways to install Kubernetes!

    This chapter demonstrates only two installation methods: Minikube and Kubeadm.

    2. Installation

    2.1 Minikube Setup

    • Install kubectl

      • Download by following the steps on the official site

      • Direct download (see the sketch below)

      • kubectl & minikube download via Baidu Pan, extraction code: pap8
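
        A hedged sketch of the direct-download route; the URL pattern and the v1.16.2 version are assumptions inferred from the kubectl version output shown below:

          :: Download kubectl v1.16.2 for Windows from the official release bucket
          curl.exe -LO https://storage.googleapis.com/kubernetes-release/release/v1.16.2/bin/windows/amd64/kubectl.exe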

      • Add kubectl.exe to the PATH environment variable so the kubectl command can be run directly from a cmd window (a sketch follows the version check below)

      • Verify the configuration

        • kubectl version

          C:\Users\32731>kubectl version
          Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T19:18:23Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"windows/amd64"}
          
          # k8s is not installed yet, so the connection attempt here fails
          Unable to connect to the server: dial tcp [::1]:8080: connectex: No connection could be made because the target machine actively refused it.
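
        A minimal sketch for the PATH step, assuming kubectl.exe (and later minikube.exe) were placed in C:\k8s, a hypothetical directory; one simple, if blunt, way from a cmd window:

          :: Append C:\k8s to the PATH (open a new cmd window afterwards)
          setx PATH "%PATH%;C:\k8s"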
          
    • Install minikube

    • Add minikube.exe to the PATH environment variable so the minikube command can be run directly from a cmd window

    • Verify the configuration

      • minikube version

        C:\Users\32731>minikube version
        minikube version: v1.5.2
        commit: 792dbf92a1de583fcee76f8791cff12e0c9440ad-dirty
        
    • Install K8S

      • Since this step requires getting around the Great Firewall, the demonstration stops here

        # Specify the VM driver; under the hood, minikube creates a virtual machine
        C:\Users\32731>minikube start --vm-driver=virtualbox
        ! minikube v1.5.2 on Microsoft Windows 10 Pro 10.0.17763 Build 17763
        * Downloading VM boot image...
        
    • Common commands

      # Create the K8S cluster
      minikube start
      # Delete the K8S cluster
      minikube delete
      # SSH into the K8S VM
      minikube ssh
      # Check status
      minikube status
      # Open the dashboard
      minikube dashboard
      

    Minikube usage on other operating systems is not demonstrated here; see the official site, or the Linux sketch below.
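
    As a hedged example for Linux, where the URL pattern and version are assumptions matching the v1.5.2 used above:

      # Download the minikube v1.5.2 Linux binary and put it on the PATH
      curl -Lo minikube https://storage.googleapis.com/minikube/releases/v1.5.2/minikube-linux-amd64
      chmod +x minikube && sudo mv minikube /usr/local/bin/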

    2.2 Kubeadm Setup (no VPN required)

    Official Kubeadm installation steps

    2.2.1 Prepare the Environment
    • Pinned versions

      • Older versions are used here; newer ones reportedly have issues, which I have not verified, so build with the versions below
      • Docker 18.09.0
      • kubeadm-1.14.0-0
      • kubelet-1.14.0-0
      • kubectl-1.14.0-0
        • k8s.gcr.io/kube-apiserver:v1.14.0
        • k8s.gcr.io/kube-controller-manager:v1.14.0
        • k8s.gcr.io/kube-scheduler:v1.14.0
        • k8s.gcr.io/kube-proxy:v1.14.0
        • k8s.gcr.io/pause:3.1
        • k8s.gcr.io/etcd:3.3.10
        • k8s.gcr.io/coredns:1.3.1
      • calico:v3.9
    • Operating system

      • Win10
    • Virtualization

      • VirtualBox
      • Vagrant + VirtualBox are used together to stand up the CentOS 7 systems
    • Requirements

      • 2 GB or more of RAM per machine (less leaves too little memory for your applications)
      • 2 or more CPU cores
      • Full network connectivity among all machines in the cluster (public or private network both work)
    • Vagrant setup

      • See the previous article, "一、Docker环境准备" (Docker Environment Preparation)

      • Only a Vagrantfile that creates multiple VMs in one go is provided here

        boxes = [
            {
                # VM name
                :name => "master-kubeadm-k8s",
                # IP address; must be on the same subnet as the Win10 host's LAN address
                :eth1 => "192.168.50.111",
                # Allocate 2 GB of memory
                :mem => "2048",
                # Allocate 2 CPU cores
                :cpu => "2",
                :sshport => 22230
            },
            {
                :name => "worker01-kubeadm-k8s",
                :eth1 => "192.168.50.112",
                :mem => "2048",
                :cpu => "2",
                :sshport => 22231
            },
            {
                :name => "worker02-kubeadm-k8s",
                :eth1 => "192.168.50.113",
                :mem => "2048",
                :cpu => "2",
                :sshport => 22232
            }
        ]
        Vagrant.configure(2) do |config|
            config.vm.box = "centos/7"
            boxes.each do |opts|
                config.vm.define opts[:name] do |config|
                    config.vm.hostname = opts[:name]
                    config.vm.network :public_network, ip: opts[:eth1]
                    config.vm.network "forwarded_port", guest: 22, host: 2222, id: "ssh", disabled: "true"
                    config.vm.network "forwarded_port", guest: 22, host: opts[:sshport]
                    config.vm.provider "vmware_fusion" do |v|
                        v.vmx["memsize"] = opts[:mem]
                        v.vmx["numvcpus"] = opts[:cpu]
                    end
                    config.vm.provider "virtualbox" do |v|
                        v.customize ["modifyvm", :id, "--memory", opts[:mem]]
                        v.customize ["modifyvm", :id, "--cpus", opts[:cpu]]
                        v.customize ["modifyvm", :id, "--name", opts[:name]]
                    end
                end
            end
        end
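
        With this Vagrantfile in place, the VMs are created and entered in the usual Vagrant way, for example:

          # Bring up all three VMs defined above (the first run downloads the centos/7 box)
          vagrant up
          # SSH into the master VM by the name defined in the Vagrantfile
          vagrant ssh master-kubeadm-k8s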
        
    • Installation result (screenshot omitted)
    2.2.2 Install Dependencies and Adjust Configuration
    • Update the yum repos; all 3 VMs must be updated

      yum -y update
      yum install -y conntrack ipvsadm ipset jq sysstat curl iptables libseccomp
      
    • Install Docker

      # 1. Remove any previously installed docker
      sudo yum remove docker docker-latest docker-latest-logrotate \
      docker-logrotate docker-engine docker-client docker-client-latest docker-common
      
      # 2. Install required dependencies
      sudo yum install -y yum-utils device-mapper-persistent-data lvm2
      
      # 3. Set up the docker repository
      sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
      
      # 4. Configure the Aliyun registry mirror; copy the mirror URL from your own Aliyun console, it may differ from the one below
      sudo mkdir -p /etc/docker
      sudo tee /etc/docker/daemon.json <<-'EOF'
      {
        "registry-mirrors": ["https://rrpa5ijo.mirror.aliyuncs.com"]
      }
      EOF
      sudo systemctl daemon-reload
      
      # 5. Refresh the yum cache
      sudo yum makecache fast
      
      # 6. Install docker 18.09.0
      sudo yum install -y docker-ce-18.09.0 docker-ce-cli-18.09.0 containerd.io
      
      # 7. Start docker and enable it at boot
      sudo systemctl start docker && sudo systemctl enable docker
      
      # 8. Verify the docker installation
      sudo docker run hello-world
      
    • Edit the hosts file and configure hostnames

      # 1. Set the master's hostname
      [root@master-kubeadm-k8s ~]# sudo hostnamectl set-hostname master
      
      # 2. Set the hostnames of worker01/02
      [root@worker01-kubeadm-k8s ~]# sudo hostnamectl set-hostname worker01
      [root@worker02-kubeadm-k8s ~]# sudo hostnamectl set-hostname worker02
      
      # 3. Edit the hosts file on all 3 machines
      vi /etc/hosts
      
      192.168.50.111 master
      192.168.50.112 worker01
      192.168.50.113 worker02
      
      # To change the hostname permanently (requires a reboot)
      sudo vi /etc/sysconfig/network
      # Add this line (master, worker01, or worker02 depending on the machine)
      hostname=master/worker01/worker02
      
      # 4. Run a ping test from each machine; every machine must be able to ping the others
      [root@master-kubeadm-k8s ~]# ping worker01
      PING worker01 (192.168.50.112) 56(84) bytes of data.
      64 bytes from worker01 (192.168.50.112): icmp_seq=1 ttl=64 time=0.840 ms
      64 bytes from worker01 (192.168.50.112): icmp_seq=2 ttl=64 time=0.792 ms
      64 bytes from worker01 (192.168.50.112): icmp_seq=3 ttl=64 time=0.806 ms
      .....
      
    • Basic system configuration

      # 1. Disable the firewall
      systemctl stop firewalld && systemctl disable firewalld
      
      # 2. Disable selinux
      setenforce 0
      sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
      
      # 3. Disable swap
      swapoff -a
      sed -i '/swap/s/^\(.*\)$/#\1/g' /etc/fstab
      
      # 4. Set iptables ACCEPT rules
      iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && iptables -P FORWARD ACCEPT
      
      # 5. Set kernel parameters
      cat <<EOF >  /etc/sysctl.d/k8s.conf
      net.bridge.bridge-nf-call-ip6tables = 1
      net.bridge.bridge-nf-call-iptables = 1
      EOF
      sysctl --system
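
      If sysctl --system reports that the net.bridge.* keys cannot be found, the br_netfilter kernel module is probably not loaded yet; a hedged fix using standard CentOS 7 commands (not part of the original steps):

      # Load br_netfilter and verify it is present
      modprobe br_netfilter
      lsmod | grep br_netfilter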
      
    2.2.3 Install Kubeadm, Kubelet, and Kubectl
    • Configure the yum repo

      cat <<EOF > /etc/yum.repos.d/kubernetes.repo
      [kubernetes]
      name=Kubernetes
      baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
      enabled=1
      gpgcheck=0
      repo_gpgcheck=0
      gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
             http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
      EOF
      
    • Install kubeadm, kubelet, and kubectl

      yum install -y kubeadm-1.14.0-0 kubelet-1.14.0-0 kubectl-1.14.0-0
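
      To confirm the pinned versions landed, a quick sanity check (not part of the original write-up):

      # Both should report 1.14.0
      kubeadm version
      kubelet --version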
      
    • Point docker and k8s at the same cgroup driver

      # docker
      vi /etc/docker/daemon.json
      # Add the following as the first line; don't forget the trailing comma
      "exec-opts": ["native.cgroupdriver=systemd"],
      
      # Restart docker (this step is mandatory)
      systemctl restart docker
      
      # kubelet; if this prints "No such file or directory", that is fine, just keep going
      sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
      
      # Restart kubelet (this step is mandatory)
      systemctl enable kubelet && systemctl start kubelet
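
      For reference, after both edits the complete /etc/docker/daemon.json assembled in the steps above should look like this (mirror URL from the earlier Docker install step):

      {
        "exec-opts": ["native.cgroupdriver=systemd"],
        "registry-mirrors": ["https://rrpa5ijo.mirror.aliyuncs.com"]
      }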
      
    2.2.4 Pull the Images Kubeadm Requires
    • List the images kubeadm uses

      [root@master-kubeadm-k8s ~]# kubeadm config images list
      ...
      
      # These are the images required to run Kubeadm; they are all hosted abroad and hard to pull directly without a VPN
      k8s.gcr.io/kube-apiserver:v1.14.0
      k8s.gcr.io/kube-controller-manager:v1.14.0
      k8s.gcr.io/kube-scheduler:v1.14.0
      k8s.gcr.io/kube-proxy:v1.14.0
      k8s.gcr.io/pause:3.1
      k8s.gcr.io/etcd:3.3.10
      k8s.gcr.io/coredns:1.3.1
      
    • Work around the unreachable foreign registries

      The required images can be pulled from a domestic mirror registry and then renamed.

      • Create a kubeadm.sh script that pulls each image, re-tags it, and removes the original mirror tag

        • Create the kubeadm.sh file
        #!/bin/bash
        set -e
        KUBE_VERSION=v1.14.0
        KUBE_PAUSE_VERSION=3.1
        ETCD_VERSION=3.3.10
        CORE_DNS_VERSION=1.3.1
        GCR_URL=k8s.gcr.io
        ALIYUN_URL=registry.cn-hangzhou.aliyuncs.com/google_containers
        images=(kube-proxy:${KUBE_VERSION}
        kube-scheduler:${KUBE_VERSION}
        kube-controller-manager:${KUBE_VERSION}
        kube-apiserver:${KUBE_VERSION}
        pause:${KUBE_PAUSE_VERSION}
        etcd:${ETCD_VERSION}
        coredns:${CORE_DNS_VERSION})
        
        for imageName in ${images[@]} ; do
            docker pull $ALIYUN_URL/$imageName
            docker tag  $ALIYUN_URL/$imageName $GCR_URL/$imageName
            docker rmi $ALIYUN_URL/$imageName
        done
        
      • Run the script and check the images

        # Run the script
        sh ./kubeadm.sh
        
        # All the images Kubeadm needs are now downloaded
        [root@master-kubeadm-k8s ~]# docker images
        REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
        k8s.gcr.io/kube-proxy                v1.14.0             5cd54e388aba        12 months ago       82.1MB
        k8s.gcr.io/kube-apiserver            v1.14.0             ecf910f40d6e        12 months ago       210MB
        k8s.gcr.io/kube-controller-manager   v1.14.0             b95b1efa0436        12 months ago       158MB
        k8s.gcr.io/kube-scheduler            v1.14.0             00638a24688b        12 months ago       81.6MB
        k8s.gcr.io/coredns                   1.3.1               eb516548c180        14 months ago       40.3MB
        hello-world                          latest              fce289e99eb9        14 months ago       1.84kB
        k8s.gcr.io/etcd                      3.3.10              2c4adeb21b4f        15 months ago       258MB
        k8s.gcr.io/pause                     3.1                 da86e6ba6ca1        2 years ago         742kB
        
    2.2.5 Initialize the Master with kubeadm init
    • Initialize the master node

      • Official steps
      • Note: this is performed on the Master node
      # To re-initialize the cluster state, run kubeadm reset first, then run init again
      # Specify the Kubernetes version, the master node's IP, and the pod network CIDR (the CIDR may be omitted)
      kubeadm init --kubernetes-version=1.14.0 --apiserver-advertise-address=192.168.50.111 --pod-network-cidr=10.244.0.0/16
      
      # Output shown once init completes
      Your Kubernetes control-plane has initialized successfully!
      
      To start using your cluster, you need to run the following as a regular user:
      
      # ================ Continue on the master node per the hint below ========================
        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config
      # ================================================================
      You should now deploy a pod network to the cluster.
      Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
        https://kubernetes.io/docs/concepts/cluster-administration/addons/
      
      Then you can join any number of worker nodes by running the following on each as root:
      
      # Save this kubeadm join command; the worker nodes will use it later to join the cluster
      kubeadm join 192.168.50.111:6443 --token se5kqz.roc626v5x1jzv2mp \
          --discovery-token-ca-cert-hash sha256:de8685c390d0f2addsdf86468fea9e02622705fb5eed84daa5b5ca667df29dff
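
      If the join command is lost or the token expires (bootstrap tokens are valid for 24 hours by default), a fresh command can be printed on the master; this is a standard kubeadm subcommand, not shown in the original output:

      kubeadm token create --print-join-command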
      
      • After running the 3 commands above, verify by checking the pods

        [root@master-kubeadm-k8s ~]# kubectl get pods -n kube-system
        NAME                                         READY   STATUS    RESTARTS   AGE
        coredns-fb8b8dccf-gwqj9                      0/1     Pending   0          4m55s
        coredns-fb8b8dccf-lj92j                      0/1     Pending   0          4m55s
        etcd-master-kubeadm-k8s                      1/1     Running   0          4m13s
        kube-apiserver-master-kubeadm-k8s            1/1     Running   0          4m2s
        kube-controller-manager-master-kubeadm-k8s   1/1     Running   0          3m59s
        kube-proxy-hhnmc                             1/1     Running   0          4m55s
        kube-scheduler-master-kubeadm-k8s            1/1     Running   0          4m24s
        

        Note: coredns has not started because the network plugin still needs to be installed

      • Health check

        [root@master-kubeadm-k8s ~]# curl -k https://localhost:6443/healthz
        
      • The kubeadm init flow

        No need to execute these; this just explains what kubeadm init does

        # 1. Run a series of preflight checks to determine whether this machine can run kubernetes
        # 2. Generate the various certificates and corresponding directories kubernetes needs to serve clients
        ls /etc/kubernetes/pki/*
        
        # 3. Generate the config files other components need to access the kube-apiserver
        ls /etc/kubernetes/
            
            # admin.conf 
            # controller-manager.conf 
            # kubelet.conf 
            # scheduler.conf
            
        # 4. Generate the Pod manifests for the Master components.
        ls /etc/kubernetes/manifests/*.yaml
        
            # kube-apiserver.yaml 
            # kube-controller-manager.yaml
            # kube-scheduler.yaml
            
        # 5. Generate the etcd Pod YAML file.
        ls /etc/kubernetes/manifests/*.yaml
        
            # kube-apiserver.yaml 
            # kube-controller-manager.yaml
            # kube-scheduler.yaml
            # etcd.yaml
            
        # 6. Once these YAML files appear under /etc/kubernetes/manifests/, which kubelet watches, kubelet automatically creates the pods they define, i.e. the master component containers. After the master containers start, kubeadm polls localhost:6443/healthz, the master components' health-check URL, and waits for them to come fully up
        
        # 7. Generate a bootstrap token for the cluster
        
        # 8. Save important Master information such as ca.crt into etcd via a ConfigMap, for later use when joining worker nodes
        
        # 9. Finally, install the default add-ons; kubernetes requires the kube-proxy and DNS add-ons
        
    • Deploy the calico network plugin

      # Again, run on the master node
      
      # With a fast connection, calico can be installed directly without pulling images separately; the pull steps are just broken out here
      # First fetch the calico yml file manually to see which images it needs
      [root@master-kubeadm-k8s ~]# curl https://docs.projectcalico.org/v3.9/manifests/calico.yaml | grep image
        % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                       Dload  Upload   Total   Spent    Left  Speed
        0 20674    0     0    0     0      0      0 --:--:--  0:00:05 --:--:--     0 
      100 20674  100 20674    0     0   3216      0  0:00:06  0:00:06 --:--:--  4955
      
              # Versions change over time; pull the images that match what the yaml actually references
                image: calico/cni:v3.9.5
                image: calico/pod2daemon-flexvol:v3.9.5
                image: calico/node:v3.9.5
                image: calico/kube-controllers:v3.9.5
      
      # Pull the images calico needs; this may take a while
      [root@master-kubeadm-k8s ~]# docker pull calico/cni:v3.9.5
      [root@master-kubeadm-k8s ~]# docker pull calico/pod2daemon-flexvol:v3.9.5
      [root@master-kubeadm-k8s ~]# docker pull calico/node:v3.9.5
      [root@master-kubeadm-k8s ~]# docker pull calico/kube-controllers:v3.9.5
      
      # Install calico
      [root@master-kubeadm-k8s ~]# kubectl apply -f https://docs.projectcalico.org/v3.9/manifests/calico.yaml
      
      # -w watches status changes of all Pods
      [root@master-kubeadm-k8s ~]# kubectl get pods --all-namespaces -w
      

      If the watch stalls, cancel and re-run it; once every pod's status is Running, this step is done

    2.2.6 Join the Worker Nodes to the Cluster
    • kubeadm join

      Copy the kubeadm join command saved earlier (printed at the end of master initialization) and run it on each worker node

      # worker01 node
      [root@worker01-kubeadm-k8s ~]# kubeadm join 192.168.50.111:6443 --token se5kqz.roc626v5x1jzv2mp \
      >     --discovery-token-ca-cert-hash sha256:de8685c390d0f2addsdf86468fea9e02622705fb5eed84daa5b5ca667df29dff
      [preflight] Running pre-flight checks
      [preflight] Reading configuration from the cluster...
      [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
      [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
      [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
      [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
      [kubelet-start] Activating the kubelet service
      [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
      
      This node has joined the cluster:
      * Certificate signing request was sent to apiserver and a response was received.
      * The Kubelet was informed of the new secure connection details.
      
      Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
      
      # worker02 node
      [root@worker02-kubeadm-k8s ~]# kubeadm join 192.168.50.111:6443 --token se5kqz.roc626v5x1jzv2mp \
      >     --discovery-token-ca-cert-hash sha256:de8685c390d0f2addsdf86468fea9e02622705fb5eed84daa5b5ca667df29dff
      [preflight] Running pre-flight checks
      [preflight] Reading configuration from the cluster...
      [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
      [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
      [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
      [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
      [kubelet-start] Activating the kubelet service
      [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
      
      This node has joined the cluster:
      * Certificate signing request was sent to apiserver and a response was received.
      * The Kubelet was informed of the new secure connection details.
      
      Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
      
      
    • Check the cluster

      # Run on the Master node
      [root@master-kubeadm-k8s ~]# kubectl get nodes
      NAME                   STATUS     ROLES    AGE   VERSION
      master-kubeadm-k8s     Ready      master   37m   v1.14.0
      # Still NotReady here; just wait for it to finish
      worker01-kubeadm-k8s   NotReady   <none>   84s   v1.14.0
      worker02-kubeadm-k8s   NotReady   <none>   79s   v1.14.0
      
      # Run it again after a while; once every node shows Ready, the cluster build is complete!
      [root@master-kubeadm-k8s ~]# kubectl get nodes
      NAME                   STATUS   ROLES    AGE     VERSION
      master-kubeadm-k8s     Ready    master   40m     v1.14.0
      worker01-kubeadm-k8s   Ready    <none>   3m48s   v1.14.0
      worker02-kubeadm-k8s   Ready    <none>   3m43s   v1.14.0
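
      The health of the control-plane components can also be queried; this is a standard kubectl command on this Kubernetes version, with output omitted here:

      kubectl get componentstatuses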
      

    2.3 A First Taste of Pods

    • Define the pod yml file

      • Create a directory
      [root@master-kubeadm-k8s ~]# mkdir pod_nginx_rs
      [root@master-kubeadm-k8s ~]# cd pod_nginx_rs/
      
      • Write the yml file

        # Write the yml file; both .yml and .yaml extensions are recognized
        cat > pod_nginx_rs.yaml <<EOF
        apiVersion: apps/v1
        kind: ReplicaSet
        metadata:
          name: nginx
          labels:
            tier: frontend
        spec:
          replicas: 3
          selector:
            matchLabels:
              tier: frontend
          template:
            metadata:
              name: nginx
              labels:
                tier: frontend
            spec:
              containers:
              - name: nginx
                image: nginx
                ports:
                - containerPort: 80
        EOF
        
      • Create the pods from the pod_nginx_rs.yaml file

        [root@master-kubeadm-k8s pod_nginx_rs]# kubectl apply -f pod_nginx_rs.yaml
        replicaset.apps/nginx created
        
      • Inspect the Pods

        • kubectl get pods

          # Not ready yet; run the command again in a moment
          [root@master-kubeadm-k8s pod_nginx_rs]# kubectl get pods
          NAME          READY   STATUS              RESTARTS   AGE
          nginx-hdz6w   0/1     ContainerCreating   0          27s
          nginx-kbqxx   0/1     ContainerCreating   0          27s
          nginx-xtttc   0/1     ContainerCreating   0          27s
          
          # All running now
          [root@master-kubeadm-k8s pod_nginx_rs]# kubectl get pods
          NAME          READY   STATUS    RESTARTS   AGE
          nginx-hdz6w   1/1     Running   0          3m10s
          nginx-kbqxx   1/1     Running   0          3m10s
          nginx-xtttc   1/1     Running   0          3m10s
          
        • kubectl get pods -o wide

          # Show pod details; worker01 has two pods and worker02 has one
          # Note: these IPs are assigned by the network plugin, not the host IPs
          [root@master-kubeadm-k8s pod_nginx_rs]# kubectl get pods -o wide
          NAME          READY   STATUS    RESTARTS   AGE     IP               NODE                   NOMINATED NODE   READINESS GATES
          nginx-hdz6w   1/1     Running   0          3m26s   192.168.14.2     worker01-kubeadm-k8s   <none>           <none>
          nginx-kbqxx   1/1     Running   0          3m26s   192.168.221.65   worker02-kubeadm-k8s   <none>           <none>
          nginx-xtttc   1/1     Running   0          3m26s   192.168.14.1     worker01-kubeadm-k8s   <none>           <none>
          
          # worker01 does have 2 Nginx containers; the pause containers below don't count, for reasons covered in a later chapter
          [root@worker01-kubeadm-k8s ~]# docker ps | grep nginx
          acf671c4b9e5        nginx                  "nginx -g 'daemon of…"   3 minutes ago       Up 3 minutes 
          4109bd09f0a1        nginx                  "nginx -g 'daemon of…"   4 minutes ago       Up 4 minutes 
          3e5dcc552287        k8s.gcr.io/pause:3.1   "/pause"                 6 minutes ago       Up 6 minutes 
          9e0d36cb813c        k8s.gcr.io/pause:3.1   "/pause"                 6 minutes ago       Up 6 minutes
          
          # worker02 has only one Nginx
          [root@worker02-kubeadm-k8s ~]# docker ps | grep nginx
          c490e8d291d3        nginx                  "nginx -g 'daemon of…"   6 minutes ago       Up 6 minutes
          b5ab5b408063        k8s.gcr.io/pause:3.1   "/pause"                 8 minutes ago       Up 8 minutes
          
        • kubectl describe pod nginx

          # Show the pod's full description: creation events, yml content, image pull info, and so on
          [root@master-kubeadm-k8s pod_nginx_rs]# kubectl describe pod nginx
          Name:               nginx-hdz6w
          Namespace:          default
          Priority:           0
          PriorityClassName:  <none>
          Node:               worker01-kubeadm-k8s/10.0.2.15
          Start Time:         Tue, 24 Mar 2020 15:14:43 +0000
          Labels:             tier=frontend
          Annotations:        cni.projectcalico.org/podIP: 192.168.14.2/32
          Status:             Running
          IP:                 192.168.14.2
          Controlled By:      ReplicaSet/nginx
          Containers:
            nginx:
              Container ID:   docker://4109bd09f0a11c0de77f411258e2cd18cc7ea624ad733a2e9c16f6468aadd448
              Image:          nginx
              Image ID:       docker-pullable://nginx@sha256:2539d4344dd18e1df02be842ffc435f8e1f699cfc55516e2cf2cb16b7a9aea0b
              Port:           80/TCP
              Host Port:      0/TCP
              State:          Running
                Started:      Tue, 24 Mar 2020 15:16:21 +0000
              Ready:          True
              Restart Count:  0
              Environment:    <none>
              Mounts:
                /var/run/secrets/kubernetes.io/serviceaccount from default-token-xggf5 (ro)
          Conditions:
            Type              Status
            Initialized       True
            Ready             True
            ContainersReady   True
            PodScheduled      True
          Volumes:
            default-token-xggf5:
              Type:        Secret (a volume populated by a Secret)
              SecretName:  default-token-xggf5
              Optional:    false
          QoS Class:       BestEffort
          Node-Selectors:  <none>
          Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                           node.kubernetes.io/unreachable:NoExecute for 300s
          Events:
            Type    Reason     Age    From                           Message
            ----    ------     ----   ----                           -------
            Normal  Scheduled  3m56s  default-scheduler              Successfully assigned default/nginx-hdz6w to worker01-kubeadm-k8s
            Normal  Pulling    3m52s  kubelet, worker01-kubeadm-k8s  Pulling image "nginx"
            Normal  Pulled     2m20s  kubelet, worker01-kubeadm-k8s  Successfully pulled image "nginx"
            Normal  Created    2m18s  kubelet, worker01-kubeadm-k8s  Created container nginx
            Normal  Started    2m18s  kubelet, worker01-kubeadm-k8s  Started container nginx
          
      • Scale the pods

        # Scale nginx out to 5 pods
        [root@master-kubeadm-k8s pod_nginx_rs]# kubectl scale rs nginx --replicas=5
        replicaset.extensions/nginx scaled
        
        # Check the pods; the 2 new ones are being created
        [root@master-kubeadm-k8s pod_nginx_rs]# kubectl get pods -o wide
        NAME          READY   STATUS              RESTARTS   AGE   IP               NODE                   NOMINATED NODE   READINESS GATES
        nginx-7xf8m   0/1     ContainerCreating   0          5s    <none>           worker01-kubeadm-k8s   <none>           <none>
        nginx-hdz6w   1/1     Running             0          14m   192.168.14.2     worker01-kubeadm-k8s   <none>           <none>
        nginx-kbqxx   1/1     Running             0          14m   192.168.221.65   worker02-kubeadm-k8s   <none>           <none>
        nginx-qw2dh   0/1     ContainerCreating   0          5s    <none>           worker02-kubeadm-k8s   <none>           <none>
        nginx-xtttc   1/1     Running             0          14m   192.168.14.1     worker01-kubeadm-k8s   <none>           <none>
        
      • Test

        [root@master-kubeadm-k8s pod_nginx_rs]# ping 192.168.14.2
        PING 192.168.14.2 (192.168.14.2) 56(84) bytes of data.
        64 bytes from 192.168.14.2: icmp_seq=1 ttl=63 time=1.64 ms
        64 bytes from 192.168.14.2: icmp_seq=2 ttl=63 time=1.03 ms
        ^C
        --- 192.168.14.2 ping statistics ---
        2 packets transmitted, 2 received, 0% packet loss, time 1002ms
        rtt min/avg/max/mdev = 1.033/1.337/1.641/0.304 ms
        
        # Curl any pod's IP; the request succeeds
        [root@master-kubeadm-k8s pod_nginx_rs]# curl 192.168.14.2
        <!DOCTYPE html>
        <html>
        <head>
        <title>Welcome to nginx!</title>
        <style>
            body {
                width: 35em;
                margin: 0 auto;
                font-family: Tahoma, Verdana, Arial, sans-serif;
            }
        </style>
        </head>
        <body>
        <h1>Welcome to nginx!</h1>
        <p>If you see this page, the nginx web server is successfully installed and
        working. Further configuration is required.</p>
        
        <p>For online documentation and support please refer to
        <a href="http://nginx.org/">nginx.org</a>.<br/>
        Commercial support is available at
        <a href="http://nginx.com/">nginx.com</a>.</p>
        
        <p><em>Thank you for using nginx.</em></p>
        </body>
        </html>
        
      • Delete the pods

        [root@master-kubeadm-k8s pod_nginx_rs]# kubectl delete -f pod_nginx_rs.yaml
        replicaset.apps "nginx" deleted
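
        Equivalently, the ReplicaSet can be deleted by name rather than by file (standard kubectl usage, not shown in the original):

          kubectl delete rs nginx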
        

    The Kubernetes cluster setup is now fully complete. The Kubeadm approach is fairly involved, but if your company needs a K8S cluster, it can absolutely be built this way!
