    Kubernetes 1.15 install with kubeadm

    Prerequisites

    • Unobstructed access to the Aliyun mirror servers
    • Access to Docker Hub
    • CentOS 7.7, installed from the Base/Server package set
    • Basic working knowledge of Docker and Kubernetes
    Preparation before creating the cluster
    IP address        Role    CPU  Memory  Hostname        Storage
    192.168.101.111   master  2c   4G      k8s-master001   200GB
    192.168.101.121   worker  2c   4G      k8s-node001     200GB
    192.168.101.122   worker  2c   4G      k8s-node002     200GB
    192.168.101.123   worker  2c   4G      k8s-node003     200GB
    192.168.101.124   worker  2c   4G      k8s-node004     200GB

    The deployment environment is internal servers behind NAT, i.e. a typical home or office network.

    1. Server environment initialization
    • Set the hostname; run the matching command on each machine
    hostnamectl set-hostname k8s-master001  
    hostnamectl set-hostname k8s-node001  
    hostnamectl set-hostname k8s-node002 
    hostnamectl set-hostname k8s-node003  
    hostnamectl set-hostname k8s-node004
    

    Edit /etc/sysconfig/network and add HOSTNAME=k8s-master001 (use the corresponding hostname on each node).
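    So that the node names resolve on every machine (and to avoid the kubeadm "hostname could not be reached" warnings seen later), it also helps to add all nodes to /etc/hosts. A minimal sketch, assuming the IP addresses from the table above:
    cat >> /etc/hosts <<'EOF'
    192.168.101.111 k8s-master001
    192.168.101.121 k8s-node001
    192.168.101.122 k8s-node002
    192.168.101.123 k8s-node003
    192.168.101.124 k8s-node004
    EOF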

    • Disable the firewall, swap, and SELinux
    systemctl stop firewalld
    systemctl disable firewalld
    setenforce 0
    sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
    swapoff -a
    sed -i 's/.*swap.*/#&/' /etc/fstab
    
    • Adjust kernel parameters so that IPv4 traffic crossing the bridge is passed to the iptables chains
    cat /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1
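
    The file can be created in one step with a heredoc; a minimal sketch (the same pattern works for the ipvs.modules and yum repo files below):
    cat > /etc/sysctl.d/k8s.conf <<'EOF'
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1
    EOF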
    
    • Load the br_netfilter module and apply the sysctl settings
    modprobe br_netfilter
    sysctl --system
    
    • Add the IPVS kernel modules
    cat /etc/sysconfig/modules/ipvs.modules
    #!/bin/bash
    modprobe -- ip_vs
    modprobe -- ip_vs_rr
    modprobe -- ip_vs_wrr
    modprobe -- ip_vs_sh
    modprobe -- nf_conntrack_ipv4
    
    • Make ipvs.modules executable and load the modules
    chmod 755 /etc/sysconfig/modules/ipvs.modules 
    /bin/bash -x /etc/sysconfig/modules/ipvs.modules 
    lsmod | grep "ip_vs" 
    
    • Add the Aliyun EPEL yum repository

    The default CentOS-Base repo already points at the 163 mirror and is fast enough, so it does not need to be replaced. On an Aliyun ECS instance, however, the Aliyun internal CentOS-Base mirror is faster.

    yum install -y wget
    #wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
    wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
    yum clean all && yum makecache
    
    • Add the Kubernetes and docker-ce yum repositories
    cat /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
    enabled=1
    gpgcheck=1
    repo_gpgcheck=1
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    

    This repo definition can be found by searching for kubernetes on opsx.alibaba.com (the Aliyun mirror help pages).
    Add the docker-ce yum repository:

    wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
    
    
    • Install dependency packages
    yum  -y install ipset ipvsadm yum-utils device-mapper-persistent-data lvm2
    
    • List the Docker versions available in the yum repo. Be sure to add the --showduplicates option, otherwise only the latest rpm is shown.
    [root@k8s-master001 ~]# yum list docker-ce.x86_64  --showduplicates |sort -r
    Loading mirror speeds from cached hostfile
    Loaded plugins: fastestmirror
    Installed Packages
    docker-ce.x86_64            3:19.03.2-3.el7                    docker-ce-stable 
    docker-ce.x86_64            3:19.03.1-3.el7                    docker-ce-stable 
    docker-ce.x86_64            3:19.03.0-3.el7                    docker-ce-stable 
    docker-ce.x86_64            3:18.09.9-3.el7                    docker-ce-stable 
    docker-ce.x86_64            3:18.09.8-3.el7                    docker-ce-stable 
    docker-ce.x86_64            3:18.09.7-3.el7                    docker-ce-stable 
    docker-ce.x86_64            3:18.09.7-3.el7                    @docker-ce-stable
    docker-ce.x86_64            3:18.09.6-3.el7                    docker-ce-stable 
    docker-ce.x86_64            3:18.09.5-3.el7                    docker-ce-stable 
    docker-ce.x86_64            3:18.09.4-3.el7                    docker-ce-stable 
    docker-ce.x86_64            3:18.09.3-3.el7                    docker-ce-stable 
    docker-ce.x86_64            3:18.09.2-3.el7                    docker-ce-stable 
    docker-ce.x86_64            3:18.09.1-3.el7                    docker-ce-stable 
    docker-ce.x86_64            3:18.09.0-3.el7                    docker-ce-stable 
    docker-ce.x86_64            18.06.3.ce-3.el7                   docker-ce-stable 
    docker-ce.x86_64            18.06.2.ce-3.el7                   docker-ce-stable 
    docker-ce.x86_64            18.06.1.ce-3.el7                   docker-ce-stable 
    docker-ce.x86_64            18.06.0.ce-3.el7                   docker-ce-stable 
    docker-ce.x86_64            18.03.1.ce-1.el7.centos            docker-ce-stable 
    docker-ce.x86_64            18.03.0.ce-1.el7.centos            docker-ce-stable 
    docker-ce.x86_64            17.12.1.ce-1.el7.centos            docker-ce-stable 
    docker-ce.x86_64            17.12.0.ce-1.el7.centos            docker-ce-stable 
    docker-ce.x86_64            17.09.1.ce-1.el7.centos            docker-ce-stable 
    docker-ce.x86_64            17.09.0.ce-1.el7.centos            docker-ce-stable 
    docker-ce.x86_64            17.06.2.ce-1.el7.centos            docker-ce-stable 
    docker-ce.x86_64            17.06.1.ce-1.el7.centos            docker-ce-stable 
    docker-ce.x86_64            17.06.0.ce-1.el7.centos            docker-ce-stable 
    docker-ce.x86_64            17.03.3.ce-1.el7                   docker-ce-stable 
    docker-ce.x86_64            17.03.2.ce-1.el7.centos            docker-ce-stable 
    docker-ce.x86_64            17.03.1.ce-1.el7.centos            docker-ce-stable 
    docker-ce.x86_64            17.03.0.ce-1.el7.centos            docker-ce-stable 
    
    • Install Docker
    yum makecache fast
    yum install -y --setopt=obsoletes=0 docker-ce-18.09.8-3.el7.x86_64
    systemctl start docker
    systemctl enable docker
    

    Why --setopt=obsoletes=0 is added (here it is used when pinning an older docker-ce version, so that yum does not replace it with a newer package that obsoletes it):
    obsoletes=value
    …where value is one of:
    0 — Disable yum's obsoletes processing logic when performing updates.
    1 — Enable yum's obsoletes processing logic when performing updates. When one package declares in its spec file that it obsoletes another package, the latter package will be replaced by the former package when the former package is installed. Obsoletes are declared, for example, when a package is renamed. obsoletes=1 is the default.

    Check that the policy of the FORWARD chain in the iptables filter table is ACCEPT. If it is not, run iptables -P FORWARD ACCEPT.

    [root@k8s-master001 k8s-yum]# iptables -nvL
    Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
     pkts bytes target     prot opt in     out     source               destination         
    
    Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
     pkts bytes target     prot opt in     out     source               destination         
    
    Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
     pkts bytes target     prot opt in     out     source               destination         
    

    Check whether Docker's cgroup driver is systemd; if it is not, change it in the config file as shown below.

    [root@k8s-master001 ~]# docker info | grep Cgroup
     Cgroup Driver: systemd
     cat /etc/docker/daemon.json # if the driver is not systemd, set it here
    {
      "exec-opts": ["native.cgroupdriver=systemd"]
    }
    
    • Restart Docker so the configuration takes effect
    systemctl restart docker
    
    
    • Install kubeadm, kubelet, and kubectl
    yum install -y kubelet-1.15.3-0.x86_64 kubeadm-1.15.3-0.x86_64 kubectl-1.15.3-0.x86_64
    
    • Start kubelet and enable it at boot
    systemctl start kubelet.service
    systemctl enable kubelet.service
    
    • View the default init configuration with kubeadm config print init-defaults
    apiVersion: kubeadm.k8s.io/v1beta2
    bootstrapTokens:
    - groups:
      - system:bootstrappers:kubeadm:default-node-token
      token: abcdef.0123456789abcdef
      ttl: 24h0m0s
      usages:
      - signing
      - authentication
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: 1.2.3.4
      bindPort: 6443
    nodeRegistration:
      criSocket: /var/run/dockershim.sock
      name: k8s-master001
      taints:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
    ---
    apiServer:
      timeoutForControlPlane: 4m0s
    apiVersion: kubeadm.k8s.io/v1beta2
    certificatesDir: /etc/kubernetes/pki
    clusterName: kubernetes
    controllerManager: {}
    dns:
      type: CoreDNS
    etcd:
      local:
        dataDir: /var/lib/etcd
    imageRepository: k8s.gcr.io
    kind: ClusterConfiguration
    kubernetesVersion: v1.15.0
    networking:
      dnsDomain: cluster.local
      serviceSubnet: 10.96.0.0/12
    scheduler: {}
    

    Here advertiseAddress is the IP address the api-server advertises, i.e. the internal IP of k8s-master001, 192.168.101.111.
    serviceSubnet is the Service network (the default 10.96.0.0/12 is kept); the pod network CIDR is what we set to 10.244.0.0/16 below, matching Flannel's default.
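
    Equivalently (a sketch, not the command actually used below), the same values can be written into a kubeadm config file and passed to kubeadm init --config; the pod network range goes into networking.podSubnet:
    cat > kubeadm-config.yaml <<'EOF'
    apiVersion: kubeadm.k8s.io/v1beta2
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: 192.168.101.111
      bindPort: 6443
    ---
    apiVersion: kubeadm.k8s.io/v1beta2
    kind: ClusterConfiguration
    kubernetesVersion: v1.15.3
    networking:
      podSubnet: 10.244.0.0/16
      serviceSubnet: 10.96.0.0/12
    EOF
    # kubeadm init --config kubeadm-config.yaml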

    • Initialize the Kubernetes cluster (run on k8s-master001)
    kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.101.111 --ignore-preflight-errors=Swap
    

    This fails with the errors below because k8s.gcr.io cannot be reached.

    [root@k8s-master001 ~]# kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.101.111 --ignore-preflight-errors=Swap
    W0913 00:40:38.345859   31641 version.go:98] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: dial tcp: lookup dl.k8s.io on [::1]:53: read udp [::1]:60967->[::1]:53: read: connection refused
    W0913 00:40:38.346081   31641 version.go:99] falling back to the local client version: v1.15.3
    [init] Using Kubernetes version: v1.15.3
    [preflight] Running pre-flight checks
            [WARNING Hostname]: hostname "k8s-master001" could not be reached
            [WARNING Hostname]: hostname "k8s-master001": lookup k8s-master001 on [::1]:53: read udp [::1]:45558->[::1]:53: read: connection refused
    [preflight] Pulling images required for setting up a Kubernetes cluster
    [preflight] This might take a minute or two, depending on the speed of your internet connection
    [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
    error execution phase preflight: [preflight] Some fatal errors occurred:
            [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.15.3: output: Error response from daemon: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on [::1]:53: read udp [::1]:51990->[::1]:53: read: connection refused
    , error: exit status 1
            [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.15.3: output: Error response from daemon: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on [::1]:53: read udp [::1]:43086->[::1]:53: read: connection refused
    , error: exit status 1
            [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.15.3: output: Error response from daemon: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on [::1]:53: read udp [::1]:55885->[::1]:53: read: connection refused
    , error: exit status 1
            [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.15.3: output: Error response from daemon: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on [::1]:53: read udp [::1]:48386->[::1]:53: read: connection refused
    , error: exit status 1
            [ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on [::1]:53: read udp [::1]:43804->[::1]:53: read: connection refused
    , error: exit status 1
            [ERROR ImagePull]: failed to pull image k8s.gcr.io/etcd:3.3.10: output: Error response from daemon: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on [::1]:53: read udp [::1]:49557->[::1]:53: read: connection refused
    , error: exit status 1
            [ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.3.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on [::1]:53: read udp [::1]:51683->[::1]:53: read: connection refused
    , error: exit status 1
    [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
    
    • Switch to the Aliyun registry instead. First list the required container images with kubeadm config images list
    [root@k8s-master001 ~]# kubeadm config images list
    W0913 00:44:21.703361   32321 version.go:98] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: dial tcp: lookup dl.k8s.io on [::1]:53: read udp [::1]:37790->[::1]:53: read: connection refused
    W0913 00:44:21.703519   32321 version.go:99] falling back to the local client version: v1.15.3
    k8s.gcr.io/kube-apiserver:v1.15.3
    k8s.gcr.io/kube-controller-manager:v1.15.3
    k8s.gcr.io/kube-scheduler:v1.15.3
    k8s.gcr.io/kube-proxy:v1.15.3
    k8s.gcr.io/pause:3.1
    k8s.gcr.io/etcd:3.3.10
    k8s.gcr.io/coredns:1.3.1
    
    • Pull the images from the Aliyun registry and re-tag them (a scripted version follows the commands below). Tip: even from a domestic mirror the pulls take a while, so grab a coffee or a cup of tea.
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.15.3
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.15.3
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.15.3
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.15.3
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.3.10
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.3.1
    
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.15.3 k8s.gcr.io/kube-apiserver:v1.15.3
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.15.3 k8s.gcr.io/kube-controller-manager:v1.15.3
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.15.3 k8s.gcr.io/kube-scheduler:v1.15.3
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.15.3 k8s.gcr.io/kube-proxy:v1.15.3
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
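
    The pulls and re-tags can also be scripted as a loop over the image list printed by kubeadm config images list; a sketch using the same Aliyun mirror:
    ALIYUN=registry.cn-hangzhou.aliyuncs.com/google_containers
    for img in kube-apiserver:v1.15.3 kube-controller-manager:v1.15.3 \
               kube-scheduler:v1.15.3 kube-proxy:v1.15.3 \
               pause:3.1 etcd:3.3.10 coredns:1.3.1; do
        docker pull $ALIYUN/$img                  # pull from the Aliyun mirror
        docker tag  $ALIYUN/$img k8s.gcr.io/$img  # re-tag to the name kubeadm expects
    done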
    
    • Run the cluster initialization again
    [root@k8s-master001 tmp]# kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.101.111 --ignore-preflight-errors=Swap
    [init] Using Kubernetes version: v1.15.3
    [preflight] Running pre-flight checks
    [preflight] Pulling images required for setting up a Kubernetes cluster
    [preflight] This might take a minute or two, depending on the speed of your internet connection
    [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Activating the kubelet service
    [certs] Using certificateDir folder "/etc/kubernetes/pki"
    [certs] Generating "etcd/ca" certificate and key
    [certs] Generating "etcd/server" certificate and key
    [certs] etcd/server serving cert is signed for DNS names [k8s-master001 localhost] and IPs [192.168.101.111 127.0.0.1 ::1]
    [certs] Generating "etcd/peer" certificate and key
    [certs] etcd/peer serving cert is signed for DNS names [k8s-master001 localhost] and IPs [192.168.101.111 127.0.0.1 ::1]
    [certs] Generating "etcd/healthcheck-client" certificate and key
    [certs] Generating "apiserver-etcd-client" certificate and key
    [certs] Generating "ca" certificate and key
    [certs] Generating "apiserver" certificate and key
    [certs] apiserver serving cert is signed for DNS names [k8s-master001 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.101.111]
    [certs] Generating "apiserver-kubelet-client" certificate and key
    [certs] Generating "front-proxy-ca" certificate and key
    [certs] Generating "front-proxy-client" certificate and key
    [certs] Generating "sa" key and public key
    [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
    [kubeconfig] Writing "admin.conf" kubeconfig file
    [kubeconfig] Writing "kubelet.conf" kubeconfig file
    [kubeconfig] Writing "controller-manager.conf" kubeconfig file
    [kubeconfig] Writing "scheduler.conf" kubeconfig file
    [control-plane] Using manifest folder "/etc/kubernetes/manifests"
    [control-plane] Creating static Pod manifest for "kube-apiserver"
    [control-plane] Creating static Pod manifest for "kube-controller-manager"
    [control-plane] Creating static Pod manifest for "kube-scheduler"
    [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
    [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
    [kubelet-check] Initial timeout of 40s passed.
    [apiclient] All control plane components are healthy after 44.009526 seconds
    [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
    [kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
    [upload-certs] Skipping phase. Please see --upload-certs
    [mark-control-plane] Marking the node k8s-master001 as control-plane by adding the label "node-role.kubernetes.io/master=''"
    [mark-control-plane] Marking the node k8s-master001 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
    [bootstrap-token] Using token: f3xr55.iv29dsas70lrf0jo
    [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
    [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
    [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
    [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
    [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
    [addons] Applied essential addon: CoreDNS
    [addons] Applied essential addon: kube-proxy
    
    Your Kubernetes control-plane has initialized successfully!
    
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    Then you can join any number of worker nodes by running the following on each as root:
    
    kubeadm join 192.168.101.111:6443 --token f3xr55.iv29dsas70lrf0jo \
        --discovery-token-ca-cert-hash sha256:2f939d02abb31a087b4c3f4b1202c4efeaa6f9ee165abb705f8e2b19d41e132c 
    
    • Follow the instructions in the output
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    • Save the node join command for later
    kubeadm join 192.168.101.111:6443 --token f3xr55.iv29dsas70lrf0jo \
    --discovery-token-ca-cert-hash sha256:2f939d02abb31a087b4c3f4b1202c4efeaa6f9ee165abb705f8e2b19d41e132c
    
    • Note: apart from the cluster initialization, which runs only on k8s-master001, every step above must be performed on all nodes
    2. Join the worker nodes to the cluster
    • Run kubeadm join; output like the following means success. The WARNINGs appear because /etc/hosts is not configured and can safely be ignored
    [root@k8s-node001 tmp]# kubeadm join 192.168.101.111:6443 --token f3xr55.iv29dsas70lrf0jo \
    --discovery-token-ca-cert-hash sha256:2f939d02abb31a087b4c3f4b1202c4efeaa6f9ee165abb705f8e2b19d41e132c
    [preflight] Running pre-flight checks
            [WARNING Hostname]: hostname "k8s-node001" could not be reached
            [WARNING Hostname]: hostname "k8s-node001": lookup k8s-node001 on 192.168.100.37:53: no such host
    [preflight] Reading configuration from the cluster...
    [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
    [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Activating the kubelet service
    [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
    
    This node has joined the cluster:
    * Certificate signing request was sent to apiserver and a response was received.
    * The Kubelet was informed of the new secure connection details.
    
    Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
    
    • Tip: on day three, joining k8s-node003 to the cluster failed with this error
    error execution phase preflight: couldn't validate the identity of the API Server: abort connecting to API servers after timeout of 5m0s
    

    When nodes are joined after the cluster has been running for a while, this error appears because the bootstrap token created by kubeadm init has expired (its TTL is only 24 hours). Creating a new token solves it, as follows.

    [root@k8s-master001 ~]# kubeadm token create
    2cwxr3.01v9qv5kbt69fzxa
    [root@k8s-master001 ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | awk {'print $2'}
    2f939d02abb31a087b4c3f4b1202c4efeaa6f9ee165abb705f8e2b19d41e132c
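
    On this kubeadm version the two steps can be combined: kubeadm token create --print-join-command prints a ready-to-use join command containing a fresh token and the CA cert hash.
    [root@k8s-master001 ~]# kubeadm token create --print-join-command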
    
    • Join the new node using the new token
    [root@k8s-node003 ~]# kubeadm join 192.168.101.111:6443 --token 2cwxr3.01v9qv5kbt69fzxa --discovery-token-ca-cert-hash sha256:2f939d02abb31a087b4c3f4b1202c4efeaa6f9ee165abb705f8e2b19d41e132c
    
    • Check the cluster and node status
    [root@k8s-master001 ~]# kubectl get cs    
    NAME                 STATUS    MESSAGE             ERROR
    scheduler            Healthy   ok                  
    controller-manager   Healthy   ok                  
    etcd-0               Healthy   {"health":"true"} 
    [root@k8s-master001 ~]# kubectl get nodes
    NAME            STATUS     ROLES    AGE   VERSION
    k8s-master001   NotReady   master   2h   v1.15.3
    k8s-node001     NotReady   <none>   1h   v1.15.3
    

    The cluster components are healthy, but the nodes are NotReady because we have only created the cluster and have not yet installed a network plugin.

    3. Install essential add-ons and the network plugin
    • Install the flannel network plugin
    [root@k8s-master001 k8s]# curl -O https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
    [root@k8s-master001 k8s]# cat kube-flannel.yml  | grep '"Network"'
         "Network": "10.244.0.0/16"
    kubectl create -f  kube-flannel.yml
    

    The network range in this manifest matches the one used when the cluster was created, so it does not need to be changed.
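
    Once the flannel DaemonSet pods are up, the nodes should move from NotReady to Ready. A quick check (the app=flannel label comes from kube-flannel.yml):
    kubectl get pods -n kube-system -l app=flannel -o wide
    kubectl get nodes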

    • Check that the add-on pods are running
    [root@k8s-master001 ~]# kubectl get pod -n kube-system
    NAME                                    READY   STATUS    RESTARTS   AGE
    coredns-5c98db65d4-6v8zm                1/1     Running   0          1h
    coredns-5c98db65d4-7xdsn                1/1     Running   0          1h
    etcd-k8s-master001                      1/1     Running   0          1h
    kube-apiserver-k8s-master001            1/1     Running   0          1h
    kube-controller-manager-k8s-master001   1/1     Running   0          1h
    kube-proxy-97jz2                        1/1     Running   0          1h
    kube-proxy-lftz2                        1/1     Running   0          1h
    kube-scheduler-k8s-master001            1/1     Running   0          1h
    
    • Switch kube-proxy to IPVS mode by changing mode: "" to mode: "ipvs" (then restart the kube-proxy pods; see the sketch below)
       kubectl edit cm kube-proxy -n kube-system
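
    Editing the ConfigMap alone does not reconfigure the running kube-proxy pods; one way to apply the change (a sketch) is to delete them so the DaemonSet recreates them, then confirm that IPVS rules appear:
    kubectl -n kube-system delete pod -l k8s-app=kube-proxy
    ipvsadm -Ln    # should now list virtual servers for the cluster Services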
    
    • Install the dashboard add-on
    wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
    kubectl create -f kubernetes-dashboard.yaml
    [root@k8s-master001 k8s]# kubectl get svc kubernetes-dashboard -n kube-system  
    NAME                   TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
    kubernetes-dashboard   NodePort   10.111.189.89   <none>        443:32576/TCP   21s
    

    The dashboard pod stays in ImagePullBackOff because its image is also hosted on k8s.gcr.io; pull it from the Aliyun registry and re-tag it (then recreate the pod, see the note after these commands):

    [root@k8s-master001 tmp]# kubectl get pods,svc -n kube-system   | grep kubernetes-dashboard       
    pod/kubernetes-dashboard-7d75c474bb-ljdhp   0/1     ImagePullBackOff   0          14m
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
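
    Note that the image must be present on the node where the dashboard pod was scheduled. After pulling and re-tagging it there, delete the pod so the Deployment recreates it and picks up the local image (a sketch):
    kubectl -n kube-system delete pod -l k8s-app=kubernetes-dashboard
    kubectl -n kube-system get pods -l k8s-app=kubernetes-dashboard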
    
    • Generate a token. The dashboard login supports two authentication methods, Kubeconfig and Token; the Kubeconfig method also relies on a token.
    kubectl create serviceaccount  dashboard-admin -n kube-system
    kubectl create clusterrolebinding  dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
    
    • Retrieve the dashboard token
    [root@k8s-master001 tmp]# kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
    
    Name:         dashboard-admin-token-hqj4v
    Namespace:    kube-system
    Labels:       <none>
    Annotations:  kubernetes.io/service-account.name: dashboard-admin
                  kubernetes.io/service-account.uid: 56931c3b-ce9d-456c-9e12-3597163fff47
    
    Type:  kubernetes.io/service-account-token
    
    Data
    ====
    ca.crt:     1025 bytes
    namespace:  11 bytes
    token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQv
    c2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4taHFqNHYiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY
    291bnQudWlkIjoiNTY5MzFjM2ItY2U5ZC00NTZjLTllMTItMzU5NzE2M2ZmZjQ3Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.sYIp7MzgFw6jSBRN63GX4y3j9yfyvYnmAw1BN4FsoiU5rdaSDle1lGWwleMGh
    Y5lNHTpoQwUXAzl6uQmrhFCQPFsZZEIaPaSWEzHxuwmYO3uiaLcJB5_0wtC_DiKpLx_JV8NQwYmgAlgL2s2HxLeuweSHcMcxwIccA5CHdRPoh0_r6NjHc1yf4s6vzNQUpfNMj3k34_Oe7YmpU6eGFONvxzDigy5kWG4QDE4m3g4ceeYqjydJ_gRMjfu86E_VxGgbeILBK3OQd
    GsqK8i5GZz8IziDOd3sYicHqbwDgGXiGZfPuqtu4FbDSKV1lZ8VWK0sRuuww3idCGuG7gzMkB2hw
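
    With the NodePort Service created above, the dashboard should be reachable in a browser at https://<any-node-ip>:<nodeport> (32576 in the earlier output); paste the token at the login screen. The port can also be looked up with:
    kubectl -n kube-system get svc kubernetes-dashboard -o jsonpath='{.spec.ports[0].nodePort}'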
    
    • Install Helm Client
     wget https://get.helm.sh/helm-v2.14.3-linux-amd64.tar.gz
     tar -zxf helm-v2.14.3-linux-amd64.tar.gz
     cp linux-amd64/helm /usr/local/bin/
     [root@k8s-master001 linux-amd64]# helm version
     Client: &version.Version{SemVer:"v2.14.3",   GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
     Error: could not find tiller
    
    • Install the Helm server (Tiller)
     helm init --upgrade --tiller-image registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.14.3
     [root@k8s-master001 k8s]#  kubectl get pod -n kube-system -l app=helm
     NAME                             READY   STATUS    RESTARTS   AGE
     tiller-deploy-6867df9fc6-xh6hh   1/1     Running   0          112s
     [root@k8s-master001 k8s]# helm version
     Client: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
     Server: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
    
    • Add a ServiceAccount for Tiller (applied as shown after the manifest)
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: tiller
      namespace: kube-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1beta1
    kind: ClusterRoleBinding
    metadata:
      name: tiller
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
      - kind: ServiceAccount
        name: tiller
        namespace: kube-system
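
    Apply the manifest and point the existing Tiller deployment at the new ServiceAccount; one way to do this (a sketch, assuming the YAML above is saved as tiller-rbac.yaml):
    kubectl apply -f tiller-rbac.yaml
    kubectl -n kube-system patch deploy tiller-deploy \
        -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'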
    
    END

    This completes the installation of a minimal cluster; only the core Kubernetes services have been installed here.

    References
    https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/deployment_guide/sec-configuring_yum_and_yum_repositories
    https://kubernetes.io/zh/docs/reference/setup-tools/kubeadm/kubeadm
