16-Kubernetes: Installing with kubeadm

Author: 紫荆秋雪_文 | Published 2021-12-20 13:33

    Installing kubeadm

    I. Environment Preparation

    Purchase two servers on Alibaba Cloud (one master, one node).


    II. Basic Environment Setup

    1. Set the hostname

    hostnamectl set-hostname <name>
    

    2. Verify the change

    hostnamectl status
    

    3. Add hostname resolution

    echo "127.0.0.1   $(hostname)" >> /etc/hosts
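
    Optionally, if you want the two machines to resolve each other by name, you can also append each node's private IP to /etc/hosts on both machines. A minimal sketch; the worker IP and both hostnames below are placeholders, so substitute your nodes' real values:

    # 172.20.173.236 and k8s-node1 are hypothetical; use your nodes' real private IPs and hostnames
    cat >> /etc/hosts << EOF
    172.20.173.235  k8s-master
    172.20.173.236  k8s-node1
    EOF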
    

    4. Disable SELinux

    sed -i 's/enforcing/disabled/' /etc/selinux/config
    setenforce 0
    

    5. Disable swap

    swapoff -a
    sed -ri 's/.*swap.*/#&/' /etc/fstab
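
    To confirm that swap is fully off, check that the Swap row reports zero:

    free -h   # the Swap row should show 0B used and 0B total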
    

    6. Let iptables see bridged traffic

    • Load the br_netfilter module
    sudo modprobe br_netfilter
    
    • Confirm that br_netfilter is loaded
    lsmod | grep br_netfilter
    
    cat > /etc/sysctl.d/k8s.conf << EOF 
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1 
    EOF
    
    sysctl --system # apply the settings
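
    To double-check that both settings took effect, read them back:

    sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
    # both should print "... = 1"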
    

    III. Install Docker

    1. Remove any existing Docker packages

    sudo yum remove docker*
    

    2. Install yum-utils

    sudo yum install -y yum-utils
    

    3. Configure the Docker yum repository

    sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
    

    4. Install Docker

    • List the available versions
    yum list docker-ce --showduplicates | sort -r
    
    • Install the latest version directly
    yum -y install docker-ce
    
    • Install a specific version (19.03.9)
    yum install -y docker-ce-3:19.03.9-3.el7.x86_64  docker-ce-cli-3:19.03.9-3.el7.x86_64 containerd.io
    
    or
    
    yum install -y docker-ce-19.03.9-3  docker-ce-cli-19.03.9 containerd.io
    

    5. Start the service

    systemctl start docker
    # enable start on boot
    systemctl enable docker 
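
    A quick sanity check that the daemon is running:

    systemctl status docker --no-pager   # should report active (running)
    docker info                          # prints server details if the daemon responds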
    

    6. Configure a registry mirror

    mkdir -p /etc/docker
    vim  /etc/docker/daemon.json
    
    # Add ANY ONE of the following domestic mirror sources to daemon.json.
    # For the Alibaba Cloud mirror, log in to the Alibaba Cloud console, search for
    # "镜像" to find the container registry console, and look up your personal code.
    # NetEase mirror
    {"registry-mirrors": ["http://hub-mirror.c.163.com"] }
    # Alibaba Cloud mirror
    {
      "registry-mirrors": ["https://{your-code}.mirror.aliyuncs.com"]
    }
    
    systemctl daemon-reload
    systemctl restart docker
    
    ### The author's own Alibaba Cloud mirror configuration
    sudo mkdir -p /etc/docker
    sudo tee /etc/docker/daemon.json <<-'EOF'
    {
      "registry-mirrors": ["https://knk5i905.mirror.aliyuncs.com"]
    }
    EOF
    sudo systemctl daemon-reload
    sudo systemctl restart docker
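
    Note: kubeadm's preflight check (see the init log below) warns that Docker's default cgroupfs cgroup driver is not the recommended systemd one. A sketch of daemon.json with the systemd driver added alongside the mirror, if you want to follow that recommendation:

    sudo tee /etc/docker/daemon.json <<-'EOF'
    {
      "registry-mirrors": ["https://knk5i905.mirror.aliyuncs.com"],
      "exec-opts": ["native.cgroupdriver=systemd"]
    }
    EOF
    sudo systemctl daemon-reload
    sudo systemctl restart docker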
    

    IV. Install the Kubernetes Core Components (all nodes)

    1. Configure the Kubernetes yum repository

    cat <<EOF > /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=0
    repo_gpgcheck=0
    gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    EOF
    

    2. Remove old versions of kubelet, kubeadm, and kubectl

    yum remove -y kubelet kubeadm kubectl
    

    3. List the installable versions

    yum list kubelet --showduplicates | sort -r
    

    4. Install pinned versions of kubelet, kubeadm, and kubectl

    yum install -y kubelet-1.21.0 kubeadm-1.21.0 kubectl-1.21.0
    

    5. Enable and start kubelet

    systemctl enable kubelet && systemctl start kubelet
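
    Don't be alarmed if kubelet does not stay up at this point: until kubeadm init (or kubeadm join) writes its configuration, kubelet restarts in a crash loop every few seconds. You can observe this with:

    systemctl status kubelet --no-pager   # expect "activating (auto-restart)" until init/join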
    

    V. Initialize the Master Node (run on the master node only)

    1. Create an images.sh script to pre-pull the control-plane images

    #!/bin/bash
    # Pull the control-plane images kubeadm needs for v1.21.0 from an Alibaba Cloud mirror
    images=(
      kube-apiserver:v1.21.0
      kube-proxy:v1.21.0
      kube-controller-manager:v1.21.0
      kube-scheduler:v1.21.0
      coredns:v1.8.0
      etcd:3.4.13-0
      pause:3.4.1
    )
    for imageName in ${images[@]} ; do
      docker pull registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/$imageName
    done
    

    2. Make images.sh executable and run it

    chmod +x images.sh && ./images.sh
    

    3. Special handling for Kubernetes 1.21.0

    ## Note: the coredns image in Kubernetes 1.21.0 is special (it lives under a nested
    ## coredns/coredns path), so when using this Alibaba Cloud mirror it must be re-tagged:
    docker tag registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/coredns:v1.8.0 registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/coredns/coredns:v1.8.0
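
    Before running init, you can verify that all of the images, including the re-tagged coredns, are present locally:

    docker images | grep registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images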
    

    4. Run kubeadm init on the master

    kubeadm init \
    --apiserver-advertise-address=172.20.173.235 \
    --image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images \
    --kubernetes-version v1.21.0 \
    --service-cidr=10.96.0.0/16 \
    --pod-network-cidr=192.169.0.0/16
    
    • Note: pod-network-cidr and service-cidr

      • CIDR stands for Classless Inter-Domain Routing; each of these flags specifies a reachable subnet range.
      • The Pod subnet, the Service subnet, and the host's own subnet must not overlap.
      • --apiserver-advertise-address: the master node's private IP (must be the node's real address, not arbitrary)
      • --pod-network-cidr=192.169.0.0/16: the Pod IP range (may be chosen freely)
      • --service-cidr=10.96.0.0/16: the Service IP range (may be chosen freely, but must not overlap the Pod range)
    • Output log

    kubeadm init \
    > --apiserver-advertise-address=172.20.173.235 \
    > --image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images \
    > --kubernetes-version v1.21.0 \
    > --service-cidr=10.96.0.0/16 \
    > --pod-network-cidr=192.169.0.0/16
    [init] Using Kubernetes version: v1.21.0
    [preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [preflight] Pulling images required for setting up a Kubernetes cluster
    [preflight] This might take a minute or two, depending on the speed of your internet connection
    [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
    [certs] Using certificateDir folder "/etc/kubernetes/pki"
    [certs] Generating "ca" certificate and key
    [certs] Generating "apiserver" certificate and key
    [certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.20.173.235]
    [certs] Generating "apiserver-kubelet-client" certificate and key
    [certs] Generating "front-proxy-ca" certificate and key
    [certs] Generating "front-proxy-client" certificate and key
    [certs] Generating "etcd/ca" certificate and key
    [certs] Generating "etcd/server" certificate and key
    [certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [172.20.173.235 127.0.0.1 ::1]
    [certs] Generating "etcd/peer" certificate and key
    [certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [172.20.173.235 127.0.0.1 ::1]
    [certs] Generating "etcd/healthcheck-client" certificate and key
    [certs] Generating "apiserver-etcd-client" certificate and key
    [certs] Generating "sa" key and public key
    [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
    [kubeconfig] Writing "admin.conf" kubeconfig file
    [kubeconfig] Writing "kubelet.conf" kubeconfig file
    [kubeconfig] Writing "controller-manager.conf" kubeconfig file
    [kubeconfig] Writing "scheduler.conf" kubeconfig file
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Starting the kubelet
    [control-plane] Using manifest folder "/etc/kubernetes/manifests"
    [control-plane] Creating static Pod manifest for "kube-apiserver"
    [control-plane] Creating static Pod manifest for "kube-controller-manager"
    [control-plane] Creating static Pod manifest for "kube-scheduler"
    [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
    [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
    [kubelet-check] Initial timeout of 40s passed.
    [apiclient] All control plane components are healthy after 103.002092 seconds
    [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
    [kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
    [upload-certs] Skipping phase. Please see --upload-certs
    [mark-control-plane] Marking the node k8s-master as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
    [mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
    [bootstrap-token] Using token: dziwnj.mcff9vw1zuk4smdr
    [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
    [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
    [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
    [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
    [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
    [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
    [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
    [addons] Applied essential addon: CoreDNS
    [addons] Applied essential addon: kube-proxy
    
    Your Kubernetes control-plane has initialized successfully!
    
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    Alternatively, if you are the root user, you can run:
    
      export KUBECONFIG=/etc/kubernetes/admin.conf
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    Then you can join any number of worker nodes by running the following on each as root:
    
    kubeadm join 172.20.173.235:6443 --token dziwnj.mcff9vw1zuk4smdr \
        --discovery-token-ca-cert-hash sha256:e18494f29c0a1f4368fce603112044cb0984d6710733218f6293627f861122a5
    

    Follow the instructions in the output log

    • To start using your cluster, you need to run the following as a regular user:
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    • Export the environment variable (when running as root)
    export KUBECONFIG=/etc/kubernetes/admin.conf
    
    • Deploy a pod network add-on
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    • Recommended network add-on (Calico); see the verification sketch after the command
    kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
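
    After applying the manifest it can take a few minutes for the calico pods to come up and for the node to turn Ready. One way to watch the progress:

    kubectl get pods -n kube-system -w   # wait for the calico-* pods to reach Running
    kubectl get nodes                    # the master should eventually show STATUS Ready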
    

    5. Run the following on each worker node to join it to the master

    kubeadm join 172.20.173.235:6443 --token dziwnj.mcff9vw1zuk4smdr \
        --discovery-token-ca-cert-hash sha256:e18494f29c0a1f4368fce603112044cb0984d6710733218f6293627f861122a5
    

    6. If the token has expired, run one of the following on the master to generate a new one

    kubeadm token create --print-join-command
    or
    kubeadm token create --ttl 0 --print-join-command
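
    To inspect the tokens that currently exist (and their TTLs), run this on the master:

    kubeadm token list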
    

    7. List all nodes

    kubectl get node
    

    8. Switch kube-proxy to ipvs mode

    For cluster-wide reachability, kube-proxy defaults to iptables mode, whose performance degrades as kube-proxy keeps syncing large iptables rule sets across the nodes. To switch, edit the kube-proxy ConfigMap and change mode to ipvs (a module-loading sketch follows at the end of this section).

    • Check which mode kube-proxy currently uses
    kubectl edit cm kube-proxy -n kube-system
    
    • After changing the kube-proxy configuration, delete the existing kube-proxy pods so that their replacements pick up the new mode
    kubectl get pod -A|grep kube-proxy
    
    kubectl delete pod kube-proxy-v6gg4 -n kube-system   # use the pod name listed by the previous command
    
    • Check that the change took effect
    kubectl logs kube-proxy-fb22z -n kube-system   # use the new pod name from the previous command
    
    [Screenshot: kube-proxy log output showing ipvs mode in use]
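
    Note that ipvs mode depends on the ip_vs kernel modules. A hedged sketch of loading them and inspecting the virtual servers kube-proxy programs (module names can vary slightly by kernel version, and ipvsadm is a separate package):

    # load the ipvs kernel modules (persist via /etc/modules-load.d/ if desired)
    for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack; do modprobe $m; done

    yum install -y ipvsadm
    ipvsadm -Ln   # lists the ipvs virtual servers kube-proxy has programmed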
