
K8S Cluster Installation (v1.20.1)

    VM configuration: CentOS 7 (each node must have >= 2 CPU cores, otherwise k8s will not start)
    IP:
    master:10.211.55.25
    node1:10.211.55.26
    node2:10.211.55.27

    1. Host environment configuration

    • 1.1 Set a hostname on each machine
    hostnamectl set-hostname k8s-01   # on 10.211.55.25
    hostnamectl set-hostname k8s-02   # on 10.211.55.26
    hostnamectl set-hostname k8s-03   # on 10.211.55.27
    
    • 1.2 Configure the IP-to-hostname mappings on every machine (a sketch for pushing these entries to all nodes over SSH follows this step)
    vi /etc/hosts
    # append the following
    10.211.55.25 k8s-01
    10.211.55.26 k8s-02
    10.211.55.27 k8s-03
    
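    • A minimal sketch that appends the same entries on all three nodes in one go, assuming passwordless SSH as root is already configured (the node IPs mirror the list above):
    for ip in 10.211.55.25 10.211.55.26 10.211.55.27; do
      ssh root@"$ip" 'cat >> /etc/hosts' <<'HOSTS'
    10.211.55.25 k8s-01
    10.211.55.26 k8s-02
    10.211.55.27 k8s-03
    HOSTS
    done
    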
    • 1.3 Install the dependencies. Note: every machine needs these packages
    yum install -y conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget vim net-tools git iproute lrzsz bash-completion tree bridge-utils unzip bind-utils gcc
    

    1.4 Firewall and related settings

    Install iptables, start it, enable it at boot, flush its rules, and save the (now empty) ruleset as the default. An optional sanity check follows this list.

    • Stop the firewalld service
    systemctl stop firewalld && systemctl disable firewalld
    
    • Install iptables-services and flush the rules
    yum -y install iptables-services && systemctl start iptables && systemctl enable iptables && iptables -F && service iptables save
    
    • Turn off the swap partition (virtual memory) and disable it permanently
    swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
    
    • Disable SELinux (the Linux security kernel module)
    setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
    
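    • Optional sanity check that the settings took effect (the Swap row should be all zeros; getenforce reports Permissive now and Disabled after a reboot):
    free -m | grep -i swap          # Swap: 0 0 0
    getenforce                      # Permissive (Disabled after reboot)
    systemctl is-active firewalld   # inactive
    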

    1.5 Upgrade the Linux kernel to 4.4

    • Fetch the ELRepo release package
    rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-4.el7.elrepo.noarch.rpm
    
    • Optional: import the repository's GPG key
    rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
    
    • List the available versions (lt = long-term support, ml = mainline) and find the 4.4 release
    yum --enablerepo="elrepo-kernel" list --showduplicates | sort -r | grep kernel-lt.x86_64
    yum --enablerepo="elrepo-kernel" list --showduplicates | sort -r | grep kernel-ml.x86_64
    
    • Install the specific kernel version
    yum --enablerepo="elrepo-kernel" install kernel-lt-4.4.249-1.el7.elrepo.x86_64 -y
    
    • List all installed kernels
    grep 'menuentry' /etc/grub2.cfg
    
    • Set the new kernel as the default boot entry
    grub2-set-default 'CentOS Linux (4.4.249-1.el7.elrepo.x86_64) 7 (Core)'
    

    **Note: the new kernel only takes effect after the server is rebooted.**

    • Check the running kernel after the reboot
    uname -r
    4.4.249-1.el7.elrepo.x86_64
    

    1.6 Tune kernel parameters

    cat > kubernetes.conf <<EOF 
    net.bridge.bridge-nf-call-iptables=1
    net.bridge.bridge-nf-call-ip6tables=1
    net.ipv4.ip_forward=1
    net.ipv4.tcp_tw_recycle=0
    vm.swappiness=0
    vm.overcommit_memory=1
    vm.panic_on_oom=0
    fs.inotify.max_user_instances=8192
    fs.inotify.max_user_watches=1048576
    fs.file-max=52706963
    fs.nr_open=52706963
    net.ipv6.conf.all.disable_ipv6=1
    net.netfilter.nf_conntrack_max=2310720
    EOF
    
    • Copy the tuning file into /etc/sysctl.d/ so it is loaded at boot
    cp kubernetes.conf /etc/sysctl.d/kubernetes.conf 
    
    • Refresh manually so the tuning takes effect immediately
    sysctl -p /etc/sysctl.d/kubernetes.conf
    
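    • Optional: confirm the key parameters are live (each should print 1). The net.bridge.* keys only exist once the br_netfilter module is loaded (section 1.8 loads it); if sysctl reports "No such file or directory", run modprobe br_netfilter first:
    sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
    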
    • Adjust the system time zone (skip if it is already set)
    • Set the time zone to Asia/Shanghai
    timedatectl set-timezone Asia/Shanghai
    
    • Write the current UTC time to the hardware clock
    timedatectl set-local-rtc 0
    
    • Restart services that depend on the system time
    systemctl restart rsyslog && systemctl restart crond
    
    • Stop services the system does not need
    systemctl stop postfix && systemctl disable postfix
    
    • Raise the open-file limits
    echo "* soft nofile 65536" >> /etc/security/limits.conf 
    echo "* hard nofile 65536" >> /etc/security/limits.conf
    

    1.7 Configure persistent journald logging

    • Create the directory that stores the logs
    mkdir /var/log/journal 
    
    • Create the drop-in configuration directory
    mkdir /etc/systemd/journald.conf.d 
    
    • Create the configuration file
    cat > /etc/systemd/journald.conf.d/99-prophet.conf <<EOF 
    [Journal]
    Storage=persistent
    Compress=yes
    SyncIntervalSec=5m
    RateLimitInterval=30s
    RateLimitBurst=1000
    SystemMaxUse=10G
    SystemMaxFileSize=200M
    MaxRetentionSec=2week
    ForwardToSyslog=no
    EOF
    
    • Restart systemd-journald to apply the configuration
    systemctl restart systemd-journald
    

    1.8 Prerequisites for running kube-proxy in ipvs mode

    modprobe br_netfilter
    
    cat > /etc/sysconfig/modules/ipvs.modules <<EOF
    #!/bin/bash
    modprobe -- ip_vs
    modprobe -- ip_vs_rr
    modprobe -- ip_vs_wrr
    modprobe -- ip_vs_sh
    modprobe -- nf_conntrack_ipv4
    EOF
    
    • Make the script executable, run it, and use lsmod to verify the modules are loaded
    chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
    
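    • Note: on the 4.4 kernel installed above the conntrack module is named nf_conntrack_ipv4; on kernels >= 4.19 it was merged into nf_conntrack, so adjust the script if you skip the kernel upgrade. Once the cluster is running, ipvsadm (installed in 1.3) can confirm kube-proxy is really programming IPVS rules:
    ipvsadm -Ln   # lists IPVS virtual servers; populated once kube-proxy runs in ipvs mode
    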

    2. Docker installation

    2.1 Install Docker

    • Install the yum prerequisites
    yum install -y yum-utils device-mapper-persistent-data lvm2
    
    • Add the stable repository; the config is saved to /etc/yum.repos.d/docker-ce.repo
    yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
    
    • Update the yum packages and install Docker CE
    yum update -y && yum install -y docker-ce
    
    • Set up the docker daemon config: create the /etc/docker directory
    mkdir /etc/docker 
    
    • Write the daemon.json file
    cat > /etc/docker/daemon.json <<EOF
    {
      "exec-opts": ["native.cgroupdriver=systemd"],
      "log-driver": "json-file",
      "log-opts": { "max-size": "100m" }
    }
    EOF
    
    • Create the directory for docker.service drop-in configs
    mkdir -p /etc/systemd/system/docker.service.d
    
    • Reload systemd, restart Docker, and enable it at boot (a quick cgroup-driver check follows)
    systemctl daemon-reload && systemctl restart docker && systemctl enable docker
    
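    • Verify Docker is using the cgroup driver kubelet expects; the command below should print "Cgroup Driver: systemd":
    docker info 2>/dev/null | grep -i 'cgroup driver'
    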

    3. Cluster installation

    3.1 Install kubelet, kubeadm, and kubectl online

    • Configure the Aliyun yum repository
    cat <<EOF > /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=0
    repo_gpgcheck=0
    gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
           http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    EOF
    
    • Install kubeadm, kubelet, and kubectl, pinned to the cluster version
    yum install -y kubeadm-1.20.1 kubelet-1.20.1 kubectl-1.20.1
    
    • Enable kubelet (it will restart in a loop until kubeadm init runs, which is expected)
    systemctl enable kubelet && systemctl start kubelet
    

    3.2 Prepare the k8s images

    • Dump the default init configuration for reference
    kubeadm config print init-defaults > kubeadm.conf
    

    Because k8s.gcr.io is not reachable from mainland China, pull the images from the Aliyun mirror and retag them.

    • List the images kubeadm needs
    kubeadm config images list
    
    • Pull the matching versions from the Aliyun mirror
    docker image pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.20.1
    docker image pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.20.1
    docker image pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.20.1
    docker image pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.20.1
    docker image pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
    docker image pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.13-0
    docker image pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0
    
    • Retag the images with their k8s.gcr.io names
    docker image tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.20.1 k8s.gcr.io/kube-apiserver:v1.20.1
    docker image tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.20.1 k8s.gcr.io/kube-controller-manager:v1.20.1
    docker image tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.20.1 k8s.gcr.io/kube-scheduler:v1.20.1
    docker image tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.20.1 k8s.gcr.io/kube-proxy:v1.20.1
    docker image tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2 k8s.gcr.io/pause:3.2
    docker image tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.13-0 k8s.gcr.io/etcd:3.4.13-0
    docker image tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0 k8s.gcr.io/coredns:1.7.0
    
    • Delete the previously downloaded Aliyun-tagged images (a single loop covering all three steps is sketched after this list)
    docker image rm -f registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.20.1
    docker image rm -f registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.20.1
    docker image rm -f registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.20.1
    docker image rm -f registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.20.1 
    docker image rm -f registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0
    docker image rm -f registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.13-0
    docker image rm -f registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
    
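    The three blocks above can be collapsed into one loop. A sketch, assuming the Aliyun mirror publishes every required image under the same name and tag as k8s.gcr.io (true for the versions listed here):

    MIRROR=registry.cn-hangzhou.aliyuncs.com/google_containers
    for img in kube-apiserver:v1.20.1 kube-controller-manager:v1.20.1 \
               kube-scheduler:v1.20.1 kube-proxy:v1.20.1 \
               pause:3.2 etcd:3.4.13-0 coredns:1.7.0; do
        docker image pull "$MIRROR/$img"       # pull from the mirror
        docker image tag "$MIRROR/$img" "k8s.gcr.io/$img"   # retag
        docker image rm -f "$MIRROR/$img"      # drop the mirror tag
    done
    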

    3.3 Initialize the cluster

    3.3.1 Initialize the master node (run on the master only)
    • Generate the YAML resource configuration file
    kubeadm config print init-defaults > kubeadm-config.yaml
    
    • Edit the YAML file
    localAPIEndpoint:
        advertiseAddress: 10.211.55.25 # note: set this to the master node's IP
    kubernetesVersion: v1.20.1 # set the version; it must match the installed kubectl
    
    # Append the following block to make kube-proxy
    # communicate over ipvs (GA since v1.11, so no feature gate is needed)
    ---
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    mode: ipvs
    
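    flannel's stock kube-flannel.yml (deployed in 3.4) assumes the 10.244.0.0/16 pod network, and kubeadm does not set a pod CIDR by default; without one, the flannel pods typically crash-loop with "pod cidr not assigned". A suggested addition to the networking section of kubeadm-config.yaml (dnsDomain and serviceSubnet shown with their kubeadm defaults):

    networking:
      dnsDomain: cluster.local
      podSubnet: 10.244.0.0/16 # must match the Network value in kube-flannel.yml
      serviceSubnet: 10.96.0.0/12
    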
    • Initialize the master node and start the deployment

    This command requires the machine to have more than one CPU core; otherwise it will not succeed.

    kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.log
    

    After it completes, run the follow-up configuration commands printed in the log output

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    export KUBECONFIG=/etc/kubernetes/admin.conf
    
    • Check the node list

    Only the master should be listed at this point. The cluster will use ipvs + flannel for networking, but the flannel plugin has not been deployed yet, so the node status is NotReady.

    kubectl get node
    
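    The output should look roughly like this (the AGE column will differ):
    NAME     STATUS     ROLES                  AGE   VERSION
    k8s-01   NotReady   control-plane,master   60s   v1.20.1
    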
    3.4 Deploy the flannel plugin (run on the master only)
    # 1. Download the flannel network manifest
    wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
    # 2. Deploy flannel
    kubectl create -f kube-flannel.yml
    # Alternatively, deploy it straight from the URL
    kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
    
    • Check the pods
    kubectl get pod -n kube-system
    
    3.5 Join the two worker nodes to the cluster
    • Get the join command from the master's install log
    cat kubeadm-init.log
    -----
    kubeadm join 10.211.55.25:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:2395b257958a5d583b9d8059df71b751f0929f5a91eb6247649193a81c0af841
    
    • Run it once on each of the two worker nodes (see the token note below if the join fails)
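    • The bootstrap token embedded in the join command expires after 24 hours; if it has expired, generate a fresh join command on the master:
    kubeadm token create --print-join-command
    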
    3.6 Verify the final state
    Note: in the capture below the kube-flannel pods are in CrashLoopBackOff, which usually means the pod CIDR was not set at init time; see the podSubnet note in 3.3.1.
    [root@k8s-01 ~]#  kubectl get node
    NAME     STATUS   ROLES                  AGE   VERSION
    k8s-01   Ready    control-plane,master   23h   v1.20.1
    k8s-02   Ready    <none>                 22h   v1.20.1
    k8s-03   Ready    <none>                 22h   v1.20.1
    [root@k8s-01 ~]# kubectl get pod -n kube-system
    NAME                             READY   STATUS              RESTARTS   AGE
    coredns-74ff55c5b-mbh7t          0/1     ContainerCreating   0          23h
    coredns-74ff55c5b-qzfqq          0/1     ContainerCreating   0          23h
    etcd-k8s-01                      1/1     Running             1          23h
    kube-apiserver-k8s-01            1/1     Running             1          23h
    kube-controller-manager-k8s-01   1/1     Running             1          23h
    kube-flannel-ds-amd64-bzjvn      0/1     CrashLoopBackOff    19         22h
    kube-flannel-ds-amd64-fl5qt      0/1     CrashLoopBackOff    19         22h
    kube-flannel-ds-amd64-qvkw6      0/1     CrashLoopBackOff    26         22h
    kube-proxy-2vljc                 1/1     Running             1          23h
    kube-proxy-ptwd9                 1/1     Running             0          22h
    kube-proxy-rjwps                 1/1     Running             0          22h
    kube-scheduler-k8s-01            1/1     Running             1          23h
    
