Installing Kubernetes 1.28 on openEuler 22.03


Author: mini鱼 | Published 2023-09-28 01:12

    Operating system version: openEuler 22.03 LTS SP2

    I. Installing the Operating System

    1. Download the ISO

    Download URL: https://www.openeuler.org/zh/download/?version=openEuler%2022.03%20LTS%20SP2

    2. Installation walkthrough

    2.1 Choose to install the system


    2.2 Select the installation language


    2.3 Select the installation destination


    Choose Custom partitioning.



    Choose standard partitions, then add /boot, /, and any other partitions needed.



    2.4 Enable the root account and set a password; it must meet complexity requirements (upper- and lowercase letters, digits, and special characters)


    A minimal install is chosen so the VM is convenient to use as a base image.

    2.5 Click Begin Installation


    Wait a moment, then click Reboot.


    After the reboot, install commonly used tools and turn the VM into a template.

    II. Installing Docker

    1. Download the official repo file

    cd /etc/yum.repos.d/
    curl -O https://download.docker.com/linux/centos/docker-ce.repo
    

    It is not entirely clear which CentOS release openEuler 22.03 is derived from, so hard-code CentOS 8; it works so far:

    sed -i 's/$releasever/8/g' docker-ce.repo
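    The repo file is written for CentOS, and on openEuler the `$releasever` placeholder would not expand to a path that Docker's mirror serves, which is why the `sed` hard-codes release 8. A quick illustration of the rewrite (the sample line mimics the `docker-ce.repo` format; the real file contains many such lines):

```shell
# Sample baseurl line in the style of docker-ce.repo; the sed replaces the
# literal $releasever placeholder with "8" so yum resolves a valid path.
line='baseurl=https://download.docker.com/linux/centos/$releasever/$basearch/stable'
echo "$line" | sed 's/$releasever/8/g'
# -> baseurl=https://download.docker.com/linux/centos/8/$basearch/stable
```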
    

    2. Install Docker

    yum install -y docker-ce
    

    3. Configure registry mirrors (faster pulls from inside China)

    sudo mkdir -p /etc/docker
    sudo tee /etc/docker/daemon.json <<-'EOF'
    {
        "registry-mirrors": [
            "https://dockerproxy.com",
            "https://hub-mirror.c.163.com",
            "https://mirror.baidubce.com",
            "https://ccr.ccs.tencentyun.com"
        ]
    }
    EOF
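    A malformed daemon.json will stop dockerd from starting, so it is worth validating the file before restarting Docker. A small sketch using python3's built-in json.tool (shown against a temporary copy so it is safe to run anywhere; point it at /etc/docker/daemon.json on the real host):

```shell
# Validate the mirror config as JSON before handing it to dockerd.
cfg=$(mktemp)     # stand-in for /etc/docker/daemon.json in this demo
tee "$cfg" >/dev/null <<'EOF'
{
    "registry-mirrors": [
        "https://dockerproxy.com",
        "https://hub-mirror.c.163.com"
    ]
}
EOF
python3 -m json.tool "$cfg" >/dev/null && echo "daemon.json OK"
rm -f "$cfg"
# -> daemon.json OK
```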
    

    4. Start Docker

    systemctl start docker
    systemctl enable docker
    

    Verify that Docker is running (for example with `docker version` or `docker info`).


    III. Installing cri-dockerd

    1. Download the latest cri-dockerd RPM

    If the network is good, download it directly with wget; otherwise download the RPM from GitHub first and upload it to the VM.

    Download page: Releases · Mirantis/cri-dockerd (github.com)

    https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.4/cri-dockerd-0.3.4-3.el8.x86_64.rpm

    2. Install cri-dockerd

    rpm -ivh cri-dockerd-0.3.4-3.el8.x86_64.rpm
    

    3. Start the cri-docker service

    systemctl start cri-docker
    systemctl enable cri-docker
    

    4. Point cri-dockerd at a mirrored pause image

    $ vi /usr/lib/systemd/system/cri-docker.service # find the ExecStart= line (around line 10)
    
    # Change it to: ExecStart=/usr/bin/cri-dockerd --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9
    # Restart the Docker components
    $ systemctl daemon-reload && systemctl restart docker cri-docker.socket cri-docker 
    # Check the components' status
    $ systemctl status docker cri-docker.socket cri-docker
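    Editing the unit file by hand works, but the change can also be scripted. The sketch below shows the rewrite on a sample ExecStart line (the stock line shown is an assumption about the packaged unit; run the same expression with `sed -i` against /usr/lib/systemd/system/cri-docker.service):

```shell
# Replace whatever ExecStart currently says with the mirrored pause-image line.
new='ExecStart=/usr/bin/cri-dockerd --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9'
printf 'ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd://\n' \
  | sed "s|^ExecStart=.*|$new|"
# prints the rewritten ExecStart line
```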
    

    IV. Installing the Kubernetes Components

    0. Preparation

    0.1 Set the hostname

    hostnamectl set-hostname k8s-1
    exec bash
    

    Use a static IP, and write each node's IP and hostname into /etc/hosts.
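    As a sketch, the /etc/hosts entry for the addressing used later in this guide would look like this (written to a temp file here so the snippet is safe to run anywhere; append to the real /etc/hosts on each node):

```shell
# Every node should be able to resolve every other node by name.
hosts=$(mktemp)          # stand-in for /etc/hosts in this demo
echo '192.168.58.66 k8s-1' >>"$hosts"
grep 'k8s-1' "$hosts"
rm -f "$hosts"
# -> 192.168.58.66 k8s-1
```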

    0.2 Disable swap

    # Turn off the swap partition, if there is one
    swapoff -a
    vi /etc/fstab # permanently disable swap by commenting out the swap line in fstab
    # /dev/mapper/centos-swap swap                    swap    defaults        0 0
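    Commenting out the swap line can also be done non-interactively. A sketch of the substitution on a sample fstab line (apply the same expression with `sed -i` to /etc/fstab):

```shell
# Prefix any uncommented fstab line whose mount type is swap with '#'.
printf '/dev/mapper/centos-swap swap    swap    defaults    0 0\n' \
  | sed 's/^\([^#].*[[:space:]]swap[[:space:]].*\)$/#\1/'
# -> #/dev/mapper/centos-swap swap    swap    defaults    0 0
```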
    

    0.3 Disable firewalld and SELinux

    sudo systemctl stop firewalld
    sudo systemctl disable firewalld
    sudo setenforce 0
    sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
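    Note that the sed above only rewrites the exact line `SELINUX=enforcing`; if the config already says `disabled` or `permissive`, it is left alone. A quick illustration on a sample line:

```shell
# The anchored pattern matches the whole line, so nothing else is touched.
printf 'SELINUX=enforcing\n' | sed 's/^SELINUX=enforcing$/SELINUX=permissive/'
# -> SELINUX=permissive
```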
    

    0.4 Forward IPv4 and let iptables see bridged traffic

    cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
    overlay
    br_netfilter
    EOF
    
    sudo modprobe overlay
    sudo modprobe br_netfilter
    
    # Set the required sysctl parameters; they persist across reboots
    cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-iptables  = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    net.ipv4.ip_forward                 = 1
    EOF
    
    # Apply the sysctl parameters without rebooting
    sudo sysctl --system
    
    lsmod | grep br_netfilter
    lsmod | grep overlay
    
    sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
    
    # If kubeadm init still reports an iptables error, run:
    echo "1" > /proc/sys/net/bridge/bridge-nf-call-iptables
    echo "1" > /proc/sys/net/ipv4/ip_forward
    

    Reboot the server: if SELinux was previously enforcing, a reboot is required for the change to fully take effect.

    1. Configure the Kubernetes repository

    # This overwrites any existing configuration in /etc/yum.repos.d/kubernetes.repo
    cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/
    enabled=1
    gpgcheck=1
    gpgkey=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/repodata/repomd.xml.key
    #exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
    EOF
    

    2. Install kubelet, kubeadm, kubectl, and kubernetes-cni

    yum install -y kubelet kubeadm kubectl kubernetes-cni
    systemctl enable kubelet.service
    

    3. Initialize the cluster

    Replace the --apiserver-advertise-address value with this node's IP.

    kubeadm init --node-name=k8s-1 \
    --image-repository=registry.aliyuncs.com/google_containers \
    --cri-socket=unix:///var/run/cri-dockerd.sock \
    --apiserver-advertise-address=192.168.58.66 \
    --pod-network-cidr=10.244.0.0/16 \
    --service-cidr=10.96.0.0/12
    

    If kubeadm prints "Your Kubernetes control-plane has initialized successfully!" along with a join command, the deployment succeeded.



    Following that output, run the commands below.

    Configure the kubeconfig environment
    # As a non-root user, run:
    $ mkdir -p $HOME/.kube
    $ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    $ sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    # As root, run:
    # Temporary: lost after a reboot, not recommended.
    $ export KUBECONFIG=/etc/kubernetes/admin.conf 
    # Permanent: after a kubeadm reset and re-init this also needs no re-run
    $ echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >>  ~/.bash_profile 
    $ source ~/.bash_profile
    kubectl get nodes
    

    If initialization fails and you need to run it again, reset the cluster first:

    kubeadm reset --cri-socket=unix:///var/run/cri-dockerd.sock
    

    4. Install the network add-on

    After the steps above, kubectl get nodes still shows the node as NotReady: a network plugin must be installed.
    Download kube-flannel.yml from
    https://github.com/flannel-io/flannel/releases
    and apply it: kubectl apply -f kube-flannel.yml
    Or simply paste the command below:

    cat >  kube-flannel.yml << EOF
    ---
    kind: Namespace
    apiVersion: v1
    metadata:
      name: kube-flannel
      labels:
        k8s-app: flannel
        pod-security.kubernetes.io/enforce: privileged
    ---
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      labels:
        k8s-app: flannel
      name: flannel
    rules:
    - apiGroups:
      - ""
      resources:
      - pods
      verbs:
      - get
    - apiGroups:
      - ""
      resources:
      - nodes
      verbs:
      - get
      - list
      - watch
    - apiGroups:
      - ""
      resources:
      - nodes/status
      verbs:
      - patch
    - apiGroups:
      - networking.k8s.io
      resources:
      - clustercidrs
      verbs:
      - list
      - watch
    ---
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      labels:
        k8s-app: flannel
      name: flannel
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: flannel
    subjects:
    - kind: ServiceAccount
      name: flannel
      namespace: kube-flannel
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      labels:
        k8s-app: flannel
      name: flannel
      namespace: kube-flannel
    ---
    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: kube-flannel-cfg
      namespace: kube-flannel
      labels:
        tier: node
        k8s-app: flannel
        app: flannel
    data:
      cni-conf.json: |
        {
          "name": "cbr0",
          "cniVersion": "0.3.1",
          "plugins": [
            {
              "type": "flannel",
              "delegate": {
                "hairpinMode": true,
                "isDefaultGateway": true
              }
            },
            {
              "type": "portmap",
              "capabilities": {
                "portMappings": true
              }
            }
          ]
        }
      net-conf.json: |
        {
          "Network": "10.244.0.0/16",
          "Backend": {
            "Type": "vxlan"
          }
        }
    ---
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: kube-flannel-ds
      namespace: kube-flannel
      labels:
        tier: node
        app: flannel
        k8s-app: flannel
    spec:
      selector:
        matchLabels:
          app: flannel
      template:
        metadata:
          labels:
            tier: node
            app: flannel
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                    - linux
          hostNetwork: true
          priorityClassName: system-node-critical
          tolerations:
          - operator: Exists
            effect: NoSchedule
          serviceAccountName: flannel
          initContainers:
          - name: install-cni-plugin
            image: docker.io/flannel/flannel-cni-plugin:v1.2.0
            command:
            - cp
            args:
            - -f
            - /flannel
            - /opt/cni/bin/flannel
            volumeMounts:
            - name: cni-plugin
              mountPath: /opt/cni/bin
          - name: install-cni
            image: docker.io/flannel/flannel:v0.22.3
            command:
            - cp
            args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
            volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          containers:
          - name: kube-flannel
            image: docker.io/flannel/flannel:v0.22.3
            command:
            - /opt/bin/flanneld
            args:
            - --ip-masq
            - --kube-subnet-mgr
            resources:
              requests:
                cpu: "100m"
                memory: "50Mi"
            securityContext:
              privileged: false
              capabilities:
                add: ["NET_ADMIN", "NET_RAW"]
            env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: EVENT_QUEUE_DEPTH
              value: "5000"
            volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
            - name: xtables-lock
              mountPath: /run/xtables.lock
          volumes:
          - name: run
            hostPath:
              path: /run/flannel
          - name: cni-plugin
            hostPath:
              path: /opt/cni/bin
          - name: cni
            hostPath:
              path: /etc/cni/net.d
          - name: flannel-cfg
            configMap:
              name: kube-flannel-cfg
          - name: xtables-lock
            hostPath:
              path: /run/xtables.lock
              type: FileOrCreate
    EOF
    

    Once the flannel pods are running, if the node is still NotReady, install kubernetes-cni:

    yum install -y kubernetes-cni
    ls -lh /opt/cni/bin
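    The flannel CNI config above delegates to the `flannel` and `portmap` plugin binaries, so those must exist under /opt/cni/bin. A quick presence check (the exact plugin list below is an assumption about what this setup needs):

```shell
# Report which of the expected CNI plugin binaries are installed.
for plugin in flannel portmap bridge host-local loopback; do
  if [ -x "/opt/cni/bin/$plugin" ]; then
    echo "$plugin OK"
  else
    echo "$plugin MISSING"
  fi
done
```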
    

    V. Adding Nodes

    1. Install docker, cri-dockerd, kubelet, and kubeadm on the node

    2. Join the cluster

    Copy the join command from the kubeadm init output and append the --cri-socket=unix:///var/run/cri-dockerd.sock parameter:

    kubeadm join 192.168.58.10:6443 \
    --token 9k1tot.8hetamn6mlkndrw2         \
    --discovery-token-ca-cert-hash sha256:be3d47cf5e5cd1db36e63b18855a371588e0669d6141a727894d3ff91ed2d48a \
    --cri-socket=unix:///var/run/cri-dockerd.sock
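    If the bootstrap token from `kubeadm init` has expired, a fresh join command can be printed on the control plane with `kubeadm token create --print-join-command`. The `--discovery-token-ca-cert-hash` value itself is just the SHA-256 of the cluster CA's DER-encoded public key; the sketch below shows the derivation on a throwaway self-signed certificate (on a real control plane, read /etc/kubernetes/pki/ca.crt instead):

```shell
# Generate a throwaway CA cert, then compute the discovery hash the way the
# kubeadm docs describe: sha256 over the DER-encoded public key.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj '/CN=demo-ca' \
  -keyout "$tmp/ca.key" -out "$tmp/ca.crt" 2>/dev/null
openssl x509 -pubkey -noout -in "$tmp/ca.crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'
rm -rf "$tmp"
# prints a 64-character hex digest (prefix it with "sha256:" for kubeadm join)
```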
    
