Deploying a K8S Cluster with Ansible

Author: heichong | Published 2022-11-02 13:34

Preparation

  • Prepare four machines:
ansible: 10.3.23.191
K8STest0001: 10.3.23.207
K8STest0002: 10.3.23.208
K8STest0003: 10.3.23.209
  • Versions:
docker: 20.10.9-3.el7
k8s: 1.23.13-0

Kubernetes removed dockershim (its built-in Docker support) starting with 1.24, so 1.23 is used here.

[root@ansible ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.3.23.207 K8STest0001
10.3.23.208 K8STest0002
10.3.23.209 K8STest0003

1. Install Ansible

See https://www.jianshu.com/p/f5ba99305c0d for details; a minimal sketch follows below.
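
As a minimal sketch on CentOS 7 (package names and key setup assumed here, not taken from the linked post), it boils down to installing Ansible from EPEL on the control machine and pushing an SSH key to every node so root login works without a password:

# On the control machine: install Ansible from EPEL
yum install -y epel-release
yum install -y ansible

# Let Ansible log in to every node as root without a password
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
for ip in 10.3.23.207 10.3.23.208 10.3.23.209; do
  ssh-copy-id root@$ip
done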

2. Update the hosts file on the k8s nodes

  • Edit the Ansible inventory: vi /etc/ansible/hosts
[k8s:children]
k8s_master
k8s_node


[k8s_master]
10.3.23.207


[k8s_node]
10.3.23.208
10.3.23.209

Test the inventory file:

[root@KSSYSDEV ansible]# ansible k8s --list-hosts
  hosts (3):
    10.3.23.207
    10.3.23.208
    10.3.23.209
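
Before running any playbook against the nodes, it is also worth confirming that Ansible can actually reach them; the ping module is the usual smoke test:

ansible k8s -m ping
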
  • Create a playbook that syncs /etc/hosts from the control machine to all k8s nodes
cat <<EOF >  ./playbook-k8s-hosts.yml
---
- hosts: k8s
  remote_user: root
 
  tasks:
    - name: backup /etc/hosts
      shell: mv /etc/hosts /etc/host_bak
    - name: copy localhosts file to remote
      copy: src=/etc/hosts dest=/etc/ owner=root group=root mode=0644
EOF
ansible-playbook playbook-k8s-hosts.yml
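
A quick spot check (a sketch, not from the original post) that every node received the synced file:

ansible k8s -m command -a 'grep K8STest0001 /etc/hosts'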

3. Install Docker

Install Docker on all nodes:

cat <<EOF > ./playbook-k8s-install-docker.yml 
---
- hosts: k8s
  remote_user: root
  vars: 
    docker_version: 20.10.9-3.el7

  tasks:
    - name: install dependencies
      shell:  yum install -y yum-utils device-mapper-persistent-data lvm2
    - name: docker-repo
      shell: yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
    - name: install docker
      yum: name=docker-ce-{{docker_version}} state=present
    - name: start docker
      shell: systemctl start docker && systemctl enable docker
      
EOF
ansible-playbook playbook-k8s-install-docker.yml 
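
To confirm the pinned version landed and the daemon is running on every node, a quick ad-hoc check (a sketch, not from the original post):

ansible k8s -a 'docker --version'
ansible k8s -a 'systemctl is-active docker'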

4. Deploy the k8s master

  • Write the initialization script
    Before deployment, every K8S machine needs some initialization: disable the firewall, disable selinux, turn off swap, configure the Aliyun k8s yum repo, and so on. All of it lives in the script k8s-os-init.sh, which the playbooks below run via the script module.
cat <<'EEE' > ./k8s-os-init.sh
#!/bin/bash
# Disable the firewall and SELinux
systemctl disable firewalld
systemctl stop firewalld
setenforce 0

# Turn off swap (note: swapoff -a does not survive a reboot; the swap
# entry in /etc/fstab should also be removed or commented out)
swapoff -a

# Kernel parameters required for bridged pod traffic
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

# Reload sysctl settings
sysctl --system

# Configure the Aliyun Kubernetes yum repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Refresh the yum cache
yum clean all -y && yum makecache -y && yum repolist -y


# kubeadm expects the systemd cgroup driver, while Docker defaults to
# cgroupfs; the mismatch makes kubeadm init fail, so switch Docker to systemd
cat << EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

# Restart Docker to pick up daemon.json
systemctl restart docker

EEE
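
One caveat worth knowing: the net.bridge.* sysctl keys only exist once the br_netfilter kernel module is loaded (Docker usually loads it as a side effect). If sysctl --system complains about unknown keys, loading and persisting the module fixes it; a small addition one could make to the script:

# Load br_netfilter now and on every boot
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf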

repo_gpgcheck is set to 0 here. If it is changed to 1, installing the kube packages fails with: Failure talking to yum: failure: repodata/repomd.xml from kubernetes: [Errno 256] No more mirrors to try.\nhttps://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/repodata/repomd.xml: [Errno -1] repomd.xml signature could not be verified for kubernetes
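
To check the repo by hand on a node (after k8s-os-init.sh has written the repo file), one can confirm the pinned version is visible; a sketch:

yum makecache -y
yum list kubeadm --showduplicates | grep 1.23.13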

  • Create the master playbook
cat << 'EOF' > ./playbook-k8s-install-master.yml
---
- hosts: k8s_master
  remote_user: root
  vars:
    kube_version: 1.23.13-0
    k8s_version: v1.23.13
    k8s_master: 10.3.23.207
  tasks: 
    - name: k8s-os-init
      script: ./k8s-os-init.sh
    - name: install kube***
      yum: 
        name:
          - kubectl-{{kube_version}}
          - kubeadm-{{kube_version}}
          - kubelet-{{kube_version}}
        state: present
    - name: start k8s
      shell: systemctl enable kubelet && systemctl start kubelet
    - name: init k8s
      shell: kubeadm reset -f && kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version {{k8s_version}} --apiserver-advertise-address {{k8s_master}}  --pod-network-cidr=10.244.0.0/16 --token-ttl 0
    - name: config kube
      shell: rm -rf $HOME/.kube && mkdir -p $HOME/.kube && cp -i /etc/kubernetes/admin.conf $HOME/.kube/config && chown $(id -u):$(id -g) $HOME/.kube/config
    - name: copy flannel yaml file
      copy: src=./kube-flannel.yml dest=/tmp/kube-flannel.yml
    - name: install flannel
      shell: kubectl apply -f /tmp/kube-flannel.yml
    - name: get join command
      shell: kubeadm token create --print-join-command 
      register: join_command
    - name: show join command
      debug: var=join_command verbosity=0

EOF

This step needs the kube-flannel.yml file, downloaded manually during the preparation phase and placed next to the playbook; the full file is reproduced in the appendix.
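
A way to fetch it (the URL points at the flannel GitHub repo at tag v0.20.0 and is assumed here; alternatively, save the copy from the appendix as ./kube-flannel.yml):

wget https://raw.githubusercontent.com/flannel-io/flannel/v0.20.0/Documentation/kube-flannel.yml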

Run the master install:

[root@KSSYSDEV ansible]# ansible-playbook playbook-k8s-install-master.yml

PLAY [k8s_master] ********************************************************************************************************************************

TASK [Gathering Facts] ***************************************************************************************************************************
ok: [10.3.23.207]

TASK [start k8s] *********************************************************************************************************************************
changed: [10.3.23.207]

TASK [init k8s] **********************************************************************************************************************************
changed: [10.3.23.207]

TASK [config kube] *******************************************************************************************************************************
[WARNING]: Consider using the file module with state=absent rather than running 'rm'.  If you need to use command because file is insufficient
you can add 'warn: false' to this command task or set 'command_warnings=False' in ansible.cfg to get rid of this message.
changed: [10.3.23.207]

TASK [copy flannel yaml file] ********************************************************************************************************************
ok: [10.3.23.207]

TASK [install flannel] ***************************************************************************************************************************
changed: [10.3.23.207]

TASK [get join command] **************************************************************************************************************************
changed: [10.3.23.207]

TASK [show join command] *************************************************************************************************************************
ok: [10.3.23.207] => {
    "join_command": {
        "changed": true,
        "cmd": "kubeadm token create --print-join-command",
        "delta": "0:00:00.048605",
        "end": "2022-11-02 10:35:46.974724",
        "failed": false,
        "rc": 0,
        "start": "2022-11-02 10:35:46.926119",
        "stderr": "",
        "stderr_lines": [],
        "stdout": "kubeadm join 10.3.23.207:6443 --token nioc2c.ycnrz4gj54vmxnl5 --discovery-token-ca-cert-hash sha256:52f8ebbe8926cfb8b17459e5b1fb4fcdd50283e870af6f61cf9b43c880b638b8 ",
        "stdout_lines": [
            "kubeadm join 10.3.23.207:6443 --token nioc2c.ycnrz4gj54vmxnl5 --discovery-token-ca-cert-hash sha256:52f8ebbe8926cfb8b17459e5b1fb4fcdd50283e870af6f61cf9b43c880b638b8 "
        ]
    }
}

PLAY RECAP ***************************************************************************************************************************************
10.3.23.207                : ok=8    changed=5    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

  • The log above is incomplete because I ran the playbook in two passes.
  • Note the join_command in the log; it is used in the next step.

5. Deploy the k8s nodes

cat <<'EOF' > ./playbook-k8s-install-node.yml
- hosts: k8s_node
  remote_user: root
  vars:
    kube_version: 1.23.13-0
  tasks:
    - name: k8s-os-init
      script: ./k8s-os-init.sh
    - name: install kube***
      yum: 
        name:
          - kubeadm-{{kube_version}}
          - kubelet-{{kube_version}}
        state: present
    - name: start kubelet
      shell: systemctl enable kubelet && systemctl start kubelet
    - name: join cluster
      shell: kubeadm join 10.3.23.207:6443 --token nioc2c.ycnrz4gj54vmxnl5 --discovery-token-ca-cert-hash sha256:52f8ebbe8926cfb8b17459e5b1fb4fcdd50283e870af6f61cf9b43c880b638b8
EOF

The kubeadm join ...... command comes from the output of the master install in the previous step (the join_command debug task).
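
Hard-coding the token works but goes stale once it expires (tokens from kubeadm token create default to a 24h TTL). A sketch (not from the original post) of wiring the two steps together instead: generate the join command on the master in one play and read it on the nodes via hostvars in the next:

cat <<'EOF' > ./playbook-k8s-join.yml
---
- hosts: k8s_master
  remote_user: root
  tasks:
    - name: get join command
      shell: kubeadm token create --print-join-command
      register: join_command

- hosts: k8s_node
  remote_user: root
  tasks:
    # join_command was registered on the master host, so read it from there
    - name: join cluster
      shell: "{{ hostvars['10.3.23.207'].join_command.stdout }}"
EOF
ansible-playbook playbook-k8s-join.yml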

ansible-playbook playbook-k8s-install-node.yml

[root@KSSYSDEV ansible]# ansible-playbook playbook-k8s-install-node.yml

PLAY [k8s_node] **********************************************************************************************************************************

TASK [Gathering Facts] ***************************************************************************************************************************
ok: [10.3.23.209]
ok: [10.3.23.208]

TASK [k8s-os-init] *******************************************************************************************************************************
changed: [10.3.23.209]
changed: [10.3.23.208]

TASK [install kube***] ***************************************************************************************************************************
changed: [10.3.23.208]
changed: [10.3.23.209]

TASK [start kubelet] *****************************************************************************************************************************
changed: [10.3.23.208]
changed: [10.3.23.209]

TASK [join cluster] ******************************************************************************************************************************
changed: [10.3.23.208]
changed: [10.3.23.209]

PLAY RECAP ***************************************************************************************************************************************
10.3.23.208                : ok=5    changed=4    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
10.3.23.209                : ok=5    changed=4    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

Check the nodes on the master:

[root@K8STest0001 ~]# kubectl get nodes
NAME          STATUS     ROLES                  AGE     VERSION
k8stest0001   Ready      control-plane,master   63m     v1.23.13
k8stest0002   NotReady   <none>                 4m30s   v1.23.13
k8stest0003   NotReady   <none>                 4m30s   v1.23.13

Both nodes are still NotReady. It can take tens of minutes for the flannel images to pull and the CNI to come up on each node; checking again later:

[root@K8STest0001 ~]# kubectl get nodes
NAME          STATUS   ROLES                  AGE    VERSION
k8stest0001   Ready    control-plane,master   138m   v1.23.13
k8stest0002   Ready    <none>                 80m    v1.23.13
k8stest0003   Ready    <none>                 80m    v1.23.13
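
If the nodes had stayed NotReady much longer, the usual suspect would be the flannel image pull; a quick way to check from the master (a sketch):

kubectl get pods -n kube-flannel -o wide
kubectl get pods -n kube-system -o wide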

Appendix

  • kube-flannel.yml
---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
       #image: flannelcni/flannel-cni-plugin:v1.1.0 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
       #image: flannelcni/flannel:v0.20.0 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel:v0.20.0
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
       #image: flannelcni/flannel:v0.20.0 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel:v0.20.0
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate

