Installing a Kubernetes Cluster on CentOS

Author: 木木111314 | Published 2021-01-06 14:28

1 Servers

Three Linux CentOS servers: one master and two worker nodes.

2 Install the Docker environment

2.1 Update yum

yum update

2.2 Configure the yum repository

If yum-config-manager is not available, install the yum-utils package first (yum install -y yum-utils).

sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

You can list all Docker versions available in the repository and pick a specific one to install:

yum list docker-ce --showduplicates | sort -r

2.3 Install Docker

sudo yum install docker-ce-18.06.0.ce

2.4 Start Docker and enable it at boot

sudo systemctl start docker
sudo systemctl enable docker
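Optional, but worth doing before kubeadm runs: kubeadm warns when Docker uses the cgroupfs cgroup driver and recommends systemd instead. A minimal sketch of that switch (this step is an addition to the original walkthrough):

sudo mkdir -p /etc/docker
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl restart docker
docker info | grep -i cgroup   # should now report: Cgroup Driver: systemd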

Verify the installation (output showing both a Client section and a Server section means Docker is installed and running):

docker version
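As an extra smoke test (optional; assumes Internet access to Docker Hub), a throwaway container confirms the daemon can pull and run images:

sudo docker run --rm hello-world   # prints a "Hello from Docker!" message and exits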

3 Install the Kubernetes cluster

Unless a step is marked otherwise (e.g. "(master)"), run the steps in this section on all three servers.

3.1 Disable the firewall

systemctl stop firewalld

systemctl disable firewalld

3.2 Disable SELinux

setenforce 0  # disable temporarily

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config # disable permanently
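To confirm the change took effect:

getenforce   # prints Permissive after setenforce 0, and Disabled after a reboot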

3.3 Disable swap

swapoff -a    # disable temporarily; swap is turned off for performance, and kubelet will not start with swap enabled by default

free            # check whether swap is now off (the Swap line should show 0)

sed -ri 's/.*swap.*/#&/' /etc/fstab  # disable permanently (comments out the swap entry)

3.4 Map hostnames to IP addresses

$ vim /etc/hosts

Add the following entries:

192.168.190.128 k8s-master

192.168.190.129 k8s-node1

192.168.190.130 k8s-node2
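The walkthrough implicitly assumes each machine also reports the matching hostname; if not, set it on each node (each command runs on its own machine):

hostnamectl set-hostname k8s-master   # on 192.168.190.128
hostnamectl set-hostname k8s-node1    # on 192.168.190.129
hostnamectl set-hostname k8s-node2    # on 192.168.190.130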

3.5 Pass bridged IPv4 traffic to iptables chains

$ cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
$ sysctl --system
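If sysctl reports that the net.bridge.* keys do not exist, the br_netfilter kernel module is not loaded yet; a short sketch to load it now and on every boot:

modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
lsmod | grep br_netfilter   # verify the module is loaded
sysctl --system             # re-apply the settings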

3.6 Add the Aliyun Kubernetes yum repository

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[k8s]
name=k8s
enabled=1
gpgcheck=0
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
EOF

3.7 Install kubeadm, kubelet, and kubectl

kubelet  # runs on every node in the cluster; responsible for starting Pods and containers
kubeadm  # bootstraps (initializes) the cluster
kubectl  # the Kubernetes command-line tool: deploy and manage applications, inspect resources, and create, delete, and update components

When deploying Kubernetes, the master node and worker nodes must run the same version; a version mismatch leads to odd, hard-to-diagnose problems. To install a specific version with yum, use the following format:

yum install -y kubelet-<version> kubectl-<version> kubeadm-<version>

yum install kubelet-1.19.6 kubeadm-1.19.6 kubectl-1.19.6 -y
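To confirm the pinned versions landed:

kubeadm version -o short          # expect v1.19.6
kubelet --version                 # expect Kubernetes v1.19.6
kubectl version --client --short  # expect Client Version: v1.19.6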

Output (abridged):

Installed:
  kubeadm.x86_64 0:1.19.6-0  kubectl.x86_64 0:1.19.6-0
  kubelet.x86_64 0:1.19.6-0

3.8 Enable kubelet at boot

kubelet cannot be started yet, because its configuration is not complete at this point; for now, just enable it to start at boot:

systemctl enable kubelet

3.9 Deploy the Kubernetes master

3.9.1 Initialize the cluster with kubeadm

kubeadm init \
  --apiserver-advertise-address=192.168.190.128 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.19.6 \
  --service-cidr=10.1.0.0/16 \
  --pod-network-cidr=10.244.0.0/16

# --image-repository:            where images are pulled from (available since 1.13). The default is k8s.gcr.io; here it points at the domestic mirror registry.aliyuncs.com/google_containers.
# --kubernetes-version:          the Kubernetes version. The default, stable-1, downloads the latest version number from https://dl.k8s.io/release/stable-1.txt; pinning a fixed version (v1.19.6) skips that network request.
# --apiserver-advertise-address: which of the master's interfaces communicates with the rest of the cluster. If the master has several interfaces, specify one explicitly; otherwise kubeadm picks the interface with the default gateway.
# --pod-network-cidr:            the Pod network range. Kubernetes supports several network add-ons, each with its own requirements for this flag; it is set to 10.244.0.0/16 here because we will use flannel, which requires exactly this CIDR.

Output (abridged):

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.190.128:6443 --token 09eyru.hv3d4mkq2rxgculb \
    --discovery-token-ca-cert-hash sha256:d0141a8504afef937cc77bcac5a67669f42ff32535356953b5ab217d767f85aa

Run this join command later on the other two servers.
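The bootstrap token in that command expires after 24 hours by default. If it has expired by the time a node joins, print a fresh join command on the master:

kubeadm token create --print-join-command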

3.9.2 Set up the kubectl tool

Copy and run the following commands (on the master):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

kubectl commands can now be used directly (on the master).
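A quick sanity check that the API server answers:

kubectl get nodes   # the master appears; STATUS usually shows NotReady until the network add-on below is installed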

3.9.3 Install a Pod network add-on (CNI) (master)

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Or, from a local copy of the manifest:

kubectl apply -f kube-flannel.yml

Check the status of the system pods:

kubectl get pods -n kube-system
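All kube-system pods, including the kube-flannel-ds ones, should eventually reach Running, after which kubectl get nodes reports every joined node as Ready. If a pod is stuck, these commands help diagnose it (the pod name here is illustrative):

kubectl -n kube-system describe pod kube-flannel-ds-xxxxx   # recent events for the pod
kubectl -n kube-system logs kube-flannel-ds-xxxxx           # container logs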

4 Set up NFS storage

4.1 Install the NFS client on all cluster nodes

yum install -y nfs-utils
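The manifests below assume an NFS server at 192.168.190.81 exporting /gis. If that export still needs to be set up, a minimal sketch to run on the NFS server (the export options are illustrative):

yum install -y nfs-utils
mkdir -p /gis
echo '/gis *(rw,sync,no_root_squash)' >> /etc/exports
systemctl enable --now nfs-server
exportfs -arv

From any cluster node, showmount -e 192.168.190.81 should then list the export.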

4.2 Install the provisioner

Note: the manifests below deploy the NFS client provisioner into the arcgis namespace; if it does not exist yet, create it first with kubectl create namespace arcgis.

kubectl apply -f deploymentnfs.yaml

deploymentnfs.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: arcgis
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: mynfs
            - name: NFS_SERVER
              value: 192.168.190.81
            - name: NFS_PATH
              value: /gis
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.190.81
            path: /gis
kubectl apply -f rbacnfs.yaml

rbacnfs.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: arcgis
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: arcgis
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: arcgis
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: arcgis
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: arcgis
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

4.3 Add a StorageClass

kubectl apply -f classnfs.yaml

classnfs.yaml:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: storage-class-default
provisioner: mynfs  # or choose another name; must match the deployment's PROVISIONER_NAME env value
parameters:
  archiveOnDelete: "false"
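A quick end-to-end test of dynamic provisioning is a throwaway PVC bound to this StorageClass (the claim name is illustrative):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  storageClassName: storage-class-default
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
EOF
kubectl get pvc test-claim   # STATUS should become Bound, backed by a directory under /gis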

5 Install ingress

Note: the image references in this manifest point at a private registry (192.168.190.126:5000); adjust them for your environment.

kubectl apply -f mandatory-ingress.yaml

mandatory-ingress.yaml:
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app.kubernetes.io/name: default-http-backend
    app.kubernetes.io/part-of: ingress-nginx
  namespace: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: default-http-backend
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: default-http-backend
        app.kubernetes.io/part-of: ingress-nginx
    spec:
      terminationGracePeriodSeconds: 60
      containers:
        - name: default-http-backend
          # Any image is permissible as long as:
          # 1. It serves a 404 page at /
          # 2. It serves 200 on a /healthz endpoint
          image: 192.168.190.126:5000/defaultbackend-amd64:1.5
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 30
            timeoutSeconds: 5
          ports:
            - containerPort: 8080
          resources:
            limits:
              cpu: 10m
              memory: 20Mi
            requests:
              cpu: 10m
              memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: default-http-backend
    app.kubernetes.io/part-of: ingress-nginx
spec:
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app.kubernetes.io/name: default-http-backend
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      hostNetwork: true
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: 192.168.190.126:5000/kubernetes-ingress-controller/nginx-ingress-controller:0.20.0
          args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
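After applying the manifest, confirm the controller is up, then route traffic through it. The check plus a hedged example Ingress follow; the host, Ingress name, and backend Service are illustrative, and the extensions/v1beta1 API matches the 0.20.0 controller used here:

kubectl -n ingress-nginx get pods -o wide   # the controller runs with hostNetwork on one node

cat <<EOF | kubectl apply -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo-ingress                 # illustrative name
  namespace: default
spec:
  rules:
    - host: demo.example.com         # illustrative host; point it at the controller node's IP
      http:
        paths:
          - path: /
            backend:
              serviceName: demo-service   # illustrative; must be an existing Service
              servicePort: 80
EOF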
