k8s Deployment, Part 17: Network Plugin and DNS Resolution Service Configuration


Author: 运维家 | Published 2022-03-22 11:38

    After the previous articles in this series, our binary-deployed k8s cluster has nearly all of its components in place. All that remains is the network plugin and DNS. Here we use calico as the network plugin and coredns for DNS resolution.

    Installing the Network Plugin

    PS: This step only needs to be executed on node1, i.e., the master node.

    The official documentation is at the address below, for anyone interested in the details:

    https://projectcalico.docs.tigera.io/getting-started/kubernetes/self-managed-onprem/onpremises

    If your cluster has fewer than 50 nodes, download the manifest with:

    [root@node1 ~]# curl https://projectcalico.docs.tigera.io/manifests/calico.yaml -O
    [root@node1 ~]# ls calico.yaml
    calico.yaml
    [root@node1 ~]#
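
    Before applying the manifest, it can be worth checking which calico version it will deploy. A quick way to do that (my addition, not from the original steps) is to list the image tags it references:

    # List the container images (and thus the calico version) the manifest will pull.
    [root@node1 ~]# grep "image:" calico.yaml | sort -u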

    If you have more than 50 nodes, download this variant instead:

    [root@node1 ~]# curl https://projectcalico.docs.tigera.io/manifests/calico-typha.yaml -o calico.yaml
    [root@node1 ~]# ls calico.yaml
    calico.yaml
    [root@node1 ~]#
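
    The typha variant adds a Typha tier that fans out datastore watches so calico scales to large clusters. If you take this route you will likely want more than the default single replica; a sketch, assuming the Deployment keeps its default name calico-typha from the manifest (the article itself does not cover this):

    # Scale the Typha deployment; a common rule of thumb is roughly one replica
    # per 200 nodes, with at least 3 for redundancy.
    [root@node1 ~]# kubectl -n kube-system scale deployment calico-typha --replicas=3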

    Modify IP autodetection:

    Why is this needed? If a server has multiple IP addresses, or a number of virtual NICs, calico can sometimes autodetect the wrong address, so configure it as follows.

    [root@node1 ~]# vim calico.yaml
    # Two settings need to be changed.

    # Before:
    - name: IP
      value: "autodetect"
    # After:
    - name: IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP

    # Before (commented out by default):
    # - name: CALICO_IPV4POOL_CIDR
    #   value: "192.168.0.0/16"
    # After:
    - name: CALICO_IPV4POOL_CIDR
      value: "10.200.0.0/16"
    [root@node1 ~]#
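
    As an alternative the article does not use: calico also supports an IP_AUTODETECTION_METHOD environment variable on the calico-node DaemonSet, which pins detection to a specific interface or to whichever interface can reach a given address. The values below are illustrative and must be adapted to your environment:

    # Hypothetical values; adjust the interface name/regex to your hosts.
    - name: IP_AUTODETECTION_METHOD
      value: "interface=eth0"
    # or pick whichever interface can reach a given address:
    # - name: IP_AUTODETECTION_METHOD
    #   value: "can-reach=8.8.8.8"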

    Apply the manifest:

    [root@node1 ~]# kubectl apply -f calico.yaml
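
    Rather than polling by hand, you can block until the calico-node DaemonSet finishes rolling out (my addition, not in the original steps):

    # Waits until every calico-node pod is updated and ready.
    [root@node1 ~]# kubectl rollout status daemonset/calico-node -n kube-system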

    After a few minutes, the status should look like the output below. If a pod's STATUS is not Running, the image probably has not finished downloading; wait a few more minutes.

    [root@node1 ~]# kubectl get nodes
    NAME    STATUS   ROLES    AGE   VERSION
    node2   Ready    <none>   15h   v1.20.2
    node3   Ready    <none>   15h   v1.20.2
    [root@node1 ~]#
    [root@node1 ~]# kubectl get pod -n kube-system
    NAME                                       READY   STATUS    RESTARTS   AGE
    calico-kube-controllers-858c9597c8-6gzd5   1/1     Running   0          43m
    calico-node-6k479                          1/1     Running   0          43m
    calico-node-bnbxx                          1/1     Running   0          43m
    nginx-proxy-node3                          1/1     Running   1          15h
    [root@node1 ~]#
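
    To confirm that the CALICO_IPV4POOL_CIDR change actually took effect, the pool is stored as a calico CRD. A quick check, assuming the pool keeps the default name default-ipv4-ippool created by the manifest (my addition):

    # Print the CIDR of the default IP pool; it should match 10.200.0.0/16.
    [root@node1 ~]# kubectl get ippools.crd.projectcalico.org default-ipv4-ippool -o jsonpath='{.spec.cidr}'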

    DNS Resolution

    Set the cluster IP address for DNS:

    # This IP must be the one configured earlier when installing the apiserver.
    [root@node1 ~]# COREDNS_CLUSTER_IP=10.233.0.10
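
    This address has to fall inside the service CIDR that the apiserver was started with (its --service-cluster-ip-range flag). If you want to double-check against the earlier setup, something like the following works; the unit-file path is an assumption and may differ on your machine:

    # Hypothetical path; use wherever your kube-apiserver flags actually live.
    [root@node1 ~]# grep -o "service-cluster-ip-range=[^ ]*" /etc/systemd/system/kube-apiserver.service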

    Create the coredns.yaml configuration:

    # Nothing in the following file needs to be modified.
    [root@node1 ~]# vim coredns.yaml
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: coredns
      namespace: kube-system
      labels:
        addonmanager.kubernetes.io/mode: EnsureExists
    data:
      Corefile: |
        .:53 {
            errors
            health {
                lameduck 5s
            }
            ready
            kubernetes cluster.local in-addr.arpa ip6.arpa {
              pods insecure
              fallthrough in-addr.arpa ip6.arpa
            }
            prometheus :9153
            forward . /etc/resolv.conf {
              prefer_udp
            }
            cache 30
            loop
            reload
            loadbalance
        }
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: coredns
      namespace: kube-system
      labels:
        addonmanager.kubernetes.io/mode: Reconcile
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      labels:
        kubernetes.io/bootstrapping: rbac-defaults
        addonmanager.kubernetes.io/mode: Reconcile
      name: system:coredns
    rules:
      - apiGroups:
          - ""
        resources:
          - endpoints
          - services
          - pods
          - namespaces
        verbs:
          - list
          - watch
      - apiGroups:
          - ""
        resources:
          - nodes
        verbs:
          - get
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      annotations:
        rbac.authorization.kubernetes.io/autoupdate: "true"
      labels:
        kubernetes.io/bootstrapping: rbac-defaults
        addonmanager.kubernetes.io/mode: EnsureExists
      name: system:coredns
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:coredns
    subjects:
      - kind: ServiceAccount
        name: coredns
        namespace: kube-system
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: coredns
      namespace: kube-system
      labels:
        k8s-app: kube-dns
        kubernetes.io/name: "coredns"
        addonmanager.kubernetes.io/mode: Reconcile
      annotations:
        prometheus.io/port: "9153"
        prometheus.io/scrape: "true"
    spec:
      selector:
        k8s-app: kube-dns
      clusterIP: ${COREDNS_CLUSTER_IP}
      ports:
        - name: dns
          port: 53
          protocol: UDP
        - name: dns-tcp
          port: 53
          protocol: TCP
        - name: metrics
          port: 9153
          protocol: TCP
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: "coredns"
      namespace: kube-system
      labels:
        k8s-app: "kube-dns"
        addonmanager.kubernetes.io/mode: Reconcile
        kubernetes.io/name: "coredns"
    spec:
      replicas: 2
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 0
          maxSurge: 10%
      selector:
        matchLabels:
          k8s-app: kube-dns
      template:
        metadata:
          labels:
            k8s-app: kube-dns
          annotations:
            seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
        spec:
          priorityClassName: system-cluster-critical
          nodeSelector:
            kubernetes.io/os: linux
          serviceAccountName: coredns
          tolerations:
            - key: node-role.kubernetes.io/master
              effect: NoSchedule
          affinity:
            podAntiAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
              - topologyKey: "kubernetes.io/hostname"
                labelSelector:
                  matchLabels:
                    k8s-app: kube-dns
            nodeAffinity:
              preferredDuringSchedulingIgnoredDuringExecution:
              - weight: 100
                preference:
                  matchExpressions:
                  - key: node-role.kubernetes.io/master
                    operator: In
                    values:
                    - ""
          containers:
          - name: coredns
            image: "docker.io/coredns/coredns:1.6.7"
            imagePullPolicy: IfNotPresent
            resources:
              # TODO: Set memory limits when we've profiled the container for large
              # clusters, then set request = limit to keep this container in
              # guaranteed class. Currently, this container falls into the
              # "burstable" category so the kubelet doesn't backoff from restarting it.
              limits:
                memory: 170Mi
              requests:
                cpu: 100m
                memory: 70Mi
            args: [ "-conf", "/etc/coredns/Corefile" ]
            volumeMounts:
            - name: config-volume
              mountPath: /etc/coredns
            ports:
            - containerPort: 53
              name: dns
              protocol: UDP
            - containerPort: 53
              name: dns-tcp
              protocol: TCP
            - containerPort: 9153
              name: metrics
              protocol: TCP
            securityContext:
              allowPrivilegeEscalation: false
              capabilities:
                add:
                - NET_BIND_SERVICE
                drop:
                - all
              readOnlyRootFilesystem: true
            livenessProbe:
              httpGet:
                path: /health
                port: 8080
                scheme: HTTP
              timeoutSeconds: 5
              successThreshold: 1
              failureThreshold: 10
            readinessProbe:
              httpGet:
                path: /ready
                port: 8181
                scheme: HTTP
              timeoutSeconds: 5
              successThreshold: 1
              failureThreshold: 10
          dnsPolicy: Default
          volumes:
            - name: config-volume
              configMap:
                name: coredns
                items:
                - key: Corefile
                  path: Corefile

    [root@node1 ~]# sed -i "s/\${COREDNS_CLUSTER_IP}/${COREDNS_CLUSTER_IP}/g" coredns.yaml
    [root@node1 ~]#
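
    The sed command replaces the literal ${COREDNS_CLUSTER_IP} placeholder in the Service definition with the shell variable set earlier. A quick sanity check that the substitution landed (my addition):

    # Should show the real address (10.233.0.10), not the placeholder.
    [root@node1 ~]# grep clusterIP coredns.yaml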

    Apply it:

    [root@node1 ~]# kubectl apply -f coredns.yaml
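
    Once the coredns pods are Running, a common smoke test (not part of the original article) is to resolve a cluster-internal name from a throwaway pod:

    # Check the coredns pods, then resolve the kubernetes service via cluster DNS.
    [root@node1 ~]# kubectl get pods -n kube-system -l k8s-app=kube-dns
    [root@node1 ~]# kubectl run dns-test --image=busybox:1.28 --restart=Never --rm -it -- nslookup kubernetes.default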

    Deploying NodeLocal DNSCache:

    For the remaining content, go to the WeChat official account "运维家" and reply "124" to view it.
