Bone-Forging Realm, Level 1: Setting Up a k8s Environment

Author: 一笑醉红颜zh | Published 2019-10-12 18:16

    Hard-core stuff ahead!

    In all the martial arts under heaven, only speed is unbeatable!

    Here we use kubeadm to set up a k8s environment.

    Preparation

    Prepare three virtual machines: k8s-master, k8s-node1 and k8s-node2. These names are resolved through the hosts file.

    The minimum VM size here is 2 cores and 2 GB of RAM each; after finishing the setup, though, I found that was too little memory to run things like Java projects, so each node was bumped up to 2 cores and 4 GB. So, prepare three VMs (my own machine has 8 cores and 16 GB). Assume the static IPs are 192.168.10.134, 192.168.10.135 and 192.168.10.136.
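    Since the node names come from the hosts file, every machine needs matching entries. A minimal sketch, assuming the hostnames above and the example IPs (adjust both to your environment):

    # append to /etc/hosts on all three machines
    192.168.10.134 k8s-master
    192.168.10.135 k8s-node1
    192.168.10.136 k8s-node2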


    The virtual machines run 64-bit CentOS 7.

    [root@k8s-node1 ~]# uname -a
    Linux k8s-node1 3.10.0-957.el7.x86_64 #1 SMP Thu Nov 8 23:39:32 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
    
    

    Disable the firewall and swap

    [root@localhost ~]# systemctl stop firewalld && systemctl disable firewalld
    Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
    Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
    
    There are two ways to disable swap:
    1. swapoff -a disables it temporarily, but it comes back after a reboot.
    2. Edit /etc/fstab and comment out the line containing swap; that disables it permanently across reboots, for example:
    /dev/mapper/centos-root /                       xfs     defaults        0 0
    UUID=20ca01ff-c5eb-47bc-99a0-6527b8cb246e /boot                   xfs     defaults        0 0
    # /dev/mapper/centos-swap swap 
    
    
    

    Use the top command to check whether swap has been turned off.



    swapoff -a
    
    # Permanently disable swap: comment out the swap line in /etc/fstab.
    sed -i 's/.*swap.*/#&/' /etc/fstab
    

    Disable the SELinux security policy

    # Temporarily disable SELinux
    setenforce 0
    # Permanently disable it by editing the SELinux config files
    sed -i 's/SELINUX=permissive/SELINUX=disabled/' /etc/sysconfig/selinux
    sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
    

    Run setenforce 0 now; getenforce should then report Permissive.

    Adjust kernel parameters

    cat <<EOF >  /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    EOF
    sysctl --system
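    # Note (assumption): on some CentOS 7 systems the net.bridge.* keys only exist
    # once the br_netfilter kernel module is loaded; if sysctl --system reports the
    # keys as unknown, load the module and re-run it:
    modprobe br_netfilter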
    

    Step 1: Install the Docker service

    We are working as the root user here. Run:

    yum  update
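    # Note (assumption): docker-ce is not in the stock CentOS 7 repos; if the install
    # below fails, add Docker's official yum repo first:
    yum install -y yum-utils
    yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo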
    
    yum install docker-ce   -y 
    
    #### Check the installed Docker info
    [root@k8s-node1 ~]# docker info
    Containers: 148
     Running: 105
     Paused: 0
     Stopped: 43
    Images: 148
    Server Version: 18.09.0
    Storage Driver: overlay2
     Backing Filesystem: xfs
     Supports d_type: true
     Native Overlay Diff: true
    Logging Driver: json-file
    Cgroup Driver: cgroupfs
    Plugins:
     Volume: local
     Network: bridge host macvlan null overlay
     Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
    Swarm: inactive
    Runtimes: runc
    Default Runtime: runc
    Init Binary: docker-init
    containerd version: c4446665cb9c30056f4998ed953e6d4ff22c7c39
    runc version: 4fc53a81fb7c994640722ac585fa9ca548971871
    init version: fec3683
    Security Options:
     seccomp
      Profile: default
    Kernel Version: 3.10.0-957.el7.x86_64
    Operating System: CentOS Linux 7 (Core)
    OSType: linux
    Architecture: x86_64
    CPUs: 4
    Total Memory: 3.782GiB
    Name: k8s-node1
    ID: IP66:6LM2:EOXF:ZORV:E42W:HGJB:OKSB:LRZJ:BR5T:ITFQ:ECIF:K2BU
    Docker Root Dir: /var/lib/docker
    Debug Mode (client): false
    Debug Mode (server): false
    Registry: https://index.docker.io/v1/
    Labels:
    Experimental: false
    Insecure Registries:
     127.0.0.0/8
    Registry Mirrors:
     https://uob2vbvb.mirror.aliyuncs.com/
    Live Restore Enabled: false
    Product License: Community Engine
    
    
    

    Enable the Docker service at boot

    # Enable at boot
    systemctl enable docker
    # Start the Docker service
    systemctl start docker
    # Restart the Docker service
    systemctl restart docker
    

    A quick aside on Docker: it is a classic client-server architecture, and after installation docker info shows both client and server information. Under the hood it relies on the kernel's cgroups for container resource management, which is the foundation of how it works.

    Verify:

    docker run hello-world
    
    If the console prints the following, Docker is working:
    Hello from Docker!
    

    Docker needs to be installed on all three nodes.

    Step 2: Install kubeadm, kubectl and kubelet

    kubeadm is a tool for installing a k8s cluster, similar in purpose to minikube (which builds a local single-node cluster).

    Because of the Great Firewall, use the Aliyun mirror to install the tools and components.

    # Configure the Kubernetes yum repo
    cat <<EOF > /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=0
    repo_gpgcheck=0
    gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
           http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    EOF
    # Install kubectl, kubeadm and kubelet on all nodes
    yum install -y   kubectl-1.13.1
    yum install -y   kubelet-1.13.1
    yum install -y   kubeadm-1.13.1
    # If yum cannot install them, download the binary clients instead:
    [Reference article](https://blog.csdn.net/faryang/article/details/79427573 )
    [Official docs](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#installing-kubeadm-kubelet-and-kubectl)
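    # (Optional check) confirm the installed versions on each node:
    kubeadm version -o short
    kubectl version --client --short
    kubelet --version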
    
    

    Side note:

    If you install via yum, you can also check which versions of kubectl, kubeadm and kubelet are available in the repo, like this:

    yum --showduplicates list kubeadm | expand
    yum --showduplicates list kubectl | expand
    yum --showduplicates list kubelet | expand
    
    Install syntax: yum install <package name>-<version info>
    

    Make Docker's cgroup driver match kubelet's cgroup driver

    docker info | grep -i cgroup
    
    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    
    

    If you need to change Docker's cgroup driver:

    vim /usr/lib/systemd/system/docker.service
    ExecStart=/usr/bin/dockerd --exec-opt native.cgroupdriver=systemd
    
    # Or add the following to /etc/docker/daemon.json:
    {
    "exec-opts": ["native.cgroupdriver=systemd"]
    }
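    # After changing either file, reload units and restart Docker so the new driver
    # takes effect, then re-check with: docker info | grep -i cgroup
    systemctl daemon-reload
    systemctl restart docker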
    
    
    

    If the two drivers differ, change kubelet's to match Docker's:

    sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    systemctl daemon-reload
    

    Start kubelet (it will keep restarting until kubeadm init/join hands it a configuration; that is expected):
    systemctl enable kubelet && systemctl start kubelet

    Step 3: Import the k8s component images

    All of the components run as containers. The images I packaged myself are:

    基础组件: k8s_1.13.tar
    https://pan.baidu.com/s/1Y4_Lu7vGQIQ9GlfGj3rlYg
    网络组件flannel :
    https://pan.baidu.com/s/12tACetmu99R-OT4mslzAxA

    Import the above archives with docker load -i on all three nodes.
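    A minimal sketch (the paths and the flannel file name are assumptions; use whatever location you copied the archives to):

    # run on every node
    docker load -i /root/k8s_1.13.tar
    docker load -i /root/flannel_v0.10.0.tar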

    Step 4: Initialize the cluster

    On the master node, initialize the cluster with the following command (point --apiserver-advertise-address at the master's own IP):

    kubeadm init  --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v1.13.0 --apiserver-advertise-address=192.168.10.134
    
    

    To reset or reinstall:

    kubeadm reset
    

    The init output looks roughly like this:

    [init] Using Kubernetes version: v1.10.0
    [init] Using Authorization modes: [Node RBAC]
    [preflight] Running pre-flight checks.
        [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.03.1-ce. Max validated version: 17.03
        [WARNING FileExisting-crictl]: crictl not found in system path
    Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
    [certificates] Generated ca certificate and key.
    [certificates] Generated apiserver certificate and key.
    [certificates] apiserver serving cert is signed for DNS names [k8s-node1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.56.101]
    [certificates] Generated apiserver-kubelet-client certificate and key.
    [certificates] Generated etcd/ca certificate and key.
    [certificates] Generated etcd/server certificate and key.
    [certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
    [certificates] Generated etcd/peer certificate and key.
    [certificates] etcd/peer serving cert is signed for DNS names [k8s-node1] and IPs [192.168.56.101]
    [certificates] Generated etcd/healthcheck-client certificate and key.
    [certificates] Generated apiserver-etcd-client certificate and key.
    [certificates] Generated sa key and public key.
    [certificates] Generated front-proxy-ca certificate and key.
    [certificates] Generated front-proxy-client certificate and key.
    [certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
    [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
    [controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
    [controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
    [controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
    [etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
    [init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
    [init] This might take a minute or longer if the control plane images have to be pulled.
    [apiclient] All control plane components are healthy after 24.006116 seconds
    [uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
    [markmaster] Will mark node k8s-node1 as master by adding a label and a taint
    [markmaster] Master k8s-node1 tainted and labelled with key/value: node-role.kubernetes.io/master=""
    [bootstraptoken] Using token: kt62dw.q99dfynu1kuf4wgy
    [bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
    [bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
    [bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
    [bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
    [addons] Applied essential addon: kube-dns
    [addons] Applied essential addon: kube-proxy
    
    Your Kubernetes master has initialized successfully!
    
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    You can now join any number of machines by running the following on each node
    as root:
    
      kubeadm join 192.168.10.134:6443 --token kt62dw.q99dfynu1kuf4wgy --discovery-token-ca-cert-hash sha256:5404bcccc1ade37e9d80831ce82590e6079c1a3ea52a941f3077b40ba19f2c68
    
    
    

    Note the kubeadm join line at the very end; the non-master machines will use it later to join the k8s cluster as worker nodes.

    Initialize the flannel network component:

    Save the content below as, say, flannel.yml.
    It contains the RBAC permission settings and shows how the flannel image is started.

    ---
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1beta1
    metadata:
      name: flannel
    rules:
      - apiGroups:
          - ""
        resources:
          - pods
        verbs:
          - get
      - apiGroups:
          - ""
        resources:
          - nodes
        verbs:
          - list
          - watch
      - apiGroups:
          - ""
        resources:
          - nodes/status
        verbs:
          - patch
    ---
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1beta1
    metadata:
      name: flannel
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: flannel
    subjects:
    - kind: ServiceAccount
      name: flannel
      namespace: kube-system
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: flannel
      namespace: kube-system
    ---
    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: kube-flannel-cfg
      namespace: kube-system
      labels:
        tier: node
        app: flannel
    data:
      cni-conf.json: |
        {
          "name": "cbr0",
          "plugins": [
            {
              "type": "flannel",
              "delegate": {
                "hairpinMode": true,
                "isDefaultGateway": true
              }
            },
            {
              "type": "portmap",
              "capabilities": {
                "portMappings": true
              }
            }
          ]
        }
      net-conf.json: |
        {
          "Network": "10.244.0.0/16",
          "Backend": {
            "Type": "vxlan"
          }
        }
    ---
    apiVersion: extensions/v1beta1
    kind: DaemonSet
    metadata:
      name: kube-flannel-ds-amd64
      namespace: kube-system
      labels:
        tier: node
        app: flannel
    spec:
      template:
        metadata:
          labels:
            tier: node
            app: flannel
        spec:
          hostNetwork: true
          nodeSelector:
            beta.kubernetes.io/arch: amd64
          tolerations:
          - operator: Exists
            effect: NoSchedule
          serviceAccountName: flannel
          initContainers:
          - name: install-cni
            image: quay.io/coreos/flannel:v0.10.0-amd64
            command:
            - cp
            args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
            volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          containers:
          - name: kube-flannel
            image: quay.io/coreos/flannel:v0.10.0-amd64
            command:
            - /opt/bin/flanneld
            args:
            - --ip-masq
            - --kube-subnet-mgr
            resources:
              requests:
                cpu: "100m"
                memory: "50Mi"
              limits:
                cpu: "100m"
                memory: "50Mi"
            securityContext:
              privileged: true
            env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            volumeMounts:
            - name: run
              mountPath: /run
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          volumes:
            - name: run
              hostPath:
                path: /run
            - name: cni
              hostPath:
                path: /etc/cni/net.d
            - name: flannel-cfg
              configMap:
                name: kube-flannel-cfg
    
    
    
    

    Create the network by running the kubectl command on the master node. A DaemonSet schedules its pods onto every node by default.

    # Create the network component in the kube-system namespace
    kubectl apply -f flannel.yml -n kube-system
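    # (Optional check) flannel runs as a DaemonSet, so one pod per node should appear;
    # once they are Running, the nodes move from NotReady to Ready:
    kubectl -n kube-system get pods -o wide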
    
    

    Step 5: Join the other nodes to the cluster

    Run the following directly on k8s-node1 and k8s-node2; the output tells you whether the join succeeded:

      kubeadm join 192.168.10.134:6443 --token kt62dw.q99dfynu1kuf4wgy --discovery-token-ca-cert-hash sha256:5404bcccc1ade37e9d80831ce82590e6079c1a3ea52a941f3077b40ba19f2c68
    
    

    Verify the node information; it takes a moment for newly joined nodes to show up as Ready.
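    For example, on the master:

    kubectl get nodes -o wide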


    Summary

    With that, we have built a simple k8s cluster with one master and two worker nodes. The master mainly runs the core k8s components; the worker nodes run your business workloads and other functional components.

    The k8s master is the core of the cluster, so for high availability you would run more than one master.

    Finally, a word on deploying the k8s Dashboard UI control panel.

    Deploying the k8s Dashboard is very simple:
    apply the official deployment yaml.
    You will most likely run into RBAC permission problems; granting the dashboard's service account the cluster-admin role resolves them.
    Then change its Service type to NodePort so it can be reached from outside the cluster.
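    A minimal sketch, assuming the official yaml created a ServiceAccount and a Service both named kubernetes-dashboard in kube-system (names may differ across Dashboard versions):

    # grant the dashboard's service account cluster-admin (fine for a lab, too broad for production)
    kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:kubernetes-dashboard
    # expose the dashboard through a NodePort and check which port was assigned
    kubectl -n kube-system patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'
    kubectl -n kube-system get svc kubernetes-dashboard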

    The Dashboard is simple enough, so I will not describe it further.

    Next level: behind the scenes of the k8s installation process
