K8S Deployment and Working Around the Image Registry Block

Author: 佑岷 | Published 2019-07-25 18:00

We were given three machines: one as the master, the other two as nodes. Add the following to /etc/hosts on each machine:

100.65.16.82 master 
100.65.16.32 node1
100.65.16.6  node2
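Before going further it's worth confirming that every host can resolve and reach the others; a minimal check using the names configured above:

for h in master node1 node2; do
    ping -c 1 -W 2 $h > /dev/null && echo "$h reachable" || echo "$h UNREACHABLE"
done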

1. Environment

1.1 First, upgrade the system (CentOS 7.1 → CentOS 7.6):
yum update

cat /etc/redhat-release 
CentOS Linux release 7.6.1810 (Core)
1.2 Disable the firewall (otherwise every required port would have to be opened individually):
systemctl stop firewalld
systemctl disable firewalld

Disable SELinux (Security-Enhanced Linux, a Linux kernel module that implements a security subsystem):

setenforce 0

vi /etc/selinux/config
SELINUX=disabled
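getenforce should now report Permissive (and Disabled after the next reboot, given the config change above):

getenforce
# Permissive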
1.3 Enable bridge filtering and IP forwarding, and turn off swap:
swapoff -a

vim /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness=0 # keep the kernel from swapping

Run the following so it takes effect immediately:
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
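Note that swapoff -a only lasts until the next reboot; to make it permanent, the swap entry in /etc/fstab also needs to be commented out, e.g.:

sed -i '/ swap / s/^/#/' /etc/fstab # comment out the swap mount line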
1.4 Configure the kernel modules that IPVS depends on:
vim /etc/sysconfig/modules/ipvs.modules 
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4

Run the following to load the modules:
chmod 755 /etc/sysconfig/modules/ipvs.modules 
bash /etc/sysconfig/modules/ipvs.modules 
lsmod | grep -e ip_vs -e nf_conntrack_ipv4
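For kube-proxy to actually run in IPVS mode later (section 3.7), the ipset package must be installed as well; ipvsadm is optional but useful for inspecting the rules:

yum install -y ipset ipvsadm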

2. Install Docker (Docker version 18.09.7)

Install docker-ce (required):

yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

List the available versions and pick the one you need:

[root@LFA-L0170086 ~]# yum list docker-ce.x86_64 --showduplicates|sort -r
......
docker-ce.x86_64            3:18.09.6-3.el7                    docker-ce-stable 
docker-ce.x86_64            3:18.09.7-3.el7                    @docker-ce-stable
docker-ce.x86_64            3:18.09.7-3.el7                    docker-ce-stable 
docker-ce.x86_64            3:18.09.8-3.el7                    docker-ce-stable 
docker-ce.x86_64            3:19.03.0-3.el7                    docker-ce-stable 
......

Install docker:

yum makecache fast

yum install -y --setopt=obsoletes=0 docker-ce-18.09.7-3.el7  # --setopt=obsoletes=0 allows installing an older version
systemctl start docker
systemctl enable docker

Switch the cgroup driver to systemd:

vim /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}

Restart docker so the change takes effect: systemctl restart docker
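Verify the driver change:

docker info | grep -i cgroup
# Cgroup Driver: systemd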

3. Install K8S

3.1 Add the k8s yum repo (on every node):
vim /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
gpgcheck=1
enabled=1

3.2 Install kubeadm, kubectl, and kubelet (on every node):
yum makecache fast
yum install -y kubelet kubeadm kubectl
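This pulls the latest release, which is why the node list in 3.6 shows v1.15.1 kubelets against the v1.15.0 control plane; to pin a specific version instead, something like:

yum install -y kubelet-1.15.0 kubeadm-1.15.0 kubectl-1.15.0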
3.3 Enable kubelet at boot (on every node):
systemctl enable kubelet.service
3.4 Initialize with kubeadm

kubeadm config print init-defaults shows the default configuration.
Create the kubeadm.yaml file:

vim /etc/kubernetes/kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 100.65.16.82
  bindPort: 6443
nodeRegistration:
  name: master
  taints:
  - effect: PreferNoSchedule
    key: node-role.kubernetes.io/master
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.15.0
networking:
  podSubnet: 10.68.0.0/16

Initialize:

kubeadm init --config kubeadm.yaml --ignore-preflight-errors=Swap

This will fail with an error saying, roughly, that the images cannot be pulled from k8s.gcr.io. The workaround is to pull them from a domestic mirror and retag them (run this on every node):

vim /etc/kubernetes/kubeimage.sh
#!/bin/bash
aliyun='registry.cn-hangzhou.aliyuncs.com/google_containers/'
gcr='k8s.gcr.io/'
images=(kube-apiserver:v1.15.0 kube-controller-manager:v1.15.0 kube-scheduler:v1.15.0 kube-proxy:v1.15.0 pause:3.1 etcd:3.3.10 coredns:1.3.1)
for name in ${images[@]}
do
    docker pull $aliyun$name          # pull the image from the Aliyun mirror
    docker tag $aliyun$name $gcr$name # retag it under the k8s.gcr.io name kubeadm expects
    docker rmi $aliyun$name           # drop the mirror tag, keeping only the k8s.gcr.io one
done

Then run: bash /etc/kubernetes/kubeimage.sh
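A quick check that all seven images now carry the k8s.gcr.io tags kubeadm looks for:

docker images | grep k8s.gcr.io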

Re-running the init now succeeds, with output like this:

[root@LFA-L0170086 kubernetes]# kubeadm init --config kubeadm.yaml --ignore-preflight-errors=Swap
[init] Using Kubernetes version: v1.15.0
[preflight] Running pre-flight checks
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 100.65.16.82]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [100.65.16.82 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master localhost] and IPs [100.65.16.82 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 24.503123 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:PreferNoSchedule]
[bootstrap-token] Using token: jet1pg.kibnnqxfr1u4tb5e
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 100.65.16.82:6443 --token jet1pg.kibnnqxfr1u4tb5e \
    --discovery-token-ca-cert-hash sha256:32508e031653953ac6ecda0e61baa7c5f81900b173f0ac44db0037cd4efab6a0 

Save the log above; the kubeadm join line is needed later when adding nodes. Configure kubectl:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check the status of the control-plane components:

kubectl get cs (equivalent to kubectl get componentstatuses)
This returns:
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"} 

A kubeadm init can be undone with the following commands:

kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/
3.5 Install the network: flannel
cd /etc/kubernetes/
curl -O https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f  kube-flannel.yml
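One caveat before the apply above: the stock kube-flannel.yml hard-codes Network: 10.244.0.0/16 in its net-conf.json, while our podSubnet is 10.68.0.0/16. With a non-default podSubnet the manifest has to be edited to match first; a one-line sketch:

sed -i 's#10.244.0.0/16#10.68.0.0/16#' kube-flannel.yml # align flannel's Network with podSubnet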

If the machines have more than one NIC, the interface flannel should use must be specified in the manifest:

containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=eth1

If flannel is already deployed, edit the DaemonSet and add the flag there (a running pod's args cannot be changed in place; the DaemonSet recreates its pods with the new flag):

kubectl edit ds kube-flannel-ds-amd64 -n kube-system
--iface=eth0 # eth0 being the internal NIC

You can check every pod's status with kubectl get pod -A (-A works on newer versions; --all-namespaces works everywhere):

[root@LFA-L0170086 kubernetes]# kubectl get pod -A
NAMESPACE     NAME                                             READY   STATUS    RESTARTS   AGE
kube-system   coredns-5c98db65d4-6gf2r                         1/1     Running   13         2d21h
kube-system   coredns-5c98db65d4-74h2m                         1/1     Running   12         2d21h
kube-system   etcd-master                                      1/1     Running   7          2d21h
kube-system   grafana-grafana-565fb55c66-nhk2b                 0/1     Pending   0          21h
kube-system   heapster-5b77c7b876-gf77h                        2/2     Running   0          22h
kube-system   influxdb-influxdb-779bc8b7c8-hdlj5               1/1     Running   0          21h
kube-system   kube-apiserver-master                            1/1     Running   7          2d21h
kube-system   kube-controller-manager-master                   1/1     Running   8          2d21h
kube-system   kube-flannel-ds-amd64-7bq7t                      1/1     Running   1          2d6h
kube-system   kube-flannel-ds-amd64-d4vs2                      1/1     Running   11         2d20h
kube-system   kube-flannel-ds-amd64-lnn4x                      1/1     Running   1          2d5h
kube-system   kube-proxy-748bh                                 1/1     Running   1          2d5h
kube-system   kube-proxy-ffxfx                                 1/1     Running   1          2d5h
kube-system   kube-proxy-q29bj                                 1/1     Running   8          2d5h
kube-system   kube-scheduler-master                            1/1     Running   7          2d21h
kube-system   kubernetes-dashboard-7dd458ffcf-mtmf7            1/1     Running   0          26h
kube-system   metrics-server-564b58f9b6-hb99l                  1/1     Running   45         29h
kube-system   nginx-ingress-controller-6f7b68fbcc-k7qrt        1/1     Running   11         2d
kube-system   nginx-ingress-default-backend-858c4f5574-kkmkl   1/1     Running   9          47h
kube-system   tiller-deploy-5565869f5d-sdj5r                   1/1     Running   9          2d1h
3.6 Add nodes

Simply run the kubeadm join command from the successful init log on node1 and node2:

kubeadm join 100.65.16.82:6443 --token jet1pg.kibnnqxfr1u4tb5e \
    --discovery-token-ca-cert-hash sha256:32508e031653953ac6ecda0e61baa7c5f81900b173f0ac44db0037cd4efab6a0 

If the token has expired, it can be regenerated (see the sketch below, or any of the write-ups online).
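A fresh join command can be produced on the master at any time:

kubeadm token create --print-join-command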
Node status can be checked with:

[root@LFA-L0170086 kubernetes]# kubectl get node
NAME     STATUS   ROLES    AGE     VERSION
master   Ready    master   2d21h   v1.15.1
node1    Ready    <none>   2d6h    v1.15.1
node2    Ready    <none>   2d6h    v1.15.1

A node can be removed with the following commands:

On the master:
kubectl drain node2 --delete-local-data --force --ignore-daemonsets
kubectl delete node node2

On node2:
kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/

On node1:
kubectl delete node node2
3.7 Enable IPVS
kubectl edit cm kube-proxy -n kube-system # (cm is short for configmap)
Change the mode setting to:
mode: "ipvs"

Then restart each kube-proxy pod, as sketched below.
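The simplest restart is to delete the kube-proxy pods and let their DaemonSet recreate them; afterwards ipvsadm (installed in 1.4) should list the virtual-server rules:

kubectl -n kube-system delete pod -l k8s-app=kube-proxy # the DaemonSet recreates them
ipvsadm -ln # IPVS rules should now appear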
