Building a Kubernetes Cluster with kubeadm

Author: OrangeLoveMilan | Published 2017-12-28 14:52

Contents

  • Before we begin
  • Environment
  • Versions used in this install
  • Minimum requirements
  • Pre-install preparation
  • Install Docker
  • Install kubeadm and related packages
  • Initialize the cluster
  • Install the network plugin (Flannel)
  • Add nodes
  • Summary

Before we begin

Following the Kubernetes Chinese community documentation, this post walks through standing up a K8s cluster with the kubeadm tool. Because the yum repositories involved sit outside China, I hit a few snags along the way; this is the complete, working guide that resulted.
Reference:
https://www.kubernetes.org.cn/3357.html

Environment

IP               Role          CPU & RAM  OS
192.168.109.154  master&node1  4C / 8 GB  CentOS 7.4
192.168.109.155  node2         4C / 8 GB  CentOS 7.4
192.168.109.156  node3         4C / 8 GB  CentOS 7.4

(The later steps use these hosts' OpenStack internal addresses, 1.1.1.62 through 1.1.1.64, rather than the addresses above.)

Versions used in this install

  • OS: CentOS 7
  • Kernel: 3.10.0-693.5.2.el7.x86_64
  • kubeadm: 1.9.0
  • Kubernetes: 1.9
  • Docker: 17.03.2

Minimum requirements

  • Supported operating systems:
    • Ubuntu 16.04+
    • Debian 9
    • CentOS 7
    • RHEL 7
    • Fedora 25/26 (best-effort)
    • HypriotOS v1.0.1+
  • 2 GB of RAM or more
  • At least 2 logical CPUs
  • Full network connectivity between all nodes
  • A unique hostname and MAC address on every node
  • The required ports open
  • Swap disabled (very important; see the commands right after this list)
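
kubeadm refuses to initialize while swap is on. A minimal sketch for turning it off on every node (the sed pattern assumes a standard fstab swap entry; adjust it to your layout):

swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab   # keep swap off across reboots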

Pre-install preparation

On all nodes:
1. Disable SELinux:
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
setenforce 0
2. Add hosts entries. Since my machines run on OpenStack, I use the internal network addresses here:

cat >> /etc/hosts <<END
1.1.1.62 k8s01
1.1.1.63 k8s02
1.1.1.64 k8s03
END
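
A quick check that the new names resolve:
getent hosts k8s01 k8s02 k8s03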

3. Configure a proxy.
My proxy is at 192.168.190.161:1080 (the proxy machine's IP plus its listening port); it is shadowsocks speaking the SOCKS5 protocol. Point yum at it:
echo proxy=socks5://192.168.190.161:1080 >>/etc/yum.conf
yum can use the SOCKS5 proxy directly, but Docker needs an HTTP proxy, so we front the SOCKS tunnel with privoxy. The privoxy package is in the EPEL repository and has no dependencies, so the RPM can be installed directly; I pull it from a local yum mirror here:
rpm -ivh http://192.168.109.147:8080/epel/7/x86_64/epel/Packages/p/privoxy-3.0.26-1.el7.x86_64.rpm

cat <<END >>/etc/privoxy/config
forward-socks5 / 192.168.190.161:1080 .
forward 127.*.*.*/ .
forward 192.168.*.*/ .
forward 1.1.*.*/ .
END

sed -i 's/^listen-address.*/listen-address 127.0.0.1:8118/' /etc/privoxy/config

# Reboot so that disabling SELinux takes full effect
sync && init 6
# Once the machine is back up, restart privoxy
systemctl enable privoxy && systemctl restart privoxy
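
A quick check that privoxy is up and forwarding through the SOCKS tunnel (any HTTPS URL that needs the proxy will do):

curl -I -x http://127.0.0.1:8118 https://packages.cloud.google.com/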

Install Docker

Kubernetes 1.9 is officially validated only up to Docker 17.03, so that is the version we install.

Run on all three machines:

yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum list docker-ce --showduplicates | sort -r
yum install -y \
    --setopt=obsoletes=0 \
    docker-ce-17.03.2.ce-1.el7.centos \
    docker-ce-selinux-17.03.2.ce-1.el7.centos
systemctl enable docker && systemctl start docker
cat << EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
mkdir /etc/systemd/system/docker.service.d/
# Point Docker at privoxy; privoxy is a plain HTTP proxy, hence http:// in both variables
cat <<END >/etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://127.0.0.1:8118/"
Environment="HTTPS_PROXY=http://127.0.0.1:8118/"
END

systemctl daemon-reload && systemctl restart docker
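
After the restart, confirm the Docker version and that the systemd cgroup driver took effect:

docker version --format '{{.Server.Version}}'      # expect 17.03.2-ce
docker info 2>/dev/null | grep -i 'cgroup driver'  # expect: Cgroup Driver: systemd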

Install kubeadm and related packages

Run on all three machines:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubelet kubeadm kubectl
# Drop the proxy from yum.conf now that the packages are installed
sed -i 's/^proxy=/#proxy=/g' /etc/yum.conf
systemctl enable kubelet && systemctl start kubelet
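
The plain yum install above pulls the newest packages, which may be ahead of the 1.9.0 this walkthrough targets. To pin the exact versions instead (assuming those builds are still in the repo):

yum install -y kubelet-1.9.0 kubeadm-1.9.0 kubectl-1.9.0
kubeadm version   # confirm it reports v1.9.0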

Initialize the cluster

Run on the master node:

kubeadm init \
    --kubernetes-version=v1.9.0 \
    --pod-network-cidr=10.244.0.0/16 \
    --apiserver-advertise-address=1.1.1.62
[init] Using Kubernetes version: v1.9.0
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
    [WARNING Hostname]: hostname "k8s01" could not be reached
    [WARNING Hostname]: hostname "k8s01" lookup k8s01 on 114.114.114.114:53: no such host
    [WARNING FileExisting-crictl]: crictl not found in system path
[preflight] Starting the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [k8s01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 1.1.1.62]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 28.001613 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node k8s01 as master by adding a label and a taint
[markmaster] Master k8s01 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: aca507.4e62dddb9bb6a149
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token aca507.4e62dddb9bb6a149 1.1.1.62:6443 --discovery-token-ca-cert-hash sha256:c252c41b41d25dad17d056fabf416f05305f4ae035031e33b31b70d575d35a76
# As root, point kubectl at the admin kubeconfig instead of the per-user setup above
echo export KUBECONFIG=/etc/kubernetes/admin.conf >>/root/.bash_profile && source /root/.bash_profile

kubectl get cs

# Let the master also act as a node by removing the master taint
kubectl taint nodes --all node-role.kubernetes.io/master-
kubectl get nodes
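
The join token printed by kubeadm init expires after 24 hours by default. To add nodes later, generate a fresh token and recover the CA cert hash on the master; the openssl pipeline below is the documented way to derive the --discovery-token-ca-cert-hash value:

kubeadm token create
kubeadm token list
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'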

Install the network plugin (Flannel)

Run on the master node:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
# Wait a few minutes and check again; everything should eventually reach Running (some pods may restart a few times first)
[root@k8s01 ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                            READY     STATUS              RESTARTS   AGE
kube-system   etcd-k8s01                      1/1       Running             0          13m
kube-system   kube-apiserver-k8s01            1/1       Running             0          13m
kube-system   kube-controller-manager-k8s01   1/1       Running             0          13m
kube-system   kube-dns-6f4fd4bdf-htw7p        0/3       ContainerCreating   0          20m
kube-system   kube-flannel-ds-4sb4s           1/1       Running             5          16m
kube-system   kube-flannel-ds-cqkps           0/1       CrashLoopBackOff    6          16m
kube-system   kube-flannel-ds-lx84m           1/1       Running             0          18m
kube-system   kube-proxy-54bjt                1/1       Running             0          16m
kube-system   kube-proxy-bhsls                1/1       Running             0          21m
kube-system   kube-proxy-wdhw9                1/1       Running             0          16m
kube-system   kube-scheduler-k8s01            1/1       Running             0          13m

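If a flannel pod stays stuck in CrashLoopBackOff (as kube-flannel-ds-cqkps is above), its logs usually say why; in the v0.9.1 manifest the container is named kube-flannel:

kubectl -n kube-system logs kube-flannel-ds-cqkps -c kube-flannel
kubectl -n kube-system describe pod kube-flannel-ds-cqkps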

Add nodes

Run on each node:
kubeadm join --token aca507.4e62dddb9bb6a149 1.1.1.62:6443 --discovery-token-ca-cert-hash sha256:c252c41b41d25dad17d056fabf416f05305f4ae035031e33b31b70d575d35a76
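
If the join hangs or the node never appears on the master, the kubelet log on the node is the first place to look:

systemctl status kubelet --no-pager
journalctl -u kubelet -f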

Run on the master node:

# It takes roughly 3 to 5 minutes before the nodes become Ready
[root@k8s-01 ~]# kubectl get nodes -o wide
NAME      STATUS    ROLES     AGE       VERSION   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION              CONTAINER-RUNTIME
k8s-1     Ready     master    12h       v1.9.0    <none>        CentOS Linux 7 (Core)   3.10.0-693.5.2.el7.x86_64   docker://17.3.2
k8s-2     Ready     <none>    12h       v1.9.0    <none>        CentOS Linux 7 (Core)   3.10.0-693.5.2.el7.x86_64   docker://17.3.2
k8s-3     Ready     <none>    12h       v1.9.0    <none>        CentOS Linux 7 (Core)   3.10.0-693.5.2.el7.x86_64   docker://17.3.2


Summary

That's it: the K8s test cluster is up and running, and you're ready to start working with Kubernetes.
