After installing a Kubernetes cluster, I checked the status of all pods and found several stuck in the ContainerCreating state. I didn't know much about it yet, but even on gut feeling it clearly wasn't right.

Check pod status
$ sudo kubectl get pods --all-namespaces
NAMESPACE     NAME                                 READY   STATUS              RESTARTS   AGE
kube-system   coredns-58d6b7c8d7-m66pf             0/1     ContainerCreating   0          35m
kube-system   coredns-58d6b7c8d7-prngb             0/1     ContainerCreating   0          35m
kube-system   etcd-kube-node1                      1/1     Running             2          34m
kube-system   kube-apiserver-kube-node1            1/1     Running             2          32m
kube-system   kube-controller-manager-kube-node1   1/1     Running             2          35m
kube-system   kube-flannel-ds-amd64-7lblq          1/1     Running             0          29m
Describe the pod for details
$ sudo kubectl describe pods coredns-58d6b7c8d7-m66pf --namespace=kube-system
# some error output omitted
Warning FailedCreatePodSandBox 24m kubelet, kube-node2 Failed create pod sandbox:
rpc error: code = Unknown desc = [failed to set up sandbox container "368d7ee2bcb5bc2dbdec304131ff25746962537956a47360d9ccfd535b21c76a"
network for pod "kubernetes-dashboard-6dccb458d5-x7xft":
NetworkPlugin cni failed to set up pod "kubernetes-dashboard-6dccb458d5-x7xft_kube-system" network:
open /proc/sys/net/ipv6/conf/eth0/accept_dad: no such file or directory,
failed to clean up sandbox container "368d7ee2bcb5bc2dbdec304131ff25746962537956a47360d9ccfd535b21c76a" network for pod "kubernetes-dashboard-6dccb458d5-x7xft":
NetworkPlugin cni failed to teardown pod "kubernetes-dashboard-6dccb458d5-x7xft_kube-system" network:
failed to get IP addresses for "eth0": <nil>]
The cause: IPv6 is not enabled on the slave node.

Enable IPv6
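Before touching the boot configuration, you can confirm the diagnosis on the node itself: when the kernel boots with `ipv6.disable=1`, the entire `/proc/sys/net/ipv6` tree is absent, which is exactly why the CNI plugin's open of `/proc/sys/net/ipv6/conf/eth0/accept_dad` fails with "no such file or directory". A minimal check (the `ipv6_enabled` helper name is my own):

```shell
# When booted with ipv6.disable=1, the whole /proc/sys/net/ipv6
# directory tree does not exist, so a directory test is enough.
ipv6_enabled() {
  # $1: path to test; defaults to the real sysctl tree
  [ -d "${1:-/proc/sys/net/ipv6}" ]
}

if ipv6_enabled; then
  echo "IPv6 is enabled on this node"
else
  echo "IPv6 is disabled - CNI setup will fail as in the log above"
fi

# The kernel command line also shows the flag directly, if present:
grep -o 'ipv6\.disable=[01]' /proc/cmdline || echo "flag not set on cmdline"
```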
sudo sed -i 's/ipv6.disable=1/ipv6.disable=0/g' /etc/default/grub
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
sudo reboot
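To see what the `sed` step actually does, here is the same substitution applied to a sample line instead of the real `/etc/default/grub` (the `GRUB_CMDLINE_LINUX` contents below are made up; your file will differ):

```shell
# Hypothetical grub default line containing the offending flag
line='GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet ipv6.disable=1"'

# Same sed expression as the fix above, applied to stdin
fixed=$(printf '%s\n' "$line" | sed 's/ipv6.disable=1/ipv6.disable=0/g')
echo "$fixed"
# GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet ipv6.disable=0"
```

After the reboot, `/proc/sys/net/ipv6` should exist again, and the coredns pods should move from ContainerCreating to Running (re-check with `sudo kubectl get pods --all-namespaces`).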