keywords
1 node(s) had taints that the pod didn't tolerate, 2 node(s) had volume node affinity conflict.
background
```
helm delete tidb-cluster --purge
```
After destroying the TiDB cluster, redeploying it left all the TiKV pods stuck in Pending. Running `kubectl describe` on one of them showed the following:
```
Warning  FailedScheduling  <unknown>          tidb-scheduler  0/4 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 node(s) had volume node affinity conflict.
Warning  FailedScheduling  <unknown>          tidb-scheduler  0/4 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 node(s) had volume node affinity conflict.
Warning  HAScheduling      70s (x2 over 70s)  tidb-scheduler  pods "tidb-cluster-tikv-3" not found
```
That is: 1 node has a taint the pod doesn't tolerate, and 2 nodes have a volume node affinity conflict, meaning the PersistentVolume the pod needs is pinned to a different node than the one the scheduler is considering.
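The conflict comes from local PVs being pinned to the node they were created on. You can check which node a leftover PV requires with `kubectl describe` (the PV name below is a placeholder):

```
# The Node Affinity section shows which node the volume is bound to;
# a pod using this PV can only be scheduled onto that node.
kubectl describe pv <pv-name>
```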
analysis
The data volumes from the previous deployment were never cleaned up. After `helm delete`, the old PVs (and the data on them) survive, still pinned to their original nodes, which is what produces the volume node affinity conflict when the new TiKV pods try to schedule.
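To see the leftovers, list the PVs from the old release and the PVCs the new release created; the label selector here is the same one used in the fix below:

```
# PVs from the previous deployment survive `helm delete`; they keep their
# data and their node affinity, and the new PVCs bind straight back to them.
kubectl get pv -l app.kubernetes.io/namespace=tidb
kubectl get pvc --namespace tidb
```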
solution
Delete the old data:
```
# Switch the old PVs' reclaim policy to Delete, then remove all PVCs in the
# tidb namespace; with the Delete policy, removing the claims also deletes
# the bound volumes and the stale data on them.
kubectl get pv -l app.kubernetes.io/namespace=tidb -o name | \
  xargs -I {} kubectl patch {} -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}' && \
kubectl delete pvc --namespace tidb --all
```
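Before reinstalling, it is worth verifying that the stale volumes are really gone:

```
# Should return no volumes once the deletion has gone through
kubectl get pv -l app.kubernetes.io/namespace=tidb
```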
After the old data is deleted, reinstall the tidb-cluster chart with `helm install`; the TiKV pods should now schedule normally. A sketch of the reinstall follows.
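This assumes Helm 2 (implied by the `--purge` flag above) and the PingCAP chart repository; the release name, namespace, and values file are illustrative:

```
# Redeploy the cluster; TiKV gets fresh PVs with no stale node affinity
helm install pingcap/tidb-cluster --name=tidb-cluster --namespace=tidb -f values.yaml
```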