Functionality
Monitors node health: when a node becomes abnormal it is given the corresponding NoExecute taint, and the TaintManager then evicts the pods on the abnormal node that do not tolerate it.
kube-controller-manager exposes many parameters for configuring NodeLifecycleController. Evicting pods is a high-risk action that can easily crash a cluster or take business services offline: the network between a node and the master may fail while the pods on the node still serve traffic normally, or the external LB in front of the apiserver, or the apiserver itself, may fail, and any of these can lead to unpredictable behavior. The capability cannot simply be switched off, though, because real fault tolerance is needed when a node genuinely fails. This article analyzes and explains every NodeLifecycleController parameter at the source-code level; I hope it helps.
Related configuration
NodeLifecycleControllerConfiguration
// NodeLifecycleControllerConfiguration contains elements describing NodeLifecycleController.
type NodeLifecycleControllerConfiguration struct {
// If set to true enables NoExecute Taints and will evict all not-tolerating
// Pod running on Nodes tainted with this kind of Taints.
// Corresponds to --enable-taint-manager, default true. If true, the NodeController starts the TaintManager, which evicts pods already scheduled onto a node once they cannot tolerate the node's taints; if the feature is disabled, pods already scheduled onto the node keep running.
EnableTaintManager bool
// nodeEvictionRate is the number of nodes per second on which pods are deleted in case of node failure when a zone is healthy
// Set via --node-eviction-rate, default 0.1: when a zone is healthy, the number of failed nodes per second to evict pods from, i.e. by default one node every 10s.
NodeEvictionRate float32
// secondaryNodeEvictionRate is the number of nodes per second on which pods are deleted in case of node failure when a zone is unhealthy
// Set via --secondary-node-eviction-rate, default 0.01: when the fraction of unhealthy nodes in a zone exceeds --unhealthy-zone-threshold (default 0.55), the eviction rate is reduced; if the cluster is small (at most --large-cluster-size-threshold nodes, default 50) eviction stops entirely, otherwise the rate drops to --secondary-node-eviction-rate nodes per second (default 0.01).
SecondaryNodeEvictionRate float32
// nodeStartupGracePeriod is the amount of time which we allow starting a node to
// be unresponsive before marking it unhealthy.
// --node-startup-grace-period, default 60s: how long a starting node may be unresponsive before it is marked unhealthy.
NodeStartupGracePeriod metav1.Duration
// NodeMonitorGracePeriod is the amount of time which we allow a running node to be
// unresponsive before marking it unhealthy. Must be N times more than kubelet's
// nodeStatusUpdateFrequency, where N means number of retries allowed for kubelet
// to post node status.
// Set via --node-monitor-grace-period, default 40s: how long a running node may be unresponsive before it is marked unhealthy, i.e. 40s of no response are tolerated.
NodeMonitorGracePeriod metav1.Duration
// podEvictionTimeout is the grace period for deleting pods on failed nodes.
// Set via --pod-eviction-timeout, default 5 minutes: the grace period before force-deleting pods on a failed node. Only effective when TaintBasedEvictions is not enabled, so it is normally unused.
PodEvictionTimeout metav1.Duration
// secondaryNodeEvictionRate is implicitly overridden to 0 for clusters smaller than or equal to largeClusterSizeThreshold
// Set via --large-cluster-size-threshold, default 50: a zone with more nodes than this threshold counts as a large cluster. For clusters at or below the threshold, secondaryNodeEvictionRate is implicitly overridden to 0, i.e. eviction stops.
LargeClusterSizeThreshold int32
// Zone is treated as unhealthy in nodeEvictionRate and secondaryNodeEvictionRate when at least
// unhealthyZoneThreshold (no less than 3) of Nodes in the zone are NotReady
// --unhealthy-zone-threshold: the unhealthy-zone threshold that decides when the secondary eviction rate kicks in, default 0.55, i.e. a zone is considered unhealthy once more than 55% of its nodes are down.
UnhealthyZoneThreshold float32
}
NodeMonitorPeriod
// KubeCloudSharedConfiguration contains elements shared by both kube-controller manager
// and cloud-controller manager, but not genericconfig.
type KubeCloudSharedConfiguration struct {
// nodeMonitorPeriod is the period for syncing NodeStatus in NodeController.
// Set via --node-monitor-period, default 5s: the period at which the NodeController syncs NodeStatus, i.e. how often the controller checks. This value should be smaller than --node-monitor-grace-period.
NodeMonitorPeriod metav1.Duration
}
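To make the flags and defaults above easy to eyeball, here is a minimal runnable sketch that mirrors the documented default values in a local Go struct. Everything here is illustrative: the field names copy the configuration types above, but metav1.Duration is replaced by time.Duration for brevity, and this is not the real kube-controller-manager code.
package main

import (
	"fmt"
	"time"
)

// Documented defaults of NodeLifecycleController, mirrored locally.
type lifecycleDefaults struct {
	EnableTaintManager        bool
	NodeEvictionRate          float32
	SecondaryNodeEvictionRate float32
	NodeStartupGracePeriod    time.Duration
	NodeMonitorGracePeriod    time.Duration
	PodEvictionTimeout        time.Duration
	LargeClusterSizeThreshold int32
	UnhealthyZoneThreshold    float32
	NodeMonitorPeriod         time.Duration
}

func main() {
	d := lifecycleDefaults{
		EnableTaintManager:        true,             // --enable-taint-manager
		NodeEvictionRate:          0.1,              // --node-eviction-rate
		SecondaryNodeEvictionRate: 0.01,             // --secondary-node-eviction-rate
		NodeStartupGracePeriod:    60 * time.Second, // --node-startup-grace-period
		NodeMonitorGracePeriod:    40 * time.Second, // --node-monitor-grace-period
		PodEvictionTimeout:        5 * time.Minute,  // --pod-eviction-timeout
		LargeClusterSizeThreshold: 50,               // --large-cluster-size-threshold
		UnhealthyZoneThreshold:    0.55,             // --unhealthy-zone-threshold
		NodeMonitorPeriod:         5 * time.Second,  // --node-monitor-period
	}
	fmt.Printf("%+v\n", d)
}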
Code
The code discussed below is from Kubernetes 1.21.
startNodeLifecycleController->
NewNodeLifecycleController->
lifecycleController.Run->
nc.taintManager.Run->
nc.doNodeProcessingPassWorker->
nc.doPodProcessingWorker->
nc.doNoExecuteTaintingPass(EnableTaintManager)/nc.doEvictionPass->
nc.monitorNodeHealth->
taintManager
Main job: deleting pods.
- Watches every Node and Pod in the cluster. When a node carries taints and a pod on it does not tolerate all of them, the pod is deleted; if the pod has tolerationSeconds configured, it is deleted once that countdown completes.
- Related parameters: the tolerations below are added by apiserver admission control, and the durations are configurable:
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
- --default-not-ready-toleration-seconds // added by apiserver admission to non-DaemonSet pods, default 300s
- --default-unreachable-toleration-seconds // added by apiserver admission to non-DaemonSet pods, default 300s
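To make the taintManager decision concrete, here is a minimal sketch with local stand-in types (not the real Kubernetes API): a pod with no matching toleration for a NoExecute taint is deleted immediately, while a matching toleration with tolerationSeconds only delays the deletion.
package main

import "fmt"

// Local stand-ins for a taint and a toleration.
type taint struct{ key, effect string }

type toleration struct {
	key, effect       string
	tolerationSeconds *int64 // nil means "tolerate forever"
}

// tolerates reports whether any toleration matches the taint
// (operator: Exists style, as in the admission-injected defaults above).
func tolerates(tols []toleration, t taint) (bool, *int64) {
	for _, tol := range tols {
		if tol.key == t.key && tol.effect == t.effect {
			return true, tol.tolerationSeconds
		}
	}
	return false, nil
}

func main() {
	secs := int64(300) // the --default-not-ready-toleration-seconds default
	tols := []toleration{{key: "node.kubernetes.io/not-ready", effect: "NoExecute", tolerationSeconds: &secs}}
	t := taint{key: "node.kubernetes.io/not-ready", effect: "NoExecute"}
	if ok, s := tolerates(tols, t); !ok {
		fmt.Println("no matching toleration: delete the pod immediately")
	} else if s != nil {
		fmt.Printf("tolerated for %ds, then delete\n", *s)
	}
}
With the admission-injected defaults above, a pod on a NotReady node is therefore deleted roughly 300 seconds after the not-ready taint appears.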
doNodeProcessingPassWorker
Runs doNoScheduleTaintingPass on the node: sets NoSchedule taints according to the conditions in the node's status.
// map {NodeConditionType: {ConditionStatus: TaintKey}}
// represents which NodeConditionType under which ConditionStatus should be
// tainted with which TaintKey
// for certain NodeConditionType, there are multiple {ConditionStatus,TaintKey} pairs
nodeConditionToTaintKeyStatusMap = map[v1.NodeConditionType]map[v1.ConditionStatus]string{
v1.NodeReady: {
v1.ConditionFalse: v1.TaintNodeNotReady,
v1.ConditionUnknown: v1.TaintNodeUnreachable,
},
v1.NodeMemoryPressure: {
v1.ConditionTrue: v1.TaintNodeMemoryPressure,
},
v1.NodeDiskPressure: {
v1.ConditionTrue: v1.TaintNodeDiskPressure,
},
v1.NodeNetworkUnavailable: {
v1.ConditionTrue: v1.TaintNodeNetworkUnavailable,
},
v1.NodePIDPressure: {
v1.ConditionTrue: v1.TaintNodePIDPressure,
},
}
doPodProcessingWorker
The main job here is to update pods' Ready condition to False whenever the node's Ready condition is not True. This is important: once a node goes NotReady, all of its pods are set NotReady too, so the pods are removed from the endpoints of their Service objects; anything that depends on those Services needs to be aware of this.
// If pod.Spec.NodeName is non-empty and was just set or changed, the pod is added to podUpdateQueue.
func (nc *Controller) podUpdated(oldPod, newPod *v1.Pod) {
if newPod == nil {
return
}
if len(newPod.Spec.NodeName) != 0 && (oldPod == nil || newPod.Spec.NodeName != oldPod.Spec.NodeName) {
podItem := podUpdateItem{newPod.Namespace, newPod.Name}
nc.podUpdateQueue.Add(podItem)
}
}
doNoExecuteTaintingPass
Handles the taint-based eviction path. The eviction itself is not rate-limited, so the speed at which taints are added has to be limited instead (monitorNodeHealth implements the limiting); the actual pod eviction still happens in taintManager. RateLimitedTimedQueue is a token-bucket rate-limited queue; a sketch of the throttling idea follows the code below.
- Take one zone's node queue from zoneNoExecuteTainter and pop a node from it.
- If the node's Ready condition is False, remove the "node.kubernetes.io/unreachable" taint and add the "node.kubernetes.io/not-ready" taint with Effect NoExecute.
- If the node's Ready condition is Unknown, remove the "node.kubernetes.io/not-ready" taint and add the "node.kubernetes.io/unreachable" taint with Effect NoExecute.
- The rate limit takes effect inside nc.zoneNoExecuteTainter[k].Try(); the limit itself is set by monitorNodeHealth->handleDisruption.
func (nc *Controller) doNoExecuteTaintingPass() {
nc.evictorLock.Lock()
defer nc.evictorLock.Unlock()
for k := range nc.zoneNoExecuteTainter {
// Function should return 'false' and a time after which it should be retried, or 'true' if it shouldn't (it succeeded).
nc.zoneNoExecuteTainter[k].Try(func(value scheduler.TimedValue) (bool, time.Duration) {
node, err := nc.nodeLister.Get(value.Value)
if apierrors.IsNotFound(err) {
klog.Warningf("Node %v no longer present in nodeLister!", value.Value)
return true, 0
} else if err != nil {
klog.Warningf("Failed to get Node %v from the nodeLister: %v", value.Value, err)
// retry in 50 millisecond
return false, 50 * time.Millisecond
}
_, condition := nodeutil.GetNodeCondition(&node.Status, v1.NodeReady)
// Because we want to mimic NodeStatus.Condition["Ready"] we make "unreachable" and "not ready" taints mutually exclusive.
taintToAdd := v1.Taint{}
oppositeTaint := v1.Taint{}
switch condition.Status {
case v1.ConditionFalse:
taintToAdd = *NotReadyTaintTemplate
oppositeTaint = *UnreachableTaintTemplate
case v1.ConditionUnknown:
taintToAdd = *UnreachableTaintTemplate
oppositeTaint = *NotReadyTaintTemplate
default:
// It seems that the Node is ready again, so there's no need to taint it.
klog.V(4).Infof("Node %v was in a taint queue, but it's ready now. Ignoring taint request.", value.Value)
return true, 0
}
result := nodeutil.SwapNodeControllerTaint(nc.kubeClient, []*v1.Taint{&taintToAdd}, []*v1.Taint{&oppositeTaint}, node)
if result {
//count the evictionsNumber
zone := utilnode.GetZoneKey(node)
evictionsNumber.WithLabelValues(zone).Inc()
}
return result, 0
})
}
}
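The throttling itself can be sketched with a plain token bucket. This uses golang.org/x/time/rate as a stand-in for the real scheduler.RateLimitedTimedQueue: Try() only fires as fast as the bucket allows, and handleDisruption swaps the QPS per zone.
package main

import (
	"fmt"

	"golang.org/x/time/rate"
)

func main() {
	// 0.1 QPS: roughly one tainted node every 10s (--node-eviction-rate default).
	lim := rate.NewLimiter(rate.Limit(0.1), 1)

	for i := 0; i < 3; i++ {
		if lim.Allow() {
			fmt.Println("taint one node now")
		} else {
			fmt.Println("over the limit, retry after the next token")
		}
	}

	// Entering full disruption for all zones: stop tainting by dropping QPS to 0.
	lim.SetLimit(0)
}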
monitorNodeHealth
Every nodeMonitorPeriod, monitorNodeHealth runs once: it maintains node and zone state, updates a node's taints when the node is abnormal, and sets each zone's eviction rate according to the state of the cluster.
NodeLifecycleController groups nodes into zones and tracks a zoneState per zone; different zoneStates correspond to different eviction rates:
- initial: the state of a newly added zone;
- fullyDisrupted: every node in the zone is NotReady;
- partiallyDisrupted: the share of NotReady nodes is >= unhealthyZoneThreshold (default 0.55, set via --unhealthy-zone-threshold) and there are more than 2 NotReady nodes;
- normal: anything else.
Key parameter: --node-monitor-period. monitorNodeHealth runs once per nodeMonitorPeriod, i.e. this is the interval at which node health is checked.
- Note: as soon as a node is NotReady, its pods are marked NotReady as well:
switch {
case currentReadyCondition.Status != v1.ConditionTrue && observedReadyCondition.Status == v1.ConditionTrue:
// Report node event only once when status changed.
nodeutil.RecordNodeStatusChange(nc.recorder, node, "NodeNotReady")
fallthrough
case needsRetry && observedReadyCondition.Status != v1.ConditionTrue:
if err = nodeutil.MarkPodsNotReady(nc.kubeClient, nc.recorder, pods, node.Name); err != nil {
utilruntime.HandleError(fmt.Errorf("unable to mark all pods NotReady on node %v: %v; queuing for retry", node.Name, err))
nc.nodesToRetry.Store(node.Name, struct{}{})
continue
}
}
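For illustration, here is a sketch of the per-pod work that nodeutil.MarkPodsNotReady performs, with local stand-in types; the real helper writes the pod status back through the API client, and the skip-if-already-False short-circuit is my assumption about its unchanged-condition handling.
package main

import "fmt"

// Local stand-ins for a pod and its conditions.
type podCondition struct {
	condType string // e.g. "Ready"
	status   string // "True", "False", "Unknown"
}

type pod struct {
	name       string
	conditions []podCondition
}

// markPodNotReady flips the pod's Ready condition to False and reports
// whether anything actually changed.
func markPodNotReady(p *pod) bool {
	for i := range p.conditions {
		if p.conditions[i].condType == "Ready" {
			if p.conditions[i].status == "False" {
				return false // already NotReady, nothing to update
			}
			p.conditions[i].status = "False"
			return true
		}
	}
	return false
}

func main() {
	p := &pod{name: "web-0", conditions: []podCondition{{condType: "Ready", status: "True"}}}
	fmt.Println(markPodNotReady(p)) // true: condition flipped, endpoints will drop this pod
}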
tryUpdateNodeHealth
- tryUpdateNodeHealth updates nodeHealthMap from the node status just fetched; nodeHealthMap stores each node's most recent state. It then uses nodeHealthMap together with the latest node status to decide whether the node is now in the Unknown state (has probeTimestamp exceeded nodeMonitorGracePeriod/nodeStartupGracePeriod?) and writes the result back to the apiserver.
- Even if a node has not reported status, it is still considered healthy as long as its Lease keeps being renewed.
Key parameters:
- --node-monitor-grace-period (nodeMonitorGracePeriod): if a node posts neither a status update nor a lease renewal within this window, it is set to Unknown;
- --node-startup-grace-period (nodeStartupGracePeriod): the grace period allowed for newly started nodes, which usually needs to be longer.
Variables:
- observedReadyCondition is the node's condition from the previous nodeMonitorPeriod cycle;
- currentReadyCondition is the node's current condition.
Main logic:
- Fetch the previously stored nodeHealth (a nodeHealthData) from nodeHealthMap:
type nodeHealthData struct {
probeTimestamp metav1.Time
readyTransitionTimestamp metav1.Time
status *v1.NodeStatus
lease *coordv1.Lease
}
- Get currentReadyCondition (the Ready condition) from the node fetched from the apiserver. If it is nil, neither the kubelet nor the node controller (i.e. NodeLifecycleController) has reported status yet; this also shows that controller-manager itself updates node status.
- If node.Status has no ReadyCondition, fake one and assign it to observedReadyCondition, since there is certainly no previous value.
- nodeHealth is likewise created from the current node.Status; this is the node's first entry in the map.
- Update nodeHealth's probeTimestamp based on savedCondition, currentReadyCondition, and observedLease.
- Check whether nodeHealth's probeTimestamp has already exceeded the gracePeriod (nodeMonitorGracePeriod/nodeStartupGracePeriod); if it has, set the node's conditions to "Unknown" and update the apiserver. A minimal sketch of this timeout check follows the list.
- Return gracePeriod, observedReadyCondition, and currentReadyCondition.
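A minimal sketch of the timeout decision described above, under the assumption that probeTimestamp is the only input; the real code also chooses nodeStartupGracePeriod for nodes that are still starting.
package main

import (
	"fmt"
	"time"
)

// shouldMarkUnknown reports whether the node has gone silent for longer
// than the grace period, i.e. neither a status update nor a lease renewal
// advanced probeTimestamp in time.
func shouldMarkUnknown(probeTimestamp time.Time, gracePeriod time.Duration, now time.Time) bool {
	return now.After(probeTimestamp.Add(gracePeriod))
}

func main() {
	probe := time.Now().Add(-50 * time.Second) // last heartbeat 50s ago
	fmt.Println(shouldMarkUnknown(probe, 40*time.Second, time.Now())) // true: past the 40s default
}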
handleDisruption
Sets a different eviction rate for each zone based on that zone's unhealthy nodes (derived from zoneToNodeConditions).
Key concepts:
- allAreFullyDisrupted: all zones are currently in stateFullDisruption, i.e. everything is down;
- allWasFullyDisrupted: all zones were previously in stateFullDisruption.
Main logic:
- When allAreFullyDisrupted is true and allWasFullyDisrupted is false — the zones were not all down before, but now every zone is fully disrupted:
  - run markNodeAsReachable to remove the taints from all nodes;
  - set the zoneNoExecuteTainter QPS to 0, i.e. stop adding taints.
- When allWasFullyDisrupted is true and allAreFullyDisrupted is false — every zone was down before, but now the zones are no longer all fully disrupted:
  - refresh the probeTimestamp and readyTransitionTimestamp entries in nodeHealthMap for all nodes;
  - iterate over zoneStates and set each zone's QPS, i.e. how many nodes per second get tainted in that zone (see the sketch after this list):
    - when the zone state is stateNormal, the zoneNoExecuteTainter rate is set to evictionLimiterQPS (--node-eviction-rate);
    - when the zone state is statePartialDisruption, the rate depends on the zone's node count: above largeClusterThreshold (--large-cluster-size-threshold) it is set to secondaryEvictionLimiterQPS (--secondary-node-eviction-rate); at or below the threshold it is set to 0;
    - when the zone state is stateFullDisruption, the rate is set to evictionLimiterQPS. In other words, with two zones where az1 is not in stateFullDisruption and az2 is, az2 does not stop evicting entirely: its rate stays at evictionLimiterQPS, transitioning toward secondaryEvictionLimiterQPS.
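The rate selection can be sketched as a small pure function (illustrative names, not the real setLimiterInZone):
package main

import "fmt"

// limiterQPS returns the QPS the zone's NoExecute tainter gets for each
// zone state, following the rules listed above.
func limiterQPS(state string, zoneSize, largeClusterThreshold int, evictionQPS, secondaryQPS float32) float32 {
	switch state {
	case "partialDisruption":
		if zoneSize > largeClusterThreshold {
			return secondaryQPS // --secondary-node-eviction-rate, default 0.01
		}
		return 0 // small zone: stop adding taints entirely
	default: // "normal" and "fullDisruption" both use the base rate
		return evictionQPS // --node-eviction-rate, default 0.1
	}
}

func main() {
	fmt.Println(limiterQPS("partialDisruption", 100, 50, 0.1, 0.01)) // 0.01
	fmt.Println(limiterQPS("partialDisruption", 30, 50, 0.1, 0.01))  // 0
	fmt.Println(limiterQPS("fullDisruption", 30, 50, 0.1, 0.01))     // 0.1
}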
computeZoneStateFunc
Computes the zone state: counts the NotReady nodes in each zone and classifies the zone into one of three states:
- fullyDisrupted: every node is NotReady;
- partiallyDisrupted: the NotReady share reaches unhealthyZoneThreshold and there are more than 2 NotReady nodes;
- normal: everything else.
// ComputeZoneState returns a slice of NodeReadyConditions for all Nodes in a given zone.
// The zone is considered:
// - fullyDisrupted if there're no Ready Nodes,
// - partiallyDisrupted if at least nc.unhealthyZoneThreshold percent of Nodes are not Ready,
// - normal otherwise
func (nc *Controller) ComputeZoneState(nodeReadyConditions []*v1.NodeCondition) (int, ZoneState) {
readyNodes := 0
notReadyNodes := 0
for i := range nodeReadyConditions {
if nodeReadyConditions[i] != nil && nodeReadyConditions[i].Status == v1.ConditionTrue {
readyNodes++
} else {
notReadyNodes++
}
}
switch {
case readyNodes == 0 && notReadyNodes > 0:
return notReadyNodes, stateFullDisruption
case notReadyNodes > 2 && float32(notReadyNodes)/float32(notReadyNodes+readyNodes) >= nc.unhealthyZoneThreshold:
return notReadyNodes, statePartialDisruption
default:
return notReadyNodes, stateNormal
}
}
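For example, take a zone of 10 nodes with 6 NotReady: readyNodes is 4, so the zone is not fullyDisrupted; 6 > 2 and 6/10 = 0.6 >= 0.55 (the default unhealthyZoneThreshold), so the zone is classified as statePartialDisruption.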
isNodeExcludedFromDisruptionChecks
Used only to exclude certain nodes so that they do not take part in the rate-limit-related calculations.