Limiting CPU and Memory
1. Create a stress-test deployment
student@ubuntu:~$ kubectl create deployment hog --image=vish/stress
deployment.apps/hog created
student@ubuntu:~$ kubectl describe deployments. hog
Name: hog
Namespace: default
CreationTimestamp: Thu, 29 Nov 2018 17:18:40 +0800
Labels: app=hog
Annotations: deployment.kubernetes.io/revision: 1
Selector: app=hog
Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=hog
Containers:
stress:
Image: vish/stress
Port: <none>
Host Port: <none>
Environment: <none>
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: hog-566f7db749 (1/1 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 11s deployment-controller Scaled up replica set hog-566f7db749 to 1
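The pod spawned by this deployment can be checked via the app=hog label shown in the describe output above, for example:
student@ubuntu:~$ kubectl get pods -l app=hog -o wide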
- Generate a YAML config file from the deployment above
student@ubuntu:~$ kubectl get deployments. hog --export -o yaml > stress.yaml
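--export still works on the kubectl 1.12 used here, but it was deprecated in 1.14 and removed in 1.18; on a newer cluster drop the flag and trim the status and runtime metadata fields by hand:
student@ubuntu:~$ kubectl get deployment hog -o yaml > stress.yaml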
- Add a memory limit
student@ubuntu:~$ vi stress.yaml
      containers:
      - image: vish/stress
        imagePullPolicy: Always
        name: stress
        resources:
          requests:
            memory: 250Mi
          limits:
            memory: 500Mi
student@ubuntu:~$ kubectl replace -f stress.yaml
deployment.extensions/hog replaced
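Hand-editing the exported YAML and running kubectl replace is one route; the same requests and limits could instead be patched onto the live deployment with kubectl set resources (not used in this session):
student@ubuntu:~$ kubectl set resources deployment hog -c=stress --requests=memory=250Mi --limits=memory=500Mi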
- Inspect the new deployment
student@ubuntu:~$ kubectl describe deployments. hog
Name: hog
Namespace: default
CreationTimestamp: Thu, 29 Nov 2018 17:18:40 +0800
Labels: app=hog
Annotations: deployment.kubernetes.io/revision: 2
Selector: app=hog
Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=hog
Containers:
stress:
Image: vish/stress
Port: <none>
Host Port: <none>
Limits:
memory: 500Mi
Requests:
memory: 250Mi
...
student@ubuntu:~$ kubectl logs hog-7f88cddb49-fljv2
I1129 09:21:55.634020 1 main.go:26] Allocating "0" memory, in "4Ki" chunks, with a 1ms sleep between allocations
I1129 09:21:55.634174 1 main.go:29] Allocated "0" memory
- Add args so that stress actually consumes CPU and memory (without them the log above shows it allocating nothing); a full manifest sketch follows this fragment
      containers:
      - image: vish/stress
        imagePullPolicy: Always
        name: stress
        args:
        - -cpus
        - "1"
        - -mem-total
        - "1000Mi"
        - -mem-alloc-size
        - "100Mi"
        - -mem-alloc-sleep
        - "1s"
        resources:
          requests:
            memory: 250Mi
            cpu: 250m
          limits:
            memory: 500Mi
            cpu: 500m
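The hub.yaml created in the next step is not shown in full in the original session; a minimal manifest along these lines (Deployment and labels named hub here by assumption, so it does not collide with the existing hog deployment) would reproduce the behavior:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hub            # name assumed to match the pod seen below
  labels:
    app: hub           # labels assumed
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hub
  template:
    metadata:
      labels:
        app: hub
    spec:
      containers:
      - image: vish/stress
        imagePullPolicy: Always
        name: stress
        args:
        - -cpus
        - "1"
        - -mem-total
        - "1000Mi"
        - -mem-alloc-size
        - "100Mi"
        - -mem-alloc-sleep
        - "1s"
        resources:
          requests:
            memory: 250Mi
            cpu: 250m
          limits:
            memory: 500Mi
            cpu: 500m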
student@ubuntu:~$ kubectl create -f hub.yaml
student@ubuntu:~$ kubectl logs hub-66f865dd-pxfxb
I1129 09:30:07.752589 1 main.go:26] Allocating "1000Mi" memory, in "4Ki" chunks, with a 1ms sleep between allocations
I1129 09:30:07.752710 1 main.go:39] Spawning a thread to consume CPU
- Watch the stress process with top
CPU is pinned at roughly 50% of one core (the 500m limit), while memory keeps climbing
Tasks: 244 total, 1 running, 243 sleeping, 0 stopped, 0 zombie
%Cpu(s): 2.1 us, 4.7 sy, 0.0 ni, 93.1 id, 0.0 wa, 0.0 hi, 0.1 si, 0.0 st
KiB Mem : 32946908 total, 28562256 free, 1048332 used, 3336320 buff/cache
KiB Swap: 0 total, 0 free, 0 used. 31385600 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1579 root 20 0 123632 117760 3096 S 48.5 0.4 0:37.50 stress
Tasks: 244 total, 1 running, 243 sleeping, 0 stopped, 0 zombie
%Cpu(s): 1.4 us, 4.6 sy, 0.0 ni, 93.8 id, 0.0 wa, 0.0 hi, 0.1 si, 0.0 st
KiB Mem : 32946908 total, 28505804 free, 1104552 used, 3336552 buff/cache
KiB Swap: 0 total, 0 free, 0 used. 31329340 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1579 root 20 0 184120 177476 3096 S 48.5 0.5 0:56.45 stress
- When memory usage reaches the 500Mi limit, the container is OOM-killed and the deployment restarts it
student@ubuntu:/root$ kubectl get pod
NAME READY STATUS RESTARTS AGE
hub-66f865dd-pxfxb 1/1 Running 0 5m42s
student@ubuntu:/root$ kubectl get pod
NAME READY STATUS RESTARTS AGE
hub-66f865dd-pxfxb 0/1 OOMKilled 0 5m45s
student@ubuntu:/root$ kubectl get pod
NAME READY STATUS RESTARTS AGE
hub-66f865dd-pxfxb 1/1 Running 1 5m55s
[ 9956.986728] Call Trace:
[ 9956.986761] [<ffffffff813f1143>] dump_stack+0x63/0x90
[ 9956.986774] [<ffffffff8120958e>] dump_header+0x5a/0x1c5
[ 9956.986786] [<ffffffff81190e7b>] ? find_lock_task_mm+0x3b/0x80
[ 9956.986790] [<ffffffff81191442>] oom_kill_process+0x202/0x3c0
[ 9956.986796] [<ffffffff811fd394>] ? mem_cgroup_iter+0x204/0x390
[ 9956.986800] [<ffffffff811ff3f3>] mem_cgroup_out_of_memory+0x2b3/0x300
[ 9956.986805] [<ffffffff812001c8>] mem_cgroup_oom_synchronize+0x338/0x350
[ 9956.986809] [<ffffffff811fb6f0>] ? kzalloc_node.constprop.48+0x20/0x20
[ 9956.986813] [<ffffffff81191af4>] pagefault_out_of_memory+0x44/0xc0
[ 9956.986820] [<ffffffff8106b2c2>] mm_fault_error+0x82/0x160
[ 9956.986823] [<ffffffff8106b778>] __do_page_fault+0x3d8/0x400
[ 9956.986826] [<ffffffff8106b807>] trace_do_page_fault+0x37/0xe0
[ 9956.986831] [<ffffffff81063ed9>] do_async_page_fault+0x19/0x70
[ 9956.986857] [<ffffffff8182fce8>] async_page_fault+0x28/0x30
[ 9956.986860] Task in /kubepods/burstable/pod59c958fb-f3b9-11e8-9072-52540066b534/a9e14f01ec7f5f77d55334912defa83373b4e6503664367a36b761fe275b655a killed as a result of limit of /kubepods/burstable/pod59c958fb-f3b9-11e8-9072-52540066b534
[ 9956.986871] memory: usage 512000kB, limit 512000kB, failcnt 58
[ 9956.986873] memory+swap: usage 0kB, limit 9007199254740988kB, failcnt 0
[ 9956.986875] kmem: usage 1372kB, limit 9007199254740988kB, failcnt 0
[ 9956.986876] Memory cgroup stats for /kubepods/burstable/pod59c958fb-f3b9-11e8-9072-52540066b534: cache:0KB rss:0KB rss_huge:0KB mapped_file:0KB dirty:0KB writeback:0KB inactive_anon:0KB active_anon:0KB inactive_file:0KB active_file:0KB unevictable:0KB
[ 9956.986900] Memory cgroup stats for /kubepods/burstable/pod59c958fb-f3b9-11e8-9072-52540066b534/2d53428967a810673a9ddcbfe543593aa8967317ae4d635a5c45774e85fa8d01: cache:0KB rss:40KB rss_huge:0KB mapped_file:0KB dirty:0KB writeback:0KB inactive_anon:0KB active_anon:40KB inactive_file:0KB active_file:0KB unevictable:0KB
[ 9956.986917] Memory cgroup stats for /kubepods/burstable/pod59c958fb-f3b9-11e8-9072-52540066b534/a9e14f01ec7f5f77d55334912defa83373b4e6503664367a36b761fe275b655a: cache:0KB rss:510588KB rss_huge:2048KB mapped_file:0KB dirty:0KB writeback:0KB inactive_anon:0KB active_anon:510428KB inactive_file:0KB active_file:0KB unevictable:0KB
[ 9956.986933] [ pid ] uid tgid total_vm rss nr_ptes nr_pmds swapents oom_score_adj name
[ 9956.987133] [ 1422] 0 1422 257 1 4 2 0 -998 pause
[ 9956.987142] [ 1579] 0 1579 129571 128043 256 5 0 993 stress
[ 9956.987155] Memory cgroup out of memory: Kill process 1579 (stress) score 1995 or sacrifice child
[ 9956.987297] Killed process 1579 (stress) total-vm:518284kB, anon-rss:509076kB, file-rss:3096kB
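The same conclusion can be drawn from the Kubernetes side without reading dmesg: the container status keeps the last terminated state and its reason, for example:
student@ubuntu:/root$ kubectl get pod hub-66f865dd-pxfxb -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'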
Limiting resources at the namespace level
1. Create a test namespace
student@ubuntu:~$ kubectl create namespace low-usage-limit
namespace/low-usage-limit created
student@ubuntu:~$ kubectl get namespaces
NAME STATUS AGE
default Active 24h
kube-public Active 24h
kube-system Active 24h
low-usage-limit Active 4s
2. Create a LimitRange resource
student@ubuntu:~$ vi low-resource-range.yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: low-resource-range
spec:
  limits:
  - default:
      cpu: 1
      memory: 500Mi
    defaultRequest:
      cpu: 0.5
      memory: 100Mi
    type: Container
student@ubuntu:~$ kubectl create -n low-usage-limit -f low-resource-range.yaml
limitrange/low-resource-range created
student@ubuntu:~$ kubectl get limitranges --all-namespaces
NAMESPACE NAME CREATED AT
low-usage-limit low-resource-range 2018-11-29T09:44:53Z
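Describing the LimitRange shows the default limit and default request values that the LimitRanger admission plugin will inject into any container created in this namespace without its own resources block:
student@ubuntu:~$ kubectl -n low-usage-limit describe limitrange low-resource-range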
3. Create a deployment in this namespace
student@ubuntu:~$ kubectl create deployment hog --image=vish/stress -n low-usage-limit
deployment.apps/hog created
student@ubuntu:~$ kubectl -n low-usage-limit get deployments. hog
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
hog 1 1 1 1 23s
student@ubuntu:~$ kubectl -n low-usage-limit describe pod hog-566f7db749-jjwn8
Name: hog-566f7db749-jjwn8
Namespace: low-usage-limit
Priority: 0
PriorityClassName: <none>
Node: ubuntu/172.30.81.194
Start Time: Thu, 29 Nov 2018 17:47:16 +0800
Labels: app=hog
pod-template-hash=566f7db749
Annotations: cni.projectcalico.org/podIP: 192.168.0.38/32
kubernetes.io/limit-ranger: LimitRanger plugin set: cpu, memory request for container stress; cpu, memory limit for container stress
Status: Running
IP: 192.168.0.38
Controlled By: ReplicaSet/hog-566f7db749
Containers:
stress:
Container ID: docker://3c1d29bd146e7d02d6781cb7e799412141fd10a2f0b0dc580b1496562b1f4179
Image: vish/stress
Image ID: docker-pullable://vish/stress@sha256:b6456a3df6db5e063e1783153627947484a3db387be99e49708c70a9a15e7177
Port: <none>
Host Port: <none>
State: Running
Started: Thu, 29 Nov 2018 17:47:20 +0800
Ready: True
Restart Count: 0
Limits:
cpu: 1
memory: 500Mi
Requests:
cpu: 500m
memory: 100Mi
Environment: <none>
...
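Note that the LimitRanger only fills in defaults: a container that declares its own resources block keeps those values (this LimitRange defines no min/max, so they are not validated further). A fragment along these lines, with illustrative numbers, would not be touched:
      containers:
      - image: vish/stress
        name: stress
        resources:           # explicit values win over the LimitRange defaults
          requests:
            cpu: 250m
            memory: 250Mi
          limits:
            cpu: 500m
            memory: 400Mi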
Node maintenance: drain and cordon
- cordon: mark the node unschedulable
student@ubuntu:~$ kubectl cordon node-193
node/node-193 cordoned
student@ubuntu:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
node-193 Ready,SchedulingDisabled worker 23h v1.12.1
ubuntu Ready master 24h v1.12.1
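cordon only marks the node unschedulable; pods already running on node-193 are not evicted. Listing pods with their node column, for example, confirms they stay put:
student@ubuntu:~$ kubectl get pods --all-namespaces -o wide | grep node-193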
- uncordon: make the node schedulable again
student@ubuntu:/root$ kubectl uncordon node-193
node/node-193 uncordoned
student@ubuntu:/root$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
node-193 Ready worker 23h v1.12.1
ubuntu Ready master 24h v1.12.1
- drain: put the node into maintenance mode, evicting the pods running on it
student@ubuntu:/root$ kubectl drain node-193
node/node-193 cordoned
error: unable to drain node "node-193", aborting command...
There are pending nodes to be drained:
node-193
error: DaemonSet-managed pods (use --ignore-daemonsets to ignore): calico-node-l7nvf, kube-proxy-gczr5
student@ubuntu:/root$ kubectl drain node-193 --ignore-daemonsets
node/node-193 already cordoned
WARNING: Ignoring DaemonSet-managed pods: calico-node-l7nvf, kube-proxy-gczr5
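The drain above only needed --ignore-daemonsets; if drain also refuses because of pods using emptyDir (local) storage, kubectl 1.12 additionally requires --delete-local-data (renamed --delete-emptydir-data in later releases), for example:
student@ubuntu:/root$ kubectl drain node-193 --ignore-daemonsets --delete-local-data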
- Take the node out of maintenance mode with uncordon
student@ubuntu:/root$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
node-193 Ready,SchedulingDisabled worker 23h v1.12.1
ubuntu Ready master 24h v1.12.1
student@ubuntu:/root$ kubectl uncordon node-193
node/node-193 uncordoned
student@ubuntu:/root$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
node-193 Ready worker 23h v1.12.1
ubuntu Ready master 24h v1.12.1